Monday, December 23, 2019

Collecting and Analyzing Dumps with .Net Core on Linux

Over the past 18 years or so of .NET's life span, an entire ecosystem of tools has been developed to assist in troubleshooting application faults and problems. One common debugging tool, WinDbg, has historically been used heavily to triage application faults. However, in the past few years, with the advent of .NET Core, more and more .NET applications are running on Linux! This has opened up an entire market in which .NET can be leveraged.

So let's paint a hypothetical scenario: you have spent the past three months building your product with .NET Core 3.1, using a Docker base image of Alpine 3.10.x. The application works great, so your team decides it's time to ship to production. You deploy it to your Docker host (AKS, ACI, Docker Swarm, etc.), open the floodgates, and your users begin to use the application. All seems well, until you find that the application appears to be locking up.

Traditionally you might capture a dump using DebugDiag or procdump, or trigger one right from Task Manager on Windows. But how might you collect a dump for analysis in .NET Core on Linux?

Much of the information covered here can also be found in an article on GitHub, linked here. Save it for your reference.

Collecting Dumps Using Dotnet Dump

We have a new "global" tool called dotnet-dump that can be used to collect and analyze dumps for crashes and memory concerns. It runs on both Windows and Linux and allows you to run SOS commands, with the caveat that it is not a native debugger, so you won't be able to inspect native stack frames.
So how do we get started? First, as long as you have the .NET Core SDK installed, the tool can be installed right from your dotnet CLI by running the following command:
dotnet tool install -g dotnet-dump
Once installed, all you need to do is identify the PID (process id) of the dotnet application running. You can do this using the following command:
$ dotnet-dump ps  
1 dotnet /usr/share/dotnet/dotnet
2147 dotnet-dump /root/.dotnet/tools/dotnet-dump
The above lists the dotnet process as PID #1. So from there we can request a dump be collected by running the following:
$ dotnet-dump collect -p 1
Writing minidump with heap to /app/core_20191223_160819
Complete
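
The collect command also accepts a --type option (Full, Heap, or Mini) that controls how much process state is captured. For example, to request a full dump of the same process:

$ dotnet-dump collect -p 1 --type Full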

If you don’t have the SDK installed on your container

Many install just the runtime on their containers to keep the container size footprint down. In order to use this tool, however, you will need the SDK. In that case you can either build a new debug image based on the SDK base image that Microsoft publishes, or download the SDK directly into the live container. On Debian 10 I used the following commands:
$ curl https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb --output packages-microsoft-prod.deb
$ dpkg -i packages-microsoft-prod.deb
$ apt-get install apt-transport-https
$ apt-get update
$ apt-get install dotnet-sdk-3.1
$ dotnet tool install -g dotnet-dump
At this point the SDK should be installed. Keep in mind that the install may not have created the symlink, or dotnet-dump may not have been added to your PATH. In that case you need to reference it directly at /root/.dotnet/tools/dotnet-dump.
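
For example, a couple of workarounds (a minimal sketch; the path assumes the tool was installed as root, as is typical in a container):

$ /root/.dotnet/tools/dotnet-dump ps

Or add the tools directory to PATH for the current shell session:

$ export PATH="$PATH:/root/.dotnet/tools"
$ dotnet-dump ps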

Analyzing the dump

At this point, analysis will start to feel more familiar if you are comfortable in WinDbg and SOS. You can analyze the collected dump directly on the container, or you can copy it locally. Keep in mind that you need to analyze on the same OS/distro on which you collected; in my testing, a dump collected on Alpine could not be analyzed on Ubuntu.
To get started, run dotnet-dump analyze <path_to_dump>.
From here you are provided an interactive console where you can run your familiar SOS commands (a sample session follows the command list below):
Usage:
  dotnet-dump [command]

Commands:
  exit, quit                           Exit interactive mode.
  help, soshelp <command>              Display help for a command.
  lm, modules                          Displays the native modules in the process.
  threads, setthread <threadid>        Sets or displays the current thread id for the SOS commands.
  clrstack <arguments>                 Provides a stack trace of managed code only.
  clrthreads <arguments>               List the managed threads running.
  dumpasync <arguments>                Displays info about async state machines on the garbage-collected heap.
  dumpassembly <arguments>             Displays details about an assembly.
  dumpclass <arguments>                Displays information about a EE class structure at the specified address.
  dumpdelegate <arguments>             Displays information about a delegate.
  dumpdomain <arguments>               Displays information all the AppDomains and all assemblies within the domains.
  dumpheap <arguments>                 Displays info about the garbage-collected heap and collection statistics about objects.
  dumpil <arguments>                   Displays the Microsoft intermediate language (MSIL) that is associated with a managed method.
  dumplog <arguments>                  Writes the contents of an in-memory stress log to the specified file.
  dumpmd <arguments>                   Displays information about a MethodDesc structure at the specified address.
  dumpmodule <arguments>               Displays information about a EE module structure at the specified address.
  dumpmt <arguments>                   Displays information about a method table at the specified address.
  dumpobj <arguments>                  Displays info about an object at the specified address.
  dso, dumpstackobjects <arguments>    Displays all managed objects found within the bounds of the current stack.
  eeheap <arguments>                   Displays info about process memory consumed by internal runtime data structures.
  finalizequeue <arguments>            Displays all objects registered for finalization.
  gcroot <arguments>                   Displays info about references (or roots) to an object at the specified address.
  gcwhere <arguments>                  Displays the location in the GC heap of the argument passed in.
  ip2md <arguments>                    Displays the MethodDesc structure at the specified address in code that has been JIT-compiled.
  name2ee <arguments>                  Displays the MethodTable structure and EEClass structure for the specified type or method in the specified module.
  pe, printexception <arguments>       Displays and formats fields of any object derived from the Exception class at the specified address.
  syncblk <arguments>                  Displays the SyncBlock holder info.
  histclear <arguments>                Releases any resources used by the family of Hist commands.
  histinit <arguments>                 Initializes the SOS structures from the stress log saved in the debuggee.
  histobj <arguments>                  Examines all stress log relocation records and displays the chain of garbage collection relocations that may have led to the address passed in as an argument.
  histobjfind <arguments>              Displays all the log entries that reference an object at the specified address.
  histroot <arguments>                 Displays information related to both promotions and relocations of the specified root.
  setsymbolserver <arguments>          Enables the symbol server support
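
For the hypothetical lock-up scenario above, a session might look like this (a sketch only; the thread id passed to setthread is illustrative). clrthreads lists the managed threads, syncblk surfaces held monitor locks (a common culprit in hangs), and setthread plus clrstack inspect the managed stack of a specific thread:

$ dotnet-dump analyze /app/core_20191223_160819
> clrthreads
> syncblk
> setthread 3
> clrstack
> exit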

Other Tools

Another fine tool for collecting CPU performance traces of an application is perfcollect. Historically this kind of performance investigation was done with Windows tooling, but significant strides have been made in bringing a similar experience to Linux. For more information, head on over to our article on collecting CPU samples via a sidecar container.
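
For reference, basic usage on a Linux host looks roughly like this (a sketch based on the tool's documented workflow; the trace name is arbitrary):

$ curl -OL https://aka.ms/perfcollect
$ chmod +x perfcollect
$ sudo ./perfcollect install
$ sudo ./perfcollect collect sampleTrace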

Tuesday, February 26, 2019

Remediation of old versions of Visual Studio Isolated Shell

Premier Support for Developers is engaged in all manner of ways to support your organization's needs. Sometimes we work with IT organizations to help update or remove older, out-of-support versions of our software from production across entire enterprise landscapes. At times this means working with hundreds of thousands of computers.

In some cases it can be a challenge to identify what software is installed and present on machines across customer networks. Today we have one specific example, Visual Studio Isolated Shell. If you're wondering what VS Isolated Shell is, take a quick read through this primer.

Why are old versions of VS Isolated Shell a challenge to upgrade or remove? VS Isolated Shell is just a component of a product, used by Microsoft and third-party vendors alike to write software. That being said, if you are like many enterprise customers, you might not have a firm grasp of which products are leveraging this component.

For Microsoft products the answer is easy: we use the VS Isolated Shell in SQL Server Management Studio and Visual Studio, to name a few. But there are an untold number of third-party vendors that could also be using VS Isolated Shell.


How do you identify what software is dependent on Visual Studio Isolated Shell?

At first, it was a challenge to find a consistent method for detecting a dependency on VS Isolated Shell. Some applications will register their dependence on VS Isolated Shell in the registry, but this is not reliably done by third-party vendors, as it is not a required step for installation.

After research with our product engineering teams, we realized there is a simple way of identifying software that is dependent on Visual Studio Isolated Shell: an executable that depends on VS Isolated Shell ships with a .pkgdef file alongside the executable.

So let's say you have MyCustomApplication.exe; if this application is dependent on VS Isolated Shell, there will be a MyCustomApplication.pkgdef file alongside the executable.

With this little tidbit of information, we were able to write a PowerShell script that recursively iterates through a given folder and identifies executables that are dependent on VS Isolated Shell.
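
A minimal sketch of that approach (not the exact script from the engagement; the script name and root path are illustrative):

# Find-VsShellApps.ps1 (hypothetical name): list executables that ship
# a matching .pkgdef file alongside them.
param([string]$Root = "C:\Program Files (x86)")

Get-ChildItem -Path $Root -Filter *.exe -Recurse -ErrorAction SilentlyContinue |
    Where-Object {
        # MyApp.exe depends on VS Isolated Shell if MyApp.pkgdef sits next to it
        Test-Path ([System.IO.Path]::ChangeExtension($_.FullName, '.pkgdef'))
    } |
    Select-Object FullName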

Armed with a list of software that may be outdated or out of support, enterprise customers can more easily identify which applications need to be upgraded or removed because they depend on older versions of VS Isolated Shell.


Monday, September 18, 2017

Publishing and Managing your NPM Packages with Visual Studio Team Services (VSTS)

Writing packages in the JavaScript world has historically been troublesome; the community has wrestled with shifting and numerous standards for package development. In recent years, Node.js has breathed new life into the JavaScript community and provided it with the Node Package Manager, npm.

Anyone who has done any front-end development (or Node development, for that matter) knows full well the benefits of having a repository of packages that a development team can draw on. Some companies publish their common packages openly on npm to share with the community; many others are required by their security needs to keep their packages off the public npm registry.

With Visual Studio Team Services (VSTS), publishing npm packages has never been easier! As a case study, let's take an open source package I developed, CaravanDb, a concise tool that lets you easily create database migration scripts and apply them to a given environment.

The Goal

Store the source in Visual Studio Team Services (VSTS) Git. When a pull request is created against the dev branch, run the available unit tests and, on success, automatically merge it into dev. Once merged into dev, build the package and publish it to npm under the tag "@dev".

Let's Begin!


Step 1 - Configuring the Build

Assuming we have our application already in Visual Studio Team Services (VSTS) Git, we now need to define a build definition. This will control how our application is built and tested.


Notice that we have some options for the Agent Queue. For this we will use the 'Hosted' or 'Hosted Linux' agents; you can always use your own self-hosted agent as well.

Add Tasks

We need a few tasks to get us started. Let's begin with getting the latest source code; in this case we are using the Dev branch, since the latest beta version is published from Dev.


Now let's add a "Node Tool Installer" task. This allows us to configure the set of Node tooling being used by our build definition. In this case we need Node 8.x (since Caravan uses async/await).


The next step is to add an npm task to download and install the latest packages from our NPM registry.


Notice that we have chosen npm install. This npm command runs against the configured registry. We have a few options here: we can use .npmrc, npm's configuration file, to choose the registry and credentials to use.

We can also use a managed npm feed; more on that later. For now we will use .npmrc. Click Manage, near the Registries and Authentication section. This brings up the Add new npm connection dialog. Here we will add our npmjs.org credentials to publish to the public npm registry.


By doing this we are configuring our npm client to install directly from the public npm registry.

The next step is to "build" our source. In our case Caravan is built with Typescript and therefore needs to be transpiled before we run our unit tests and publish.


We are running a custom npm command called "Build". This points to the "build" script in our package.json, which runs the TypeScript compiler.
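
For reference, the relevant portion of package.json might look like this (a sketch; the exact script bodies are assumptions, not taken from CaravanDb):

{
  "scripts": {
    "build": "tsc",
    "test": "jasmine"
  }
}

The custom npm command in the task then simply invokes npm run build.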

The next step is to run our unit tests. We will again add an npm task that runs our Jasmine unit tests.


Now that we've run and hopefully passed our unit tests, we can safely publish our npm package with the tag "beta".

Note that the package must first be published manually, from the console, to your feed or to npmjs.org.

To do so we simply add another task that runs the publish command. In our case we are using a custom publish command; in fact, we are using the npm package called "publish". This package allows some flexibility in that it blocks all publishing until we've incremented our package version number in package.json.

For more info, feel free to read more at the package's npm page here.


Step 2 - Apply Branch Policies

At this stage we have a build definition, but what we want is for it to run automatically whenever a pull request is opened against the Dev branch. To do so, from the Branches screen, right-click the desired branch and go to Branch Policies.

Notice the many options we have to configure the policy by which pull requests are validated and merged.


In our case, we are going to focus on Build Validation's Build Policy options. Let's go ahead and add a build policy called Merges to Dev Policy. This policy configures how merges are handled as they are created. In our case we want it to trigger automatically when a pull request is created against Dev (from a feature branch, for example).

The policy requirement is that the build must succeed in order for the given pull request to be merged.

We've chosen to expire or invalidate the build when any subsequent changes are made.


Step 3 - Test it out!

Now that we have a build definition created and a policy applied to our dev branch, let's go ahead and create a feature branch that we will attempt to merge into Dev.

First make sure to move from master to dev.
C:\Users\Andrew\src\caravan-db (master) (caravan-db@0.3.10)
$ git branch feature-test-file
$ git checkout feature-test-file
Switched to branch 'feature-test-file'
Now create a test.txt file, add it to our staged changes, commit, and push to VSTS. We also want to increment the version number of our package by 0.0.1 (one way to do this is shown after the console output below).
$ git add test.txt
$ git commit -m "Added test file"
[feature-test-file 0412174] Added test file
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 test.txt
$ git push origin feature-test-file
Counting objects: 25, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (23/23), done.
Writing objects: 100% (25/25), 12.29 KiB | 0 bytes/s, done.
Total 25 (delta 13), reused 0 (delta 0)
remote: Analyzing objects... (25/25) (899 ms)
remote: Storing packfile... done (25 ms)
remote: Storing index... done (53 ms)
To ssh://caravan-db.visualstudio.com:22/_git/caravan-db
 * [new branch] feature-test-file -> feature-test-file
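The post doesn't show the version bump itself; one common way is npm's built-in version command, which updates package.json (and, in a git repository, creates a commit and tag by default):

$ npm version patch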
Now that we've committed a change to our feature branch, we can go to Visual Studio Team Services (VSTS) and create a pull request to merge our changes into Dev.

Once our build is kicked off, some time will pass, and our pull request will be merged automatically (provided the build passes).


Now our changes have been merged successfully and our package has been published to npm!

Wait, what about internal-only enterprise packages?

Of course, some packages are not for public use, for many reasons, security being one of the largest. Many enterprises are not able to openly publish their packages on the public registry. Visual Studio Team Services (VSTS) has a solution for this, available in its extensions marketplace!

Package Management introduces the concept of multiple feeds that can be used to organize and control access to your packages. If you're familiar with using packages from NuGet.org or modules from npmjs, you can think of those places each as a single feed. You may also have heard feeds called package repositories.

Now with Package Management you can create your own enterprise npm registry to publish your packages to. You can even require enterprise developers to pull from the enterprise-controlled feed of packages. This gives enterprises another level of control and security around which packages are made public and which packages are being used across the organization.

Great, so how do I set this up?

First go to the marketplace page for Package Management. Here we can "Get" this extension and install it into our instance of VSTS.

Once installed, we can now go to Builds > Packages.

Here we can create a new "feed".


Once a feed is created, there are several ways to make sure your developers are using the appropriate registry. They can simply add the given registry key to their local .npmrc file (not to be checked in, of course). They can use the vsts-npm-auth command-line tool to help configure their .npmrc file. Also available is the option to copy your credentials token directly into your .npmrc file (this is security sensitive, of course, as that token is equivalent to a password).
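
A hypothetical .npmrc pointing at a private VSTS feed might look like the following (the account name and feed name are placeholders):

registry=https://contoso.pkgs.visualstudio.com/_packaging/corporate-npm/npm/registry/
always-auth=true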


Now we need to make sure our build definition is pulling from our custom npm feed. Back in our build definition's npm tasks, we have the option to choose our registry and authentication options. Instead of choosing .npmrc, we can now choose the 'Registry I select here' option and pick our new corporate-npm feed.


Now our packages will publish to our enterprise configured and controlled npm registry.

Conclusion

Visual Studio Team Services (VSTS) is a fantastic and highly configurable tool! We can use it to automate the tedious tasks in our ALM process. It has broad support for many open source products and for tools like npm, and it allows us to establish policies that provide structure and governance over our ALM process.
