Monday, September 18, 2017

Publishing and Managing your NPM Packages with Visual Studio Team Services (VSTS)

Writing packages in the JavaScript world has historically been troublesome. The JavaScript community has struggled with numerous, shifting standards for package development. In recent years, Node.js has breathed new life into the JavaScript ecosystem and given the community the Node Package Manager, npm.

Anyone who has done any work with front-end development (or Node, for that matter) knows full well the benefits of having a repository of packages that a development team can draw on. Some companies publish their common packages openly on npm to share with the community. Many others are required by their security needs to keep their packages off the public npm registry.

With Visual Studio Team Services (VSTS), publishing npm packages has never been easier! As a case study, let's take an open source package that I developed, CaravanDb, a concise tool that lets you easily create database migration scripts and apply them to a given environment.

The Goal

Store the source in Visual Studio Team Services (VSTS) Git. When a pull request is created against the dev branch, run the available unit tests and, if they pass, automatically merge it into dev. When merged into dev, build the package and publish it to npm under the tag "@dev".

Let's Begin!


Step 1 - Configuring the Build

Assuming we have our application already in Visual Studio Team Services (VSTS) Git, we now need to define a build definition. This will control how our application is built and tested.

[screenshot]

[screenshot]

[screenshot]

Notice that we have some options for Agent Queue. For this we will use the 'Hosted' or 'Hosted Linux' agents, though you can always use your own private agent as well.

Add Tasks

We need a few tasks to get us started. Let's begin by getting the latest source code; in this case we are using the Dev branch to control publishing of the latest beta version.

[screenshot]

Now let's add a "Node Tool Installer" task. This allows us to configure the set of Node tooling being used by our build definition. In this case we need Node 8.x (since Caravan uses async/await).

[screenshot]

The next step is to add an npm task to download and install the latest packages from our NPM registry.

[screenshot]

[screenshot]

Notice above that we have chosen npm install. This npm command runs against the configured registry. We have a few options here: we can use .npmrc, npm's configuration file, to choose the registry and credentials to use.

We can also use a managed npm feed; more on that later. For now we will use .npmrc. Click on Manage, near the Registries and Authentication section. This brings you to the Add new npm connection dialog. Here we will add our npmjs.org credentials so we can publish to the public npm registry.
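For reference, a minimal .npmrc for this setup might look like the following; the token line is a placeholder, and the real credential should come from a secret store or environment variable, never source control:

```ini
; use the public npm registry
registry=https://registry.npmjs.org/

; auth token for publishing (placeholder; supply via environment, never commit a real token)
//registry.npmjs.org/:_authToken=${NPM_TOKEN}
```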

[screenshot]

By doing this we are configuring our npm client to install directly from the public npm registry.

The next step is to "build" our source. In our case, Caravan is written in TypeScript and therefore needs to be transpiled before we run our unit tests and publish.

[screenshot]

We are running a custom npm command called "Build". This points to the "build" script in our package.json, which runs the TypeScript compiler.
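As a rough sketch (Caravan's actual package.json may differ), the scripts section backing these custom npm commands might look like this, with "build" invoking the TypeScript compiler and "test" running the Jasmine suite mentioned below:

```json
{
  "scripts": {
    "build": "tsc",
    "test": "jasmine"
  }
}
```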

The next step is to run our unit tests. We will again add an npm task that runs our Jasmine unit tests.

[screenshot]

Now that we've run, and hopefully passed, our unit tests, we can safely publish our npm package with the tag "beta".

Note that the package first needs to be published manually from the console to your feed or to npmjs.org.

To do so we simply add another task that runs the publish command. In our case we are using a custom publish command; in fact, we are using the npm package called "publish". This package adds some flexibility in that it blocks all publishing until we've incremented our package version number in package.json.
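In plain npm terms, what this publish step amounts to is roughly the following pair of commands (shown for illustration; in the build, the "publish" package drives this and enforces the version bump):

```shell
# bump the patch version; registries reject re-publishing an existing version
npm version patch

# publish under the "beta" dist-tag so it does not replace "latest"
npm publish --tag beta
```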

For more info, see the package's npm page.

[screenshot]

Step 2 - Apply Branch Policies

At this stage we have a build definition. But what we want is for this build definition to run automatically when a pull request is opened against the Dev branch. To do so, from the Branches screen, right-click on the desired branch and go to Branch Policies.

Notice the many options we have to configure the policy in which pull requests are validated and merged.

[screenshot]

In our case, we are going to focus on the Build Policy options under Build Validation. Let's go ahead and add a build policy called Merges to Dev Policy. This policy configures how merges are handled as pull requests are created. In our case, we want it to trigger automatically when a pull request is created against Dev (from a feature branch, for example).

The policy requirement is that the build must succeed in order for the given pull request to be merged.

We've chosen to expire or invalidate the build when any subsequent changes are made.

[screenshot]

Step 3 - Test it out!

Now that we have a build definition created and a policy applied to our dev branch let's go ahead and create a feature branch that we will attempt to merge into Dev.

First, move off of master by creating and checking out a feature branch:

C:\Users\Andrew\src\caravan-db (master) (caravan-db@0.3.10)
$ git branch feature-test-file
$ git checkout feature-test-file
Switched to branch 'feature-test-file'

Now create a test.txt file, add it to our staged changes, commit, and push to VSTS. We also want to bump our package's patch version (e.g. with `npm version patch`).

$ git add test.txt
$ git commit -m "Added test file"
[feature-test-file 0412174] Added test file
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 test.txt
$ git push origin feature-test-file
Counting objects: 25, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (23/23), done.
Writing objects: 100% (25/25), 12.29 KiB | 0 bytes/s, done.
Total 25 (delta 13), reused 0 (delta 0)
remote: Analyzing objects... (25/25) (899 ms)
remote: Storing packfile... done (25 ms)
remote: Storing index... done (53 ms)
To ssh://caravan-db.visualstudio.com:22/_git/caravan-db
 * [new branch] feature-test-file -> feature-test-file

Now that we've committed a change to our feature branch, we can go to Visual Studio Team Services (VSTS) and create a pull request to merge our changes into Dev.

[screenshot]

[screenshot]
Once our build is kicked off, some time will pass, and (provided the build passes) our pull request will be automatically merged.

[screenshot]

Now our changes have been merged successfully and our package has been published to npm!

Wait, what about internal only enterprise packages?

Of course, some packages are not for public use, for many reasons, security being one of the largest. Many enterprises are not able to openly publish their packages on the public registry. Visual Studio Team Services (VSTS) has a solution for this, available in its extension marketplace!

Package Management introduces the concept of multiple feeds that can be used to organize and control access to your packages. If you're familiar with using packages from NuGet.org or modules from npmjs, you can think of those places each as a single feed. You may also have heard feeds called package repositories.

Now with Package Management you can create your own Enterprise npm registry to publish your packages to. You can even require enterprise developers to pull from the enterprise controlled feed of packages.
This gives enterprises another level of control and security around what packages are made public and what packages are being used across their enterprise.

Great, so how do I set this up?

First go to the marketplace page for Package Management. Here we can "Get" this extension and install it into our instance of VSTS.

Once installed, we can go to Builds > Packages.

[screenshot]
Here we can create a new "feed".

[screenshot]

Once a feed is created, there are several ways to make sure your developers are using the appropriate registry. They can simply add the given registry key to their local .npmrc file (not to be checked in, of course). They can use the vsts-npm-auth command-line tool to help configure their .npmrc file. Also available is the option to copy your credentials token directly into your .npmrc file (this is security sensitive, of course, as that token is equivalent to a password).
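For example, a developer's user-level .npmrc pointing at the new feed might look like this (the account and feed names here are placeholders for your own):

```ini
; VSTS package feed as the npm registry (placeholder account/feed names)
registry=https://myaccount.pkgs.visualstudio.com/_packaging/corporate-npm/npm/registry/

; always send credentials, even for GET requests
always-auth=true
```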

[screenshot]

Now we need to make sure our build definition is pulling from our custom npm feed. Back in our build definition's npm tasks, we have the option to choose our registry and authentication options. Instead of choosing .npmrc, we can now choose "Registry I select here" and pick our new corporate-npm feed.

[screenshot]

Now our packages will publish to our enterprise configured and controlled npm registry.

Conclusion

Visual Studio Team Services (VSTS) is a fantastic and highly configurable tool! We can use it to automate the tedious tasks in our ALM process. It has broad support for many open source products and build tools like npm. It also allows for establishing policies that help provide structure and governance over our ALM process.

Computer Vision Made Easy with Cognitive Services

Computer vision is truly amazing technology! It can be used to distill rich information from images that can help you breathe life into your applications. Microsoft's Cognitive Services - Computer Vision API gives developers the power to develop their software with the ability to truly see and comprehend the world around it.

When I come upon a new technology, I try to find an application for it in my life. I ask myself: how could I see myself using this? There are many ways we can use Computer Vision to make our day-to-day lives easier!

Context: Enter the family dog

One problem in my home we always struggle with is keeping our family dog out of our bedrooms. As much as we train him, he always sneaks in to nap on our beds. How can we write software to help us train our family friend?

Problem: I need to be alerted when our dog enters a room

This is where I applied my newfound knowledge of the Computer Vision API. What if, every time my dog entered our bedroom, I was alerted via SMS message?

Prerequisites
Assuming you have .NET Core installed, let's start by creating a new .NET Core Console Application:
From console:

dotnet new console
dotnet restore

This will create our initial application with a Program.cs and a dog-watcher.csproj.
In Program.cs I added the needed variables, which are passed into our application via environment variables. Other means of storing and working with configuration are also fine; I just happen to prefer environment variables.

static string IP_CAMERA_SNAPSHOT_URL = System.Environment.GetEnvironmentVariable("IP_CAMERA_SNAPSHOT_URL");

static string IP_CAMERA_USER = System.Environment.GetEnvironmentVariable("IP_CAMERA_USER");

static string IP_CAMERA_PASSWORD = System.Environment.GetEnvironmentVariable("IP_CAMERA_PASSWORD");

In this case I am using an IP-based camera that provides a REST API for fetching a snapshot from the camera over HTTP.

Now that we have our needed configuration collected, let's begin by creating logic to fetch our snapshots. Our IP camera uses HTTP GET with basic auth.

First, let's add the System.Net.Http package:

dotnet add package System.Net.Http

Now you can go ahead and use the System.Net.Http namespace.

Program.cs
static System.Net.Http.HttpClient Client = new System.Net.Http.HttpClient();

static byte[] GetSnapshotFromIPCamera(string url) {
    // The camera's snapshot endpoint uses HTTP basic auth
    Client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue(
        "Basic",
        System.Convert.ToBase64String(System.Text.Encoding.ASCII.GetBytes(IP_CAMERA_USER + ":" + IP_CAMERA_PASSWORD)));
    return Client.GetByteArrayAsync(url).Result;
}

Now that we have logic to collect images, let's create some classes that match with the response from the Computer Vision API. This is only mapping Tags since that is the data element we are working with. In this case tags will provide us with a list of descriptors that computer vision has identified and with what level of confidence it feels those descriptors are identified.

using Newtonsoft.Json;

namespace Vision {
    public class Response {
        [JsonProperty("tags")]
        public Tag[] Tags {get;set;}
    }
    public class Tag {
        [JsonProperty("name")]
        public string Name {get;set;}
        [JsonProperty("confidence")]
        public float Confidence {get;set;}
    }
}

Now let's post those images to Cognitive Services for analysis. Notice the use of the variables AZURE_VISION_URL and AZURE_VISION_KEY; these come directly from the Azure Portal after we've added our Cognitive Services subscription.

static string AZURE_VISION_URL = System.Environment.GetEnvironmentVariable("AZURE_VISION_URL");

static string AZURE_VISION_KEY = System.Environment.GetEnvironmentVariable("AZURE_VISION_KEY");
static Vision.Response GetVisionAnalysisResponse(byte[] image) {
    // Set the subscription key header, replacing any value from a previous call
    Client.DefaultRequestHeaders.Remove("Ocp-Apim-Subscription-Key");
    Client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", AZURE_VISION_KEY);
    using (var content = new System.Net.Http.ByteArrayContent(image))
    {
        content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/octet-stream");
        string res = Client.PostAsync(AZURE_VISION_URL + @"/analyze?visualFeatures=Tags,Description", content).Result.Content.ReadAsStringAsync().Result;
        return JsonConvert.DeserializeObject<Vision.Response>(res);
    }
}

Now let's wire up the loop that watches our room.

static int INTERVAL = String.IsNullOrEmpty(System.Environment.GetEnvironmentVariable("INTERVAL")) ? 3000 : Convert.ToInt32(System.Environment.GetEnvironmentVariable("INTERVAL"));

static void Main(string[] args)
{
    while(true) {
        try {
            var res = GetVisionAnalysisResponse(GetSnapshotFromIPCamera(IP_CAMERA_SNAPSHOT_URL));

            // Only trigger alert logic when CV is at least 90% sure a dog has been spotted!
            if (res.Tags.Any(t => t.Name.ToUpper().Equals("DOG") && t.Confidence > 0.90)) {
                // TODO: ADD LOGIC FOR WHEN DOG IS FOUND IN THE ROOM
            }

        } catch (Exception ex) {
            Console.WriteLine(ex);
        }
        Console.WriteLine("Waiting for next interval");
        System.Threading.Thread.Sleep(INTERVAL);
    }
}

With Twilio, sending SMS messages from C# is very easy. Simply set up a trial account and install the needed NuGet packages.

Note: trial Twilio accounts can only send SMS messages to verified devices.

dotnet add package Twilio

Program.cs
static string TWILIO_API_KEY = System.Environment.GetEnvironmentVariable("TWILIO_API_KEY");

static string TWILIO_API_TOKEN = System.Environment.GetEnvironmentVariable("TWILIO_API_TOKEN");

static string TWILIO_API_PHONE = System.Environment.GetEnvironmentVariable("TWILIO_API_PHONE");

// The phone number to alert (assumed to come from an environment variable, like the settings above)
static string TARGET_PHONE = System.Environment.GetEnvironmentVariable("TARGET_PHONE");

static void Main(string[] args)
{
    while(true) {
        try {
            var res = GetVisionAnalysisResponse(GetSnapshotFromIPCamera(IP_CAMERA_SNAPSHOT_URL));

            // Only trigger alert logic when CV is at least 90% sure a dog has been spotted!
            if (res.Tags.Any(t => t.Name.ToUpper().Equals("DOG") && t.Confidence > 0.90)) {
                Twilio.TwilioClient.Init(TWILIO_API_KEY, TWILIO_API_TOKEN);
                Twilio.Rest.Api.V2010.Account.MessageResource.Create(
                    new Twilio.Types.PhoneNumber(TARGET_PHONE), 
                    from: new Twilio.Types.PhoneNumber(TWILIO_API_PHONE), 
                    body: "It looks like Jack has made his way into your room!"
                );
            }

        } catch (Exception ex) {
            Console.WriteLine(ex);
        }
        Console.WriteLine("Waiting for next interval");
        System.Threading.Thread.Sleep(INTERVAL);
    }
}

Now we can simply adjust our camera for the ideal viewing angle and run our application.

Conclusion

Working with Microsoft's Cognitive Services is an easy way to build intelligent applications. Additionally, Azure provides a seamless hosting environment for applications that interact with Cognitive Services.

Collecting and Analyzing Dumps with .Net Core on Linux

Over the past 18 years or so of .NET's life span, an entire ecosystem of tools has been developed to assi...