Shawn Cicoria - CedarLogic

Perspectives and Observations on Technology


AngularJS intellisense NuGet package added

Using the work of John Bledsoe, a NuGet package has been added that takes a dependency on AngularJS.Core – and provides the angular.intellisense.js file to your project.


Referenced here:

This approach takes a per-project approach and puts the file into the /scripts directory of your project.

_references.js – auto-update

In addition, the package delivers a default _references.js file that allows for auto-update. If at any time you want to regenerate the references, open _references.js in Visual Studio and then choose “Update JavaScript References”.

/// <autosync enabled="true" />
/// <reference path="angular.js" />
/// <reference path="angular-mocks.js" />
/// <reference path="project.js" />

The source for the intellisense is here: 

Thanks to John Bledsoe for his hard work on this…

Posted: 02-27-2015 2:09 PM by cicorias | with no comments
Troubleshooting tool–Azure WebJob TCP Ping

I’ve dropped a quick Visual Studio solution containing a simple Azure WebJob, intended to run continuously, that opens a socket (a TCP ping) to a specific IP address and port – intended to aid in identifying any transient network errors over time.

Located on Github here:

Azure Web Job - TCP Ping


Visual Studio 2013 Solution

The solution file contains several things:

  1. JobRunnerShell - simple wrapper class that handles some of the basic management of the process/job for Azure Web Job Classes/Assemblies
  2. TcpPing - an implementation of a simple Azure Web Job - intended to be run continuously - that will do a basic TcpPing (open a socket) every second.
  3. SimpleTcpServer - a very basic Tcp Listener service that echoes back a simple string (1 line) in reverse.


The intent of the solution is to provide a very basic diagnostic tool that can be run continuously in an Azure WebSite deployment and will 'ping' (open a socket) to a server -- this is intended for testing availability of a server using IPv4 addresses (i.e., across Virtual Networks (VNETs) in Azure).

This can be used against any server listener service - as it only does a Socket.Open() - of course, the server should be resilient to these socket opens and immediate closes.
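The core of such a TCP ping – open a socket, time it, close immediately – can be sketched like this in Python (the actual project is C#; the names here are illustrative, and the demo target is a throwaway local listener so the sketch is self-contained):

```python
import socket
import time

def tcp_ping(host, port, timeout=5.0):
    """Open a TCP socket to host:port and close it immediately.
    Returns (success, elapsed_seconds)."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass  # connection succeeded; close right away
        return True, time.perf_counter() - start
    except OSError:
        return False, time.perf_counter() - start

# Demo against a local listener; a real run would target the server's IP and port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
ok, elapsed = tcp_ping("127.0.0.1", listener.getsockname()[1])
listener.close()
print(ok)  # True
```

A continuous job would call this in a loop (the real WebJob does so every second) and log each result.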

NOTE: Make sure you open the Windows Server firewall if using Windows Server as your 'host' for this.

Reporting is done to the Azure Web Jobs dashboard and is also visible via the Azure WebSite's streaming logs.

The easiest way is to just go to the Azure Portal or use Visual Studio Azure Explorer - which comes with the Azure Tools for Visual Studio.


Azure WebJob

The Azure WebJob - 'TcpPing' utilizes the NuGet packaging that 'lights up' the "Publish as Azure Webjob" tooling in Visual Studio. Otherwise, this can be deployed using alternate methods - see How to Deploy Azure WebJobs to Azure Websites


Within the TcpPing project, examine the "app.config" and you will find 'appSettings' and 'connectionStrings' that you should review. The connectionStrings are dependent upon your Azure Storage Account information, which you can retrieve from the Azure Portal.


The following settings are used to open the socket - adjust to your need.

  <add key="sqlIp" value=""/>
  <add key="sqlPort" value="8999"/>
Connection Strings

Make sure you put in your 'connectionString' - which comes from the Azure Portal for the Storage Account.

<!--WEBJOBS_RESTART_TIME  - please set in portal to seconds like 60.-->
<!--WEBJOBS_STOPPED   setting to 1 means stopped-->
<!-- The format of the connection string is "DefaultEndpointsProtocol=https;AccountName=NAME;AccountKey=KEY" -->
<!-- For local execution, the value can be set either in this config file or through environment variables -->
<add name="AzureWebJobsDashboard"
 connectionString="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>" />
<add name="AzureWebJobsStorage"
 connectionString="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>" />
Simple TCP Server

The solution also contains a simple TCP Server that is intended to be installed within the Virtual Network - for example in an IaaS instance - that you are attempting to validate connectivity and continuous reporting on.



There is only one setting in the app.config under appSettings. If this setting is absent, the Simple TCP Server listens on IPv4 addresses only (as it does in all cases) and defaults to port 8999.

 <add key="serverPort" value="8999"/>
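The reverse-echo behavior is simple to sketch; here is a Python stand-in for the C# listener (illustrative names, and port 0 instead of 8999 so the demo picks a free port):

```python
import socket
import threading

def handle(conn):
    # Read one line and echo it back reversed, as the SimpleTcpServer does.
    with conn:
        line = conn.makefile("r").readline().rstrip("\n")
        conn.sendall((line[::-1] + "\n").encode())

def serve_once(listener):
    conn, _ = listener.accept()
    handle(conn)

# IPv4 listener; the real server reads serverPort from app.config (default 8999).
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()

# Client side: send a line, read the reversed echo.
with socket.create_connection(("127.0.0.1", listener.getsockname()[1])) as c:
    c.sendall(b"hello\n")
    reply = c.makefile("r").readline().strip()
print(reply)  # olleh
```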
Posted: 02-15-2015 4:27 PM by cicorias | with no comments
Azure Resource Manager–Creating an IaaS VM within a VNET

NOTE: Azure Resource Manager is in Preview. Thus, anything posted here may change. However, the approach for identifying what resources are available for update and registered for Subscriptions should be the same.

Here are the prior posts:

For this walkthrough I’m going to build up a Linux VM instance off of a VHD that I have within a storage account. I use the ARM REST API calls directly, bypassing the Templates that are coming to ARM.

Azure Resource Manager Templates

The REST API calls that I’m illustrating below are NOT using Azure Resource Manager (ARM) Templates. You can review some of the articles below for more information on ARM Templates.


Currently, ARM Templates are in preview and, as of this writing, only 3 templates are available. Those are listed in the tooling and in the links above.

ARM Templates Basics

ARM Templates provide a template language that establishes the dependencies amongst the composition of supporting resources. In addition, the backend to ARM Templates provides the management and control over provisioning all these dependencies upon submission of the ARM Template provision request. Ultimately, it is built upon ARM – which for this post is accessible via the ARM REST API calls.

Creating a VM using ARM REST API – not using Templates

This blog post is NOT about ARM Templates. I cover the underlying ARM REST API directly and create the composition through a series of client-side REST API calls.

Preparation steps:


$blob1 = Start-AzureStorageBlobCopy -srcUri $srcUri `
	-SrcContext $srcContext `
	-DestContainer $containerName `
	-DestBlob "testcopy1.vhd" `
	-DestContext $destContext 



Resource Manager Composition

If you examine an existing VM via the REST API you will see within the JSON response several sections contained within the properties JSON object.

Any of these, for example ‘domainName’, ‘networkProfile/virtualNetworks’, ‘storageProfile/operatingSystemDisk/storageAccount’, are additional resources that you must compose or create prior to making the REST API call to create (PUT) the VM that you want to provision. If you refer back to the prior posts that list the /providers for a subscription, you will find providers as follows:

  • Networks - Microsoft.ClassicNetwork – with resource types of ‘virtualNetworks’, ‘reservedIps’, ‘quotas’, and ‘gatewaySupportedDevices’
  • Domain Name - Microsoft.ClassicCompute – with resource providers of ‘domainNames’, ‘virtualMachines’, ‘capabilities’, ‘quotas’, etc.


You will see ‘storageAccount’ listed in the GET response for each disk – OS and data disks – that are used by the existing VM. Note that there is an ‘id’ property. That’s the ‘id’ or reference that will be used in the final PUT request at the end of the post for each of the associated resources.
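Under that convention, collecting every nested ‘id’ reference from a GET response is a small recursive walk; a Python sketch against an abbreviated, made-up response (real responses are much larger):

```python
import json

def collect_ids(node, found=None):
    """Recursively collect every 'id' property from a parsed ARM JSON response."""
    if found is None:
        found = []
    if isinstance(node, dict):
        for key, value in node.items():
            if key == "id" and isinstance(value, str):
                found.append(value)
            else:
                collect_ids(value, found)
    elif isinstance(node, list):
        for item in node:
            collect_ids(item, found)
    return found

# Abbreviated example shaped like the VM GET response discussed above.
vm = json.loads("""{
  "properties": {
    "domainName": {"id": "/subscriptions/x/.../domainNames/d1"},
    "storageProfile": {"operatingSystemDisk": {"storageAccount": {"id": "/subscriptions/x/.../storageAccounts/s1"}}}
  },
  "id": "/subscriptions/x/.../virtualMachines/vm1"
}""")
print(collect_ids(vm))
```

Each collected ‘id’ is a resource you either reuse or must create before the final PUT.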

Prior Posts

In prior posts, I cover the creation of a Resource Group and a Storage Account.  Here is a screen shot of the Resource Group creation using Postman (I won’t repeat the Storage Account creation).


Create Domain Name

The domain name represents the ‘cloud service’ – which essentially represents the wrapper and associated public IP address that the VM, when created, will be behind – think firewall. In the new portal these show as Domains (thus that is what ARM uses). In the current production portal they appear as Cloud Services – a term that anybody doing Worker and Web Roles in PaaS is quite familiar with.


The PUT request contains a JSON body that is quite simple.


Content-Type: application/json
Authorization: Bearer: <token>

{
    "properties": {
        "label": "scicoriacentosnew",
        "hostName": ""
    },
    "name": "scicoriacentosnew",
    "type": "Microsoft.ClassicCompute/domainNames",
    "location": "eastus2"
}


Create Domain Response

For this call, the HTTP response comes back as ‘201 – Created’. You’ll see in the other requests, as they are longer running, that you get a ‘202 – Accepted’ – and with that response, headers from which you can obtain the operation request ID and ask Azure for the status of the request. That is key to identifying any issues beyond the simple serialization issues for bad JSON PUT payloads.

Create Virtual Network

For a VNET (virtual network), I’m going to create a VNET within my ‘demo2’ resource group – the JSON below should be fairly self-explanatory (that’s what’s nice about JSON and REST).


Content-Type: application/json
Authorization: Bearer <token>

{
    "properties": {
        "addressSpace": {
            "addressPrefixes": [
                "10.1.0.0/16"
            ]
        },
        "subnets": [
            {
                "name": "Subnet-1",
                "addressPrefix": "10.1.0.0/24"
            },
            {
                "name": "Subnet-2",
                "addressPrefix": "10.1.1.0/24"
            }
        ]
    },
    "id": "/subscriptions/<subscriptionId>/resourceGroups/demo2/providers/Microsoft.ClassicNetwork/virtualNetworks/scicoriacentosnew",
    "name": "scicoriacentosnew",
    "type": "Microsoft.ClassicNetwork/virtualNetworks",
    "location": "eastus2"
}


For those that aren’t familiar, the VNET will be created covering a CIDR range of addresses 10.1.*.*/16 – and, in addition, within that top-level range, I’ve created two subnets covering 10.1.0.*/24 & 10.1.1.*/24.

Additional subnets can be specified within the JSON array [] if needed. Validation occurs at submission and provisioning time – so you need to check for a ‘202 – Accepted’ response and, with that operation ID, validate status. I could’ve also specified additional ranges for the address prefixes – just as you can do in the Azure Management portal.
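Before submitting the PUT, the prefixes can be sanity-checked locally; a small Python sketch using the standard ipaddress module with the ranges described above:

```python
import ipaddress

# The VNET address space and subnets from this walkthrough.
address_space = ipaddress.ip_network("10.1.0.0/16")
subnets = [ipaddress.ip_network("10.1.0.0/24"),
           ipaddress.ip_network("10.1.1.0/24")]

# Every subnet must fall inside the VNET's address space, and they must not overlap.
assert all(s.subnet_of(address_space) for s in subnets)
assert not subnets[0].overlaps(subnets[1])
print("subnets valid")
```

Azure performs the same validation server-side, but catching a bad prefix locally saves a submit-and-poll round trip.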



Create Virtual Machine

Now that we have the following, we’re ready to issue an ARM REST API PUT request to create the virtual machine:

  1. Storage Account with a VHD ready to use
  2. Resource Group
  3. Domain Name
  4. Virtual Network


This one is rather lengthy. You should note the ‘nested’ referenced resources that were created in the prior steps. Again, once submitted with no deserialization issues, URI issues, etc., you should get back a ‘202 – Accepted’ – from that response you have to check the Operation Status using the provided status ID:

{
    "properties": {
        "hardwareProfile": {
            "platformGuestAgent": true,
            "size": "Basic_A2",
            "deploymentName": "scicoriacentosnew",
            "deploymentLabel": "scicoriacentosnew"
        },
        "domainName": {
            "id": "/subscriptions/<subscriptionId>/resourceGroups/demo2/providers/Microsoft.ClassicCompute/domainNames/scicoriacentosnew",
            "name": "scicoriacentosnew",
            "type": "Microsoft.ClassicCompute/domainNames"
        },
        "storageProfile": {
            "operatingSystemDisk": {
                "diskName": "scicoriacentosnew-os-20150212",
                "caching": "ReadWrite",
                "operatingSystem": "Linux",
                "ioType": "Standard",
                //"sourceImageName": "5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-65-20140926",
                "vhdUri": "",
                "storageAccount": {
                    "id": "/subscriptions/<subscriptionId>/resourceGroups/demo/providers/Microsoft.ClassicStorage/storageAccounts/scicoriademo",
                    "name": "scicoriademo",
                    "type": "Microsoft.ClassicStorage/storageAccounts"
                }
            }
        },
        "networkProfile": {
            "inputEndpoints": [
                {
                    "endpointName": "SSH",
                    "privatePort": 22,
                    "publicPort": 22,
                    "protocol": "tcp",
                    "enableDirectServerReturn": false
                }
            ],
            "virtualNetwork": {
                "subnetNames": [
                    ""
                ],
                "id": "/subscriptions/<subscriptionId>/resourceGroups/demo/providers/Microsoft.ClassicNetwork/virtualNetworks/scicoriacentos",
                "name": "scicoriacentos",
                "type": "Microsoft.ClassicNetwork/virtualNetworks"
            }
        }
    },
    "location": "eastus2",
    "name": "scicoriacentosnew"
}


If all is OK from a formatting and basic validation standpoint, you should see a ‘202 – Accepted’ – from that, obtain the operation ID and use the API call to check that operation’s status.
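The 202-then-poll pattern can be sketched generically; in this Python sketch the actual HTTP GET against the operation-status URI is abstracted as a callable so only the retry logic is shown (illustrative names, not the real API surface):

```python
import time

def wait_for_operation(fetch_status, poll_seconds=0.0, max_polls=10):
    """Poll an ARM operation until it leaves 'InProgress'.
    fetch_status is any callable returning 'InProgress', 'Succeeded', or 'Failed'."""
    for _ in range(max_polls):
        status = fetch_status()
        if status != "InProgress":
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("operation still InProgress after max_polls")

# Simulated operation: in progress twice, then succeeds.
responses = iter(["InProgress", "InProgress", "Succeeded"])
print(wait_for_operation(lambda: next(responses)))  # Succeeded
```

In a real client, fetch_status would GET the operation-status URI taken from the 202 response headers.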


Checking Operation Status


Take a look at the documentation for the structure of that call.

A Succeeded Operation


An InProgress Operation



An Error Operation Status


Azure Resource Manager – Creating Storage Accounts

NOTE: Azure Resource Manager is in Preview. Thus, anything posted here may change. However, the approach for identifying what resources are available, updatable, and registered for Subscriptions should be the same.

In a prior post I walked through adding an SSL certificate then associating that certificate with an Azure Website. While some sample C# code was provided, this post works entirely via a REST tool – Fiddler or Postman suffices for this.

In the last post I walked through adding a VNET. To clean up, remember that with REST an HTTP DELETE is all you need…

Getting Available Resource Providers

Again, from the prior posts, if you want to see the list of resource providers for a subscription, issue an authenticated call to the /providers resource:

I’ve glossed over authentication quite a bit in the prior posts; take a look here: (it uses the ADAL library for managed code).  Again, you can do the calls via REST as well – I’ll try to cover that in a future post.

Creating a Storage Account

Again, the best way to ‘learn’ the representation of these resources is to review an existing one.

Here, issuing a GET request to the following gives me the resource properties.


{
    "properties": {
        "provisioningState": "Succeeded",
        "status": "Created",
        "endpoints": [],
        "accountType": "Standard-LRS",
        "geoPrimaryRegion": "East US",
        "statusOfPrimaryRegion": "Available",
        "geoSecondaryRegion": "",
        "statusOfSecondaryRegion": "",
        "creationTime": "2014-12-19T19:18:59Z"
    },
    "id": "/subscriptions/<subscriptionId>/resourceGroups/somegroup/providers/Microsoft.ClassicStorage/storageAccounts/<accountName>",
    "name": "<accountName>",
    "type": "Microsoft.ClassicStorage/storageAccounts",
    "location": "eastus2"
}

Creating a Locally Redundant Storage Account (LRS)

OK, we trim the JSON properties back to just what we need to create. Note that when you’re in the portal, there really aren’t too many options to set other than the Name and the Pricing level. Same for the JSON properties here.


Authorization: Bearer <token>
Content-Type: application/json

{
    "properties": {
        "accountType": "Standard-LRS"
    },
    "name": "<resourceName>",
    "type": "Microsoft.ClassicStorage/storageAccounts",
    "location": "eastus2"
}
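Building that trimmed request body programmatically keeps the placeholders in one place; a Python sketch (the helper name is mine):

```python
import json

def storage_account_body(name, location="eastus2", account_type="Standard-LRS"):
    """Shape the PUT body for a classic storage account, per the trimmed JSON above."""
    return {
        "properties": {"accountType": account_type},
        "name": name,
        "type": "Microsoft.ClassicStorage/storageAccounts",
        "location": location,
    }

body = json.dumps(storage_account_body("<resourceName>"))
print(body)
```

The serialized string would become the PUT body, with Content-Type: application/json and the bearer token on the request.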


Here’s the screenshot from Postman – note the 202 – Accepted



Azure Resource Manager– Creating a Resource Group and a VNET

NOTE: Azure Resource Manager is in Preview. Thus, anything posted here may change. However, the approach for identifying what resources are available, updatable, and registered for Subscriptions should be the same.

In a prior post I walked through adding an SSL certificate then associating that certificate with an Azure Website. While some sample C# code was provided, this post works entirely via a REST tool – Fiddler or Postman suffices for this.

Getting a Token

I’m not going to go into the token acquisition process here. The easiest way to obtain a token for this walkthrough is to just open a session to the portal, then view the network traffic as you open up some Blades – for example, open up the “Resource Group” blade – and look for an “Authorization” header.  It should show up as “Bearer ….”.  It’s a JWT, which you can paste into a decoder site if you’d like (WARNING: you’re giving your token to a 3rd party to decipher).

If you want to decode this yourself, note that the JWT token is presented in 3 parts, separated by a ‘.’ and each part base64 encoded. The parts are header, payload, and signature.
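You can decode the payload yourself without handing the token to anyone; a Python sketch (the token here is locally constructed and unsigned, just to show the mechanics – JWT segments use URL-safe base64 with the padding stripped, so padding must be restored before decoding):

```python
import base64
import json

def decode_jwt_part(part):
    """Base64url-decode one JWT segment, restoring the stripped '=' padding."""
    padded = part + "=" * (-len(part) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# A locally constructed (unsigned) token just to demonstrate; real tokens come from AAD.
header = base64.urlsafe_b64encode(b'{"alg":"none","typ":"JWT"}').rstrip(b"=").decode()
payload = base64.urlsafe_b64encode(b'{"aud":"https://management.core.windows.net/"}').rstrip(b"=").decode()
token = f"{header}.{payload}."

print(decode_jwt_part(token.split(".")[1])["aud"])
```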

Getting your subscription ID

In the prior post, and in the sample code here: , there are helper classes to list subscription IDs for a login. I’m not reviewing that here.

You can log on to the portal and then go to Subscriptions. Click on the subscription that you will be using and you’ll see a lowercase GUID for that subscription.

Available Providers and Capabilities

Not everything is available now, but you can do a GET request as follows to see what sub-capabilities within each Resource Provider are available.


Authorization: Bearer <token>

You can take a look at the results from one of my subscriptions here:

Resource Provider – Microsoft.ClassicNetwork

From the response, let’s look at the Virtual Network provider and its manageable resources:

{
    "id": "/subscriptions/<subscriptionId>/providers/Microsoft.ClassicNetwork",
    "namespace": "Microsoft.ClassicNetwork",
    "resourceTypes": [
        {
            "resourceType": "virtualNetworks",
            "locations": [
                "East US",
                "East US 2",
                "West US",
                "North Central US (Stage)"
            ],
            "apiVersions": []
        },
        {
            "resourceType": "reservedIps",
            "locations": [
                "East Asia"
            ],
            "apiVersions": []
        },
        {
            "resourceType": "quotas",
            "locations": [],
            "apiVersions": []
        },
        {
            "resourceType": "gatewaySupportedDevices",
            "locations": [],
            "apiVersions": []
        }
    ],
    "registrationState": "Registered"
}


Within the ‘resourceTypes’ array, we can see that ‘virtualNetworks’ is available.

Updating – first review an existing VNET

Resource Manager is in early preview; thus, documentation is very limited. However, this is REST – so, the conventions of REST (for the HTTP verbs) and the shape of the JSON for updating can be somewhat determined through reviewing existing resources.

{
    "value": [
        {
            "properties": {
                "provisioningState": "Succeeded",
                "status": "Created",
                "siteId": "<siteId>",
                "inUse": false,
                "addressSpace": {
                    "addressPrefixes": [
                        ""
                    ]
                },
                "subnets": [
                    {
                        "name": "Subnet-1",
                        "addressPrefix": ""
                    }
                ]
            },
            "id": "/subscriptions/<subscriptionId>/resourceGroups/demo2/providers/Microsoft.ClassicNetwork/virtualNetworks/myVnet",
            "name": "myVnet",
            "type": "Microsoft.ClassicNetwork/virtualNetworks",
            "location": "eastus2"
        }
    ]
}


From the above, you can see the shape of the VNET resource, and also take note of the ‘id’ property as it illustrates the existence of the VNET within the resource group – here ‘demo2’.  Also note that the URI has the Resource name on the URL itself – this will be important when we PUT a new VNET.

Create a Resource Group

Let’s first create a new Resource Group using a PUT


Content-Type: application/json
Authorization:Bearer <token>

{
    "name": "demo2",
    "location": "eastus2"
}


This should give you an HTTP ‘201 – Created’ response.



Creating a VNET

As you saw above, the shape of the VNET resource has a set of properties.  The REST call is shaped as follows:

PUT https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/demo2/providers/Microsoft.ClassicNetwork/virtualNetworks/myVnet?api-version=2014-06-01

Authorization: Bearer <token>
Content-Type: application/json
{
    "name": "myVnet",
    "type": "Microsoft.ClassicNetwork/virtualNetworks",
    "location": "eastus2",
    "properties": {
        "addressSpace": {
            "addressPrefixes": [
                ""
            ]
        },
        "subnets": [
            {
                "name": "Subnet-1",
                "addressPrefix": ""
            }
        ]
    }
}


For this VNET – called ‘myVnet’ under the ‘demo2’ resource group – I’ll be using an address space (in CIDR format) along with defining a single subnet – called ‘Subnet-1’ – that is a segment of that address space.

Again, once this runs, you receive an HTTP ‘201 – Created’ if all is OK.



Now, you can switch back to the Portal to take a look at your VNET and review the settings.




NOTE: I want to stress again that not all aspects of each service within Azure are available today through Resource Manager. It is still in preview, and as capabilities are added they will appear under the various /providers that are associated with your subscriptions.

Registered Resource Providers

One last note: review the /providers result and identify IF your subscription is even “Registered” for that resource provider. For my subscription as an example, the status is as follows:

GET https://management.azure.com/subscriptions/<subscriptionId>/providers?api-version=2015-01-01



Not registered:


Resource Provider Registration

For registering a subscription with a Resource provider, check the Azure Resource Manager REST API Reference:

Azure Resource Manager – Adding and Assigning Certificates to a Website


This post is going to cover working with Azure Resource Manager using the REST interfaces [1] and specifically the “Microsoft.Web/sites” and “Microsoft.Web/certificates” providers.

You can review the list of Resource Providers by issuing an authenticated REST call to the Uri below, replacing {subscriptionId} with your subscription id.

https://management.azure.com/subscriptions/{subscriptionId}/providers?api-version=2015-01-01 [2]

For this sample, I’m going to make use of the Active Directory Authentication Library for .NET – primarily to make the REST calls for acquiring an Access Token [3]. You don’t have to use these libraries, but for this sample and to abbreviate the token dance with AAD, I’m using them.

It’s important to note that Certificates are now part of the Resource Group itself, and can be assigned to multiple web sites within that Resource Group.

Basic Steps

The basic steps for adding a certificate and assigning it to an Azure Website are as follows.

Note: All of these preparation steps can be done via script or REST calls as well; this sample just demonstrates certificate upload and assignment to an existing Azure Web Site that already has custom DNS names assigned to it. You will also incur additional charges for the custom domain and SSL, as warned during the portal method – you will not see warnings via code. Please review pricing information to understand the impact.


1. Using an AAD credential that is part of the AAD Domain that the Resource Group is part of – for this example, I add a credential for the AAD user store.

2. Creation of an Application in the AAD Domain for the Resource Group

3. Assigning permissions to the credential for the Resource Group via RBAC

4. Have a Web site running already with custom DNS names already assigned; this will be in a Resource Group that is protected by Role Based Access Control (RBAC)

5. Creation of a SSL Certificate – for this I used ‘makecert.exe’ and created a wildcard certificate

Uploading and Assigning Certificate

6. Make a call to the /certificates resource provider to ‘ADD (PUT)’ a new Certificate to the Resource Group

7. Make a call to the /sites resource provider to ‘Update (PUT)’ the assignment of the certificate to the DNS name

And that’s it. So, for steps 1 – 5, let’s review some of the setup steps:

1. Adding an AAD Credential – for this sample, since we’re going to use Username / Password authentication to acquire a token, I’ll need the Password. This will require an initial sign-on. The easiest way to do this is, once you create a user, just log in via a private browser session with that credential to the portal.

2. Creation of an Application in your AAD domain – same one where the credential is.

1) Sign in to the Azure management portal

2) Click on Active Directory in the left hand nav.

3) Click the directory tenant where you wish to register the sample application.

4) Click the Applications tab.

5) In the drawer, click Add.

6) Click "Add an application my organization is developing".

7) Enter a friendly name for the application, for example "AADDemoCertificates", select "Native Client Application", and click next.

8) For the sign-on URL, enter the base URL for the sample, you’ll need this for the sample later: https://localhost:8080/login

After done, we need to retrieve the ClientID; for that app:

9) In the Azure portal, click configure

10) Retrieve the ClientID and save it

3. Next, in the “New Portal” - we need to assign the user permissions to the respective Resource Group

1) Click Browse

2) Find “Resource Groups”

3) Locate the Resource Group that the Azure Web Site is in that we will be assigning the certificate to.

4) In the “Blade” go to the bottom tile labeled “Access” and click on “Owner”

5) Another blade opens showing any existing Owners

6) Click on “+ Add”

7) You should see existing Users in the domain; find the User or enter the ‘user@domain’ in the Search box

8) Select that user, then click “Select” at the bottom of the blade – this will add that user to the group

4. Looking at your Web site in Azure – ensure and jot down:

a. Name of the Resource Group (should be same as above step)

b. Name of the Site

c. DNS names – add a custom DNS domain – see the Azure portal for instructions

i. This is under “Custom Domains and SSL” – you have to choose a “Basic” plan or higher for Custom Domains and SSL

5. For making a self-signed cert, these are the commands I used:

REM make the root
makecert -n "CN=Development Test Authority" -cy authority -a sha1 -sv "DevelopmentTestAuthority.pvk" -r "DevelopmentTestAuthority.cer"

REM makecert -n "CN=*" -ic "DevelopmentTestAuthority.cer" -iv "DevelopmentTestAuthority.pvk" -a sha1 -sky exchange -pe -sv "wildcard.cicoriadevnet.pvk" "wildcard.cicoriadevnet.cer"

makecert -n "CN=*" -ic "DevelopmentTestAuthority.cer" -iv "DevelopmentTestAuthority.pvk" -a sha1 -sky exchange -pe -sv "wildcard.cicoriadevnet.pvk" "wildcard.cicoriadevnet.cer"

pvk2pfx -pvk "wildcard.cicoriadevnet.pvk" -spc "wildcard.cicoriadevnet.cer" -pfx "wildcard.cicoriadevnet.pfx" -pi pass@word1

Sample Code

For the sample code, you’ll see a call via the ADAL library to use a Username & Password for obtaining an AuthenticationResult object – which contains an AccessToken. Note the resource URI that the token is generated for.

Adding a Certificate via REST

The sample code makes use of JSON.NET and anonymous objects for creating the PUT HTTP request bodies. Here is what the shape of the PUT request looks like for ‘adding’ a certificate to a Resource Group.



Content-Type: application/json
Authorization: Bearer {accessToken}
Content-Length: 3675

{
  "name": "{resourceName}",
  "type": "Microsoft.Web/certificates",
  "location": "{location}",
  "properties" : {
    "pfxBlob": "{base64ByteArrayOfPfx}",
    "password": "pass@word1"
  }
}

Replacement Parameters

subscriptionId – this is the subscription that the Resource Group (and its web site) is contained within

resourceGroupName – this is the name of the resource group

resourceName – this is what the friendly name of the certificate WILL be – this is a PUT request, but the resourceName must be on the Uri in addition to the json request body – and they must match

accessToken – this is the token obtained from the AADL library call

location – for my sample, I used “East US” – which is the Azure Region. Note that not all Resource Providers are available or registered for your subscription in all regions. Review the response from the /providers REST call prior in this post to see what is available for each region, along with the ‘api-version’ that is supported.

base64ByteArrayOfPfx – this is the pfx file as bytes, then converted to base64

password – this is the password of the pfx file that was used during pfx creation
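Producing the base64ByteArrayOfPfx value is just reading the pfx bytes and base64-encoding them; a Python sketch using a stand-in file rather than a real pfx:

```python
import base64
import json
import tempfile

# Stand-in for wildcard.cicoriadevnet.pfx; real code would read the actual pfx file.
with tempfile.NamedTemporaryFile(suffix=".pfx", delete=False) as f:
    f.write(b"\x30\x82\x00\x10fake-pfx-bytes")
    pfx_path = f.name

with open(pfx_path, "rb") as f:
    pfx_b64 = base64.b64encode(f.read()).decode("ascii")

# Slot the encoded blob into the certificate PUT body shape shown above.
body = {
    "name": "{resourceName}",
    "type": "Microsoft.Web/certificates",
    "location": "{location}",
    "properties": {"pfxBlob": pfx_b64, "password": "pass@word1"},
}
print(len(pfx_b64) > 0)
```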


The HTTP response code is a 200, with a content body that dumps out the certificate information. I’ve abbreviated most of the response in the following. Make note of the thumbprint if you haven’t already, as this is what the assignment will use, along with the Site name, to bind the SSL certificate to the web site.

{
    "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/certificates/{resourceName}",
    "name": "{resourceName}",
    "type": "Microsoft.Web/certificates",
    "location": "{location}",
    "properties": {
        "friendlyName": "",
        "subjectName": "*",
        "hostNames": [],
        "pfxBlob": null,
        "siteName": null,
        "selfLink": null,
        "issuer": "Development Test Authority",
        "issueDate": "2015-01-27T22:34:57+00:00",
        "expirationDate": "2039-12-31T23:59:59+00:00",
        "thumbprint": "DEA5DED6142EDECCDF952F4D431ED772F01D22D1"
    }
}

Assigning a Certificate via REST

For the assignment, we make use of the Resource Manager “Microsoft.Web/sites”.



Content-Type: application/json
Authorization: Bearer {accessToken}
Content-Length: 567

{
  "name": "{resourceName}",
  "type": "Microsoft.Web/sites",
  "location": "{location}",
  "properties" : {
    "hostNameSslStates": [
      {
        "name": "",
        "sslState": 1,
        "thumbprint": "DEA5DED6142EDECCDF952F4D431ED772F01D22D1",
        "toUpdate": 1
      }
    ]
  }
}

Replacement Parameters

subscriptionId – this is the subscription that the Resource Group (and its web site) is contained within

resourceGroupName – this is the name of the resource group

resourceName – for this call, this is the name of the web site – this is a PUT request, but the resourceName must be on the Uri in addition to the json request body – and they must match

accessToken – this is the token obtained from the AADL library call

location – for my sample, I used “East US” – which is the Azure Region. Note that not all Resource Providers are available or registered for your subscription in all regions. Review the response from the /providers REST call prior in this post to see what is available for each region, along with the ‘api-version’ that is supported.

Thumbprint – this is the thumbprint known for that certificate in Azure – it should always match the local value, but if you have any issues assigning, it must match what Azure knows in /certificates.


The Response should show you the chosen site DNS name with the thumbprint associated, similar to the following:

{
    "hostNameSslStates": [
        {
            "name": "",
            "sslState": 1,
            "ipBasedSslResult": null,
            "virtualIP": null,
            "thumbprint": "DEA5DED6142EDECCDF952F4D431ED772F01D22D1",
            "toUpdate": null,
            "toUpdateIpBasedSsl": null,
            "ipBasedSslState": 0,
            "hostType": 0
        }
    ]
}



Sample Solution and Source Code

The source code is located on github: - or direct

[1] Azure Resource Manager REST API Reference

[2] Listing All Resource Providers

[3] Active Directory Authentication Library for .NET – github

Running ASP.NET 5 applications in Linux Containers with Docker

Ahmet (@ahmetalpbalkan) posted an official walkthrough on getting ASP.NET 5 running under Docker in Linux.  This takes you from a Docker client running on a Linux or OS X machine against a Docker image in Azure…

Take a look -

PDF Search Handler fix

Adobe keeps breaking my PDF search.  WHY WHY WHY…


reg ADD HKCR\.pdf\PersistentHandler /d {1AA9BF05-9A97-48c1-BA28-D9DCE795E93C} /f

Running the AspNet vNext MVC sample direct from Docker

In the post Using the Docker client from Windows and getting AspNet vNext running in a Docker Container, you had to step through downloading Go, building docker.exe, etc.

I’ve updated the GitHub repo adding the hacked version of the Docker.exe along with their LICENSE.

And the whole thing has been published to the Docker hub registry.

So, all you need to do is run the following (assuming you have a Docker host running):

docker run -d -t -p 8080:5004 cicorias/dockermvcsample2

This will get you a running AspNet vNext on Linux and a Sample MVC app.

Note that the temporary workaround in this approach is to TAR all the files first – and use that archive in the Dockerfile.

Posted: 11-24-2014 8:51 AM by cicorias | with no comments
Using the Docker client from Windows and getting AspNet vNext running in a Docker Container

Update: 2015-01-15 – Note that Ahmet has posted an official Docker walkthrough for ASP.NET 5

Update: 2014-11-24 – Added links to HOWTO build Docker on Windows from Ahmet.

As Docker progresses as a native application on Windows, and ASP.NET progresses toward running on Linux directly from Microsoft, I wanted to see how far I could get using what’s out there today. While there are some challenges, a couple of simple steps get you around the initial blockers.

There are known issues in the Docker Windows implementation [Github pull request 9113] – specifically, the use of path separators: Linux uses ‘/’ while Windows uses ‘\’. Go has a constant for this, but the Docker client and server are not handling the platform translation just yet. The trick is to TAR up your directory first, then use the ADD Dockerfile command, which handles TAR files natively.

The other key change is downgrading the VERSION number so the client matches the Boot2Docker server version; other than the version number itself, I didn’t see any API changes that would impact this.

Here’s an image of it running on a Docker host container (running on Hyper-V on Windows 8.1).  Getting here was a bit challenging, but worth it.

github repo here:




Here are the general steps that I followed:

Follow boot2docker on Hyper-V setup steps

The post here walks through getting Docker running via Boot2Docker on Hyper-V. Again, all you need is a Docker host, but if you want to stay all Hyper-V, this is one way to do it.

Modify the Docker client version to match the server’s ‘1.15’ (HACK)

Ahmet goes through the HOWTO on building the Docker client – here:

GO is from here:

Follow the steps to install Go, then clone the Docker git repo and make a small change to the version number so you’ll be able to attach with the native client (which is built against the dev branch of Docker’s GitHub repo; the Boot2Docker server is still at the prior version). See the comments in the pull request above, where some folks have indicated a similar approach.

const (
	APIVERSION        version.Version = "1.15"
)

Build Docker client with GO

Once you have the docker.exe built, you can put it away safely and kill the repo if you want.

Turn off TLS if you like a simple command line

I turn off TLS for development.  see

“disable it by adding DOCKER_TLS=no to your /var/lib/boot2docker/profile file on the persistent partition inside the Boot2Docker virtual machine (use boot2docker ssh sudo vi /var/lib/boot2docker/profile).”
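As a sketch, if you’d rather append the setting than open vi – on the VM the real file is /var/lib/boot2docker/profile and needs sudo, so a local stand-in file is used here:

```shell
# Stand-in for /var/lib/boot2docker/profile (the real edit needs sudo on the VM)
profile=./profile
touch "$profile"
echo 'DOCKER_TLS=no' >> "$profile"
cat "$profile"
```

On the VM itself that would be roughly `echo 'DOCKER_TLS=no' | sudo tee -a /var/lib/boot2docker/profile`, followed by a restart of the Docker daemon.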

If you don’t turn it off, you can use TLS: just copy the following files over to your Windows machine, then reference them from the ‘docker’ command line or set the environment variables.

If using TLS ‘steal’ the following files from your boot2docker host

The following files sit on the Docker host in /var/lib/boot2docker

  • cert.pem
  • key.pem
  • ca.pem


If you need to SSH into the Docker image:

ssh docker@

Password: tcuser


Run docker client to verify access to your Docker host

Use the Docker client that you built from the Go source (with the hacked version number).

If you set an environment variable, you can avoid passing command-line parameters each time.

Note that the non-secure port is 2375 by default, and the secure port is 2376.

E:\gitrepos\dockerAspNet>set dock
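As a sketch of the environment-variable approach (the host IP here is hypothetical; in a Windows command prompt you’d use `set` instead of `export`):

```shell
# Point the client at your Boot2Docker host; 2375 is the default non-TLS port
export DOCKER_HOST=tcp://192.168.1.50:2375
echo "$DOCKER_HOST"
# after this, a plain `docker ps` needs no extra connection flags
```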

If you’re running via TLS, you can use the Certificate files that are located on the Server and mentioned above:

docker --tls --tlscert="e:\\temp\\docker\\cert.pem" --tlskey="e:\\temp\\docker\\key.pem" --tlscacert="e:\\temp\\docker\\ca.pem" ps

Getting ASP.NET vNext running

Now for the fun part.

First, grab (clone) the github repo at:

git clone

Tar files into 1 archive

Then, in the ./samples/HelloMvc directory, use a tool (such as 7-zip) to ‘tar’ up all the files so you have a ‘HelloMvc.tar’ file. This step is needed until the Docker client/daemon properly handles the file-separator differences between Windows and Linux.
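If you have a `tar` on your PATH (e.g. from Git’s tools) rather than 7-zip, the archive can be built from the command line – a sketch using a stand-in directory and file:

```shell
# Build HelloMvc.tar from the contents of the HelloMvc directory
mkdir -p HelloMvc
echo 'placeholder' > HelloMvc/project.json   # stand-in app file
tar -cf HelloMvc.tar -C HelloMvc .
tar -tf HelloMvc.tar                         # list what was archived
```

The `-C HelloMvc .` archives the directory’s contents (not the directory itself), which is what the Dockerfile’s ADD expects.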

Create a ‘Dockerfile’ with the following:

FROM microsoft/aspnet
# copy the contents of the local directory to /app/ on the image
ADD HelloMvc.tar /app/
# set the working directory for subsequent commands
WORKDIR /app
# fetch the NuGet dependencies for our application
RUN kpm restore
# expose TCP port 5004 from the container
EXPOSE 5004
# Configure the image as an executable
# When the image starts it will execute the “k kestrel” command
# effectively starting our web application
# (listening on port 5004 by default)
ENTRYPOINT ["k", "kestrel"]

Once this is done the directory should look like this:


Build the Docker package

Now, from the root of the repo (./dockerAspNet in my example) execute the following:

docker build -t myapp samples/HelloMvc

At this point, you should see ASP.NET and all the supporting dependencies fly by in the build’s interactive console. It will take a bit of time the first time, as it pulls the ‘microsoft/aspnet’ base image too. Once that’s done, future builds will be faster, rebuilding just your package.

After a bit, you should see something like the following. 



Startup the Container

Now we’re ready to start our MVC app on ASP.NET in our Docker Container on Linux!!!!

docker run -d -t -p 8080:5004 myapp


Navigate to the IP address of your Linux instance:

As Martha Stewart would say – “It’s a good thing…”


Posted: 11-23-2014 2:46 PM by cicorias | with no comments
Useful Machine Learning and HDInsight / Hadoop Links Posts and Information


  • Initial Post: 2014-11-17

As many ramp up on Microsoft Azure Machine Learning, I wanted to start keeping a succinct list of many of the articles, blogs, videos, posts, etc. that have shown to be helpful in conveying the essence of the general practice of Machine Learning as well as the implementation within Microsoft Azure.

Machine Learning Center

R Programming

R for Beginners by Emmanuel Paradis

Introductory Statistics with R (Statistics and Computing), Peter Dalgaard 

R Succinctly, Barton Poulson, Syncfusion

An Introduction to Statistical Learning, with Applications in R, Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani


Analyzing Customer Churn using Microsoft Azure Machine Learning


Develop a predictive solution with Azure Machine Learning

Create a simple experiment in Azure Machine Learning Studio


Instructional Azure Machine Learning videos

Tools / Scripts

Creates a cluster with the specified configuration.
Creates an HDInsight cluster configured with one storage account and default metastores. If the storage account or container is not specified, they are created automatically under the same name as the one provided for the cluster. If ClusterSize is not specified, it defaults to creating a small cluster with 2 nodes. The user is prompted for credentials to use to provision the cluster.

During the provisioning operation, which usually takes around 15 minutes, the script monitors status and reports as the cluster transitions through the provisioning states.

Blog Posts

Benjamin Guinebertière (from Microsoft France) has a great blog that covers quite a few scenarios that many encounter when ramping and using Microsoft Azure Machine Learning

Azure Automation: What is running on my subscriptions - Benjamin Guinebertière

Remember you pay for what you use; ensure you keep track of these in-use clusters. In fact, the goal is to provision only when needed. Take a look at Kerrb for a commercial option to help you manage your spend:

Sample code: create an HDInsight cluster, run job, remove the cluster - Benjamin Guinebertière

Again, we want to keep our data in Blobs (or other persistence) then hydrate the cluster, process, save off our results, then kill the cluster.

How to upload an R package to Azure Machine Learning - Benjamin Guinebertière

Adding R scripts and packages can be achieved through this method.

How to retrieve R data visualization from Azure Machine Learning - Benjamin Guinebertière

R is a great point of extensibility. Here we see how to visualize the R output (images) that could be run as part of your R script.

Carl Nolan’s blog is also a great resource – much more than just ramblings:

Managing Your HDInsight Cluster using PowerShell – Update - Carl Nolan

Managing Your HDInsight Cluster and .Net Job Submissions using PowerShell - Carl Nolan

Hadoop .Net HDFS File Access – Carl Nolan


There is a book on Azure ML due out this week (2014-11-19)

Predictive Analytics with Microsoft Azure Machine Learning: Build and Deploy Actionable Solutions in Minutes, Valentine Fontama, Roger Barga, Wee Hyong Tok, ISBN-13: 978-1484204467 ISBN-10: 1484204468 Edition: 1st


Microsoft Azure Machine Learning Frequently Asked Questions (FAQ)


Machine Learning Preview Pricing Details

Data Factory

SharePoint 2013 Fixing WCAG F38 Failure–Images without ALT tags–using a Control Adapter–Display Templates

The WCAG (Web Content Accessibility Guidelines) provide a baseline for accessibility standards so various tools, such as screen readers, can provide a reasonable experience for those with accessibility challenges.

With regard to images, the guideline provides that all image tags (<img …>) should probably (I say probably here for various reasons) have an ALT attribute.

In the case of filler or “decorative” images that aren’t representative of content, according to F38 here: they should have an empty ALT attribute – thus ‘alt=""’.

F38: Failure of Success Criterion 1.1.1 due to not marking up decorative images in HTML in a way that allows assistive technology to ignore them

The above reference specifically states for validation:



For any img element that is used for purely decorative content:

  1. Check whether the element has no role attribute or has a role attribute value that is not "presentation".

  2. Check whether the element has no alt attribute or has an alt attribute with a value that is not null.

Expected Results
  • If step #1 is true and if step #2 is true, this failure condition applies and content fails the Success Criterion.
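The second check can be roughed out from a command line as well – a sketch that only handles simple one-line tags, with `page.html` as a stand-in:

```shell
# Stand-in page: one decorative image with no alt, one with the empty alt it should have
cat > page.html <<'EOF'
<img src="/_layouts/15/images/spcommon.png?rev=23" />
<img src="/siteIcon.png" alt="" />
EOF
# List <img> tags carrying no alt attribute at all (roughly step #2 above)
grep -oE '<img[^>]*>' page.html | grep -v 'alt='
```

Only the first tag is flagged; the second already carries the empty alt.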


How this Applies to SharePoint 2013

In SharePoint 2013, if using Display Templates, the generation of the master page is done by the Design Manager “parts”.

Inside of HTML version of the master pages, you will see the following:

        <!--SPM:<SharePoint:ImageLink runat="server"  />-->

This will translate to just using the ImageLink SharePoint Web Control, and will emit the following:

        <div id="imgPrefetch" style="display:none">
            <img src="/_layouts/15/images/favicon.ico?rev=23" />
            <img src="/_layouts/15/images/spcommon.png?rev=23" />
            <img src="/_layouts/15/images/spcommon.png?rev=23" />
            <img src="/_layouts/15/images/siteIcon.png?rev=23" />
        </div>

So, we need to add an alt="" attribute to each image in this block of HTML.

To do this, we can utilize a ControlAdapter – a Web Forms concept that allows interception of a control’s output at render time. In the past, ControlAdapters were used in SharePoint 2007 to rewrite HTML tables into more CSS-friendly markup – ultimately, at the time, to help with WCAG needs.


ControlAdapter on MSDN

The main part of the control adapter that does this re-rendering is the Render override.  Below are the primary methods that perform the rendering and fix up the IMG tags:


using System;
using System.IO;
using System.Text.RegularExpressions;
using System.Web.UI;
using System.Web.UI.Adapters;

namespace ImageLinkControlAdapter.Code
{
    public class ImageLinkAdapter : ControlAdapter
    {
        protected override void Render(HtmlTextWriter writer)
        {
            // First, capture the HTML that the control would normally emit...
            using (StringWriter baseStringWriter = new StringWriter())
            using (HtmlTextWriter baseWriter = new HtmlTextWriter(baseStringWriter))
            {
                base.Render(baseWriter);

                // ...now we have the emitted HTML as a string...
                string baseHtml = baseStringWriter.ToString();

                // ...fix up the <img> tags and write out the result.
                writer.Write(this.RebuildImgTag(baseHtml));
            }
        }

        internal string RebuildImgTag(string existingTagHtml)
        {
            var pattern = @"<img\s[^>]*>";
            return Regex.Replace(existingTagHtml, pattern, this.InsertAlt);
        }

        internal string InsertAlt(Match match)
        {
            return this.InsertAlt(match.ToString());
        }

        internal string InsertAlt(string existingTag)
        {
            if (!existingTag.StartsWith("<img", StringComparison.InvariantCultureIgnoreCase))
                return existingTag;

            if (existingTag.Contains("alt=", StringComparison.InvariantCultureIgnoreCase))
                return existingTag;

            var insertPoint = existingTag.IndexOf("/>");
            return existingTag.Insert(insertPoint, "alt=\"\" ");
        }
    }

    internal static class StringExtensions
    {
        public static bool Contains(this string source, string toCheck, StringComparison comp)
        {
            return source.IndexOf(toCheck, comp) >= 0;
        }
    }
}


Finally, the full Visual Studio 2013 Solution and source is located here:

As a bonus, there’s a Feature Receiver that will deploy the *.browser file to the Web Application’s App_Browsers directory as well.



Why you should never say “Turn ON Intranet Settings” in Internet Explorer IE

I recently checked into a hotel – connected to their guest wireless – and started noticing odd things with some websites.

UPDATE: corrected title to conform to the message – thanks Mark…

If you’ve ever seen the following:

NEVER say “Turn on Intranet Settings”.

In my case, the hotel’s wireless (specifically their DHCP server) was returning a WPAD (browser proxy auto-configuration) script with the following:

function FindProxyForURL(url, host) {
  return "DIRECT";
}

For IE, that means ALL sites will be mapped to the Intranet zone automatically IF you’ve “turned on Intranet settings”.  This is bad, bad, bad.

That means IE runs in Unprotected Mode for ALL internet sites.

If you have responded “incorrectly” – then you can reset it to auto as follows:


Finally, if you want to see the message where IE WOULD HAVE mapped the zone to Intranet, you can turn the warning back on via regedit:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]

See  for more information

Getting Docker Running on Hyper-V 8.1/ 2012 R2

Running Docker locally on a Windows machine is generally not an issue, unless you've committed to using Hyper-V. Since the Docker install for Windows relies on Oracle's (formerly Sun's) VirtualBox, you can't have both Hyper-V and VirtualBox running.

There are ways to disable Hyper-V for a boot session (via bcdedit for example – here). However, I'd just like to run in Hyper-V.

Thankfully, Chris Swan has a nice post on getting started, using the Boot2Docker ISO, and setting up the data disk (via a differencing disk) so you can just re-use this config in future Docker instances. You can also see some of the details on the boot2docker requirements for naming of the data disk, username and password for SSH, etc. here:

Basic Steps

Download ISO – from github

Create the VM and just use the ISO for bootup – we'll add the disk in a moment

We'll create the VM as Generation 1 – we need the legacy network adapter, etc., as the version of Tiny Core Linux that boot2docker uses won't recognize the other adapter types.

Simple Settings:

Memory Size: 1,024 MB

Network Connection: Choose an interface that has Internet access and DHCP assignable addresses for ease:


Next, postpone the setup of the Hard Disk as we're going to setup a differencing disk and we'd like some control over the IDE adapter / Port to use.



Once you're done with the 'New Virtual Machine' Wizard, hop into settings for the VM

Modify the DVD settings to point to the ISO image that you downloaded above:

Boot the VM for the First Time

If all goes right, you should see the 'boot2docker' loader information in the VM console, and eventually the Linux prompt

Start a SSH session with your VM (if desired)

To get the IP address of the VM, run ifconfig eth0 to see the default adapter. You should get an address that is hopefully on the network interface/LAN that you chose. This has to be accessible from your host OS if you want to use SSH – and the VM also needs internet access in order to reach the Docker Hub for downloading images.


I use the "GitHub for Windows" tools (which in turn set up the 'posh-git' tools, etc.) so I can just run an SSH session from PowerShell.

Initiate the connection normally with SSH

ssh docker@<IP.address>

Note that the default username / password is : docker / tcuser - see the section on SSH at for more information.

Setup the Virtual Disk

Shutdown the VM.

The next step is following what Chris Swan did in his post – which is to setup the VHD – run through the initialization, then make a differencing disk based off of that VHD, then swap out the configuration settings on the VM to use the Differencing disk instead of the base.

Boot the VM again

Once it's started, choose SSH or the console to perform the disk preparation

Partition the drive

The steps below are slightly different from Chris's post:

  1. Dump out the partition table just to be sure
    1. cat /proc/partitions (if you chose IDE 0 / PORT 0 then it should be /dev/sda)
  2. run fdisk
    1. sudo fdisk /dev/sda
  3. Choose 'extended'
  4. Select partition '1'
  5. Choose the defaults for the first and last cylinder
  6. Once that is done, commit with the 'w' command
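The non-interactive part of those steps can be sketched as follows (the fdisk session itself is interactive, so it is left as a comment):

```shell
# 1. Dump the partition table to confirm the device name
cat /proc/partitions
# 2. Then partition interactively on the boot2docker VM:
#    sudo fdisk /dev/sda   -> 'extended', partition 1, default cylinders, 'w' to write
```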

Setup the file system

The naming convention of the disk is also specified on the boot2docker GitHub page – the volume label has to be 'boot2docker-data'

Next, format the drive with:

sudo mkfs.ext4 -L boot2docker-data /dev/sda


Note that you will be warned about formatting the entire device, and not a partition. For now, I just went with the above.

Create the Differencing Disk

Shut down the VM again

Go back into the Virtual Machine Wizard. Select the settings for the VM, then go to the Disk settings and create a "New Virtual Disk".

Make sure when prompted, you choose the "base" image you created before, but when you're done, your "Differencing" disk should be what's listed in the Hard Disk path for the Controller/Location as below.

Boot the VM – 3rd time

I think it's the 3rd time – don't remember at this point…

Now we're ready to "run" something. We'll use the same image that Chris posted about, just because it's a cool tool (Node-RED -

Access the image either through the console or via SSH

Do a 'docker run' in detached mode (-d); Docker will download the image automatically, as it won't be in the local image library yet.

docker run -d -p 1880:1880 cpswan/node-red


If all is working, you should see the image and all its dependencies downloading – then the container – and at the end, Docker launches the process.

Checkout if the Differencing disk is working

The "before" size

The "after" size – note the increase of the differencing disk.

Launch the Application

Note that the port mapping uses the same port 1880 (NAT'd).

You should get the 'Node-Red' home page, which is the designer surface.

I quickly imported a simple "hello world" from the flows



Issues with OneDrive for business and Document Cache–Don’t mix C2R and MSI installs

With the latest updates to Office, an issue rears its ugly head if you’ve mixed both C2R (Click-to-Run) and MSI installs of any Office 2013 product.  That means Office, Visio, Project, SharePoint Designer, and the OneDrive for Business sync client.


If you get into this mess, uninstall all of the C2R or MSI installs – then get them all consistent:


#1 – can’t mix click-to-run and MSI installs on the same machine:

If any are mixed, you need to uninstall.

To start fresh:

1. Run

a. Both MSI then C2R

2. Run ROIScan to ensure nothing is left:


3. Once you’re all clean:

a. C2R

i. Use the

ii. SharePoint designer is under “Tools”

iii. Visio is there if your org has a license..

b. MSI

i. Get them all in FULL downloads from MSDN, or wherever you obtain your installs from

# for internal, step #1 – in toolbox there is OffScrub

More Posts Next page »