Shawn Cicoria - CedarLogic

Perspectives and Observations on Technology

Cleaning up IIS Express sites v2

This version also handles the case where appcmd returns only a single site rather than an array…

Again, I'm a tidy person.

$appCmd = "C:\Program Files (x86)\IIS Express\appcmd.exe"
$result = Invoke-Command -Command {& $appCmd 'list' 'sites' '/text:SITE.NAME' }

function deleteSite($site){

   Invoke-Command -Command {& $appCmd 'delete' 'site'  $site }
}


if ($result -is [system.array]){
     for ($i=0; $i -lt $result.length; $i++)
     {
         deleteSite($result[$i])
     }
}
else {
     deleteSite($result)
 }
Cheap and easy IP blocking in Azure Web Apps

Sometimes you just need to resort to something simple.

I don't recommend this in any way as a real security plan or hardening measure. You should be using something proper if you're under these kinds of malicious attacks.

But, in Azure Web Apps you can add IP Address blocking via the IIS “Dynamic IP Restrictions” Module [1].

While I think this is the "wrong" way to do this (a real firewall should be in place), I was able to "block" IP addresses in an Azure Web App (site) by using an applicationHost.xdt transform like so:

This becomes a “site” extension – and is deployed as a Site Extension https://github.com/projectkudu/kudu/wiki/Azure-Site-Extensions

This modifies the applicationHost.config for your Web App. https://azure.microsoft.com/en-us/documentation/articles/web-sites-transform-extend/

 

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.webServer>
    <security>
      <ipSecurity allowUnlisted="true" xdt:Transform="Replace">
        <!-- 'allowed' defaults to false, so a listed address is blocked;
             shown explicitly here for clarity -->
        <add allowed="false" ipAddress="191.237.6.194"/>
      </ipSecurity>
    </security>
  </system.webServer>
</configuration>

 

[1] http://www.iis.net/downloads/microsoft/dynamic-ip-restrictions

AngularJS intellisense NuGet package added

Building on the work of John Bledsoe, a NuGet package has been added that takes a dependency on AngularJS.Core and provides the angular.intellisense.js file to your project.

Via Nuget.org: https://www.nuget.org/packages/AngularJS.Intellisense/

Referenced here: http://blogs.msdn.com/b/visualstudio/archive/2015/02/05/using-angularjs-in-visual-studio-2013.aspx

This package takes a per-project approach and puts the file into the /scripts directory of your project.

_references.js – auto-update

In addition, the package delivers a default _references.js file that allows for auto-update. If you want to regenerate the references at any time, open _references.js in Visual Studio and then choose "Update JavaScript References".

/// <autosync enabled="true" />
/// <reference path="angular.js" />
/// <reference path="angular-mocks.js" />
/// <reference path="project.js" />

The source for the intellisense is here:

https://github.com/jmbledsoe/angularjs-visualstudio-intellisense 

Thanks to John Bledsoe for his hard work on this…  https://twitter.com/jmbledsoe

Posted: 02-27-2015 2:09 PM by cicorias | with no comments
Troubleshooting tool–Azure WebJob TCP Ping

I've dropped a quick Visual Studio solution containing a simple Azure WebJob, intended to run continuously, that does a socket open ("TCP ping") against a specific IP address and port – intended to aid in identifying transient network errors over time.

Located on Github here: https://github.com/cicorias/webjobtcpping

Azure Web Job - TCP Ping


Overview

Visual Studio 2013 Solution

The solution file contains several things:

  1. JobRunnerShell - simple wrapper class that handles some of the basic management of the process/job for Azure Web Job Classes/Assemblies
  2. TcpPing - an implementation of a simple Azure Web Job - intended to be run continuously - that will do a basic TcpPing (open a socket) every second.
  3. SimpleTcpServer - a very basic TCP listener service that echoes back a simple string (1 line) in reverse.

Purpose

The intent of the solution is to provide a very basic diagnostic tool that can run continuously in an Azure WebSite deployment and 'ping' (open a socket to) a server -- this is intended for testing availability of a server using IPv4 addresses (e.g. 10.0.1.1) across Virtual Networks (VNET) in Azure.

This can be used against any server listener service - as it only does a socket open - of course, the server should be resilient to these socket opens and immediate closes.
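For illustration, the core of such a job boils down to something like the following sketch (hypothetical names; see the repo for the actual implementation):

using System;
using System.Configuration;
using System.Net.Sockets;
using System.Threading;

// Illustrative sketch only - the real implementation lives in the GitHub repo.
class TcpPingLoop
{
    static void Main()
    {
        var ip = ConfigurationManager.AppSettings["sqlIp"];
        var port = int.Parse(ConfigurationManager.AppSettings["sqlPort"]);

        while (true)
        {
            try
            {
                // Open a socket (the 'TCP ping'), then close it immediately.
                using (var client = new TcpClient())
                {
                    client.Connect(ip, port);
                    Console.WriteLine("{0:o} connected to {1}:{2}", DateTime.UtcNow, ip, port);
                }
            }
            catch (SocketException ex)
            {
                // Transient network errors surface here and in the WebJobs dashboard logs.
                Console.WriteLine("{0:o} FAILED {1}:{2} - {3}", DateTime.UtcNow, ip, port, ex.Message);
            }
            Thread.Sleep(TimeSpan.FromSeconds(1));
        }
    }
}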

NOTE: Make sure you open the Windows Server firewall for the listening port if using Windows Server as your 'host' for this.

Reporting is done to the Azure WebJobs dashboard and is also visible via the Azure WebSite's streaming logs.

The easiest way to view it is to go to the Azure Portal, or use Visual Studio's Azure Explorer – which comes with the Azure Tools for Visual Studio.


Deployment

Azure WebJob

The Azure WebJob - 'TcpPing' - utilizes the NuGet packaging that 'lights up' the "Publish as Azure WebJob" tooling in Visual Studio. Otherwise, it can be deployed using alternate methods - see How to Deploy Azure WebJobs to Azure Websites.

Settings

Within the TcpPing project, examine the "app.config" - you will find 'appSettings' and 'connectionStrings' that you should review. The connectionStrings are dependent upon your Azure Storage Account information, which you can retrieve from the Azure Portal.

AppSettings

The following settings are used to open the socket - adjust to your needs.

<appSettings>
  <add key="sqlIp" value="10.3.0.4"/>
  <add key="sqlPort" value="8999"/>
</appSettings>
Connection Strings

Make sure you put in your 'connectionString' - which comes from the Azure Portal for the Storage Account.

<connectionStrings>
<!--WEBJOBS_RESTART_TIME - please set in the portal, in seconds, e.g. 60.-->
<!--WEBJOBS_STOPPED   setting to 1 means stopped-->
<!-- The format of the connection string is "DefaultEndpointsProtocol=https;AccountName=NAME;AccountKey=KEY" -->
<!-- For local execution, the value can be set either in this config file or through environment variables -->
<add name="AzureWebJobsDashboard"
 connectionString="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>" />
<add name="AzureWebJobsStorage"
 connectionString="DefaultEndpointsProtocol=https;AccountName=<accountName>;AccountKey=<accountKey>" />
</connectionStrings>
Simple TCP Server

The solution also contains a simple TCP server that is intended to be installed within the Virtual Network - for example, on an IaaS instance - that you are attempting to validate connectivity (and continuous reporting) to.

Again, you should be able to use any listener service in its place.

Settings

There is only one setting, in the app.config under appSettings. The server listens on IPv4 addresses only, and defaults to port 8999 if this setting is absent.

<appSettings>
 <add key="serverPort" value="8999"/>
</appSettings>
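For illustration, the essence of such a reverse-echo listener is sketched below (hypothetical code, not the repo's - the actual implementation is in the solution):

using System;
using System.Configuration;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Sockets;

// Illustrative sketch of a listener that echoes one line back, reversed.
class SimpleTcpServer
{
    static void Main()
    {
        // Default to 8999 when the 'serverPort' appSetting is absent.
        var port = int.Parse(ConfigurationManager.AppSettings["serverPort"] ?? "8999");
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
        Console.WriteLine("Listening on port {0}", port);

        while (true)
        {
            using (var client = listener.AcceptTcpClient())
            using (var stream = client.GetStream())
            using (var reader = new StreamReader(stream))
            using (var writer = new StreamWriter(stream) { AutoFlush = true })
            {
                // Read a single line and echo it back in reverse; a bare
                // open/close from the TcpPing job simply yields a null line.
                var line = reader.ReadLine();
                if (line != null)
                    writer.WriteLine(new string(line.Reverse().ToArray()));
            }
        }
    }
}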
Posted: 02-15-2015 4:27 PM by cicorias | with no comments
Azure Resource Manager–Creating an IaaS VM within a VNET

NOTE: Azure Resource Manager is in preview. Thus, anything posted here may change. However, the approach for identifying what resources are available, updatable, and registered for subscriptions should be the same.

This builds on the prior posts, which cover creating a Resource Group, a VNET, and a Storage Account.

For this walkthrough I'm going to build up a Linux VM instance off of a VHD that I have within a storage account. I use the ARM REST API calls directly, bypassing the Templates that are coming to ARM.

Azure Resource Manager Templates

The REST API calls that I’m illustrating below are NOT using Azure Resource Manager (ARM) Templates. You can review some of the articles below for more information on ARM Templates.

 

Currently, ARM Templates are in preview and, as of this writing, only 3 templates are available. Those are listed in the tooling and in the links above.

ARM Templates Basics

ARM Templates provide a template language that establishes the dependencies amongst the composition of supporting resources. In addition, the backend to ARM Templates provides the management and control over provisioning all of these dependencies upon submission of the ARM Template provision request. Ultimately, it is built upon ARM – which for this post is accessible via the ARM REST API calls.

Creating a VM using ARM REST API – not using Templates

This blog post is NOT about ARM Templates. I cover the underlying ARM REST API directly and create the composition through a series of client-side REST API calls (if that makes any sense).

Preparation steps – first, copy the source VHD blob into the storage account you'll use:

 

# Copy the source VHD blob; $srcContext / $destContext are storage contexts
# (e.g. from New-AzureStorageContext) for the source and destination accounts.
$blob1 = Start-AzureStorageBlobCopy -srcUri $srcUri `
	-SrcContext $srcContext `
	-DestContainer $containerName `
	-DestBlob "testcopy1.vhd" `
	-DestContext $destContext

 

 

Resource Manager Composition

If you examine an existing VM via the REST API you will see within the JSON response several sections contained within the properties JSON object.

Any of these – for example 'domainName', 'networkProfile/virtualNetworks', 'storageProfile/operatingSystemDisk/storageAccount' – are additional resources that you must compose or create prior to making the REST API call to create (PUT) the VM that you want to provision. If you refer back to the prior posts that list the /providers for a subscription, you will find providers as follows:

  • Networks - Microsoft.ClassicNetwork – with resource types of 'virtualNetworks', 'reservedIps', 'quotas', and 'gatewaySupportedDevices'
  • Domain Name - Microsoft.ClassicCompute – with resource types of 'domainNames', 'virtualMachines', 'capabilities', 'quotas', etc.

 

You will see 'storageAccount' listed in the GET response for each disk – OS and data disks – used by the existing VM. Note that there is an 'id' property. That 'id' is the reference that will be used in the final PUT request at the end of the post for each of the associated resources.

Prior Posts

In prior posts, I cover the creation of a Resource Group and a Storage Account.  Here is a screenshot of the Resource Group creation using Postman (I won't repeat the Storage Account creation).

[Screenshot: Resource Group creation in Postman]

Create Domain Name

The domain name represents the 'cloud service' – essentially the wrapper and associated public IP address that the VM, when created, will sit behind – think firewall. In the new portal (https://portal.azure.com) these show as Domains (thus that is what ARM uses). In the current production portal (https://manage.windowsazure.com) they appear as Cloud Services – a term that anybody doing Worker and Web Roles in PaaS is quite familiar with.

 

The PUT request contains a JSON body that is quite simple.

PUT https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/demo2/providers/Microsoft.ClassicCompute/domainNames/scicoriacentosnew?api-version=2014-06-01

Content-Type: application/json
Authorization: Bearer <token>

{
     "properties": {
         "label": "scicoriacentosnew",
         "hostName": "scicoriacentosnew.cloudapp.net"
     },
     "name": "scicoriacentosnew",
     "type": "Microsoft.ClassicCompute/domainNames",
     "location": "eastus2"
}

[Screenshot: the domain name PUT request in Postman]

Create Domain Response

For this call, the HTTP response comes back as '201 – Created'. You'll see in the other requests, as they are longer running, a '202 – Accepted' – and with that, response headers from which you can obtain the operation request ID and ask Azure for the status of the request. That is key to identifying any issues beyond simple serialization issues from bad JSON PUT payloads.
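If you'd rather drive these calls from code than from Postman, every request in this post follows the same pattern. A hypothetical helper like the following (names are mine, not from the sample) can issue any of the PUT requests shown here:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class ArmClient
{
    // Sketch: issue an ARM PUT with a bearer token; the JSON bodies are
    // exactly the ones shown throughout this post.
    static async Task<HttpResponseMessage> PutResourceAsync(string uri, string json, string accessToken)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var content = new StringContent(json, Encoding.UTF8, "application/json");
            var response = await client.PutAsync(uri, content);

            // 201 = created synchronously; 202 = accepted, poll the operation
            // status (see 'Checking Operation Status' below).
            Console.WriteLine("{0} {1}", (int)response.StatusCode, response.StatusCode);
            return response;
        }
    }
}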

Create Virtual Network

For a VNET (virtual network), I'm going to create within my 'demo2' resource group a VNET with – well, the JSON below should be fairly self-explanatory (that's what's nice about JSON and REST).

PUT https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/demo2/providers/Microsoft.ClassicNetwork/virtualNetworks/scicoriacentosnew?api-version=2014-06-01

Content-Type: application/json
Authorization: Bearer <token>

{
    "properties": {
        "addressSpace": {
            "addressPrefixes": [
                "10.1.0.0/16"
            ]
        },
        "subnets": [
            {
                "name": "Subnet-1",
                "addressPrefix": "10.1.0.0/24"
            },
            {
                "name": "Subnet-2",
                "addressPrefix": "10.1.1.0/24"
            }
        ]
    },
    "id": "/subscriptions/<subscriptionId>/resourceGroups/demo2/providers/Microsoft.ClassicNetwork/virtualNetworks/scicoriacentosnew",
    "name": "scicoriacentosnew",
    "type": "Microsoft.ClassicNetwork/virtualNetworks",
    "location": "eastus2"
}

Explanation

For those that aren't familiar, the VNET will be created covering a CIDR address range of 10.1.0.0/16 – and, within that top-level range, I've created 2 subnets covering 10.1.0.0/24 and 10.1.1.0/24.

Additional subnets can be specified within the JSON array [] if needed. Validation occurs at submission and provisioning time – so you need to check for a '202 – Accepted' response and, with that operation ID, validate status. I could've also specified additional ranges for the address prefixes – just as you can do in the Azure Management portal.

 

[Screenshot: the virtual network PUT request in Postman]

Create Virtual Machine

Now that we have the following, we're ready to issue an ARM REST API PUT request to create the virtual machine:

  1. Storage Account with a VHD ready to use
  2. Resource Group
  3. Domain Name
  4. Virtual Network

 

This one is rather lengthy. Note the 'nested' references to the resources that were created in the prior steps. Again, once submitted with no deserialization issues, URI issues, etc., you should get back a '202 – Accepted' – from that response you have to check the operation status using the provided status ID:

PUT https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/demo2/providers/Microsoft.ClassicCompute/virtualMachines/scicoriacentosnew?api-version=2014-06-01
{
    "properties": {
        "hardwareProfile": {
            "platformGuestAgent": true,
            "size": "Basic_A2",
            "deploymentName": "scicoriacentosnew",
            "deploymentLabel": "scicoriacentosnew",
        },
        "domainName": {
            "id": "/subscriptions/<subscriptionId>resourceGroups/demo2/providers/Microsoft.ClassicCompute/domainNames/scicoriacentosnew",
            "name": "scicoriacentosnew",
            "type": "Microsoft.ClassicCompute/domainNames"
        },
        "storageProfile": {
            "operatingSystemDisk": {
                "diskName": "scicoriacentosnew-os-20150212",
                "caching": "ReadWrite",
                "operatingSystem": "Linux",
                "ioType": "Standard",
                //"sourceImageName": "5112500ae3b842c8b9c604889f8753c3__OpenLogic-CentOS-65-20140926",
                "vhdUri": "https://scicoriademo.blob.core.windows.net/vhds/testcopy1.vhd",
                "storageAccount": {
                    "id": "/subscriptions/<subscriptionId>resourceGroups/demo/providers/Microsoft.ClassicStorage/storageAccounts/scicoriademo",
                    "name": "scicoriademo",
                    "type": "Microsoft.ClassicStorage/storageAccounts"
                }
            }
        },
        "networkProfile": {
            "inputEndpoints": [
                {
                    "endpointName": "SSH",
                    "privatePort": 22,
                    "publicPort": 22,
                    "protocol": "tcp",
                    "enableDirectServerReturn": false
                }
            ],
            "virtualNetwork": {
                "subnetNames": [
                    "Subnet-1"
                ],
                "id": "/subscriptions/<subscriptionId>resourceGroups/demo/providers/Microsoft.ClassicNetwork/virtualNetworks/scicoriacentos",
                "name": "scicoriacentos",
                "type": "Microsoft.ClassicNetwork/virtualNetworks"
            }
        }
    },
    "location": "eastus2",
    "name": "scicoriacentosnew"
}


Response

If all is OK from a formatting and basic validation standpoint, you should see a '202 – Accepted' – from that, obtain the operation ID and use the API call to check that operation's status.

[Screenshot: the virtual machine PUT response showing 202 Accepted]

Checking Operation Status

 

Take a look at the documentation for the structure of that call.

https://msdn.microsoft.com/en-us/library/azure/ee460783.aspx
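If you're scripting these calls, polling is straightforward. A rough sketch, assuming you captured the status URL from the 202 response's headers (e.g. the Location header):

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class OperationPoller
{
    // Sketch: poll the status URL returned in the 202 response headers
    // until the operation leaves the 'InProgress' state.
    static async Task PollAsync(string statusUrl, string accessToken)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            while (true)
            {
                var response = await client.GetAsync(statusUrl);
                var body = await response.Content.ReadAsStringAsync();
                Console.WriteLine(body); // Succeeded / InProgress / Failed, plus error detail

                if (response.StatusCode != HttpStatusCode.Accepted &&
                    !body.Contains("InProgress"))
                    break;

                await Task.Delay(TimeSpan.FromSeconds(5));
            }
        }
    }
}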

A Succeeded Operation

[Screenshot: a succeeded operation status response]

An InProgress Operation

[Screenshot: an in-progress operation status response]

 

An Error Operation Status

[Screenshot: a failed operation status response with error detail]

Azure Resource Manager – Creating Storage Accounts

NOTE: Azure Resource Manager is in preview. Thus, anything posted here may change. However, the approach for identifying what resources are available, updatable, and registered for subscriptions should be the same.

In a prior post I walked through adding an SSL certificate and then associating that certificate with an Azure Website. While some sample C# code was provided there, this post works entirely via a REST tool – Fiddler or Postman suffices.

In the last post I walked through adding a VNET. To clean up, remember that with REST an HTTP DELETE is all you need…

Getting Available Resource Providers

Again, from the prior posts, if you want to see the list of resource providers for a subscription, issue an authenticated call to the /providers resource:

https://msdn.microsoft.com/en-us/library/azure/dn790572.aspx

I've glossed over authentication quite a bit in the prior posts; take a look here: https://msdn.microsoft.com/en-us/library/azure/dn790557.aspx, which uses the ADAL library for managed code.  Again, you can do the calls via REST as well – I'll try to cover that in a future post.

Creating a Storage Account

Again, the best way to ‘learn’ the representation of these resources is to review an existing one.

Here, issuing a GET request to the following gives me the resource properties.

GET https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/somegroup/providers/Microsoft.ClassicStorage/storageAccounts/<resourceName>?api-version=2014-06-01

{
    "properties": {
        "provisioningState": "Succeeded",
        "status": "Created",
        "endpoints": [
            "https://<accountName>.blob.core.windows.net/",
            "https://<accountName>.queue.core.windows.net/",
            "https://<accountName>.table.core.windows.net/",
            "https://<accountName>.file.core.windows.net/"
        ],
        "accountType": "Standard-LRS",
        "geoPrimaryRegion": "East US",
        "statusOfPrimaryRegion": "Available",
        "geoSecondaryRegion": "",
        "statusOfSecondaryRegion": "",
        "creationTime": "2014-12-19T19:18:59Z"
    },
    "id": "/subscriptions/<subscriptionId>/resourceGroups/somegroup/providers/Microsoft.ClassicStorage/storageAccounts/<accountName>",
    "name": "<accountName>",
    "type": "Microsoft.ClassicStorage/storageAccounts",
    "location": "eastus2"
}

Creating a Locally Redundant Storage Account (LRS)

OK, we trim the JSON properties back to just what we need to create. Note that in the portal there really aren't many options to set other than the Name and the Pricing level. Same for the JSON properties here.

PUT https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/demo2/providers/Microsoft.ClassicStorage/storageAccounts/<resourceName>?api-version=2014-06-01

Authorization: Bearer <token>
Content-Type: application/json

{
    "properties": {
        "accountType": "Standard-LRS"
    },
    "name": "<resourceName>",
    "type": "Microsoft.ClassicStorage/storageAccounts",
    "location": "eastus2"
}

 

Here's the screenshot from Postman – note the '202 – Accepted'.

 

[Screenshot: Postman showing the 202 Accepted response]

Azure Resource Manager– Creating a Resource Group and a VNET

NOTE: Azure Resource Manager is in preview. Thus, anything posted here may change. However, the approach for identifying what resources are available, updatable, and registered for subscriptions should be the same.

In a prior post I walked through adding an SSL certificate and then associating that certificate with an Azure Website. While some sample C# code was provided there, this post works entirely via a REST tool – Fiddler or Postman suffices.

Getting a Token

I'm not going to go into the token acquisition process here. The easiest way to obtain a token for this walkthrough is to open a session to https://portal.azure.com and view the network traffic as you open up some blades – for example, the "Resource Group" blade – looking for an "Authorization" header.  It should show up as "Bearer ….".  It's a JWT, which you can paste into http://jwt.io/ to decipher if you'd like (WARNING: you're giving your token to a 3rd party) – that site is managed by the http://auth0.com folks.

If you want to decode this yourself, note that the JWT is presented in 3 parts – header, payload, and signature – separated by a '.', with each part base64url encoded.
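A minimal sketch of doing that locally in C# (note that JWTs use base64url encoding, so the alphabet and padding need adjusting before decoding):

using System;
using System.Text;

class JwtPeek
{
    // Sketch: decode the payload (middle part) of a JWT locally,
    // instead of pasting the token into a third-party site.
    static void Main(string[] args)
    {
        var parts = args[0].Split('.');          // header.payload.signature
        Console.WriteLine(FromBase64Url(parts[1]));
    }

    static string FromBase64Url(string s)
    {
        // base64url: restore the standard alphabet and padding first.
        s = s.Replace('-', '+').Replace('_', '/');
        switch (s.Length % 4) { case 2: s += "=="; break; case 3: s += "="; break; }
        return Encoding.UTF8.GetString(Convert.FromBase64String(s));
    }
}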

Getting your subscription ID

In the prior post, and in the sample code here: http://bit.ly/azrmsamples, there's sample code in helper classes to list subscription IDs for a login. I'm not reviewing that here.

You can log on to https://portal.azure.com and then go to Subscriptions. Click the subscription that you will be using and you'll see a lowercase GUID for that subscription.

Available Providers and Capabilities

Not everything is available yet, but you can do a GET request as follows to see which sub-capabilities within each Resource Provider are available.

GET https://management.azure.com/subscriptions/<subscriptionId>/providers?api-version=2015-01-01

Authorization: Bearer <token>

You can take a look at the results from one of my subscriptions here:

https://gist.github.com/cicorias/604286f96c833f246a37

Resource Provider – Microsoft.ClassicNetwork

From the response, let's look at the Virtual Network provider and its manageable resources:

            "id": "/subscriptions/<subscriptionId>/providers/Microsoft.ClassicNetwork",
            "namespace": "Microsoft.ClassicNetwork",
            "resourceTypes": [
                {
                    "resourceType": "virtualNetworks",
                    "locations": [
                        "East US",
                        "East US 2",
                        "West US",
                        "North Central US (Stage)"
                    ],
                    "apiVersions": [
                        "2014-06-01",
                        "2014-01-01"
                    ]
                },
                {
                    "resourceType": "reservedIps",
                    "locations": [
                        "East Asia",
                    ],
                    "apiVersions": [
                        "2014-06-01",
                        "2014-01-01"
                    ]
                },
                {
                    "resourceType": "quotas",
                    "locations": [],
                    "apiVersions": [
                        "2014-06-01",
                        "2014-01-01"
                    ]
                },
                {
                    "resourceType": "gatewaySupportedDevices",
                    "locations": [],
                    "apiVersions": [
                        "2014-06-01",
                        "2014-01-01"
                    ]
                }
            ],
            "registrationState": "Registered"
        },

 

Within the ‘resourceTypes’ array, we can see that ‘virtualNetworks’ is available.

Updating – first review an existing VNET

Resource Manager is in early preview; thus, documentation is very limited. However, this is REST – so, the conventions of REST (for the HTTP verbs) and the shape of the JSON for updating can be somewhat determined through reviewing existing resources.

{
    "value": [
        {
            "properties": {
                "provisioningState": "Succeeded",
                "status": "Created",
                "siteId": "<siteId>",
                "inUse": false,
                "addressSpace": {
                    "addressPrefixes": [
                        "10.1.0.0/16"
                    ]
                },
                "subnets": [
                    {
                        "name": "Subnet-1",
                        "addressPrefix": "10.1.0.0/24"
                    }
                ]
            },
            "id": "/subscriptions/<subscriptionId>/resourceGroups/demo2/providers/Microsoft.ClassicNetwork/virtualNetworks/myVnet",
            "name": "myVnet",
            "type": "Microsoft.ClassicNetwork/virtualNetworks",
            "location": "eastus2"
        }
    ]
}

 

From the above, you can see the shape of the VNET resource; also take note of the 'id' property, as it illustrates the existence of the VNET within the resource group – here 'demo2'.  Also note that the URI has the resource name on the URL itself – this will be important when we PUT a new VNET.

Create a Resource Group

Let's first create a new Resource Group using a PUT:

PUT
https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/demo2?api-version=2015-01-01

Content-Type: application/json
Authorization:Bearer <token>

{
    "name": "demo2",
    "location": "eastus2"
}

 

This should give you an HTTP 201 – Created response.

[Screenshot: the new resource group response]

 

Creating a VNET

As shown above, the VNET resource has a set of properties.  The REST call is shaped as follows:

 

PUT https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/demo2/providers/Microsoft.ClassicNetwork/virtualNetworks/myVnet?api-version=2014-06-01

Authorization: Bearer <token>
Content-Type: application/json
{
	"name": "myVnet",
	"type": "Microsoft.ClassicNetwork/virtualNetworks",
	"location": "eastus2",
	"properties": {
        "addressSpace": {
            "addressPrefixes": [
                "10.1.0.0/16"
            ]
        },
        "subnets": [
            {
                "name": "Subnet-1",
                "addressPrefix": "10.1.0.0/24"
            }
        ]
    }
}

 

For this VNET – called ‘myVnet’ under the ‘demo2’ resource group, I’ll be using the 10.1.0.0/16 address space (CIDR format) along with defining a single subnet – called ‘Subnet-1’ that is a segment 10.1.0.0/24.

Again, once this runs, you receive an HTTP 201 – Created if all is OK.

[Screenshot: the VNET PUT response showing 201 Created]

 

Now, you can switch back to the portal to take a look at your VNET and review the settings.

 

[Screenshot: the VNET in the Azure portal]

 

NOTE: I want to stress again that not all aspects of each service within Azure are available today through Resource Manager. It is still in preview, and as capabilities are added they will appear under the various /providers associated with your subscriptions.

Registered Resource Providers

One last note: review the /providers result and identify IF your subscription is even "Registered" for a given resource provider. For my subscription, as an example, the status is as follows:

https://management.azure.com/subscriptions/<subscriptionId>/providers?api-version=2015-01-01

Registered:
providers/microsoft.batch
providers/microsoft.cache
providers/Microsoft.DataFactory
providers/Microsoft.DocumentDb
providers/Microsoft.Insights
providers/Microsoft.KeyVault
providers/Microsoft.OperationalInsights
providers/Microsoft.Search
providers/Microsoft.StreamAnalytics
providers/successbricks.cleardb
providers/Microsoft.ADHybridHealthService
providers/Microsoft.Authorization
providers/Microsoft.Features
providers/Microsoft.Resources
providers/Microsoft.Scheduler
providers/Microsoft.Sql
providers/microsoft.visualstudio
providers/Microsoft.Web

Classic:
providers/Microsoft.ClassicCompute
providers/Microsoft.ClassicNetwork
providers/Microsoft.ClassicStorage

Not registered:
providers/Microsoft.ApiManagement
providers/Microsoft.BizTalkServices
providers/Microsoft.IntelligentSystems
providers/microsoft.support
providers/NewRelic.APM

 

Resource Provider Registration

For registering a subscription with a Resource provider, check the Azure Resource Manager REST API Reference: https://msdn.microsoft.com/en-us/library/azure/dn790548.aspx

Azure Resource Manager – Adding and Assigning Certificates to a Website

Overview

This post is going to cover working with Azure Resource Manager using the REST interfaces [1] and specifically the “Microsoft.Web/sites” and “Microsoft.Web/certificates” providers.

You can review the list of Resource Providers by issuing an authenticated REST call to the Uri below, replacing {subscriptionId} with your subscription id.

https://management.azure.com/subscriptions/{subscriptionId}/providers?api-version=2015-01-01 [2]

For this sample, I'm going to make use of the Active Directory Authentication Library (ADAL) for .NET – primarily to make the REST calls for acquiring an access token [3]. You don't have to use these libraries, but for this sample, and to abbreviate the token dance with AAD, I'm using them.

It’s important to note that Certificates are now part of the Resource Group itself, and can be assigned to multiple web sites within that Resource Group.

Basic Steps

The basic steps for adding a certificate and assigning it to an Azure Website are as follows.

Note: All of these preparation steps can be done via script or REST calls as well; this sample just demonstrates certificate upload and assignment to an existing Azure Web Site that already has custom DNS names assigned to it. You will also incur additional charges for the custom domain and SSL, as warned during the portal method – you will not see warnings via code. Please review pricing information to understand the impact.

Preparation

1. Using an AAD credential that is part of the AAD Domain that the Resource Group belongs to – for this example, I add a credential to the AAD user store.

2. Creation of an Application in the AAD Domain for the Resource Group

3. Assigning permissions to the credential for the Resource Group via RBAC

4. Have a Web site running already with custom DNS names already assigned; this will be in a Resource Group that is protected by Role Based Access Control (RBAC)

5. Creation of a SSL Certificate – for this I used ‘makecert.exe’ and created a wildcard certificate

Uploading and Assigning Certificate

6. Make a call to the /certificates resource provider to ‘ADD (PUT)’ a new Certificate to the Resource Group

7. Make a call to the /sites resource provider to 'Update (PUT)' the assignment of the certificate to the DNS name

And that’s it. So, for steps 1 – 5, let’s review some of the setup steps:

1. Adding an AAD credential for this sample – since we're going to use username/password authentication to acquire a token, I'll need the password. This will require an initial sign-on. The easiest way to do this is, once you create a user, just log in via a private browser session with that credential to https://portal.azure.com or https://manage.windowsazure.com

2. Creation of an Application in your AAD domain – same one where the credential is.

1) Sign in to the Azure management portal https://manage.windowsazure.com.

2) Click on Active Directory in the left hand nav.

3) Click the directory tenant where you wish to register the sample application.

4) Click the Applications tab.

5) In the drawer, click Add.

6) Click "Add an application my organization is developing".

7) Enter a friendly name for the application, for example "AADDemoCertificates", select "Native Client Application", and click next.

8) For the sign-on URL, enter the base URL for the sample, you’ll need this for the sample later: https://localhost:8080/login

Once done, we need to retrieve the ClientID for that app:

9) In the Azure portal, click configure

10) Retrieve the ClientID and save it

3. Next, in the “New Portal” - https://portal.azure.com we need to assign the user permissions to the respective Resource Group

1) Click Browse

2) Find “Resource Groups”

3) Locate the Resource Group that the Azure Web Site is in that we will be assigning the certificate to.

4) In the “Blade” go to the bottom tile labeled “Access” and click on “Owner”

5) Another blade opens showing any existing Owners

6) Click on “+ Add”

7) You should see existing Users in the domain; find the User or enter the ‘user@domain’ in the Search box

8) Select that user, then click “Select” at the bottom of the blade – this will add that user to the group

4. Looking at your Web site in Azure – ensure and jot down:

a. Name of the Resource Group (should be same as above step)

b. Name of the Site

c. DNS names – add a custom DNS domain – see the Azure portal for instructions

i. This is under “Custom Domains and SSL” – you have to choose a “Basic” plan or higher for Custom Domains and SSL

5. For making a self-signed cert, these are the commands I used:

REM make the root certificate authority
makecert -n "CN=Development Test Authority" -cy authority -a sha1 -sv "DevelopmentTestAuthority.pvk" -r "DevelopmentTestAuthority.cer"

REM make the wildcard certificate, signed by the root
makecert -n "CN=*.cicoriadev.net" -ic "DevelopmentTestAuthority.cer" -iv "DevelopmentTestAuthority.pvk" -a sha1 -sky exchange -pe -sv "wildcard.cicoriadevnet.pvk" "wildcard.cicoriadevnet.cer"

REM package the certificate and private key into a pfx
pvk2pfx -pvk "wildcard.cicoriadevnet.pvk" -spc "wildcard.cicoriadevnet.cer" -pfx "wildcard.cicoriadevnet.pfx" -pi pass@word1

Sample Code

For the sample code, you'll see a call via the ADAL library using a username & password to obtain an AuthenticationResult object – which contains an AccessToken. Note that the resource URI that the token is generated for is https://management.azure.com/.
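That call is roughly the following (a sketch against ADAL 2.x; tenant, clientId, user, and password come from the preparation steps above – see the repo for the sample's actual helper):

using Microsoft.IdentityModel.Clients.ActiveDirectory;

class TokenHelper
{
    // Rough shape of the sample's token acquisition (ADAL 2.x).
    // 'tenant' is your AAD domain; 'clientId' comes from the app registration above.
    static string GetAccessToken(string tenant, string clientId, string user, string password)
    {
        var authContext = new AuthenticationContext("https://login.windows.net/" + tenant);
        var credential = new UserCredential(user, password);

        // Note the resource URI the token is generated for: https://management.azure.com/
        AuthenticationResult result =
            authContext.AcquireToken("https://management.azure.com/", clientId, credential);

        return result.AccessToken;
    }
}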

Adding a Certificate via REST

The sample code makes use of JSON.NET and anonymous objects for creating the PUT HTTP request bodies. Here is what the shape of the PUT request looks like for ‘adding’ a certificate to a Resource Group.

Request

PUT https://management.azure.com/subscriptions/{subscriptionId}/
   resourceGroups/{resourceGroupName}/providers/
   Microsoft.Web/certificates/{resourceName}?api-version=2014-11-01

Content-Type: application/json
Authorization: Bearer {accessToken}
Content-Length: 3675

{
  "name": "{resourceName}",
  "type": "Microsoft.Web/certificates",
  "location": "{location}",
  "properties" : {
    "pfxBlob": {base64ByteArrayOfPfx},
    "password": "pass@word1"
   }
}

Replacement Parameters

subscriptionId – this is the subscription that the Resource Group (and its web site) is contained within

resourceGroupName – this is the name of the resource group

resourceName – this is what the friendly name of the certificate WILL be – this is a PUT request, but the resourceName must be on the Uri in addition to the json request body – and they must match

accessToken – this is the token obtained from the ADAL library call

location – for my sample, I used “East US” – which is the Azure Region. Note that not all Resource Providers are available or registered for your subscription in all regions. Review the response from the /providers REST call prior in this post to see what is available for each region, along with the ‘api-version’ that is supported.

base64ByteArrayOfPfx – this is the pfx file read as bytes, then converted to base64 (a sketch follows this list)

password – this is the password of the pfx file that was used during pfx creation
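Producing the pfxBlob value is a one-liner. A minimal sketch, using the pfx produced by pvk2pfx above:

using System;
using System.IO;

class PfxToBase64
{
    // Read the pfx and base64-encode it for the 'pfxBlob' property of the PUT body,
    // e.g. GetPfxBlob("wildcard.cicoriadevnet.pfx").
    static string GetPfxBlob(string path)
    {
        byte[] bytes = File.ReadAllBytes(path);
        return Convert.ToBase64String(bytes);
    }
}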

Response

The HTTP response code is a 200, with a content body that dumps out the certificate information. I've abbreviated most of the response in the following. Make note of the thumbprint if you haven't already, as this is what the assignment will use, along with the site name, to bind the SSL certificate to the web site.

{
    "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/certificates/{resourceName}",
    "name": "{resourceName}",
    "type": "Microsoft.Web/certificates",
    "location": "{location}",
    "properties": {
        "friendlyName": "",
        "subjectName": "*.cicoriadev.net",
        "hostNames": [
            "*.cicoriadev.net"
        ],
        "pfxBlob": null,
        "siteName": null,
        "selfLink": null,
        "issuer": "Development Test Authority",
        "issueDate": "2015-01-27T22:34:57+00:00",
        "expirationDate": "2039-12-31T23:59:59+00:00",
        "thumbprint": "DEA5DED6142EDECCDF952F4D431ED772F01D22D1",
    }
}

Assigning a Certificate via REST

For the assignment, we make use of the Resource Manager "Microsoft.Web/sites" provider.

Request

PUT https://management.azure.com/subscriptions/{subscriptionId}/
   resourceGroups/{resourceGroupName}/providers/
   Microsoft.Web/sites/{resourceName}?api-version=2014-11-01


Content-Type: application/json
Authorization: Bearer {accessToken}
Content-Length: 567

{
  "name": "{resourceName}",
  "type": "Microsoft.Web/sites",
  "location": "{location}",
  "properties" : {
    "hostNameSslStates": [
      {
        "name": "azw.cicoriadev.net",
        "sslState": 1,
        "thumbprint": "DEA5DED6142EDECCDF952F4D431ED772F01D22D1",
        "toUpdate": 1
      }
    ]
  }
}

Replacement Parameters

subscriptionId – this is the subscription that the Resource Group (and its web site) is contained within

resourceGroupName – this is the name of the resource group

resourceName – here this is the name of the web site (the Microsoft.Web/sites resource) being updated – again a PUT request, and the resourceName on the Uri must match the json request body

accessToken – this is the token obtained from the ADAL library call

location – for my sample, I used “East US” – which is the Azure Region. Note that not all Resource Providers are available or registered for your subscription in all regions. Review the response from the /providers REST call prior in this post to see what is available for each region, along with the ‘api-version’ that is supported.

Thumbprint – this is the thumbprint known for that certificate in Azure – it should always be the same as the local one, but if you have any issues assigning, this must match what Azure knows in /certificates.

Response

The Response should show you the chosen site DNS name with the thumbprint associated, similar to the following:

        "hostNameSslStates": [
            {
                "name": "azw.cicoriadev.net",
                "sslState": 1,
                "ipBasedSslResult": null,
                "virtualIP": null,
                "thumbprint": "DEA5DED6142EDECCDF952F4D431ED772F01D22D1",
                "toUpdate": null,
                "toUpdateIpBasedSsl": null,
                "ipBasedSslState": 0,
                "hostType": 0
            },

 

 

Sample Solution and Source Code

The source code is located on github: http://bit.ly/azrmsamples - or direct https://github.com/cicorias/AzureResourceManagerSamples

[1] Azure Resource Manager REST API Reference https://msdn.microsoft.com/en-us/library/azure/dn790568.aspx

[2] Listing All Resource Providers https://msdn.microsoft.com/en-us/library/azure/dn790524.aspx

[3] Active Directory Authentication Library for .NET – github https://github.com/AzureAD/azure-activedirectory-library-for-dotnet

Running ASP.NET 5 applications in Linux Containers with Docker

Ahmet (@ahmetalpbalkan) posted an official walkthrough on getting ASP.NET 5 running under Docker on Linux.  This takes you from a Docker client running on a Linux or OS X machine against a Docker image in Azure…

Take a look -

http://blogs.msdn.com/b/webdev/archive/2015/01/14/running-asp-net-5-applications-in-linux-containers-with-docker.aspx

PDF Search Handler fix

Adobe keeps breaking my PDF search.  WHY WHY WHY…

 

reg ADD HKCR\.pdf\PersistentHandler /d {1AA9BF05-9A97-48c1-BA28-D9DCE795E93C} /f

Running the AspNet vNext MVC sample direct from Docker

In the post Using the Docker client from Windows and getting AspNet vNext running in a Docker Container, you had to step through downloading GO, building the docker.exe, etc.

I've updated the GitHub repo, adding the hacked version of docker.exe along with its LICENSE.

And the whole thing has been published to the Docker hub registry.

So, all you need to do is run the following (assuming you have a Docker host running):

docker run -d -t -p 8080:5004 cicorias/dockermvcsample2
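(Here -d runs the container detached, -t allocates a TTY, and -p 8080:5004 maps host port 8080 to the container's port 5004 – the port kestrel listens on by default.)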

This will get you a running AspNet vNext on Linux and a Sample MVC app.

Note that the temporary workaround in this approach is to TAR all the files first – and use that archive in the Dockerfile.

https://github.com/cicorias/dockerMvcSample

https://registry.hub.docker.com/u/cicorias/dockermvcsample2/

Posted: 11-24-2014 8:51 AM by cicorias | with no comments
Using the Docker client from Windows and getting AspNet vNext running in a Docker Container

Update: 2015-01-15 – Note that Ahmet has posted an official Docker walkthrough for ASP.NET 5 http://blogs.msdn.com/b/webdev/archive/2015/01/14/running-asp-net-5-applications-in-linux-containers-with-docker.aspx

Update: 2014-11-24 – Added links to HOWTO build Docker on Windows from Ahmet.

As Docker progresses as a native application on Windows, and ASP.NET progresses toward running on Linux direct from Microsoft, I wanted to see how far I could get using what's out there today. While there are some challenges, there are a couple of simple steps you can use to get around the initial blockers.

There are known issues in the Docker Windows implementation [GitHub pull request 9113] – specifically the use of path separators: on Linux it's '/' and on Windows it's '\'. While GO has a constant for this, the Docker client and server are not handling this platform translation just yet. The trick is to just TAR up your directory first, then use the ADD Dockerfile command, which can handle TAR files natively.

The other key change is downgrading the VERSION number so the client matches the Boot2Docker version; I didn't see any API changes that would impact this other than the version number itself.

Here's an image of it running on a Docker host (running on Hyper-V on Windows 8.1).  Getting here was a bit challenging, but worth it.

github repo here: https://github.com/cicorias/dockerMvcSample

 

image

 

Here are the general steps that I followed:

Follow boot2docker on Hyper-V setup steps

The post here walks through getting Docker via Boot2Docker running in Hyper-V. Again, all you need is a Docker host, but if you want to be all Hyper-V, this is a way to do it.

Modify Docker client version ‘server 1.15’ (HACK)

Ahmet goes through the HOWTO on building the Docker client – here: https://ahmetalpbalkan.com/blog/compiling-docker-cli-on-windows/.

GO is from here: https://golang.org/

Follow the steps to install GO, then clone the Docker git repo – and make a small change to the version number so you'll be able to attach with the native client (which is built against the dev branch of Docker's GitHub repo; the Boot2Docker server is still at the prior version). See the comments in the pull request above, where some folks have indicated a similar approach.

C:\gopath\src\github.com\docker\docker\api\common.go
const (
	APIVERSION        version.Version = "1.15"

Build Docker client with GO

Once you have the docker.exe built, you can put it away safely and kill the repo if you want.

Turn off TLS if you like a simple command line

I turn off TLS for development.  See https://github.com/boot2docker/boot2docker/blob/master/README.md

"disable it by adding DOCKER_TLS=no to your /var/lib/boot2docker/profile file on the persistent partition inside the Boot2Docker virtual machine (use boot2docker ssh sudo vi /var/lib/boot2docker/profile)."

If you don't turn it off, you can use TLS – just copy the following files over to your Windows machine, then reference them from the 'docker' command line or set the environment variables.

If using TLS ‘steal’ the following files from your boot2docker host

The following files sit on the Docker host in /var/lib/boot2docker

  • cert.pem
  • key.pem
  • ca.pem


If you need to SSH into the Docker image:

ssh docker@192.168.1.165

Password: tcuser

 

Run docker client to verify access to your Docker host

Using the Docker client that you built from the GO source (and the hacked version number):

If you set an environment variable, you can avoid passing command line parms each time.

Note that the non-secure port is 2375 by default, and the secure port is 2376.

E:\gitrepos\dockerAspNet>set dock
DOCKER_HOST=tcp://192.168.1.165:2375

If you’re running via TLS, you can use the Certificate files that are located on the Server and mentioned above:

docker --tls --tlscert="e:\\temp\\docker\\cert.pem" --tlskey="e:\\temp\\docker\\key.pem" --tlscacert="e:\\temp\\docker\\ca.pem" ps

Getting ASP.NET vNext running

Now for the fun part.

First, grab (clone) the github repo at:

git clone https://github.com/aspnet/Home.git

Tar files into 1 archive

Then, in the ./samples/HelloMvc directory, use a tool (such as 7-Zip) to 'tar' up all the files so you have a 'HelloMvc.tar' file. This step is needed until the Docker client/daemon properly addresses file separator differences between Windows and Linux.

Create a ‘Dockerfile’ with the following:

FROM microsoft/aspnet
# copy the contents of the local directory to /app/ on the image
ADD HelloMvc.tar /app/

RUN ls -l
# set the working directory for subsequent commands
WORKDIR app
RUN ls -l
# fetch the NuGet dependencies for our application
RUN kpm restore
# set the working directory for subsequent commands
# expose TCP port 5004 from container
EXPOSE 5004
# Configure the image as an executable
# When the image starts it will execute the "k kestrel" command
# effectively starting our web application
# (listening on port 5004 by default)
ENTRYPOINT ["k", "kestrel"]

Once this is done the directory should look like this:

[Screenshot: directory contents showing the Dockerfile and HelloMvc.tar]

Build the Docker package

Now, from the root of the repo (./dockerAspNet/samples in my example) execute the following:

docker build -t myapp samples/HelloMvc

At this point, you should see ASP.NET and all the supporting dependencies fly by in the interactive build console. It will take a bit of time the first time, as it will pull the 'microsoft/aspnet' Docker image too. Once that is done, future builds will be faster, rebuilding just your package.

After a bit, you should see something like the following. 

[Screenshot: docker build output]

 

Startup the Container

Now we’re ready to start our MVC app on ASP.NET in our Docker Container on Linux!!!!

docker run -d -t -p 8080:5004 myapp

[Screenshot: docker run output]

Navigate to the IP address of your Linux instance:

As Martha Stewart would say – “It’s a good thing…”

[Screenshot: the sample MVC app running in the browser]

Posted: 11-23-2014 2:46 PM by cicorias | with no comments
Useful Machine Learning and HDInsight / Hadoop Links Posts and Information

Updates:

  • Initial Post: 2014-11-17

As many ramp up on Microsoft Azure Machine Learning, I wanted to start keeping a succinct list of the articles, blogs, videos, and posts that have proven helpful in conveying the essence of the general practice of Machine Learning, as well as its implementation within Microsoft Azure.

Machine Learning Center

http://azure.microsoft.com/en-us/documentation/services/machine-learning/

R Programming

R for Beginners by Emmanuel Paradis

http://cran.r-project.org/doc/contrib/Paradis-rdebuts_en.pdf

Introductory Statistics with R (Statistics and Computing), Peter Dalgaard

http://amzn.com/0387954759 

R Succinctly, Barton Poulson, Syncfusion
http://bit.ly/1pzxbJi


An Introduction to Statistical Learning, with Applications in R, Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani

http://www-bcf.usc.edu/~gareth/ISL/

Papers

Analyzing Customer Churn using Microsoft Azure Machine Learning

http://azure.microsoft.com/en-us/documentation/articles/machine-learning-azure-ml-customer-churn-scenario/

Tutorials

Develop a predictive solution with Azure Machine Learning

http://azure.microsoft.com/en-us/documentation/articles/machine-learning-walkthrough-develop-predictive-solution/

Create a simple experiment in Azure Machine Learning Studio

http://azure.microsoft.com/en-us/documentation/articles/machine-learning-create-experiment/

Videos

Instructional Azure Machine Learning videos

http://azure.microsoft.com/en-us/documentation/videos/index/?services=machine-learning

Tools / Scripts

From the script's help: creates an HDInsight cluster configured with one storage account and default metastores. If the storage account or container are not specified, they are created automatically under the same name as the one provided for the cluster. If ClusterSize is not specified, it defaults to a small cluster with 2 nodes. The user is prompted for credentials to use to provision the cluster.

During the provisioning operation, which usually takes around 15 minutes, the script monitors status and reports when the cluster is transitioning through the provisioning states.

https://github.com/Azure/azure-sdk-tools-samples/blob/master/solutions/big-data/New-HDInsightCluster.ps1

Blog Posts

Benjamin Guinebertière (from Microsoft France) has a great blog that covers quite a few scenarios that many encounter when ramping up on and using Microsoft Azure Machine Learning.

http://blogs.msdn.com/b/benjguin/

Azure Automation: What is running on my subscriptions - Benjamin Guinebertière

Remember you pay for what you use; ensure you keep track of these in-use clusters. In fact, the goal is to provision only when needed. Take a look at Kerrb for a commercial option to help you manage your spend: http://www.kerrb.com/

http://blogs.msdn.com/b/benjguin/archive/2014/07/24/azure-automation-what-is-running-on-my-subscriptions.aspx

Sample code: create an HDInsight cluster, run job, remove the cluster - Benjamin Guinebertière

Again, we want to keep our data in Blobs (or other persistence) then hydrate the cluster, process, save off our results, then kill the cluster.

http://blogs.msdn.com/b/benjguin/archive/2014/07/24/sample-code-create-an-hdinsight-cluster-run-job-remove-the-cluster.aspx

How to upload an R package to Azure Machine Learning - Benjamin Guinebertière

Adding R scripts and packages can be achieved through this method.

http://blogs.msdn.com/b/benjguin/archive/2014/09/24/how-to-upload-an-r-package-to-azure-machine-learning.aspx

How to retrieve R data visualization from Azure Machine Learning - Benjamin Guinebertière

R is a great point of extensibility. Here we see how to visualize the R output (images) that could be run as part of your R script.

http://blogs.msdn.com/b/benjguin/archive/2014/10/24/how-to-retrieve-r-data-visualization-from-azure-machine-learning.aspx

Carl Nolan’s blog is also a great resource – much more than just ramblings: http://blogs.msdn.com/b/carlnol/

Managing Your HDInsight Cluster using PowerShell – Update - Carl Nolan

http://blogs.msdn.com/b/carlnol/archive/2013/12/16/managing-your-hdinsight-cluster-using-powershell-update.aspx

Managing Your HDInsight Cluster and .Net Job Submissions using PowerShell - Carl Nolan

http://blogs.msdn.com/b/carlnol/archive/2013/12/02/managing-your-hdinsight-cluster-and-net-job-submissions.aspx

Hadoop .Net HDFS File Access – Carl Nolan

http://blogs.msdn.com/b/carlnol/archive/2013/02/08/hdinsight-net-hdfs-file-access.aspx

Books

There is a book on Azure ML due out this week (2014-11-19)

Predictive Analytics with Microsoft Azure Machine Learning: Build and Deploy Actionable Solutions in Minutes, Valentine Fontama, Roger Barga, Wee Hyong Tok, ISBN-13: 978-1484204467 ISBN-10: 1484204468 Edition: 1st

http://amzn.com/1484204468

FAQ

Microsoft Azure Machine Learning Frequently Asked Questions (FAQ)

http://azure.microsoft.com/en-us/documentation/articles/machine-learning-faq/

Pricing

Machine Learning Preview Pricing Details

http://azure.microsoft.com/en-us/pricing/details/machine-learning/

Data Factory

http://azure.microsoft.com/en-us/documentation/articles/data-factory-introduction/

SharePoint 2013 Fixing WCAG F38 Failure–Images without ALT tags–using a Control Adapter–Display Templates

The WCAG (Web Content Accessibility Guidelines) provide a baseline for accessibility standards so various tools, such as screen readers, can provide a reasonable experience for those with accessibility challenges.

With regards to images, the guideline provides that all image tags <img…> should probably (I say probably here for various reasons) have an ALT attribute.

In the case of filler or "decorative" images that aren't representative of content, according to F38 they should have an empty ALT attribute – thus alt="".

F38: Failure of Success Criterion 1.1.1 due to not marking up decorative images in HTML in a way that allows assistive technology to ignore them

The above reference specifically states for validation:

Tests

Procedure

For any img element that is used for purely decorative content:

  1. Check whether the element has no role attribute or has a role attribute value that is not "presentation".

  2. Check whether the element has no alt attribute or has an alt attribute with a value that is not null.

Expected Results
  • If step #1 is true and if step #2 is true, this failure condition applies and content fails the Success Criterion.

 

How this Applies to SharePoint 2013

In SharePoint 2013, if using Display Templates, the generation of the master page is done by the Design Manager “parts”.

Inside the HTML version of the master pages, you will see the following:

    <body>
        <!--SPM:<SharePoint:ImageLink runat="server"  />-->

This translates to just using the ImageLink SharePoint web control, which will emit the following:

        <div id="imgPrefetch" style="display:none">
<img src="/_layouts/15/images/favicon.ico?rev=23" />
<img src="/_layouts/15/images/spcommon.png?rev=23" />
<img src="/_layouts/15/images/spcommon.png?rev=23" />
<img src="/_layouts/15/images/siteIcon.png?rev=23" />

So, we need to “add” an alt=”” tag to this “block” of HTML.

To do this, we can utilize a ControlAdapter – a Web Forms concept that allows interception of the control at render time. In the past, ControlAdapters were used in SharePoint 2007 to rewrite HTML tables to more CSS-friendly versions – ultimately, at the time, to help with WCAG needs.

ControlAdapter

ControlAdapter on MSDN

The main part of the control adapter that does this re-rendering is the Render override.  Below are the primary methods that do this rendering and fix-up of the IMG tags:

 

using System;
using System.IO;
using System.Text.RegularExpressions;
using System.Web.UI;
using System.Web.UI.Adapters;

namespace ImageLinkControlAdapter.Code
{
    public class ImageLinkAdapter : ControlAdapter
    {
        protected override void Render(System.Web.UI.HtmlTextWriter writer)
        {
            /// First we get the control's planned HTML that is emitted...
            using (StringWriter baseStringWriter = new StringWriter())
            using (HtmlTextWriter baseWriter = new HtmlTextWriter(baseStringWriter))
            {
                base.Render(baseWriter);
                baseWriter.Flush();
                baseWriter.Close();
                /// Now we have an HTML element...
                string baseHtml = baseStringWriter.ToString();
                /// now fixit up...
                writer.Write(RebuildImgTag(baseHtml));
            }
        }


        internal string RebuildImgTag(string existingTagHtml)
        {
            var pattern = @"<img\s[^>]*>";
            var rv = Regex.Replace(existingTagHtml, pattern, this.InsertAlt);
            
            return rv;

        }

        internal string InsertAlt(Match match)
        {
            return this.InsertAlt(match.ToString());
        }

        internal string InsertAlt(string existingTag)
        {
            if (!existingTag.StartsWith("<img", StringComparison.InvariantCultureIgnoreCase))
                return existingTag;

            if (existingTag.Contains("alt=", StringComparison.InvariantCultureIgnoreCase))
                return existingTag;

            var insertPoint = existingTag.IndexOf("/>");
            if (insertPoint < 0)
                return existingTag; // no self-closing terminator found; leave the tag alone
            var rv = existingTag.Insert(insertPoint, "alt=\"\"");
            return rv;
        }

    }

    internal static class StringExtensions
    {
        public static bool Contains(this string source, string toCheck, StringComparison comp)
        {
            return source.IndexOf(toCheck, comp) >= 0;
        }
    }
}
 

Finally, the full Visual Studio 2013 Solution and source is located here:

https://github.com/cicorias/SharePoint-ImageLink-WCAG-Control-Adapter

As a bonus, there’s a Feature Receiver that will deploy the *.browser file to the Web Application’s App_Browsers directory as well.
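For reference, the *.browser registration looks roughly like this – the SharePoint control and assembly details shown are the standard SharePoint 2013 ones, and the adapter assembly name is assumed from the sample's namespace; verify both against your build:

<browsers>
  <browser refID="Default">
    <controlAdapters>
      <!-- Map the SharePoint ImageLink control to the adapter above;
           adapterType assembly name is assumed from the sample's namespace. -->
      <adapter controlType="Microsoft.SharePoint.WebControls.ImageLink, Microsoft.SharePoint, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"
               adapterType="ImageLinkControlAdapter.Code.ImageLinkAdapter, ImageLinkControlAdapter" />
    </controlAdapters>
  </browser>
</browsers>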

 

Links

http://www.w3.org/WAI/intro/wcag

https://github.com/cicorias/SharePoint-ImageLink-WCAG-Control-Adapter

Why you should never say “Turn ON Intranet Settings” in Internet Explorer IE

I recently checked into a hotel, connected to their guest wireless, and started noticing odd things with some websites.

UPDATE: corrected title to conform to the message – thanks Mark…

If you’ve ever seen the following:

NEVER say “Turn on Intranet Settings”.

In my case, the hotel's wireless (specifically, their DHCP server) was returning a WPAD (browser proxy autoconfiguration) script with the following:

function FindProxyForURL(url, host)
{
  return "DIRECT";
}

For IE, that means ALL sites will be mapped to the Intranet zone automatically IF you've "Turned ON Intranet Settings".  This is bad, bad, bad.

That means IE runs in Unprotected Mode for ALL internet sites.

If you responded "incorrectly", you can reset it to automatic as follows:

[Screenshot: IE Local intranet zone settings dialog]

Finally, if you want to see the warning where IE WOULD HAVE mapped the zone to Intranet, you can turn the warning back on via regedit:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings]
"WarnOnIntranet"=dword:00000001

See http://blogs.msdn.com/b/ieinternals/archive/2012/06/05/the-local-intranet-security-zone.aspx  for more information
