Shawn Cicoria - CedarLogic

Perspectives and Observations on Technology

Making Windows Azure Drive Letter Persistent

Windows Azure Fieldnote

Summary

Windows Azure Drives [1] provide a file-based (disk drive) persistent storage option for the various role types within Windows Azure Compute. Each role within Windows Azure can mount a drive and use it for persistent storage that survives reboot, reimaging, and updated deployments of a role instance.

During the mounting of a VHD as a CloudDrive, the managed classes provided through the Windows Azure SDK offer no means to control the drive letter assignment directly.

Problem

Many solutions today require standard Windows File IO based access. Rather than refactoring those solutions to leverage the storage options available in the PaaS part of the Windows Azure platform, solutions deployed to Windows Azure can mount a Virtual Hard Disk (VHD), persisted in a storage account, inside of a running instance. That Page Blob backed VHD is then surfaced through Virtual Disk Services and the Windows Azure Cloud Drive services to the running instance as a disk drive, addressable through File IO using a drive letter.

While a persistent drive option is available, the drive letter assignment is determined at runtime during the mounting process. This potentially presents a problem for existing solutions, codebases, and libraries that require a setting to be established prior to runtime – for example, an application configuration setting that provides a full path, including the drive letter, to a location for read/write File IO access.
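As an illustration, such a setting might look like the following (the key name and path here are hypothetical):

```xml
<appSettings>
  <!-- hypothetical: a legacy component expects this exact, pre-configured path -->
  <add key="ReportOutputPath" value="M:\Reports\Output" />
</appSettings>
```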

Solution

The following solution takes advantage of Virtual Disk Services through the DiskPart.exe operating system utility to first identify which drive letter the VHD is mounted as, select that volume, and re-assign it to the target drive letter.

The original idea for the approach comes from this blog post here: http://techyfreak.blogspot.com/2011/02/changing-drive-letter-of-azure-drive.html

While there is a COM interface available that could be wrapped via an interop layer, for simplicity the choice was made to launch a process to perform the drive letter remapping. Additionally, while there is an existing managed interop assembly available (Microsoft.Storage.Vds), it is undocumented and unsupported.

The example scenario presented does the following:

1. Leverages a Windows Azure Web Role (could be a Worker Role or VM Role as well)

2. Implements a Windows Console application that:

a. Is a Startup task – in elevated mode and background

b. Runs elevated in order to affect Virtual Disk Services

c. At startup:

    • Mounts the VHD from Windows Azure Storage
    • Detects whether the VHD is on the target drive letter and re-assigns it as needed **

d. Then Continuously (every 30 seconds)

    • Checks if the drive is mounted on the target drive letter
    • If not, reassigns the drive letter **

** Drive letter reassignment is done through a System.Diagnostics.Process object that runs DiskPart.exe with a "select volume" and "assign letter" command sequence.
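For example, assuming the VHD initially comes up as F: and the target letter is M:, the script handed to DiskPart.exe /s would contain:

```
select volume = f
assign letter = m
```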

Implementation

The sample solution contains the following:

1. Windows Azure Web Role – simple MVC3 application that just lists the mapped CloudDrives using the CloudDrive.GetMountedDrives() method

2. CloudDriveManager class library – helper class that provides the CloudDrive management actions leveraged by the caller (either Console or other code)

3. CloudDriveManagerConsole – Windows console application intended to be a startup project, running in elevated mode in order to affect the assigned drive letter

4. CloudDriveManagerRole – implementation of Microsoft.WindowsAzure.ServiceRuntime.RoleEntryPoint – which allows this class to be used from within a Windows Azure Web or Worker role – however, that role entry point would need to be elevated (via the “Runtime” and “NetFxEntryPoint” Elements)

5. Logger – simple logger class that writes to a Queue for debugging purposes

6. ResponseViewer – simple WPF application that reads Queue messages so you can view log messages from your cloud instances – purely for debugging purposes

7. TestListDrives – simple Windows console application that lists the mapped CloudDrives – usable from within the Role instance by using Remote Desktop and connecting to the instance

Instance Initialization

During role startup, Windows Azure executes the Task defined in the service definition in background mode and elevated (running as SYSTEM). Inside the console application, the implementation of OnStart does the following:

public override bool OnStart()
{
    try
    {
        Initialize();
        MountAllDrives();
    }
    catch (Exception ex)
    {
        _logger.Log("fail on onstart", ex);
    }
    return true;
}

void MountAllDrives()
{
    try
    {
        var driveSettings = RoleEnvironment.GetConfigurationSettingValue(DRIVE_SETTINGS);
        string[] settings = driveSettings.Split(':');
        CloudStorageAccount account = CloudStorageAccount.FromConfigurationSetting(STORAGE_ACCOUNT_SETTING);
        string dCacheName = RoleEnvironment.GetConfigurationSettingValue(DCACHE_NAME);
        LocalResource cache = RoleEnvironment.GetLocalResource(dCacheName);
        int cacheSize = cache.MaximumSizeInMegabytes / 2;
        _cloudDriveManager = new CloudDriveManager(account, settings[0], settings[1][0], cache);
        _cloudDriveManager.CreateDrive();
        _cloudDriveManager.Mount();
    }
    catch (Exception ex)
    {
        _logger.Log("fail on mountalldrives", ex);
        throw;
    }
}
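For reference, the startup task hosting this console application is declared in ServiceDefinition.csdef roughly as follows (the role and executable names are assumptions based on the sample project names):

```xml
<WebRole name="WebRole1">
  <Startup>
    <!-- elevated: required to drive Virtual Disk Services via diskpart.exe -->
    <!-- background: lets the role continue starting while the task keeps running -->
    <Task commandLine="CloudDriveManagerConsole.exe"
          executionContext="elevated"
          taskType="background" />
  </Startup>
</WebRole>
```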

 

Mostly, the startup routine calls into the custom class CloudDriveManager, which provides a simple abstraction over the Windows Azure CloudDrive managed class.

The custom CreateDrive method calls the CloudDrive create drive method in a non-destructive manner – and, for this sample, creates the initial VHD in storage if it does not already exist.

Mounting calls the managed classes CloudDrive.Mount along with calling into a custom VerifyDriveLetter method.

public void Mount()
{
    _logger.Log(string.Format("mounting drive {0}", _vhdName));
    _cloudDrive = _account.CreateCloudDrive(_vhdName);

    var driveLetter = _cloudDrive.Mount(_cacheSize, DriveMountOptions.Force);
    _logger.Log(string.Format("mounted drive letter {0}", driveLetter));

    var remounted = VerifyDriveLetter();
}

 

Within VerifyDriveLetter there's some logic to validate the current state of the mounted drives, and then verify whether the mounted drive is on the intended drive letter.

public bool VerifyDriveLetter()
{
    _logger.Log("verifying drive letter");
    bool rv = false;
    if (RoleEnvironment.IsEmulated)
    {
        _logger.Log("Can't change drive letter in emulator");
        //return;
    }

    try
    {
        // guard first: an empty LocalPath means the drive never mounted
        if (string.IsNullOrEmpty(_cloudDrive.LocalPath))
        {
            _logger.Log("verifydriveLetter: Not Mounted?");
            throw new InvalidOperationException("drive is not mounted");
        }

        if (!char.IsLetter(_cloudDrive.LocalPath[0]))
        {
            _logger.Log("verifydriveLetter: Not a letter?");
            throw new InvalidOperationException("verifydriveletter - not a letter?");
        }

        if (IsSameDrive())
        {
            _logger.Log("is same drive; no need to diskpart...");
            return true;
        }

        char mountedDriveLetter = CurrentLocalDrive(_vhdName);
        RunDiskPart(_driveLetter, mountedDriveLetter);

        if (!IsSameDrive())
        {
            var msg = "Drive change failed to change";
            _logger.Log(msg);
            throw new ApplicationException(msg);
        }
        else
        {
            // remount so CloudDrive.LocalPath reflects the new letter
            Mount();
            rv = true;
        }

        _logger.Log("verifydriveletter done!!");
        return rv;
    }
    catch (Exception ex)
    {
        _logger.Log("error verifydriveletter", ex);
        return rv;
    }
}

 

The IsSameDrive method validates whether the currently mapped drive is indeed the planned drive letter. If not, it returns false.

bool IsSameDrive()
{
    char targetDrive = _driveLetter.ToString().ToLower()[0];
    char currentDrive = CurrentLocalDrive(_vhdName);

    string msg = string.Format(
        "target drive: {0} - current drive: {1}",
        targetDrive,
        currentDrive);

    _logger.Log(msg);

    if (targetDrive == currentDrive)
    {
        _logger.Log("verifydriveLetter: already same drive");
        return true;
    }
    else
        return false;

}

 

Finally, the RunDiskPart method spawns a new process with a dynamically created DiskPart script file that selects the existing volume (by drive letter) and assigns the target drive letter.

void RunDiskPart(char destinationDriveLetter, char mountedDriveLetter)
{
    string diskpartFile = Path.Combine(_cache.RootPath, "diskpart.txt");

    if (File.Exists(diskpartFile))
    {
        File.Delete(diskpartFile);
    }

    string cmd = "select volume = " + mountedDriveLetter + "\r\n" + "assign letter = " + destinationDriveLetter;
    File.WriteAllText(diskpartFile, cmd);

    //start the process
    _logger.Log("running diskpart now!!!!");
    _logger.Log("using " + cmd);
    using (Process changeletter = new Process())
    {
        changeletter.StartInfo.Arguments = "/s " + diskpartFile;
        changeletter.StartInfo.FileName =
            Environment.GetEnvironmentVariable("WINDIR") + @"\System32\diskpart.exe";
        //#if !DEBUG
        changeletter.Start();
        changeletter.WaitForExit();
        //#endif
    }

    File.Delete(diskpartFile);
}

Output and Results

As an example of the interaction and how the drive appears within the running Windows Azure Role, the following screen shots illustrate the results.

Program Startup

At program startup, the drive is initially mounted by the console application – it immediately comes up as the F: drive. The startup code verifies whether this is the intended drive; as shown below in the logs, it isn't, so the code initiates the RunDiskPart method, setting M: as the mapped drive.

[screenshot]

 

The following shows, in Windows Explorer, how a Windows Azure Drive appears to the operating system after the custom code reassigns the drive letter – the drive is selected below.

[screenshot]

 

Within the custom MVC3 application (which simply lists the mounted Windows Azure drives, and runs in a separate, non-elevated process), the drive appears as a regular operating system drive – accessible for File IO as required using the intended drive letter.

[screenshot]

Forced Letter Change

The following shows what happens if the drive letter is intentionally changed – in this example, I just initiate a DiskPart set of commands to assign the mounted drive the letter L:

[screenshot]

As you can see in the Windows Explorer window the letter now appears as L: for the WindowsAzureDrive.

Within approximately 30 seconds (the interval used in the Run method by the custom code), VerifyDriveLetter detects it's not on the intended drive letter and initiates a change.

[screenshot]
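The periodic check lives in the RoleEntryPoint's Run method; a minimal sketch (assuming the _cloudDriveManager and _logger fields shown in the earlier snippets):

```csharp
public override void Run()
{
    while (true)
    {
        try
        {
            // re-checks the mapping and re-runs DiskPart if the letter drifted
            _cloudDriveManager.VerifyDriveLetter();
        }
        catch (Exception ex)
        {
            _logger.Log("fail in run loop", ex);
        }
        Thread.Sleep(TimeSpan.FromSeconds(30));
    }
}
```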

 

And the below image shows the drive again, appearing as the M: drive:

[screenshot]

 

Future Options

Since capabilities in the Windows Azure platform change over time, the ability to dictate the specific drive letter may become available. Until then, this approach – using the Windows Azure Drive and the Virtual Disk Services abstraction provided by the platform – offers a means to accommodate codebases and application logic dependent upon predetermined drive letters.

References

[1] Windows Azure Drives http://www.windowsazure.com/en-us/develop/net/fundamentals/cloud-storage/#drives

[2] Virtual Disk Service http://msdn.microsoft.com/en-us/library/windows/desktop/bb986750(v=vs.85).aspx

[3] CloudDrive Storage Client http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.storageclient.clouddrive.aspx

[4] Diskpart.exe http://technet.microsoft.com/en-us/library/cc770877(v=WS.10).aspx

[5] Task element http://msdn.microsoft.com/en-us/library/windowsazure/gg557552.aspx#Task

[6] Runtime element http://msdn.microsoft.com/en-us/library/windowsazure/gg557552.aspx#Runtime

[7] NetFxEntryPoint element http://msdn.microsoft.com/en-us/library/windowsazure/gg557552.aspx#NetFxEntryPoint

 

Solution File: MountXDriveSameLetter.zip

Viewing the User Token from Visual Studio 2010 Debugger

When you’re debugging security related things, sometimes you need to take a look at the thread identities user token.

When you're inside of Visual Studio 2010, enter '$user' in the Watch window and you'll get the same output as windbg's !token -n.

 

[screenshot]

Microsoft TechNet–Create PDF Takeaway chapters for your set of topics–great feature just added..

If you're like me, having those PDF versions for offline review is great. It was a pain before, as I had to individually print web pages to single PDFs using tools.

Now, TechNet can track a "book" of topics for you, and then generate HTML or PDF for you to download – personal publishing!

Roll-your-own techdocs for free - TONYSO - Site Home - TechNet Blogs

Posted: 11-30-2011 7:25 AM by cicorias | with no comments
Dennis Ritchie, Father of C and Co-Developer of Unix, Dies | Wired Enterprise | Wired.com

Wow – I still have my K&R book from a class I took at AT&T. Cut my teeth on *nix…

Dennis Ritchie, Father of C and Co-Developer of Unix, Dies | Wired Enterprise | Wired.com

Description of Update Rollup 1 for Active Directory Federation Services (AD FS) 2.0

Multiple UPN support now available…

Description of Update Rollup 1 for Active Directory Federation Services (AD FS) 2.0

Posted: 10-13-2011 5:57 AM by cicorias | with no comments
Additional Mime Types in Visual Studio 2010 Development Web Server

While the development server in Visual Studio 2010 is great for most work, it does have one shortcoming: if you start adding content types that are not part of its built-in set of known MIME types, you won't get the proper Content-Type header in the response emitted to the client/browser.

For example, MP4 files: out of the box, the development web server emits application/octet-stream or something like that. What we really need is video/mp4.

Now, with IIS Express, you can easily switch over to use that and just add the correct mapping to the <staticContent> section of the web.config when you're running in integrated mode, such as follows:

<system.webServer>
  <modules runAllManagedModulesForAllRequests="true" />
  <staticContent>
    <mimeMap fileExtension=".mp4" mimeType="video/mp4" />
    <mimeMap fileExtension=".m4v" mimeType="video/m4v" />
  </staticContent>
</system.webServer>

 

However, with the Visual Studio 2010 built-in web development server, you can't affect the MIME type support through configuration.

For this, a simple NuGet package is available that provides a simple HttpModule to set the ContentType on the response headers. It reads the web.config for the site and honors the section above – this all happens only when NOT running in integrated pipeline mode.

[screenshot]

[screenshot]

Sample Solution and Source here: SampleMimeHelper.zip

The HttpModule is loaded dynamically via the PreApplicationStartMethod attribute and the DynamicModuleUtility helper from the Microsoft.Web.Infrastructure.DynamicModuleHelper namespace.

 

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Diagnostics;
using System.Configuration;
using System.Web.Configuration;
using System.Xml.Linq;
using Microsoft.Web.Infrastructure.DynamicModuleHelper;

[assembly: PreApplicationStartMethod(typeof(MimeHelper), "Start")]

/// <summary>
/// Summary description for MimeHelper
/// </summary>
public class MimeHelper : IHttpModule
{
    static Dictionary<string, string> s_mimeMappings;
    static object s_lockObject = new object();

    public static void Start()
    {
        if ( ! HttpRuntime.UsingIntegratedPipeline)
            DynamicModuleUtility.RegisterModule(typeof(MimeHelper));
    }

    static string GetMimeType(HttpContext context)
    {
        var ext = VirtualPathUtility.GetExtension(context.Request.Url.ToString());
        if (string.IsNullOrEmpty(ext)) return null;

        CreateMapping(context.ApplicationInstance);

        string mimeType = null;
        s_mimeMappings.TryGetValue(ext, out mimeType);

        return mimeType;

    }

    static void CreateMapping(HttpApplication app)
    {
        if (null == s_mimeMappings)
        {
            lock (s_lockObject)
            {
                if (null == s_mimeMappings)
                {
                    string path = app.Server.MapPath("~/web.config");
                    XDocument doc = XDocument.Load(path);

                    var s = from v in doc.Descendants("system.webServer").Descendants("staticContent").Descendants("mimeMap")
                            select new { mimeType = v.Attribute("mimeType").Value, fileExt = v.Attribute("fileExtension").Value };

                    s_mimeMappings = new Dictionary<string, string>();
                    foreach (var item in s)
                    {
                        s_mimeMappings.Add(item.fileExt.ToString(), item.mimeType.ToString());
                    }
                }
            }
        }
    }


    public void Dispose() { }

    public void Init(HttpApplication context)
    {
        context.EndRequest += new EventHandler(context_EndRequest);
    }

    void context_EndRequest(object sender, EventArgs e)
    {
        try
        {
            HttpApplication app = sender as HttpApplication;
            string mimeType = GetMimeType(app.Context);

            if (null == mimeType) return;

            app.Context.Response.ContentType = mimeType;
        }
        catch (Exception ex)
        {
            Debug.WriteLine(ex.Message);
        }
    }
}
Posted: 10-06-2011 3:38 PM by cicorias | with no comments
Faking SPContext–for testing only…

Keith Dahlby has a good post on creating a fake SPContext.  Here’s the link and the code

NOTE: This is not production safe code – use at own risk…

http://solutionizing.net/2009/02/16/faking-spcontext/

public static SPContext FakeSPContext(SPWeb web)
{
  // Ensure HttpContext.Current
  if (HttpContext.Current == null)
  {
    HttpRequest request = new HttpRequest("", web.Url, "");
    HttpContext.Current = new HttpContext(request,
      new HttpResponse(TextWriter.Null));
  }

  // SPContext is based on SPControl.GetContextWeb(), which looks here
  if(HttpContext.Current.Items["HttpHandlerSPWeb"] == null)
    HttpContext.Current.Items["HttpHandlerSPWeb"] = web;

  return SPContext.Current;
}
Posted: 09-21-2011 2:54 PM by cicorias | with no comments
Use an Action delegate to time a method…

I wanted the ability to simply time methods and write to a log/trace sink. A very simple approach that I ended up using was to provide a method that takes an Action delegate wrapping the method body to be timed.

The following is what I came up with (this is my reminder…)

class Program
{
    static void Main(string[] args)
    {
        TestMethod1();
    }

    private static void TestMethod1()
    {
        LoggingHelper.TimeThis("doing something", () =>
        {
            Console.WriteLine("This is the Real Method Body");
            Thread.Sleep(100);
        });
    }
}

public static class LoggingHelper
{
    public static void TimeThis(string message, Action action)
    {
        string methodUnderTimer = GetMethodCalled(1);
        Stopwatch sw = Stopwatch.StartNew();
        LogMessage( string.Format("started: {0} : {1}", methodUnderTimer, message));
        action();
        sw.Stop();
        LogMessage(string.Format("ended  : {0} : {1} : elapsed : {2}", methodUnderTimer, message, sw.Elapsed));

    }

    private static string GetMethodCalled(int stackLevel)
    {
        StackTrace stackTrace = new StackTrace();
        StackFrame stackFrame = stackTrace.GetFrame(stackLevel + 1);
        MethodBase methodBase = stackFrame.GetMethod();
        return methodBase.Name;
    }

    static void LogMessage(string message){
        Console.WriteLine("{0}", message);
    }

}
Posted: 09-21-2011 12:15 PM by cicorias | with no comments
Comparison of Windows Azure Storage Queues and Service Bus Queues « Microsoft Technologies Rocks !!!

Nice table comparing Windows Azure Queues vs. the Windows Azure AppFabric Service Bus – note the comment that, as of WAZ SDK 1.5, the Queue message size is now 64KB.

Of course, I like the name of the blog too.

Comparison of Windows Azure Storage Queues and Service Bus Queues « Microsoft Technologies Rocks !!!

Posted: 09-20-2011 5:06 AM by cicorias | with no comments
MiniProfiler– A simple but effective mini-profiler for ASP.NET MVC and ASP.NET.

Once in a while I find out about a good tool that helps me develop solutions and comes in real handy. MiniProfiler is one of those tools.

Developed by the StackOverflow folks, it's available in source or binary form, and as NuGet packages.

Take a look

http://code.google.com/p/mvc-mini-profiler/

http://nuget.org/List/Packages/MiniProfiler

Posted: 09-18-2011 6:34 AM by cicorias | with no comments
Slides for BUILD conference…

On the Channel 9 site where the BUILD conference sessions are available, there are several feeds that provide the media associated with the sessions.

One that’s not listed explicitly is the PowerPoint slides – that feed is here:

http://channel9.msdn.com/Events/BUILD/BUILD2011/RSS/slides

Posted: 09-18-2011 5:44 AM by cicorias | with no comments
Building scalable web applications with Windows Azure (ed. and on premise too!)

Matthew Kerner's session at BUILD covers many of the patterns and approaches that a well designed and highly scalable solution can employ to make the most efficient use of the platform.

Truth is, many of the areas Matthew covers apply on premise too – including use of the Windows Azure CDN...

At about ~30:00 in, Matthew references one of my posts on the Windows Azure CDN and using it with your compute role (hosted service) as a CDN origin…

Building scalable web apps with Windows Azure
http://channel9.msdn.com/events/BUILD/BUILD2011/SAC-870T

Bringing Hyper-V to “Windows 8” - Building Windows 8 - Site Home - MSDN Blogs

This is huge – and a welcome addition. Been waiting too long for this.

Bringing Hyper-V to “Windows 8” - Building Windows 8 - Site Home - MSDN Blogs

Hosted Service as a Windows Azure CDN Origin Tips

The Windows Azure Content Delivery Network (CDN) helps improve the solution experience by putting content closer to the end-user: it enhances availability, geo-distribution, and scalability, with lower-latency delivery and better performance. If that's the goal, we want to be sure that when we instantiate the source of this content at the origin, it's as CDN friendly as we need.

In Windows Azure, when you're running under IIS 7.x/ASP.NET, you have to be aware of the inherent behavior associated with output caching, as it is part of the standard deployment of IIS 7.x.

Some of that inherent behavior affects how cache-friendly your content (HTTP response) will be, as the CDN directly consumes your Hosted Service endpoint ( http[s]://yourservice:80|443/cdn ) on behalf of your users.

If you don't understand how your solution emits these HTTP headers, you can end up with NO caching – defeating the purpose of the CDN (in fact making performance worse) – and incur additional costs.

The areas we’ll briefly take a look at here are:

  • Working with the ASP.NET OutputCache module for CDN Friendly HTTP Headers
  • Vary:* Headers
  • Compressed content with the CDN
  • Use of IIS Virtual Application / Directories under Windows Azure
  • Provide your own OutputCache module implementation

 

Working with the ASP.NET OutputCache Module

Default behavior

The following code is an example of what developers generally provide, anticipating that the HTTP headers – specifically the Cache-Control header – will be emitted in a CDN friendly way, or friendly to any cache for that matter.

using (var image = ImageUtil.RenderImage(…))
{
    context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(Constants.MA));
    context.Response.Cache.SetCacheability(HttpCacheability.Public);
    context.Response.ContentType = "image/jpeg";
    image.Save(context.Response.OutputStream, ImageFormat.Jpeg);
    context.Response.OutputStream.Flush();
}

 

Under ASP.NET 3.5/4.x, this will result in the following

[screenshot]

 

---request begin---
GET /image/0.jpg HTTP/1.0
User-Agent: Wget/1.11.4
Accept: */*
Host: az30993.vo.msecnd.net
Connection: Keep-Alive
---response begin---
HTTP/1.0 200 OK
Cache-Control: public
Content-Type: image/jpeg
Server: Microsoft-IIS/7.5
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Fri, 08 Jul 2011 11:26:01 GMT
Content-Length: 6976
X-Cache: MISS from cds168.ewr9.msecn.net
Connection: keep-alive

With that set of headers, you will encounter a cache MISS on every request, with a read-through to the Hosted Service origin. You might not notice the impact right away, as responses can get picked up by the OutputCache module – but you've defeated the purpose of the CDN and made request performance worse.

The sample solution with this post provides a set of test scenarios for manipulating the HttpResponse under a standard IHttpHandler and under MVC3. If you take a look at the code you’ll see that 3 things are done to help diagnose the situation.

  1. Request Logger – this is a simple request logger that captures requests for the purposes of providing a simple view against the incoming requests (could have used IIS logs, but this is a simple way to get the requests I’m interested in and display them)
  2. Kernel caching is disabled via the web.config – with this enabled requests won’t make it into your ASP.NET pipeline when it’s a cache hit – giving you a false positive on understanding if and when CDN requests are “leaking” through and not being cached at the CDN
  3. OOB OutputCache module is removed, then re-added – this ensures it’s lower in the module list at request time allowing the Request Logger to be higher up in the call chain so I’m sure to capture the inbound requests – if they’re cached or not in the OutputCache module

 

Set SlidingExpiration on Response

The easiest fix is to ensure you set SlidingExpiration to true on the response. This ensures that the Cache-Control header will contain your desired "public, max-age=xxxx".

public void ProcessRequest(HttpContext context)
{
    using (var image = ImageUtil.RenderImage(…))
    {
       context.Response.Cache.SetCacheability(HttpCacheability.Public);
       context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(Config.MaxAge));
       context.Response.ContentType = "image/jpeg";
       context.Response.Cache.SetSlidingExpiration(true);
       image.Save(context.Response.OutputStream, ImageFormat.Jpeg);
    }
}
Set an explicit Expires on the Response
public void ProcessRequest(HttpContext context)
{
    using (var image = ImageUtil.RenderImage(…))
    {
      context.Response.Cache.SetCacheability(HttpCacheability.Public);
      context.Response.Cache.SetExpires(DateTime.Now.AddMinutes(Config.MA));
      context.Response.ContentType = "image/jpeg";
      image.Save(context.Response.OutputStream, ImageFormat.Jpeg);
      context.Response.OutputStream.Flush();
    }
}

 

Use Downstream as the location when using the MVC OutputCache Attribute
[OutputCache(CacheProfile = "CacheDownstream")]
public ActionResult Image3()
{
    MemoryStream oStream = new MemoryStream();
    using (Bitmap obmp = ImageUtil.RenderImage(…))
    {
       obmp.Save(oStream, ImageFormat.Jpeg);
       oStream.Position = 0;
       return new FileStreamResult(oStream, "image/jpeg");
    }
}

//web.config
<caching>
  <outputCacheSettings>
    <outputCacheProfiles>
      <add name="CacheDownstream"
           location="Downstream"
           duration="1000"
           enabled="true"/>
    </outputCacheProfiles>
  </outputCacheSettings>
</caching>
 

Append a query string

Providing a query string on the request affects the Cache-control header. Even if you add just a “?” after the URL path, the OutputCache module will then emit your intended max-age.

Disable OutputCache module – via config

You can do this by removing it from the ASP.NET pipeline altogether, or removing it in the sub-path where /cdn is located (or in the Virtual Application – see the later section). This disables all output caching for those requests.
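A sketch of that configuration change ("OutputCache" is the standard name of the managed output-cache module in the IIS 7.x integrated pipeline):

```xml
<system.webServer>
  <modules>
    <!-- removes ASP.NET output caching for requests under this config path -->
    <remove name="OutputCache" />
  </modules>
</system.webServer>
```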

Disable OutputCache module – via code – per request

You can also choose to bypass the OutputCache module by affecting the response with the following code:

public void ProcessRequest(HttpContext context)
{
    using (var image = ImageUtil.RenderImage(…))
    {
       context.Response.Cache.SetCacheability(HttpCacheability.Public);
       context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(Config.MA));
       context.Response.Cache.SetNoServerCaching();
       context.Response.ContentType = "image/jpeg";
       image.Save(context.Response.OutputStream, ImageFormat.Jpeg);
       context.Response.OutputStream.Flush();
    }
}

 

Implement your own OutputCache module

You can take a look at the links in the later section on implementing your own OutputCache module to get an idea of the implementation effort; the reasons why you would want to are varied, and I'll cover a couple of them in that section.

Vary:* Headers and Caching

Ensure you're not emitting Vary:* headers at all if you want to take advantage of caching – whether with the Windows Azure CDN or not – as the specification indicates responses with Vary:* should not be cached and can only be handled at the origin.

From RFC2616: "A Vary header field-value of "*" always fails to match and subsequent requests on that resource can only be properly interpreted by the origin server."
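ASP.NET also has a cache-policy switch for the common case where the framework itself appends Vary:*; a sketch using HttpCachePolicy.SetOmitVaryStar (verify it covers your particular response path):

```csharp
context.Response.Cache.SetCacheability(HttpCacheability.Public);
// suppress the "Vary: *" header ASP.NET may emit with server-cached responses
context.Response.Cache.SetOmitVaryStar(true);
```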

Compressed content with the CDN

One of the reasons you would want to move your origin from Windows Azure Storage to a Hosted Service is to take advantage of compression. As part of IIS 7.x, you can ensure that static and dynamic compression is enabled for your content – this then cascades through to the Windows Azure CDN and provides an overall better experience for your end users.
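Compression is switched on under <system.webServer> in web.config; note that dynamic compression also requires the Dynamic Content Compression feature to be present on the role image:

```xml
<system.webServer>
  <urlCompression doStaticCompression="true"
                  doDynamicCompression="true" />
</system.webServer>
```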

Use of IIS Virtual Application / Directories under Windows Azure

Today, using a Hosted Service as an origin for the Windows Azure CDN requires a production deployment of your service listening at the path http[s]://yourserviceDnsName:80|443/cdn. Currently, Hosted Services are not supported as origins in staging.

All that is required is that your service provide responses under the /cdn path. You can achieve this with a WebRole that has a directory (path) under your main site.

What happens if you need (or desire) to isolate that path (/cdn)? Under Windows Azure, you can take advantage of IIS Virtual Applications / Directories under your main WebRole.

The following service definition illustrates the approach by taking advantage of the full IIS model and the VirtualApplication element. The key for your solution in the development fabric is to ensure the physical directory is relative to the MainWeb path.

<ServiceDefinition name="TR13VirtualApp" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WebRole name="MainWeb" vmsize="ExtraSmall">
    <Sites>
      <Site name="Web">
        <VirtualApplication name="cdn" physicalDirectory="../MainWebCdn" />
        <Bindings>
          <Binding name="Endpoint1" endpointName="Endpoint1" />
        </Bindings>
      </Site>
    </Sites>
    … 

 

This results in a deployment on Windows Azure as the following – a single site with 2 application pools:

[screenshot]

A simple VS2010 solution is also provided at the end of the post, and the following links provide further detail:

Creating Virtual Applications / Directories

The Windows Azure Training kit contains a sample walkthrough that demonstrates the approach.

http://msdn.microsoft.com/en-us/wazplatformtrainingcourse_advancedwebandworkerrolesvs2010lab_topic2.aspx

Additionally, Wade Wegner goes into a bit of detail as well.

http://www.wadewegner.com/2011/02/running-multiple-websites-in-a-windows-azure-web-role/

Why provide your own OutputCache module implementation?

So, what would make you want to write your own OutputCache module implementation? Recall that the service model, when you have many instances in your Windows Azure role, may result in different host instances servicing requests.

[screenshot]

If that content is VERY expensive to produce, you now have N (# of instances) producing possibly exact or similar replicas of it. Not exactly a desirable effect if your transaction costs are high (maybe you're reaching out to external services, or on-premise mainframes, etc.).

Take advantage of Windows Azure AppFabric Caching

Either by replacing the OutputCache module with your own implementation, or by leveraging your own request model (one that works with or bypasses the OutputCache module), you can instantiate a single copy of that content in AppFabric Caching – thereby reducing the overall cost associated with repetitive content creation. Whatever your choice, be sure to factor in the operational costs of AppFabric to see if it meets your economic model.

 

[screenshot]

 

Implement your own OutputCache

The following links provide some guidance on replacing OutputCache module – which can be done at the /cdn path level if required.

Custom OutputCacheProvider

The following is a sample implementation of a custom OutputCache provider under NetFx 4.0.

http://weblogs.asp.net/gunnarpeipman/archive/2009/11/19/asp-net-4-0-writing-custom-output-cache-providers.aspx

ASP.NET 4.0 Caching Overview

Check out the following link on ASP.NET 4.0 caching in general to get an idea of OutputCache module.

http://msdn.microsoft.com/en-us/library/ms178597.aspx

Solution Files

CDN Test Solution

Virtual App Sample

Raffaele Rialdi DeployManager June 2011 edition–Now supports SAN certificates

Raffaele Rialdi has been adding features to his certificate management tool.  Already supporting wildcard certificates, he’s now added SAN cert support.

But it’s more than certificate management too.

IAmRaf - Tools

Posted: 07-06-2011 12:15 PM by cicorias | with no comments