As I've worked with Windows Azure, I've encountered things that require a little extra research or implementation effort, which can be annoying when you're trying to get to the meat of your application development. For this special cloud-focused issue of DevConnections, I'll share some tips that might help those new to Azure development get on their way with a little less effort. Obviously, many areas of Azure development can benefit from tips and tricks, and I can't cover them all here. So I'll focus on the following topics: accessing your application path, a few common load-balancing problems, working with certificates, PowerShell cmdlets, and automating deployment and (sometimes more importantly) cleanup.

Getting Your Physical Application Path

Some applications access files from a relative path in the hosted application directory. For example, you might want to launch an executable or open a file (such as in my example later in this article when I load an X.509 certificate). To achieve this, you must access the physical application path or file system path. This path is made available through the environment variable RoleRoot.

RoleRoot returns different results when you run in the development fabric versus when it's deployed to Azure (the cloud). Running in the development fabric returns a path such as this one for a worker role project: D:\Solution\CloudProject\bin\Debug\CloudProject.csx\roles\RoleProject. When running in the cloud, you get E:. In either case, you want to append “approot” to the end of the path, and from there, your deployed files follow your application structure. If you have subdirectories with files or a \bin directory for typical ASP.NET deployments, they are all relative to approot. Note that there isn't a “\” after the drive letter in the cloud, so you need code like the following example to produce a proper approot path:

string appRoot = Environment.GetEnvironmentVariable("RoleRoot");
appRoot = Path.Combine(appRoot + @"\", @"approot\");

To produce a file path, use this code:

string privateKeyCert = "busta-rp.com.pfx";
string appRoot = Environment.GetEnvironmentVariable("RoleRoot");
string pathToPrivateKey = Path.Combine(appRoot + @"\", string.Format(@"approot\{0}", privateKeyCert));

This code will work for either development or cloud deployments, so you don't have to adjust your code when you deploy. 
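
If the file in question is a certificate, as in my later example, the computed path can be handed straight to the X509Certificate2 constructor. The following is a minimal sketch; the password literal is a placeholder for illustration, not part of the original sample:

```csharp
using System;
using System.IO;
using System.Security.Cryptography.X509Certificates;

// Build the approot-relative path as shown above...
string appRoot = Environment.GetEnvironmentVariable("RoleRoot");
string pathToPrivateKey = Path.Combine(appRoot + @"\", @"approot\busta-rp.com.pfx");

// ...and load the certificate from it. Opening a .pfx that contains a
// private key requires the password it was exported with; the literal
// below is a placeholder for illustration only.
X509Certificate2 certificate = new X509Certificate2(
    pathToPrivateKey, "pfx-password", X509KeyStorageFlags.MachineKeySet);
```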

Know Thy Load Balancer Address

Hosting in the cloud is all about scalability, so it's no surprise that all deployments are hosted behind a load balancer. Even while you're testing in the development fabric, you work in a simulated load-balanced environment. You must write applications to consider the effect of load balancing. For example:

  • Outside calls should use the load balancer address and not the address of the machine executing a request.
  • Windows Communication Foundation (WCF) services should expose metadata that points to the load balancer address.
  • You must configure WCF endpoint addressing to allow requests to the load balancer to be processed at load-balanced machine instances.

There are other areas in which using the address of the load balancer is important, but the next few tips should give you enough understanding to tackle related problems.

First, how do you know the address of the load balancer? In the cloud, that's easy. The endpoints you specify as input endpoints in the cloud service definition (ServiceDefinition.csdef) are assigned as requested in the cloud. For a web role, that's usually port 80, which the following example illustrates:

<WebRole name="CloudApp">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="80" />
    </InputEndpoints>
</WebRole>

For a worker role, you can choose from several ports:

<WorkerRole name="CloudAppWorker">
    <Endpoints>
      <InputEndpoint name="TcpIn" protocol="tcp" port="9000" />
    </Endpoints>
</WorkerRole>

Although the load balancer uses the input endpoint port, each instance of your web or worker role is allocated an available port behind the load balancer. Calls to your application will always use the load balancer port, which raises some issues with WCF that I discuss later.

In the development fabric, you might not get the port you assigned in the service definition if that port is already in use. For example, if you have Microsoft IIS running on your development machine, port 80 will be in use. The development fabric assigns the first available port, incrementing from the port number you specified. I usually specify each port for my input endpoints during development, and sometimes I get the ones I ask for. Figure 1 illustrates the load balancer port assigned to each input endpoint for a cloud solution with two web roles and a worker role running in the development fabric.

Figure 1: Input endpoints and load balancer ports assigned to each

I typically supply configuration settings for the domain name and port for both my development fabric and cloud deployments. I load these settings at runtime and use them in all areas that depend on the load-balancer endpoint. It's not guaranteed that I'll get the port specified in development, so I might need to adjust if the port is bumped. In the cloud, port assignment is predictable. Here's an example of a service definition (ServiceDefinition.csdef) for a web role in which I've indicated a domain and port configuration setting.

<WebRole name="CloudApp">
    <InputEndpoints>
      <InputEndpoint name="HttpIn" protocol="http" port="83" />
    </InputEndpoints>
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" />
      <Setting name="Domain" />
      <Setting name="Port" />
    </ConfigurationSettings>
  </WebRole>

In the service configuration, I specify the values—in this case, for development, as the following example shows:

<Role name="CloudApp">
    <Instances count="1" />
    <ConfigurationSettings>
      <Setting name="DiagnosticsConnectionString" value="UseDevelopmentStorage=true" />
      <Setting name="Domain" value="localhost" />
      <Setting name="Port" value="83" />
    </ConfigurationSettings>
  </Role>

I swap the development settings for these settings before I publish to the cloud:

<Setting name="DiagnosticsConnectionString" value="DefaultEndpointsProtocol=https;AccountName=learningwcf;AccountKey=xxHdhSJlov/yoAUHOSkWds8pfVgusdoownD2miGsEs9ZaBd/6Kt17DrOD2Fq+NGy0GBAoxpvTnZFXh7uxxaz2Q==" />
<Setting name="Domain" value="learningwcf.cloudapp.net" />
<Setting name="Port" value="80" />

Anywhere in my application, when I need access to the correct domain and port, I can use the RoleEnvironment static class to get the configuration setting, as this code shows:

string domain = RoleEnvironment.GetConfigurationSettingValue("Domain");
int port = Convert.ToInt32(RoleEnvironment.GetConfigurationSettingValue("Port"));

WCF Metadata and Proxy Generation

Whenever your WCF services are located behind a load balancer, you'll encounter problems with metadata and proxy generation. When the WCF runtime produces metadata, it uses the address of the currently running machine instance. This means the port will match that of the load-balanced machine, not that of the load balancer. This isn't a cloud issue; it's a load balancer issue, and a hotfix is available for WCF 3.x (see Additional Resources).

After installing the hotfix, you can use the new WCF service behavior or endpoint behavior, UseRequestHeadersForMetadataAddressBehavior, to indicate that you want metadata generation to use a specific port. This is a per-scheme setting, so you can indicate a port for HTTP, HTTPS, TCP, and so forth. I typically apply the service behavior rather than the endpoint behavior, so that all service endpoints produce metadata this way. If you have Representational State Transfer (REST)-based endpoints exposed by the same <service> element (something I try to avoid), you may find the endpoint behavior useful. This example shows how to configure the service behavior:

<behaviors>
  <serviceBehaviors>
    <behavior name="serviceBehavior">
      <serviceMetadata httpGetEnabled="true"/>
      <serviceDebug includeExceptionDetailInFaults="false"/>
      <useRequestHeadersForMetadataAddress>
        <defaultPorts>
          <add scheme="http" port="81" />
          <add scheme="https" port="444" />
        </defaultPorts>
      </useRequestHeadersForMetadataAddress>
    </behavior>
  </serviceBehaviors>
</behaviors>

You'll want to supply a different port for development and cloud environments. For development, I use the port that the development fabric grants my service. For the cloud, I specify the same port as my input endpoint because you get what you ask for in the cloud.

The result is that your Web Services Description Language (WSDL)-generation process won't use the port of the machine for the metadata request. Instead, it uses the port you place in your WCF behavior, as Figure 2 shows. 

Figure 2: Service metadata using a specified port

This hotfix is already deployed to Azure, so your cloud deployments need only specify the behavior in configuration. WCF 4.0 includes this behavior, so no hotfix is required.
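
If you're on WCF 4.0 and prefer code over configuration, the same behavior can be attached programmatically. This is a sketch, not taken from the article's sample, and the ports are illustrative, as before:

```csharp
using System.ServiceModel;
using System.ServiceModel.Description;

ServiceHost host = new ServiceHost(typeof(CloudService));

// Equivalent of the <useRequestHeadersForMetadataAddress> configuration:
// one default port per scheme, used when generating metadata addresses.
UseRequestHeadersForMetadataAddressBehavior metadataBehavior =
    new UseRequestHeadersForMetadataAddressBehavior();
metadataBehavior.DefaultPortsByScheme.Add("http", 81);
metadataBehavior.DefaultPortsByScheme.Add("https", 444);
host.Description.Behaviors.Add(metadataBehavior);
```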

Creating WCF Endpoints

When you create a WCF endpoint, the address of the service is typically relative to the host base address. When you host in IIS (as in a web role), it will be a port 80 address; when you self-host (as in a worker role) you specify the port. When hosting WCF services in a web role, you can, in theory, create endpoints in your <system.serviceModel> configuration section using relative endpoints and be done. Consider this configuration based on the default output produced by the WCF template for Azure:

<service behaviorConfiguration="serviceBehavior" name="CloudAppWCF.CloudService">
        <endpoint address="" binding="basicHttpBinding" contract="CloudAppWCF.ICloudService" />
        <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
</service>

The above code will produce endpoints listening at the base address for the IIS web site, typically port 80. For services hosted in a worker role, you must explicitly supply the base address for service endpoints. To create endpoints programmatically, you can request the address and port from the RoleEnvironment static type. Just request the instance endpoint (as you named it in the service definition), and you'll have access to the endpoint address and port for the machine instance—not the load balancer port. Create the endpoint by using the following code: 

ServiceHost host = new ServiceHost(typeof(CloudService));

RoleInstanceEndpoint ep = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["TcpIn"];
string listenTcpAddress = string.Format("net.tcp://{0}:{1}/CloudService", ep.IPEndpoint.Address, ep.IPEndpoint.Port);

host.AddServiceEndpoint(typeof(ICloudService), new NetTcpBinding(SecurityMode.None), new Uri(listenTcpAddress));

host.Open();

The above example is the usual way to create a WCF endpoint, but there is a little more to it. A WCF service endpoint has two addressing concepts: the physical address where the service listens for requests and the logical address, which is expected to match the incoming SOAP request To header. I've discussed these concepts at length in articles on MSDN (see Additional Resources). Load balancers introduce a challenge. When the client sends a request, it targets the load balancer address. This also populates the To header for each request sent by the client. But the To header must match what each service role instance thinks it is expecting—which is the specific role instance port. 

To fix the problem, you have two options. One option is to relax the requirement that all requests must have a To header that matches the service logical address. You do this by applying a ServiceBehavior to the service type, as the following code shows:

[ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)]
public class CloudService : ICloudService

Another technique, which prevents ill-matched requests from arriving at your service door, is to specify a listening address separate from the service's logical address. The logical address must match the load balancer address, because that's the value used to enforce the match; the physical address matches the address and port of the machine behind the load balancer, acquired from the RoleEnvironment as mentioned earlier. By combining the domain and port configuration settings discussed earlier (to produce the load balancer address) with the RoleEnvironment type (to gather the physical address), you end up with this code to produce new endpoints for a worker role:

RoleInstanceEndpoint ep = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["TcpIn"];
// Physical (listen) address: the machine instance behind the load balancer
string listenTcpAddress = string.Format("net.tcp://{0}:{1}/CloudService", ep.IPEndpoint.Address, ep.IPEndpoint.Port);
// Logical address: the load balancer domain and port from configuration
string lbTcpAddress = string.Format("net.tcp://{0}:{1}/CloudService", RoleEnvironment.GetConfigurationSettingValue("Domain"), RoleEnvironment.GetConfigurationSettingValue("Port"));

host.AddServiceEndpoint(typeof(ICloudService), new NetTcpBinding(SecurityMode.None), lbTcpAddress, new Uri(listenTcpAddress));

The result is that the expected To header (logical address) matches the load balancer and now the service can handle requests from the client to the load balancer. Meanwhile, the service receives requests at the physical address matching the machine instance.

To achieve this for a web role, you would provide a custom ServiceHostFactory and ServiceHost so that you can add endpoints programmatically.  
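
As a rough sketch of that approach (the type names mirror the earlier examples and are otherwise illustrative), a custom factory might look like this:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Activation;

public class CloudServiceHostFactory : ServiceHostFactory
{
    protected override ServiceHost CreateServiceHost(
        Type serviceType, Uri[] baseAddresses)
    {
        ServiceHost host = new ServiceHost(serviceType, baseAddresses);

        // Add endpoints here, supplying the load balancer address as the
        // logical address and the instance address as the listen URI,
        // just as in the worker role example above.
        return host;
    }
}
```

You would then point the service file at the factory, for example: <%@ ServiceHost Service="CloudAppWCF.CloudService" Factory="CloudServiceHostFactory" %>.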

PowerShell Cmdlets

You need to know about the Azure Service Management cmdlets (http://code.msdn.microsoft.com/azurecmdlets) if you're doing Azure development. These PowerShell cmdlets are extremely helpful for scripting setup, deployment, configuration, and cleanup of your Azure applications. With help from some wrapper batch files, you can create beautiful scripts to automate frequent tasks during development, not to mention the kind of automation you need for production deployments. Your PowerShell scripts should include a statement such as the following to ensure that the Azure cmdlets are loaded:

if(!($azSnapin = Get-PSSnapin AzureManagementToolsSnapIn -erroraction silentlycontinue))
{    
  Add-PSSnapin AzureManagementToolsSnapIn
}

Working with Certificates

There are many places where certificates come into play for Azure deployments, including management certificates, enabling SSL, and application certificates used for other cryptography purposes. In this section, I'll explain how to create a test certificate and how to work with certificates in some of these scenarios.

Creating Certificates

You can create your own certificates for use in any of the aforementioned scenarios, although for the SSL certificate, you should use a certificate issued by a trusted authority such as Verisign. Here's an example showing how to create your own certificates using makecert.exe: 

makecert -r -pe -a sha1 -n "CN=Azure Service Management" -ss My -len 2048 -sp "Microsoft Enhanced RSA and AES Cryptographic Provider" -sy 24 AzureServiceManagement.cer

The above command creates a certificate with its private key in your CurrentUser certificate store and produces a public key certificate (.cer file) in the current directory. Usually, it's best to export the private key certificate (.pfx file) using the Certificates snap-in, providing a strong password, and then delete the key from the CurrentUser store. Then, install the private key and public key wherever you need them, depending on the purpose of the certificate. Remember that any keys you use in production must be created and protected in a very controlled manner. You should also control keys used in development and test environments, but you can relax your key management procedures a little to keep everyone productive.

Management Certificates

You should share management keys only with those who can deploy to the Azure account for the environment (development, test, or production). For production accounts, this means that ideally, a build machine holds this key, and only the configuration manager in charge of builds and deployment has access. For development and test scenarios, you probably have separate Azure accounts and can upload multiple certificates to your account to support management by multiple teams or developers, as Figure 3 shows.

Figure 3: Configuring management certificates through the portal

You can easily configure certificates for account management through the portal, but Microsoft Visual Studio also automates creating a new management certificate and associating it with the account. With configured management certificates, you can automate many other tasks using PowerShell scripts, which I discuss later in this article.

Uploading Application Certificates

Applications often rely on one or more X.509 certificates (e.g., to enable SSL or to support message security for WCF services). When you're working in development, application certificates are accessible on your local machine; for cloud deployments, you must upload certificates if you want them to be available to the application. You can upload X.509 certificates in the form of a .pfx file via the Azure management portal. Certificates are associated with a particular namespace, as Figure 4 shows.

Figure 4: Uploaded X.509 certificates shown in the portal

The management portal supports uploading only .pfx files, which are typically certificates that include a private key. This makes it easy to upload certificates for SSL or for other cryptography operations that require a private key, such as signing or decrypting messages. However, sometimes you need access to public key certificates to check signatures of incoming messages or to encrypt outgoing messages. Unfortunately, the portal doesn't support uploading .cer files (the typical format for exporting the base 64-encoded public key), so you must either produce a .pfx that includes only the public key (and a password for uploading through the portal) or use the PowerShell cmdlets to upload the certificate. The latter is more practical, so I usually rely on a script such as this one:

$cert = get-item -path cert:\currentuser\my\01482AFDB25638E1A4AAE2A13B118C685DF960C2
$certToDeploy = get-item -path cert:\localmachine\my\7E2CED8803BDF40A23604B5D52B89C78B937B5DF

Add-Certificate -subscriptionId 1383dd2d-7199-4c86-4a5e-7a5d005dc682 -certificate $cert -serviceName learningwcf -certificateToDeploy $certToDeploy

After you upload any public and private key certificates to your namespace, you can configure your web and worker roles to use these certificates. You configure each role with a list of certificates it depends on, and specify where you want the certificate deployed on the host machine.

Deploying Certificates with a Web or Worker Role

Once uploaded to your service namespace, certificates are available to any role you deploy to the cloud. The cloud service configuration includes a list of certificates (already uploaded to the cloud) that you want to deploy with each role instance. Figure 5 illustrates the Certificates tab for a web role for which you've specified an SSL certificate, along with its root certificate. Note that you can choose the certificate store where you want to deploy each certificate in Azure. In this case, the SSL certificate is deployed to LocalMachine\My and the root certificate to LocalMachine\CA, which is the trusted root store.

Figure 5: Selecting certificates to be used by the web or worker role

Figure 5 illustrates selecting the thumbprint of the certificate and indicating to which store to deploy the matching certificate that you uploaded via the portal. The following code adds the list of certificates to the service configuration: 

<Certificates>
      <Certificate name="SSLCert" thumbprint="7E2CED8803BDF40A23604B5D52B89C78B937B5DF" thumbprintAlgorithm="sha1" />
      <Certificate name="SSLRootCert" thumbprint="A24395387C38FB879127172D54B18926B383B38A" thumbprintAlgorithm="sha1" />
</Certificates>

The example below adds information to the service definition to indicate the certificate store destination for each certificate: 

<Certificates>
      <Certificate name="SSLCert" storeLocation="LocalMachine" storeName="My" />
      <Certificate name="SSLRootCert" storeLocation="LocalMachine" storeName="CA" />
</Certificates>

Other certificate stores are also available. For example, you might need to deploy a public key certificate to the TrustedPeople store.

When you use Visual Studio to populate this list of certificates, you can choose only from the LocalMachine\My certificate store. But you can always edit this code manually to supply the thumbprint for a certificate in another location. This step ensures your certificates are available to the machine on which each role is executed.
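
Once deployed, role code can open a certificate from its configured store by thumbprint. The following sketch uses the SSL certificate thumbprint from the configuration above; error handling is minimal for brevity:

```csharp
using System.Security.Cryptography.X509Certificates;

// Open the store the certificate was deployed to (LocalMachine\My here)
// and look it up by thumbprint.
X509Store store = new X509Store(StoreName.My, StoreLocation.LocalMachine);
store.Open(OpenFlags.ReadOnly);
try
{
    X509Certificate2Collection matches = store.Certificates.Find(
        X509FindType.FindByThumbprint,
        "7E2CED8803BDF40A23604B5D52B89C78B937B5DF", false); // false: include untrusted/test certs
    if (matches.Count > 0)
    {
        X509Certificate2 certificate = matches[0];
        // Use the certificate for SSL, signing, decryption, and so on.
    }
}
finally
{
    store.Close();
}
```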

Enabling SSL

To enable SSL for a web role, follow the steps I described to upload the SSL certificate with a private key to your service namespace, then add it to the list of certificates required by the role in the Certificates tab. In addition, the Endpoints tab lets you specify an HTTP and an HTTPS endpoint, as Figure 6 shows. You can optionally limit communications to HTTPS only. The certificate that you specify as the SSL certificate will be deployed to the host machine, and IIS will be configured for SSL communications.

Figure 6: Specifying an HTTPS endpoint with SSL certificate

If you're creating test certificates for your development environment and haven't mapped a domain to the service namespace, you'll probably hit an endpoint such as mine at https://learningwcf.cloudapp.net. For best results, create a test certificate with a distinguished name (DN) to match, such as “CN=learningwcf.cloudapp.net." You'll need to tell Internet Explorer (IE) to trust the certificate because it's self-signed (not issued by Verisign or some other trusted issuer). This will help you avoid certificate errors when you browse to the deployed site.
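
For example, you could create such a test certificate with makecert, along the lines of the earlier command (the output file name is illustrative; adjust the options to suit your environment):

```shell
makecert -r -pe -a sha1 -n "CN=learningwcf.cloudapp.net" -ss My -len 2048 -sp "Microsoft Enhanced RSA and AES Cryptographic Provider" -sy 24 learningwcf-cloudapp-net.cer
```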

Automating Deployments

Although you can deploy applications through Visual Studio, you might want to automate this process with a command line script that non-developers can execute. For example, the build master will need to check out the latest from source control and deploy from there, using the production certificate.

The following batch instructions, placed in the solution (.sln file) directory, build and publish the solution and then execute a PowerShell script, which Figure 7 shows, to deploy the solution to the cloud.

Figure 7: PowerShell script to deploy Windows Azure projects
param(
  [string]$subscriptionId = $(throw "Required parameter missing: subscriptionId"),
  [string]$certThumbprint = $(throw "Required parameter missing: certThumbprint"),
  [string]$servicename = $(throw "Required parameter missing: servicename"),
  [string]$storagename = $(throw "Required parameter missing: storagename"),
  [string]$label = $(throw "Required parameter missing: label"),
  [string]$packagePath = $(throw "Required parameter missing: packagePath"),
  [string]$configPath = $(throw "Required parameter missing: configPath"))

if(!($azSnapin = Get-PSSnapin AzureManagementToolsSnapIn -erroraction silentlycontinue))
{    
  Add-PSSnapin AzureManagementToolsSnapIn
}

$cert = get-item -path cert:\currentuser\my\$certThumbprint

Get-HostedService $servicename -Certificate $cert -SubscriptionId $subscriptionId |
    New-Deployment 'Staging' $packagePath $configPath -Label $label -StorageServiceName $storagename |
    Get-OperationStatus -WaitToComplete
   
Get-HostedService $servicename -Certificate $cert -SubscriptionId $subscriptionId |
    Get-Deployment -Slot 'Staging' |
    Set-DeploymentStatus 'Running' |
    Get-OperationStatus -WaitToComplete

The batch file passes the subscription identifier, the thumbprint of the management certificate (which should be located in your CurrentUser\My certificate store), and the paths to the cloud service package and configuration files:

msbuild /t:build;publish
PowerShell -File DeployStaging.ps1 -subscriptionId 1383dd2d-7199-4c86-4a5e-7a5d005dc682 -certThumbprint 01482AFDB25638E1A4AAE2A13B118C685DF960C2 -servicename learningwcf -storagename learningwcfstorage -label SimpleCloudServiceV1 -packagePath "C:\SimpleCloudService\SimpleCloudService\bin\Debug\Publish\SimpleCloudService.cspkg" -configPath "C:\SimpleCloudService\SimpleCloudService\bin\Debug\Publish\ServiceConfiguration.cscfg"

Automating Cleanup

I saved the best tip for last, because cleaning up your deployments when you're learning and playing around with Azure is the single most important thing you should do to save costs. You don't want to leave your deployments sitting idle when you stop work for the day because you're charged for every hour and every instance of your service deployed. I quickly realized that I don't enjoy going to the portal to delete each production and staging deployment, so I created a script that gets the job done in one click.

I created a PowerShell script, which Figure 8 shows, that supports deleting Staging, Production, or all deployments, depending on the parameters you pass in.

Figure 8: PowerShell script to delete Azure deployments
param(
  [string]$subscriptionId = $(throw "Required parameter missing: subscriptionId"),
  [string]$certThumbprint = $(throw "Required parameter missing: certThumbprint"),
  [string]$slot = 'All')
if ($slot -eq 'Staging'){}
elseif ($slot -eq 'Production'){}
elseif ($slot -eq 'All'){}
else { throw "Parameter -slot must be 'All|Staging|Production'"}
if(!($azSnapin = Get-PSSnapin AzureManagementToolsSnapIn -erroraction silentlycontinue))
{    
  Add-PSSnapin AzureManagementToolsSnapIn
}
$cert = get-item -path cert:\CurrentUser\My\$certThumbprint
$services = Get-HostedServices -Certificate $cert -SubscriptionId $subscriptionId
foreach ($service in $services)
{
  $serviceName = $service.serviceName
  Write-Host "Looking for deployment for: " $serviceName -fore "Green"
  if ($slot -eq 'Staging' -or $slot -eq 'All')
  {
    $stagingDeployment = Get-Deployment staging -SubscriptionId $subscriptionId -Certificate $cert -ServiceName $serviceName
    if ($stagingDeployment.Name)  
    {  
      if ($stagingDeployment.Status -ne 'Suspended')
      {
        Write-Host "Suspending staging deployment..."  -fore "Red"
        $stagingDeployment | Set-DeploymentStatus 'Suspended' | Get-OperationStatus -WaitToComplete
        Write-Host "Suspended"  -fore "Red"
      }
      Write-Host "Deleting staging deployment..."  -fore "Red"
      $stagingDeployment | Remove-Deployment | Get-OperationStatus -WaitToComplete    
      Write-Host "Deleted"  -fore "Red"
    } else  
    {  
      Write-Host "No deployment found in staging"  
    }
  }
  if ($slot -eq 'Production' -or $slot -eq 'All')
  {
    $productionDeployment = Get-Deployment production -SubscriptionId $subscriptionId -Certificate $cert -ServiceName $serviceName

    if ($productionDeployment.Name)  
    {    
      if ($productionDeployment.Status -ne 'Suspended')
      {
        Write-Host "Suspending production deployment..."  -fore "Red"
        $productionDeployment | Set-DeploymentStatus 'Suspended' | Get-OperationStatus -WaitToComplete
        Write-Host "Suspended"  -fore "Red"
      }

      Write-Host "Deleting production deployment..."  -fore "Red"
      $productionDeployment | Remove-Deployment | Get-OperationStatus -WaitToComplete    
      Write-Host "Deleted"  -fore "Red"
    } else  
    {  
      Write-Host "No deployment found in production"  
    }
  }
}

With a simple batch file that passes the subscription identifier and the management certificate, you can clean up in no time! Keep this script in your favorites list!

PowerShell -File DeleteAllAzureHostedServiceDeployments.ps1 -subscriptionId 1383dd2d-7199-4c86-4a5e-7a5d005dc682 -certThumbprint 01482AFDB25638E1A4AAE2A13B118C685DF960C2 -slot Staging

Additional Help

I hope that a few of these tips resonate with you and help you avoid some time-consuming research. Take a look at my Azure resources listed in the Additional Resources box and enjoy.