
We always want the best performance from the software we deploy to Azure App Services. Not only does better performance make our customers happy, but better performance can also save us money if we “do more with less” in Azure. In this article we'll look at settings and strategies for improving the performance of web applications and web APIs running in an Azure App Service. We'll start with some easy configuration changes you can make for an instant improvement.

Enable HTTP/2

Microsoft announced support for HTTP/2 in App Services early in 2018. However, when you create a new App Service today, Azure will still start you with HTTP 1.1 configured as the default protocol. HTTP/2 brings major changes to our favorite web protocol, and many of those changes aim to improve performance and reduce latency on the web. For example, header compression and binary framing in HTTP/2 reduce payload sizes. An even better example is request multiplexing, which allows more concurrent requests over fewer network sockets and prevents one slow response from blocking all subsequent responses, a frequent problem in HTTP 1.1 known as the "head-of-line blocking" problem.

To configure your App Service to use HTTP/2 with the portal, go to Platform Settings in the Configuration blade. Here you will find a dropdown to specify the HTTP version. With 2.0 selected, any clients that support HTTP/2 will upgrade their connection automatically.

HTTP Version Selection in App Services
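If you script your deployments instead of clicking through the portal, the same switch is exposed as the http20Enabled property on the site's configuration in an ARM template. Here's a minimal sketch, where the app name and API version are placeholders for your own:

{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2022-03-01",
  "name": "my-web-app",
  "location": "[resourceGroup().location]",
  "comments": "http20Enabled lives under siteConfig, not at the site level",
  "properties": {
    "siteConfig": {
      "http20Enabled": true
    }
  }
}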

HTTP/2 might not benefit every application, so you will want to run performance tests and stress tests to document your improvements. Here's a simple test where I used the network tools in Firefox against a page hosted in an App Service. The page references a handful of script and CSS resources, and also includes 16 images. Each image is over 200 KB in size. First, I used the developer tools to record what happens on an App Service using HTTP 1.1. Notice how the later requests start in a blocked state (the red section of the bars). This is the dreaded “head-of-line blocking” problem, where limitations on the number of connections and concurrent requests throttle the throughput between the client and the server. The client doesn't receive the final bytes for the page until 800ms after the first request starts.

HTTP 1.1 Blocking

Next, I switched on HTTP/2 support in the App Service. I didn't need to make any other configuration changes on the client or the server. The last byte arrives in less than 500ms. We avoid blocking thanks to the improved network utilization of HTTP/2.

HTTP/2 Improvements

Turn Off the Application Request Routing Cookie

In front of every Azure App Service is a load balancer, even if you only run a single instance of your App Service Plan. The load balancer intercepts every request heading for your app service, so when you do move to multiple instances of an App Service Plan, the load balancer can balance the request load across the available instances. By default, Azure makes sure a client continues to reach the same App Service instance during a session, because Azure can't guarantee your application isn't storing session state in server memory. To provide this behavior, the load balancer injects a cookie into the first response to a client. This cookie is what Azure calls the Application Request Routing (ARR) affinity cookie.

If you have a stateless application and can allow the load balancer to distribute requests across instances without regard to previous requests, then turn off the routing cookie in the Configuration blade to improve performance and resiliency. Clients won't be stuck waiting on an instance that is restarting, and when failures do happen, the load balancer can shift clients to a healthy instance quickly.

The routing configuration is another item you'll find in the Platform Settings of the App Service Configuration blade.

Turning off Instance Affinity
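The same setting is available in ARM templates as the clientAffinityEnabled property, which sits at the site level rather than under siteConfig. A minimal sketch, again with a placeholder app name:

{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2022-03-01",
  "name": "my-web-app",
  "location": "[resourceGroup().location]",
  "comments": "false tells the load balancer not to inject the ARR affinity cookie",
  "properties": {
    "clientAffinityEnabled": false
  }
}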

Keep the App Service Always On

If you've deployed applications into IIS in the past, you'll know that IIS will unload idle websites after a period of inactivity. Azure App Services will also unload idle websites. Although the unloading can free up resources for other applications that might be running on the same App Service Plan, this strategy hurts the performance of the app, because the next incoming request will wait while the web application starts from nothing. Web application startup time can be notoriously slow, regardless of the technologies involved. The caches are empty, the connection pools are empty, and all requests are slower than normal while the site warms up.

To prevent the idle shutdown, you can set the Always On flag in the App Service Configuration blade.

The Always On App Service Flag
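If you manage settings in templates, Always On maps to the alwaysOn property, and one way to set it is through the nested Microsoft.Web/sites/config resource. A sketch with a placeholder site name (note that Always On requires a Basic tier plan or higher):

{
  "type": "Microsoft.Web/sites/config",
  "apiVersion": "2022-03-01",
  "name": "my-web-app/web",
  "comments": "the 'web' config resource carries siteConfig settings like alwaysOn",
  "properties": {
    "alwaysOn": true
  }
}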

Use a Local Cache

By default, the file system for your App Service is mounted from Azure Storage. The good news is your file system is durable, highly available, and accessible from multiple App Service instances. The sad news is your application makes a network call every time the app touches the file system.

Some applications require the Azure Storage solution. These are the applications that write to the file system, perhaps when a user uploads a file, and they expect the file system changes to be durable, permanent, and immediately visible across all running instances of the application. Other applications might benefit from having a faster, local, read-only copy of the website content. If this sounds like your application, or you want to run a test, then create a new App Setting for the app with a key of WEBSITE_LOCAL_CACHE_OPTION and a value of Always. You'll then have a d:\home folder pointing to a local cache on the machine and populated with a copy of your site content.

Using a Local Cache

Although I say the cache is read-only, the truth is you can write into the local cache folder. However, you'll lose any changes you make after an app restart. For more information on the tradeoffs involved and how the local cache works with log files, see the Azure App Service Local Cache overview.
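The app setting can travel with your deployment, too. In an ARM template, the appsettings config resource takes a flat map of names and values; here's a sketch assuming a site named my-web-app. Keep in mind that deploying this resource replaces the complete set of app settings, so include any existing settings alongside this one:

{
  "type": "Microsoft.Web/sites/config",
  "apiVersion": "2022-03-01",
  "name": "my-web-app/appsettings",
  "comments": "equivalent to adding the setting in the portal's Configuration blade",
  "properties": {
    "WEBSITE_LOCAL_CACHE_OPTION": "Always"
  }
}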

Keep Your Customers Close, and Your Resources Even Closer

All the performance improvements we've looked at so far only require configuration changes. The next set of improvements requires some additional infrastructure planning or restructuring, and in some cases changes to the application itself. The common theme in the next set of tips is to reduce the distance bits need to travel over the network. The speed of light is finite, so the further a bit has to travel, the longer the bit takes to reach its destination.

Co-Locate Your App Service and Your Database

In Azure you assign most resources you create to a specific region. For example, when I create an App Service, I can place the service close to me in the East US region, or, if I'm on an extended visit to Europe, I can select the North Europe region. If you create multiple resources that work together closely, you'll want to place the resources together in the same region. In the past I've seen performance suffer when someone at the company accidentally places an App Service in one region and an associated Azure SQL instance in a different region. Every database query from the App Service becomes a trip across the continent, or around the world.

How do you check your existing subscriptions to make sure your resources are properly co-located? Assuming you don't want to click through the portal to check manually, you can write a custom script or program, or use Azure Policy. Azure Policy has a built-in rule to check every resource to ensure the resource location matches the location of the resource's parent resource group. All you need to do with this rule in place is make sure your associated resources are all in the same resource group. The policy definition for this audit rule looks like the following.

{
  "if": {
    "field": "location",
    "notIn": [
      "[resourcegroup().location]",
      "global"
    ]
  },
  "then": {
    "effect": "audit"
  }
}
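A definition does nothing until you assign it to a scope, and the assignment is itself a resource you can deploy. Here's a sketch of an assignment, where the name, display name, and definition ID are placeholders; the built-in location rule has its own definition ID under /providers/Microsoft.Authorization/policyDefinitions that you can look up in the portal:

{
  "type": "Microsoft.Authorization/policyAssignments",
  "apiVersion": "2021-06-01",
  "name": "audit-resource-locations",
  "properties": {
    "displayName": "Audit resources outside their resource group's region",
    "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/{definition-id}"
  }
}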

Keep Your App Service Close to Your Customer

If most of your customer traffic originates from a specific area of the world, it makes sense to place your resources in the Azure region closest to your customers. Of course, many of us have customers fairly distributed around the world. In this case, you might consider geo-replicating your resources across multiple Azure regions to stay close to everyone. For App Services, this means creating multiple App Service plans inside of multiple Azure data centers around the world. Then, you'll typically use a technology like Azure Traffic Manager to direct customer traffic to the closest App Service instance.

Traffic Manager is a DNS-based load balancer. So, when a customer's web browser asks for the IP address associated with your application's domain, Traffic Manager can use rules you provide and other heuristics to select the IP address of a specific App Service. Traffic Manager can select the App Service with the lowest latency for a given customer, or you can configure Traffic Manager to enforce geo-fencing, where the load balancer sends all customers in a specific country or region to the App Service you select. You can see the routing methods built into Traffic Manager in the Create Traffic Manager profile blade below.

Setting up a Traffic Manager Profile
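You can also define the profile in a template. A Traffic Manager profile is a global resource, and the routing method is just a property; here's a sketch using Performance routing, with placeholder names:

{
  "type": "Microsoft.Network/trafficManagerProfiles",
  "apiVersion": "2018-08-01",
  "name": "my-tm-profile",
  "location": "global",
  "comments": "Performance routing sends each client to the endpoint with the lowest network latency",
  "properties": {
    "trafficRoutingMethod": "Performance",
    "dnsConfig": {
      "relativeName": "my-tm-profile",
      "ttl": 60
    },
    "monitorConfig": {
      "protocol": "HTTPS",
      "port": 443,
      "path": "/"
    }
  }
}

Each App Service then registers as an endpoint on the profile, and Traffic Manager probes the path in monitorConfig to decide which endpoints are healthy.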

There are tradeoffs and complications introduced by Traffic Manager. It is easy to replicate stateless web applications and web services across data centers around the world, but you'll need to spend some time planning a data access strategy. Keeping one database as the only source of truth is the easiest data access approach. But, if your App Service in Australia is reading data from a database in the U.K., you might be losing the performance benefits of geo-replicating the App Service. Another option is to replicate your data, too, but much depends on your business requirements for consistency. Data replication is typically asynchronous and delayed, and your business might not be able to live with the implications of eventual consistency.

Keep Your Content Close to the Customer

Azure's content delivery network allows you to take static content from Azure Storage, or from inside your App Service, and distribute the content to edge servers around the world. Again, the idea is to reduce the distance information needs to travel, and therefore reduce the latency of network requests. Static files like scripts, images, CSS files, and videos are all good candidates for caching on the CDN edge servers. A CDN can have other benefits, too. Since your App Service doesn't need to spend time or bandwidth serving files cached on the CDN, it has more resources available to produce your dynamic content.

When setting up a CDN profile in Azure, you can select a pricing plan with the features you need from a set of providers that includes Microsoft, Verizon, and Akamai.

Setting up a CDN Profile
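In a template, the provider choice maps to the profile's sku. A minimal sketch creating a profile on the Microsoft provider, with a placeholder name:

{
  "type": "Microsoft.Cdn/profiles",
  "apiVersion": "2020-09-01",
  "name": "my-cdn-profile",
  "location": "global",
  "comments": "the sku name selects the provider and tier, e.g. Standard_Microsoft, Standard_Verizon, or Standard_Akamai",
  "sku": {
    "name": "Standard_Microsoft"
  }
}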

Keep Your Apps Together

Today's architecture fashion is to decompose systems into a set of microservices. These microservices need to communicate with each other to process customer requests. Just like keeping your application and database close together can benefit performance, keeping your microservices close to each other can benefit performance, too.

With App Services, remember that multiple services can live on the same App Service Plan. Think of the plan like a virtual machine dedicated to the role of a web server. You can place as many applications on the web server as you like, and keeping services together can reduce network latency. However, keep in mind that having too many services on the same machine can stretch resources thin. It will take some experimentation and testing to figure out the best distribution of services, the ideal size of the App Service Plans, and the number of instances you need to handle all your customer requests.
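In template form, co-locating services is just a matter of pointing several sites at the same serverFarmId. A sketch with two hypothetical microservices sharing one plan:

[
  {
    "type": "Microsoft.Web/serverfarms",
    "apiVersion": "2022-03-01",
    "name": "shared-plan",
    "location": "[resourceGroup().location]",
    "comments": "one plan (the 'virtual machine') hosting multiple apps",
    "sku": {
      "name": "P1v2"
    }
  },
  {
    "type": "Microsoft.Web/sites",
    "apiVersion": "2022-03-01",
    "name": "orders-api",
    "location": "[resourceGroup().location]",
    "dependsOn": [
      "[resourceId('Microsoft.Web/serverfarms', 'shared-plan')]"
    ],
    "properties": {
      "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'shared-plan')]"
    }
  },
  {
    "type": "Microsoft.Web/sites",
    "apiVersion": "2022-03-01",
    "name": "billing-api",
    "location": "[resourceGroup().location]",
    "dependsOn": [
      "[resourceId('Microsoft.Web/serverfarms', 'shared-plan')]"
    ],
    "properties": {
      "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', 'shared-plan')]"
    }
  }
]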

Summary

We've looked at several strategies we can use to improve the performance of web applications and web APIs we've deployed to Azure App Services. Just remember that your first step before trying one of these strategies should be to measure your application performance and obtain a good baseline number. Not every strategy in this article will benefit every application. Starting with baseline performance numbers will allow you to compare strategies and see which ones are the most effective for your application.


About the Author

K. Scott Allen

K. Scott Allen is the founder of OdeToCode and a developer, teacher, and Pluralsight author. He is a Microsoft MVP and serves as CTO of Medisolv Inc. A longtime .NET developer, Scott has also worked on embedded software and large JavaScript applications. Follow his blog at https://odetocode.com, or his tweets as @OdeToCode.
