The relationship between web developers and IT pros is too often strained. IT pros in your organization appear to want to stand in the way of your progress, always saying no, always telling you the problem is "your" application and not "their" servers. Sometimes IT pros are stubborn and protective of "their" infrastructure. I know that IT pros make you want to pull your hair out. I hate to tell you, though, that very often...they feel the same way about you.
Remember that the best IT pros ensure that -- from the perspective of their users -- nothing unexpected happens. An IT pro's job is to ensure that non-technical people never need to think about technology; it's just reliably and dependably there. When an IT pro does his or her job perfectly, IT is forgotten. That new version of your application? In the eyes of an IT pro, it's just the most likely candidate to ruin a perfectly non-eventful day (at least as far as infrastructure goes).
I've spent a lot of time sitting on the front lines of these wars, trying to be the peacemaker between the two camps. With that goal in mind, my intent here is to convey on behalf of Windows IT pros everywhere the top items that will not only make them happy but also make you, the Microsoft platform developer, happy as well. These tips make it easier for IT pros to manage your applications, as well as help your applications run more efficiently, more securely, and more reliably. And that makes everyone happy.
Proper Logging Is Everyone's Friend
Web applications of a reasonable size will have some type of logging or diagnostic capture capabilities built in. Seasoned developers are aware of many of the reasons good logging information is crucial to properly running and maintaining a web application. But it's important to keep all interested parties in mind when you think about logging.
Windows event log. The first place an IT pro looks when anything goes wrong is the Windows event log. Although there might be good reasons to store logging data in other places, the Windows event log offers some real advantages. Its biggest advantage is that it's the standard log source on a Windows machine. On top of that, the Windows event log is always available, our monitoring tools hook into it easily, it shows logging data from many sources and not just your application, and (possibly most important) at 3:00 a.m., when we're trying to solve a technical issue, it's easy to remember how to open the Windows event log. It's not necessarily as easy to remember where a particular application stores its logging data when that application uses a custom logging mechanism.
Writing to the Windows event log is very easy from any programming language. From a .NET C# application, writing an entry to the event log requires the following code:
EventLog.WriteEntry("Message Source", "Message", EventLogEntryType.Error);
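One caveat: the event source must already be registered before you can write to it, and registering a source requires administrative rights. A minimal sketch, assuming a hypothetical source name "MyWebApp":

```csharp
using System.Diagnostics;

// Register the event source once (requires administrative rights --
// typically done at install time, not at runtime), then write an entry
// to the Application log. "MyWebApp" is a placeholder source name.
if (!EventLog.SourceExists("MyWebApp"))
{
    EventLog.CreateEventSource("MyWebApp", "Application");
}
EventLog.WriteEntry("MyWebApp", "Payment service unreachable",
    EventLogEntryType.Error);
```

In practice, create the source from your installer or a deployment script so the application itself never needs elevated permissions.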
ELMAH. Even though the Windows event log should be one of your logging destinations, it shouldn't be your only logging destination. The Windows event log has its downsides. For example, it isn't a great place to capture detailed error information, it's difficult to access remotely, and it doesn't readily support consolidating logs from multiple servers into one place. The open-source utility Error Logging Modules and Handlers for ASP.NET (ELMAH) can help resolve those issues for you.
To install ELMAH, you can either grab the NuGet package or simply add a DLL to your web application's bin folder and insert a couple of lines into your Web.config file. Once your site is configured to use ELMAH, navigating to http://yourwebsite.com/elmah.axd will provide you with a screen similar to that in Figure 2.
Out of the box, ELMAH will capture any unhandled errors. In your code, you can start to send additional information to ELMAH as appropriate, as shown in the following code example:
try
{
    // Code that produces the error
}
catch (Exception ex)
{
    ErrorSignal.FromCurrentContext().Raise(ex);
}
Monitoring and Alerting
An important concept that might not be immediately obvious is that there's a difference between event logging and alerting. IT pros need a way to determine on a real-time basis whether an application is healthy. The details of how to programmatically provide that information will vary from application to application. Here are some basic monitoring strategies that will probably apply to your web application.
Status page. The very first step in providing real-time monitoring is to write a status.aspx page that performs basic tests on the key systems required by your application. For example, if your web application depends heavily on a SQL Server database, this might mean doing a very simple SQL query against the database. As shown in the code in Figure 3, if the test returns the expected results, the status page would report success; otherwise, detailed error information returned from the test could be shown.
try
{
    string SqlConnectionString =
        ConfigurationManager.ConnectionStrings["SqlServer"].ConnectionString;
    using (SqlConnection connection = new SqlConnection(SqlConnectionString))
    using (SqlCommand command = new SqlCommand("SELECT 1", connection))
    {
        connection.Open();
        command.ExecuteScalar(); // run a simple query against the database
    }
    lblStatus.Text = "[Success]";
}
catch (Exception ex)
{
    lblStatus.Text = ex.Message;
}
An HTTP status monitor (either part of a larger monitoring package such as Microsoft System Center or a standalone service such as Pingdom) can check that page on a schedule and alert administrators whenever [Success] isn't returned.
Error monitoring. The next step is to write an error.aspx page that checks the number of errors that have occurred (remember, your ELMAH database has this information for you) in the past 15 minutes. When the number of errors exceeds a threshold (you'll have to determine what that number should be), it returns something other than [Success]. This is very helpful for catching errors reported by your users that are unrelated to system outages. The status.aspx page watches for your dependent systems (e.g., IIS, network, SQL) to be functional, whereas error.aspx watches for issues that get past your QA processes.
The query against your ELMAH database is very simple:
SELECT COUNT(*)
FROM [ELMAH_Error]
WHERE [TimeUtc] > DATEADD(mi, -15, GETUTCDATE())
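Putting it together, the error.aspx logic might look like the following sketch. It assumes ELMAH's default ELMAH_Error table, a connection string named "Elmah", an illustrative threshold of 10, and a status label named lblStatus as in the earlier example:

```csharp
using System;
using System.Configuration;
using System.Data.SqlClient;

// Hypothetical error.aspx check: report [Success] unless more than
// `threshold` errors were logged by ELMAH in the past 15 minutes.
const int threshold = 10;
string connString =
    ConfigurationManager.ConnectionStrings["Elmah"].ConnectionString;
using (SqlConnection connection = new SqlConnection(connString))
using (SqlCommand command = new SqlCommand(
    "SELECT COUNT(*) FROM [ELMAH_Error] " +
    "WHERE [TimeUtc] > DATEADD(mi, -15, GETUTCDATE())", connection))
{
    connection.Open();
    int recentErrors = (int)command.ExecuteScalar();
    lblStatus.Text = recentErrors <= threshold
        ? "[Success]"
        : string.Format("[{0} errors in the past 15 minutes]", recentErrors);
}
```

The right threshold depends on your traffic; pick a number, watch it for a few release cycles, and adjust.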
The next time you (or even better, your IT administrator) release an update to your application, you'll find it significantly less stressful having this page to watch for increases in the number of errors. On more than one occasion, I've seen this type of monitoring detect errors caused by only a small portion of an application's users -- which is why the app made it through QA without the errors being detected. Using the monitoring I just described, we were able to resolve the issue before customers reported it.
I've seen a lot of creative ways to monitor web applications. The two techniques I've covered here are just a start. Think about your application and how you can programmatically determine the status of its health. IT pros will already be watching for healthy CPU, memory, disk space, and other system-level indicators. What other factors related to your application are crucial for a good user experience? For example, I had a client with a specific number of hardware devices that were supposed to call into a web service every four hours. The client created a status page running a SQL query that watched for that number to exceed the expected range.
Sit down with your IT staff and discuss what they can monitor and what you can provide them that will help them provide proactive monitoring.
Your Application Pool Will Be Recycled
When Microsoft IIS hosts your web application, it does so using an application pool. An application pool is made up of one or more worker processes named w3wp.exe. It's the application pool's w3wp.exe worker process that hosts the execution environment for your web application's code. IIS will recycle (shut down and restart) your application pool's worker processes from time to time. Although the idea of the worker process restarting might make developers shudder, it makes IT pros smile because it fixes so many issues. As a developer, though, you need to realize that application pool recycling is inevitable, and you just need to deal with it. Don't worry, though; there are steps you can take to make your application handle an application pool recycle without a hiccup.
Why does it recycle? Recycling an application pool is like rebooting your computer. It gives your web application a fresh start and resolves a high percentage of issues.
By default, IIS 7.0 and 7.5 will recycle your application pool every 29 hours. This is similar to how we used to reboot our computers every day to clear out any issues that had cropped up along the way.
In addition to recycling at regular intervals, IIS will recycle your application pool if it thinks something is wrong. Without getting into the details of how IIS determines your application's health, let me assure you that IIS is almost always right on this one. I've never run across an issue where IIS recycled a healthy application by mistake.
In addition, IT pros will often recycle your application pool themselves if they ever think something is wrong. Recycling the application pool will often fix the issue we're investigating, so we naturally try that first.
How do you survive a recycle? When an application pool is recycled, IIS eliminates the problem of the pool not being available for new requests by overlapping the application pool processes. First, IIS starts a new w3wp.exe process to host your application. Once this new process is started, any new requests are directed to the new process. Then, after all the requests still being handled by the old process have finished, the old process is killed. As long as no in-process session-state data exists, your users will never know an application pool recycle occurred.
Out of the box, ASP.NET stores session-state data in process (aka in the application pool process). When a recycle happens, that session-state data is lost.
If you're fortunate enough to not have to store anything in session state that can't be repopulated without user interaction, then congratulations! However, if you have important data in session state (e.g., logon data), then you need to look at storing session state out of process.
Out-of-process session state. ASP.NET provides three locations to store session state out of process: the ASP.NET State Service, SQL Server, or a custom session-state service.
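For example, switching to the ASP.NET State Service is a small change to Web.config. A sketch, assuming the State Service is running on its default port on the local machine (adjust the server name for your environment):

```xml
<system.web>
  <!-- StateServer keeps session data in the ASP.NET State Service
       process, so it survives an application pool recycle -->
  <sessionState mode="StateServer"
                stateConnectionString="tcpip=localhost:42424" />
</system.web>
```

Switching to mode="SQLServer" with a sqlConnectionString attribute works the same way and additionally survives a reboot of the web server itself.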
There are different advantages and disadvantages to the various session-state services. The key point is that the service separates your session-state data from your application pool. It also allows your application to be distributed in a web farm, where multiple web servers accept requests for your users. Developers often underestimate the value of that flexibility. Although you're surely conscious of the performance implications of your code, sometimes deadlines don't allow you to focus on performance optimizations.
By taking the steps to move your session state out of process, you allow your application to survive an application pool recycle without affecting your users, and you enable your IT administrator to add web servers to your application as traffic increases. By doing a little work up front, you gain two large benefits. In your development environment, use an out-of-process session-state service on your local machine to make sure your application supports that from version one.
Store Settings in Configuration Files
Developers are generally pretty good about storing configuration data such as SQL connection strings in a configuration file instead of hard-coded in the application. Look through your application and see whether there are other configuration settings that could be moved out to the configuration file. Does your application send email via an SMTP server? Make sure that information is specified in your application's configuration file. Does your application store flat files in a file system folder? Make sure you specify that folder's path in your application's configuration file.
Web.config. Use the configuration file that's the standard for your application platform. If you're an ASP.NET programmer, this means using the Web.config file. All the settings discussed previously (e.g., file paths, SMTP server) should be placed in the <appSettings> section of your application's Web.config file. If you're considering putting a configuration value in the Machine.config file, think long and hard about whether doing so is absolutely necessary, and then consult with your IT administrators. Modifying the Machine.config file introduces complexities such as dealing with conflicting settings between different applications, requiring elevated permissions to modify the file, and requiring migration of settings when moving between different versions of .NET. It also complicates deployment: instead of simply copying the application (along with its Web.config), your IT administrator must also modify the Machine.config file on every target machine.
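A sketch of what this looks like in practice, using hypothetical SmtpServer and UploadFolder settings:

```csharp
using System.Configuration;

// Assumes Web.config contains, under <appSettings>:
//   <add key="SmtpServer" value="smtp.example.com" />
//   <add key="UploadFolder" value="D:\AppData\Uploads" />
// Both key names and values are placeholders for illustration.
string smtpServer = ConfigurationManager.AppSettings["SmtpServer"];
string uploadFolder = ConfigurationManager.AppSettings["UploadFolder"];
```

Now an IT administrator can repoint the SMTP server or the upload folder without touching (or even having) your source code.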
Web farms and configuration files. Never create a configuration value that can be determined programmatically. A common example is when you need the name of the computer running the application and you ask an administrator to configure your configuration file instead of pulling that information programmatically. When you moved session state out of process as discussed in the previous section, you made it very easy for your IT administrator to put your application in a farm. When your IT administrator moves your site to a web farm, the admin will either copy your site to multiple servers using a deployment tool or centralize the configuration using IIS's Shared Configuration feature. If the configuration file needs to be modified on every web server (e.g., to specify that web server's name), that requirement increases the difficulty of adding servers as well as the likelihood of a mistake being made.
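For the computer-name example, there's no configuration entry to maintain at all:

```csharp
using System;

// Instead of asking an administrator to maintain a "ServerName" entry
// in every server's configuration file, pull it programmatically:
string serverName = Environment.MachineName;
```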
URL dependencies. On this same note, make sure your application isn't dependent on a specific URL. An IT admin will need to access your application via different host names. If your website is website.com, I might want to access your application on a specific web server node by using webserver1.website.com.
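One way to stay host-name agnostic is to build links from the request itself rather than from a hard-coded site URL. A sketch from inside an ASP.NET page (the page path is a placeholder):

```csharp
using System;

// Derive the base URL from whatever host name the request actually
// arrived on, so webserver1.website.com works as well as website.com.
string baseUrl = Request.Url.GetLeftPart(UriPartial.Authority);
string reportUrl = baseUrl + ResolveUrl("~/reports/summary.aspx");
```

Application-relative paths (the "~/" syntax) serve the same purpose for links within your own site.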
This next tip is a quick one. If you ever ask a question along the lines of "What's the largest number of x's that y can z?" step back and re-evaluate what you're trying to accomplish. A good example is, "What's the largest number of files that Windows can store in a folder?" Yes, there's a theoretical limit. However, long before you reach that limit, you're pushing the boundaries of what's practical. Working with the data in this example will be extremely difficult (have you ever tried to copy one million files from one folder to another?). Also remember that systems and applications receive more testing around normal conditions, not extreme conditions. Maybe a folder can hold a certain number of files, but can my backup application back it up? Can my antivirus software scan that many files in a folder?
I realize there might be cases in which looking into those kinds of details is valid. However, be sure first that there isn't a better way to accomplish what you're trying to do without pushing the boundaries of what's possible.
Do NOT Store Passwords in Plain Text
This tip is even more succinct than the last one. For the sake of your job and mine, never store a password in plain text. The rules are very simple:
- If you ever need to use the password again (e.g., for connecting to a third-party service that doesn't implement some type of OAuth authentication system), then encrypt the password.
- Otherwise, store a hash of the original password.
A user's password is a great example. There's no good reason to store or know that password in clear text. Often it's the same password he or she uses for other sites, possibly even sensitive ones such as a bank. You don't want to be responsible for exposing your service and every other service that user signs in to.
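For the hashing case, .NET's Rfc2898DeriveBytes class (PBKDF2) is one way to produce a salted hash. This is a sketch, not a complete credential store; the iteration count and sizes are illustrative:

```csharp
using System;
using System.Security.Cryptography;

// Hash a password with a random per-user salt using PBKDF2. Store the
// result; verify a login by re-hashing the supplied password with the
// stored salt and comparing the hashes.
public static string HashPassword(string password)
{
    byte[] salt = new byte[16];
    using (var rng = new RNGCryptoServiceProvider())
    {
        rng.GetBytes(salt); // random salt defeats precomputed tables
    }
    using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, 10000))
    {
        byte[] hash = pbkdf2.GetBytes(32);
        return Convert.ToBase64String(salt) + ":" +
               Convert.ToBase64String(hash);
    }
}
```

The deliberately slow, iterated hash is the point: it makes brute-forcing stolen hashes expensive in a way that a bare MD5 or SHA-1 hash does not.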
Integrated Mode/IIS Express
I'll give you two tips for the price of one in this section. By default, IIS 7 (and later) runs your code in what's called Integrated Mode. Starting with IIS 7, there are two modes an application pool can run in: Integrated Mode and Classic Mode. Classic Mode refers to the process by which IIS 6 would pass off requests for ASP.NET pages to the ASP.NET engine. Integrated Mode refers to how IIS 7 includes the ASP.NET engine as part of its request pipeline. This allows for better manageability and functionality because all requests, regardless of page type, are routed through the same pipeline. This also allows you to control the entire life cycle of a request in managed code, such as doing ASP.NET forms authentication for non-ASP.NET pages or doing URL rewriting in managed code. You can switch your application pool to run in Classic Mode (aka IIS 6 mode), but you'll be giving up all the benefits of Integrated Mode.
The built-in web server for Visual Studio, Cassini, runs in Classic Mode. When you take your application from development in Visual Studio to your pre-production servers running IIS, it's possible that you'll need to do some work to fix any issues your application has in Integrated Mode.
Starting with Visual Studio 2010 SP1, you can switch from Cassini to IIS Express simply by right-clicking your web project and selecting Use IIS Express. You can also set IIS Express as the default for all web projects by going to Options, Projects and Solutions, Web Projects, Use IIS Express for new file-based websites and projects. Along with developing in Integrated Mode, by using IIS Express you'll now also get the additional benefits of having a modern and consistent web server, IIS extension support, trace logging, W3 logging, SSL support, and more.
Communicate
I can't end this article without making the most important point: Communicate with your IT colleagues. Reach out to the IT administrators in your organization and work with them. Invite them out to lunch and ask them how you can help them better support your application (trust me -- we'll just be impressed that someone wants to eat lunch with us). If you don't have IT administrators in your company, reach out to your online community. If you aren't part of an online community, then find one. Making a little effort to improve communication between developers and IT pros will go a long way toward improving the performance of your applications and creating a better working environment.
Steve Evans is a Microsoft MVP and has worked in the IT field for over 12 years, specializing in Microsoft technologies.