HTTP is the core protocol of the World Wide Web, governing just about every interaction that a Web client has with a server. And yet the version of HTTP in wide use on the Web is version 1.1, officially released in 1999. (Browsers had widely implemented a pre-release version of the spec in 1996, and there have been some tweaks in the intervening years.) In terms of the earth’s geologic time scale, 1999 for the Internet was in the latter part of the Hadean Eon, before even photosynthesis began, the earliest process leading to the formation of life as we know it. HTTP is a very old technology. How is it that HTTP/1.1 has dominated the Web for such a very long time, when even HTML has made massive strides forward?

Simply put, it works--not well, mind you, but well enough. It seems like every new protocol and standard released since HTTP/1.1 was specifically designed to overcome some limitation in HTTP. And don’t get me started on the security problems! The protocol is so deeply ingrained into the servers, routers, operating systems and browsers connected to the Web that change would be massively disruptive.

Although it was a marvelous achievement a couple of decades ago, HTTP/1.1 has a number of problems for the modern Web. Back in the day, Web pages were largely static pages of text that might have a few other resources embedded in them, such as images. But now even a simple Web page is full of resources coming from multiple servers, with lots of client-side scripting making Ajax-y calls for updates, trying to act like native platform applications. Layer on all the security needed to mitigate modern attacks, and HTTP/1.1 is just limping along, impeding the development of an even better Web.

So the world is more than ready for an update, and it looks like the time has come. Primarily through the efforts of Google, the Internet Engineering Task Force (IETF) in February 2015 approved the HTTP/2 standard, the last major step before publication as an RFC (Request for Comments, the IETF’s odd name for a specification).

The HTTP Working Group used Google’s SPDY protocol as a starting point to build out HTTP/2. SPDY, pronounced "speedy," is an open Web protocol that reduces Web page load time and improves Web security, using a variety of techniques, including prioritization, header compression and multiplexing. Although built into the main Web servers and all the major browsers, SPDY was never widely deployed across the Web’s sites. SPDY is now deprecated, destined to be fully withdrawn sometime in 2016. During its short life, SPDY helped to dramatically speed up the Web, particularly when using SSL/TLS with the https protocol.
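To see why header compression is worth the trouble, here is a minimal Python sketch of the idea SPDY used: running successive requests' headers through a shared zlib compression context, so that the mostly-repeated headers of the second request cost almost nothing on the wire. The header values below are made up for illustration, and note that HTTP/2 itself replaced zlib with the purpose-built HPACK scheme after compression-related attacks such as CRIME.

```python
import zlib

# Two consecutive requests to the same site carry nearly identical headers.
headers_req1 = (
    b"GET /index.html HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)\r\n"
    b"Accept: text/html,application/xhtml+xml\r\n"
    b"Accept-Encoding: gzip, deflate\r\n\r\n"
)
headers_req2 = headers_req1.replace(b"/index.html", b"/style.css")

# One shared compressor for the whole connection: the second request is
# encoded relative to everything already seen, so repeats shrink away.
compressor = zlib.compressobj()
first = compressor.compress(headers_req1) + compressor.flush(zlib.Z_SYNC_FLUSH)
second = compressor.compress(headers_req2) + compressor.flush(zlib.Z_SYNC_FLUSH)

print(len(headers_req1), "->", len(first))    # first request still costs real bytes
print(len(headers_req2), "->", len(second))   # repeated headers nearly vanish
```

Run it and the second request's compressed headers come out a small fraction of the size of the first's, which is exactly the win for a page that makes dozens of similar requests.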

With SPDY as its foundation, the Working Group made substantial changes for the resulting HTTP/2 specification. The formal name of the protocol is Hypertext Transfer Protocol version 2 (you can find the final draft here; just be warned that it isn’t light reading), but it is widely known as just HTTP/2.

One of the major changes is that a browser now opens just one connection to each server. Under HTTP/1.1, a browser would open multiple parallel connections, each handling a single resource request at a time; HTTP/1.1 did add the ability to keep connections alive and reuse them, saving the overhead of creating and destroying many connections during the course of retrieving a single page and its resources. In HTTP/2, the single connection is much more sophisticated: many requests and responses are multiplexed over it concurrently as independent streams, the available bandwidth is used more fully, and the security negotiation happens just once. This multiplexed connection is one of the main reasons HTTP/2 is more efficient.
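The multiplexing idea can be sketched in a few lines of Python. This is an illustration of the concept only, not the actual HTTP/2 binary framing: chunks of two different responses travel interleaved over one connection, each tagged with a stream id, and the receiver sorts them back out.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    stream_id: int   # which request/response this chunk belongs to
    payload: bytes

# Hypothetical frames for two resources fetched over ONE connection:
# chunks of each response are interleaved rather than queued one behind
# the other, so a slow resource no longer blocks a fast one.
wire = [
    Frame(1, b"<html>"), Frame(3, b"body { col"),
    Frame(1, b"</html>"), Frame(3, b"or: red }"),
]

# The receiver demultiplexes by stream id to reassemble each resource.
streams = {}
for frame in wire:
    streams[frame.stream_id] = streams.get(frame.stream_id, b"") + frame.payload

print(streams[1])  # b'<html></html>'
print(streams[3])  # b'body { color: red }'
```

Contrast this with HTTP/1.1, where each connection is a strict queue: a chunk of stream 3 could not go out until all of stream 1 had finished.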

There is much more to explore in the nitty-gritty details of HTTP/2.

One of the ambitious goals of HTTP/2 was, according to the IETF, “to allow a seamless switch between HTTP/1 and HTTP/2, with minimal changes to applications and APIs, while at the same time offering improved performance and better use of network resources. Web users largely will be able to benefit from the improvements offered by HTTP/2 without having to do anything different.”

This is a practical necessity if HTTP/2 is to have a fighting chance at wide adoption, but it remains to be seen just how effectively the final RFC meets this goal. As near as I can tell, the new version achieves it mainly by preserving HTTP/1.1’s semantics--the same methods, status codes and header fields--while making it easy for the two protocols to work side-by-side, on both the server and the browser, since the wire format itself is entirely new and makes no attempt at backward compatibility.
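The side-by-side negotiation boils down to the two ends agreeing on a protocol before any application code runs. Here is a simplified sketch of that choice as a plain function; in real deployments the selection happens inside the TLS handshake via the ALPN extension, which Python's ssl module exposes as shown at the end.

```python
import ssl

def select_protocol(client_offers, server_supports):
    """Pick the first protocol the server supports from the client's list,
    falling back to plain HTTP/1.1 -- a simplified model of ALPN selection."""
    for proto in server_supports:
        if proto in client_offers:
            return proto
    return "http/1.1"  # old clients keep working unchanged

# A new browser offers both; an HTTP/2-capable server picks h2.
print(select_protocol(["h2", "http/1.1"], ["h2", "http/1.1"]))  # h2
# An old client offers only HTTP/1.1 and is served exactly as before.
print(select_protocol(["http/1.1"], ["h2", "http/1.1"]))        # http/1.1

# The real mechanism, where the TLS library supports it:
if ssl.HAS_ALPN:
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])
```

Because the fallback is always available, a site can turn on HTTP/2 without breaking a single existing client, which is what gives the "seamless switch" goal its teeth.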

It appears that the major Web server vendors are on board with HTTP/2. Microsoft has implemented it in IIS in the Windows 10 Technical Preview, Apache has a module for it, and other servers are in various stages of implementation. And the major browsers are building it in as well. So it appears that the world is poised to go live with HTTP/2 as soon as the final RFC is published. The current draft expires Aug. 15, 2015, but with the February approval the RFC should be out well before then.

As a Web developer, you probably won’t notice anything different when your application is running over HTTP/1.1 or HTTP/2--except that it is very likely that your Web pages are going to load much faster and respond faster when the user clicks around to update portions of the page. And you can take advantage of new HTTP/2 features to speed up your applications even more.
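One such feature is server push, where the server sends critical subresources before the browser asks for them. Some HTTP/2-capable servers and CDNs trigger a push when the application emits a standard Link preload header, sketched here in Python; the resource paths are hypothetical, and whether a push actually happens depends on the server in front of your application.

```python
def preload_header(resources):
    """Build a Link header advertising critical subresources.
    `resources` is a list of (path, type) pairs, e.g. ("/css/site.css", "style")."""
    return ", ".join(f"<{path}>; rel=preload; as={kind}" for path, kind in resources)

# An application hints that its stylesheet and script are critical;
# a push-capable server can then send them alongside the HTML response.
header = preload_header([("/css/site.css", "style"), ("/js/app.js", "script")])
print("Link:", header)
```

Even where push is unavailable, the same header lets the browser start fetching those resources early, so the hint costs nothing to emit.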

It is very likely that HTTP/2 will help take the Web to a whole new level--the real Web 2.0--at least until Web pages and applications get even more insanely complex and we need to again move to something even better!