Speed Web delivery with HTTP compression


Why Compress HTML?

HTML is used in most Web pages and forms the framework in which the rest of the page appears (images, objects, and so on). Unlike images (GIF, JPEG, PNG), which are already compressed, HTML is just ASCII text, which is highly compressible. Compressing HTML can have a major impact on the performance of HTTP, especially as PPP lines fill up with data and the only way to obtain higher performance is to reduce the number of bytes transmitted. A compressed HTML page appears to pop onto the screen, especially over slower modems.

HTTP compression, a recommendation of the HTTP 1.1 protocol specification for improved page download time, requires a compression feature implemented at the Web server and a decompression feature implemented at the browser. While popular browsers were able to receive the compressed data as early as three years ago, Web servers were not ready to deliver compressed content. The situation is changing, though, as server compression modules are introduced. Dr. S. Radhakrishnan dissects Web compression, examines the benefits of HTTP compression, offers several compression tools, and highlights the effectiveness of the technology in a case study.

Many Internet applications deliver data and content in the form of dynamically generated HTML; the dynamic HTML content is generated by a Web or application server using such technologies as Java Servlets, JavaServer Pages (JSP), PHP, Perl scripts, or Active Server Pages (ASP). The speed with which these Web pages are available to the client browser on request depends mainly on two things:

  • The Web or application server's ability to generate the content. This is related to the general performance characteristics of the application and the servers.
  • The network bandwidth.

The performance of the Web application is determined by good design, by tuning the application for performance, and, if needed, by providing more hardware power for the servers. The network bandwidth available to the user, which directly determines page-download time, is normally taken for granted. But for the user, it is the speed of Web page delivery that indicates the performance level, not how fast the application executes on the server.

Therefore, to ensure a good user experience, the performance of the network and its bandwidth is considered an important part of the overall performance of the application. This becomes even more important when network speed is low, network traffic is high, or the size of the Web pages is large.

In the case of the Internet, the traffic may not be controllable, but the user's network segment (modem or other technology) and the server's connection to the Internet can be augmented. In the case of Web applications hosted and accessed in close premises through Local Area Networks (LANs), the bandwidth is usually sufficient for fast page download. In the case of Wide Area Networks (WANs), segments of the network may have low speed and high traffic. In this case, the user accessing the application might experience poor page download time.

Ideally, it is desirable to have increased bandwidth in the network; practically, that means additional cost. However, you can get the effect of increased bandwidth without a substantial cash investment. If Web pages (containing mainly plain-text documents and images) are compressed and sent to the browser on request, the speed of page downloads improves regardless of the traffic or speed on the network. The user receives a faster response to an HTTP request.

In this article, I explore the intricacies of Web-based compression technology, detail how to improve Web page download times by compressing the Web pages at the Web server, highlight the current status of the technology, and provide a real-world case study that examines the particular requirements of a project. (Throughout the article, the term Web application refers to an application generating dynamic content -- that is, content created on the fly.)

Now, let's look at the specifics of Web-related compression technology.

Types of compression

First, I examine the following types and attributes of compression:

  • HTTP compression. Compressing content from a Web server
  • Gzip compression. A lossless compressed-data format
  • Static compression. Pre-compression, for when static pages are being delivered
  • Content and transfer encoding. IETF's two-level standard for compressing HTTP contents

HTTP compression

HTTP compression is the technology used to compress contents from a Web server (also known as an HTTP server). The Web server content may be in the form of any of the many available MIME types: HTML, plain text, image formats, PDF files, and more. HTML and image formats are the most widely used MIME formats in a Web application.

Most images used in Web applications (for example, GIF and JPG) are already in compressed format and do not compress much further; certainly, no discernible performance is gained by another incremental compression of these files. However, HTML content -- whether static or created on the fly -- contains only plain text and is ideal for compression.

The focus of HTTP compression is to enable the Web site to serve fewer bytes of data. For this to work effectively, a couple of things are required:

  • The Web server should compress the data
  • The browser should decompress the data and display the pages in the usual manner

Both requirements are obvious; of course, the process of compression and decompression should not consume a significant amount of time or resources.

Browsers and servers have brief conversations over what they'd like to receive and send. Using HTTP headers, they zip messages back and forth over the ether with their content shopping lists. A compression-aware browser tells servers it would prefer to receive encoded content with a message in the HTTP header like this:

GET / HTTP/1.1
Host: www.webcompression.org
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.5) 
  Gecko/20031007 Firebird/0.7
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,
  text/plain;q=0.8,video/x-mng,image/png,image/jpeg,image/gif;q=0.2,*/*;q=0.1
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive

An HTTP 1.1-compliant server would then deliver the requested document using an encoding accepted by the client. Here's a sample response from Navioo.com:

HTTP/1.1 200 OK
Date: Thu, 04 Dec 2003 16:15:12 GMT
Server: Apache/2.0
Vary: Accept-Encoding
Content-Encoding: gzip
Cache-Control: max-age=300
Expires: Thu, 04 Dec 2003 16:20:12 GMT
X-Guru: basic-knowledge=0, general-knowledge=0.2, complete-omnipotence=0.99
Content-Length: 1533
Content-Type: text/html; charset=ISO-8859-1

Now the client knows that the server supports gzip content encoding, and it also knows the size of the file (content-length). The client downloads the compressed file, decompresses it, and displays the page. At least, that is the way it is supposed to work.
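
To make the browser's half of this exchange concrete, here is a minimal Python sketch of the same steps (an illustration only, not how any real browser is implemented); the host name is simply the one from the sample request above, and the actual response may of course differ.

import gzip
import http.client

# Ask for the page and advertise gzip support, just as the browser did above.
conn = http.client.HTTPConnection("www.webcompression.org")
conn.request("GET", "/", headers={"Accept-Encoding": "gzip,deflate"})
resp = conn.getresponse()
raw = resp.read()

# If the server chose gzip content encoding, undo it before use --
# this is the step a browser performs transparently.
if resp.getheader("Content-Encoding") == "gzip":
    html = gzip.decompress(raw)
else:
    html = raw

print(resp.status, len(raw), "bytes on the wire,", len(html), "bytes of HTML")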

 

So what's the hold-up in this seemingly simple process? The recommendations for HTTP compression were stipulated by the IETF (Internet Engineering Task Force) as part of the HTTP 1.1 protocol specification. The publicly available gzip compression format was intended to be the compression algorithm. Popular browsers implemented the decompression feature and were ready to receive the encoded data (as per the HTTP 1.1 protocol specification), but HTTP compression on the Web server side was not implemented as quickly or as seriously.

Gzip compression

Gzip is a lossless compressed-data format. The deflation algorithm used by gzip (also zip and zlib) is an open-source, patent-free variation of the LZ77 (Lempel-Ziv 1977) algorithm.

The algorithm finds duplicated strings in the input data. The second occurrence of a string is replaced by a pointer (in the form of a pair -- distance and length) to the previous string. Distances are limited to 32 KB and lengths are limited to 258 bytes. When a string does not occur anywhere in the previous 32 KB, it is emitted as a sequence of literal bytes. (In this description, string is defined as an arbitrary sequence of bytes and is not restricted to printable characters.)
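
You can watch this duplicate-string elimination at work with a few lines of Python using the zlib module, which implements the same DEFLATE algorithm that gzip uses (the sample markup below is made up purely for illustration):

import zlib

# Repetitive markup compresses very well: each repeated row becomes
# a short (distance, length) back-reference instead of literal bytes.
row = b"<tr><td>item</td><td>price</td></tr>\n"
page = b"<table>\n" + row * 500 + b"</table>\n"

packed = zlib.compress(page, 9)   # 9 = maximum compression level
print(len(page), "bytes ->", len(packed), "bytes",
      "({:.1%} of the original)".format(len(packed) / len(page)))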

Static compression

If the Web content is pre-generated and requires no server-side dynamic interaction with other systems, the content can be pre-compressed and placed in the Web server, with these compressed pages being delivered to the user. Publicly available compression tools (gzip, Unix compress) can be used to compress the static files.
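
As a sketch of what that pre-compression step might look like (the directory name here is only an example), a short Python script can write a .gz copy of every static HTML file for the Web server to hand out:

import gzip
import shutil
from pathlib import Path

# Walk a hypothetical document root and write index.html.gz next to
# index.html, ready to be served to gzip-capable browsers.
for html_file in Path("public_html").rglob("*.html"):
    gz_file = html_file.with_name(html_file.name + ".gz")
    with open(html_file, "rb") as src, gzip.open(gz_file, "wb") as dst:
        shutil.copyfileobj(src, dst)
    print(html_file, "->", gz_file)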

Static compression, though, is not useful when the content has to be generated dynamically, such as on e-commerce sites or on sites which are driven by applications and databases. The better solution is to compress the data on the fly.

Content and transfer encoding

The IETF's standard for compressing HTTP contents includes two levels of encoding: content encoding and transfer encoding. Content encoding applies to methods of encoding and compression that have already been applied to documents before the Web user requests them. This is also known as pre-compressing pages, or static compression. The concept never really caught on because of the complex file-maintenance burden it represents, and few Internet sites use pre-compressed pages.

On the other hand, transfer encoding applies to methods of encoding during the actual transmission of the data.

In modern practice, the difference between content and transfer encoding is blurred, since the pages requested do not exist until after they are requested (they are created in real time). Therefore, the encoding always has to happen in real time.

The browsers, taking the cue from the IETF recommendations, implemented the Accept-Encoding feature by 1998-99. This allows browsers to receive and decompress files compressed with the public algorithms. In this case, the HTTP request header fields sent from the browser indicate that the browser is capable of receiving encoded information. When the Web server receives this request, it can:

  1. Send pre-compressed files as requested. If they are not available, then it can:
  2. Compress the requested static files, send the compressed data, and keep the compressed file in a temporary directory for further requests; or
  3. If transfer encoding is implemented, compress the Web server output on the fly.

As I mentioned, pre-compressing files, as well as real-time compression of static files by the Web server (the first two points above), never caught on because of the complexities of file maintenance, though some Web servers supported these functions to an extent.

The feature of compressing Web server dynamic output on the fly wasn't seriously considered until recently, since its importance is only now being realized. So, sending dynamically compressed HTTP data over the network has remained a dream even though many browsers were ready to receive the compressed formats.
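
To show what on-the-fly compression of dynamic output can look like, here is a minimal sketch of a WSGI-style wrapper in Python; the wrapped application is hypothetical, and a real deployment would normally rely on a server compression module rather than hand-rolled code like this:

import gzip

def gzip_middleware(app):
    """Compress the output of any WSGI application on the fly."""
    def wrapper(environ, start_response):
        # Pass the request through untouched if the browser did not offer gzip.
        if "gzip" not in environ.get("HTTP_ACCEPT_ENCODING", ""):
            return app(environ, start_response)

        captured = {}
        def capture(status, headers, exc_info=None):
            captured["status"], captured["headers"] = status, list(headers)

        body = b"".join(app(environ, capture))   # run the dynamic application
        body = gzip.compress(body)               # encode the generated HTML

        headers = [(k, v) for k, v in captured["headers"]
                   if k.lower() != "content-length"]
        headers += [("Content-Encoding", "gzip"),
                    ("Content-Length", str(len(body)))]
        start_response(captured["status"], headers)
        return [body]
    return wrapper

The wrapper intervenes only when the browser's Accept-Encoding header lists gzip, so clients that cannot decompress still receive plain HTML.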



Speed Web delivery with HTTP compression, Part 2 >
