Speed Up Your Web Site with Varnish
Varnish is a program that can greatly speed up a Web site while reducing the load on the Web server. According to Varnish's official site, Varnish is a "Web application accelerator also known as a caching HTTP reverse proxy".
When you think about what a Web server does at a high level, it receives HTTP requests and returns HTTP responses. In a perfect world, the server would return a response immediately without having to do any real work. In the real world, however, the server may have to do quite a bit of work before returning a response to the client. Let's first look at how a typical Web server handles this, and then see what Varnish does to improve the situation.
Although every server is different, a typical Web server will go through a potentially long sequence of steps to service each request it receives. It may start by spawning a new process to handle the request. Then, it may have to load script files from disk, launch an interpreter process to interpret and compile those files into bytecode and then execute that bytecode. Executing the code may result in additional work, such as performing expensive database queries and retrieving more files from disk. Multiply this by hundreds or thousands of requests, and you can see how the server quickly can become overloaded, draining system resources trying to fulfill requests. To make matters worse, many of the requests are repeats of recent requests, but the server may not have a way to remember the responses, so it's sentenced to repeating the same painful process from the beginning for each request it encounters.
Things are a little different with Varnish in place. For starters, the request is received by Varnish instead of the Web server. Varnish then will look at what's being requested and forward the request to the Web server (known as a back end to Varnish). The back-end server does its regular work and returns a response to Varnish, which in turn gives the response to the client that sent the original request.
If that's all Varnish did, it wouldn't be much help. What gives us the performance gains is that Varnish can store responses from the back end in its cache for future use. The next time the same object is requested, Varnish quickly can serve the response directly from its cache without placing any needless load on the back-end server. The result is that the load on the back end is reduced significantly, response times improve, and more requests can be served per second. One of the things that makes Varnish so fast is that it keeps its cache completely in memory instead of on disk. This and other optimizations allow Varnish to process requests at blinding speeds. However, because memory typically is more limited than disk, you have to size your Varnish cache properly and take measures not to cache duplicate objects that would waste valuable space.
Let's install Varnish. I'm going to explain how to install it from source, but you can install it using your distribution's package manager. The latest version of Varnish is 3.0.3, and that's the version I work with here. Be aware that the 2.x versions of Varnish have some subtle differences in the configuration syntax that could trip you up. Take a look at the Varnish upgrade page on the Web site for a full list of the changes between versions 2.x and 3.x.
Missing dependencies is one of the most common installation problems. Check the Varnish installation page for the full list of build dependencies.
Run the following commands as root to download and install the latest version of Varnish:
cd /var/tmp
wget http://repo.varnish-cache.org/source/varnish-3.0.3.tar.gz
tar xzf varnish-3.0.3.tar.gz
cd varnish-3.0.3
sh autogen.sh
./configure
make
make test
make install
Varnish is now installed under the /usr/local directory. The full path to the main binary is /usr/local/sbin/varnishd, and the default configuration file is /usr/local/etc/varnish/default.vcl.
You can start Varnish by running the varnishd binary. Before you can do that though, you have to tell Varnish which back-end server it's caching for. Let's specify the back end in the default.vcl file. Edit the default.vcl file as shown below, substituting the values for those of your Web server:
backend default {
    .host = "127.0.0.1";
    .port = "80";
}
Now you can start Varnish with this command:
/usr/local/sbin/varnishd -f /usr/local/etc/varnish/default.vcl \
    -a :6081 -P /var/run/varnish.pid -s malloc,256m
This will run varnishd as a dæmon and return you to the command prompt. One thing worth pointing out is that varnishd launches two processes: a manager process and a child worker process. If the child process dies for whatever reason, the manager process will spawn a new one.
Varnishd Startup Options

The -f option tells Varnish where your configuration file lives.
The -a option is the address:port that Varnish will listen on for incoming HTTP requests from clients.
The -P option is the path to the PID file, which will make it easier to stop Varnish in a few moments.
The -s option configures where the cache is kept. In this case, we're using a 256MB memory-resident cache.
If you installed Varnish from your package manager, it may be running already. In that case, stop it first, then use the command above to start it manually; otherwise, the options it's running with may differ from those in this example. A quick way to see whether Varnish is running and what options it was given is with the pgrep command:
/usr/bin/pgrep -lf varnish
Varnish now will relay any requests it receives to the back end you specified, possibly cache the response, and deliver the response back to the client. Let's submit some simple GET requests and see what Varnish does. First, run these two commands on separate terminals:
/usr/local/bin/varnishlog
/usr/local/bin/varnishstat
The following GET command is part of the Perl WWW library (libwww-perl). I use it so you can see the response headers you get back from Varnish. If you don't have libwww-perl, you could use Firefox with the Live HTTP Headers extension or another tool of your choice:
GET -Used http://localhost:6081/
Figure 1. Varnish Response Headers
The options given to the GET command aren't important here. The important thing is that the URL points to the port on which varnishd is listening. Varnish adds three response headers: X-Varnish, Via and Age. These headers tell you at a glance whether a response came from the cache. The X-Varnish header will be followed by either one or two numbers. The single-number version means the response was not in Varnish's cache (a miss), and the number shown is the ID Varnish assigned to the request. If two numbers are shown, it means Varnish found a response in its cache (a hit): the first is the ID of the current request, and the second is the ID of the request from which the cached response was populated. The Via header simply shows that the request went through a proxy. The Age header tells you how long the response has been cached by Varnish, in seconds. The first response will have an Age of 0, and subsequent hits will have an incrementing Age value. If subsequent responses to the same page don't increment the Age header, that means Varnish is not caching the response.
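For illustration, here's roughly what those headers might look like on a cache hit (the values are made up; yours will differ):

HTTP/1.1 200 OK
Via: 1.1 varnish
X-Varnish: 1977246916 1977246913
Age: 17

The two numbers in X-Varnish mark this response as a hit, and the nonzero Age shows it has been in the cache for 17 seconds.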
Now let's look at the varnishstat command launched earlier. You should see something similar to Figure 2.
Figure 2. varnishstat Command
The important lines are cache_hit and cache_miss. cache_hit won't be shown if you haven't had any hits yet. As more requests come in, the counters are updated to reflect hits and misses.
Next, let's look at the varnishlog command launched earlier (Figure 3).
Figure 3. varnishlog Command
This shows you fairly verbose details of the requests and responses that have gone through Varnish. The documentation on the Varnish Web site explains the log output as follows:
The first column is an arbitrary number; it defines the request. Lines with the same number are part of the same HTTP transaction. The second column is the tag of the log message. All log entries are tagged with a tag indicating what sort of activity is being logged. Tags starting with Rx indicate Varnish is receiving data, and Tx indicates sending data. The third column tells us whether the data is coming from or going to the client (c), or to/from the back end (b). The fourth column is the data being logged.
varnishlog has various filtering options to help you find what you're looking for. I recommend playing around and getting comfortable with varnishlog, because it will really help you debug Varnish. Read the varnishlog(1) man page for all the details. Next are some simple examples of how to filter with varnishlog.
To view communication between Varnish and the client (omitting the back end):
/usr/local/bin/varnishlog -c
To view communication between Varnish and the back end (omitting the client):
/usr/local/bin/varnishlog -b
To view the headers received by Varnish (both the client's request headers and the back end's response headers):
/usr/local/bin/varnishlog -i RxHeader
Same thing, but limited to just the client's request headers:
/usr/local/bin/varnishlog -c -i RxHeader
Same thing, but limited to just the back end's response headers:
/usr/local/bin/varnishlog -b -i RxHeader
To write all log messages to the /var/log/varnish.log file and dæmonize:
/usr/local/bin/varnishlog -Dw /var/log/varnish.log
To read and display all log messages from the /var/log/varnish.log file:
/usr/local/bin/varnishlog -r /var/log/varnish.log
Varnish keeps a circular log in memory in order to stay fast, which means old log entries are lost unless they're saved elsewhere. The last two examples above demonstrate how to store your Varnish log on disk, saving all log messages to a file for later review.
If you wanted to stop Varnish, you could do so with this command:
kill `cat /var/run/varnish.pid`
This will send the TERM signal to the process whose PID is stored in the /var/run/varnish.pid file. Because this is the varnishd manager process, Varnish will shut down.
Now that you know how to start and stop Varnish, and examine cache hits and misses, the natural question to ask is what does Varnish cache, and for how long?
Varnish is conservative about what it will cache by default, but you can change most of these defaults. By default, it will consider caching only GET and HEAD requests. It won't cache a request with either a Cookie or Authorization header, and it won't cache a response with either a Set-Cookie or Vary header. One thing Varnish looks at is the Cache-Control header. This header is optional and may be present in the request or the response. It may contain a list of one or more comma-separated directives and is meant to apply caching restrictions. However, Varnish won't alter its caching behavior based on the Cache-Control header, with the exception of the max-age directive. This directive looks like Cache-Control: max-age=n, where n is a number. If Varnish receives the max-age directive in the back end's response, it will use that value to set the cached response's expiration (TTL), in seconds. Otherwise, Varnish will set the cached response's TTL to the value of its default_ttl parameter, which defaults to 120 seconds.
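For example, a back-end response that begins with the following headers would be cached for 3,600 seconds instead of the 120-second default (an illustrative response; the values are made up):

HTTP/1.1 200 OK
Content-Type: text/html
Cache-Control: max-age=3600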
Varnish has many other configuration parameters, all with sensible defaults, and they're fully explained in the varnishd(1) man page. You may want to change some of the default parameter values. One way to do that is to launch varnishd with the -p option. This has the downside of having to stop and restart Varnish, which will flush the cache. A better way of changing parameters is by using what Varnish calls the management interface. The management interface is available only if varnishd was started with the -T option, which specifies the address and port on which the management interface listens. You can connect to the management interface with the varnishadm command. Once connected, you can query parameters and change their values without having to restart Varnish.
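Here's a quick sketch of that workflow, assuming you started varnishd with -T localhost:6082 added to the command shown earlier (any free port will do) and no -S secret file:

/usr/local/bin/varnishadm -T localhost:6082 param.show default_ttl
/usr/local/bin/varnishadm -T localhost:6082 param.set default_ttl 300

The first command displays the current value of default_ttl, and the second raises it to 300 seconds without restarting Varnish. Note that a new default_ttl applies only to objects cached from that point on.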
To learn more, read the man pages for varnishd, varnishadm and varnish-cli.
You'll likely want to change what Varnish caches and how long it's cached for—this is called your caching policy. You express your caching policy in the default.vcl file by writing VCL. VCL stands for Varnish Configuration Language, which is like a very simple scripting language specific to Varnish. VCL is fully explained in the vcl(7) man page, and I recommend reading it.
Before changing default.vcl, let's think about the process Varnish goes through to fulfill an HTTP request. I call this the request/response cycle, and it all starts when Varnish receives a request. Varnish will parse the HTTP request and store the details in an object known to Varnish simply as req. Now Varnish has a decision to make based entirely on the req object—should it check its cache for a match or just forward the request to the back end without caching the response? If it decides to bypass its cache, the only thing left to do is forward the request to the back end and then forward the response back to the client. However, if it decides to check its cache, things get more interesting. This is called a cache lookup, and the result will either be a hit or a miss. A hit means that Varnish has a response in its cache for the client. A miss means that Varnish doesn't have a cached response to send, so the only logical thing to do is send the request to the back end and then cache the response it gives before sending it back to the client.
Now that you have an idea of Varnish's request/response cycle, let's talk about how to implement your caching policy by changing the decisions Varnish makes in the process. Varnish has a set of subroutines that carry out the process described above. Each of these subroutines performs a different part of the process, and the return value from the subroutine is how you tell Varnish what to do next. In addition to setting the return values, you can inspect and make changes to various objects within the subroutines. These objects represent things like the request and the response. Each subroutine has a default behavior that can be seen in default.vcl. You can redefine these subroutines to get Varnish to behave how you want.
Varnish Subroutines

The Varnish subroutines have default definitions, which are shown in default.vcl. Just because you redefine one of these subroutines doesn't mean the default definition will not execute. In particular, if you redefine one of the subroutines but don't return a value, Varnish will proceed to execute the default subroutine, as shown in the sketch below. All the default Varnish subroutines return a value, so it makes sense that Varnish uses them as a fallback.
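For instance, here's a minimal vcl_recv that modifies the request but deliberately doesn't return a value (X-Internal-Token is a hypothetical header used for illustration):

sub vcl_recv {
    # Strip a header, but return nothing; after this code runs,
    # Varnish falls through to the built-in default vcl_recv,
    # which decides between pass and lookup as usual.
    unset req.http.X-Internal-Token;
}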
The first subroutine to look at is called vcl_recv. This gets executed after receiving the full client request, which is available in the req object. Here you can inspect and make changes to the original request via the req object. You can use the value of req to decide how to proceed. The return value is how you tell Varnish what to do. I'll put the return values in parentheses as they are explained. Here you can tell Varnish to bypass the cache and send the back end's response back to the client (pass). You also can tell Varnish to check its cache for a match (lookup).
Next is the vcl_pass subroutine. If you returned pass in vcl_recv, this is where you'll be just before sending the request to the back end. You can tell Varnish to continue as planned (pass) or to restart the cycle at the vcl_recv subroutine (restart).
The vcl_miss and vcl_hit subroutines are executed depending on whether Varnish found a suitable response in the cache. From vcl_miss, your main options are to get a response from the back-end server and cache it (fetch) or to get a response from the back end and not cache it (pass). vcl_hit is where you'll be if Varnish successfully finds a matching response in its cache. From vcl_hit, you have the cached response available to you in the obj object. You can tell Varnish to send the cached response to the client (deliver) or have Varnish ignore the cached response and return a fresh response from the back end (pass).
The vcl_fetch subroutine is where you'll be after getting a fresh response from the back end. The response will be available to you in the beresp object. You either can tell Varnish to continue as planned (deliver) or to start over (restart).
From vcl_deliver, you can finish the request/response cycle by delivering the response to the client and possibly caching it as well (deliver), or you can start over (restart).
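To make the cycle concrete, here's a bare-bones sketch that spells out one explicit decision at each of the steps just described (these are illustrative choices, not Varnish's built-in defaults, which are more careful):

sub vcl_recv {
    # Check the cache for a match.
    return (lookup);
}
sub vcl_miss {
    # Not in the cache: fetch from the back end and cache the response.
    return (fetch);
}
sub vcl_hit {
    # Found in the cache (available here as obj): send it to the client.
    return (deliver);
}
sub vcl_fetch {
    # A fresh back-end response (beresp): continue as planned.
    return (deliver);
}
sub vcl_deliver {
    # Hand the final response (resp) to the client.
    return (deliver);
}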
As previously stated, you express your caching policy within the subroutines in default.vcl. The return values tell Varnish what to do next. You can base your return values on many things, including the values held in the request (req) and response (resp) objects mentioned earlier. In addition to req and resp, there also is a client object representing the client, a server object and a beresp object representing the back end's response. It's important to realize that not all objects are available in all subroutines. It's also important to return one of the allowed values from each subroutine. One of the hardest things to remember when starting out with Varnish is which objects are available in which subroutines, and what the legal return values are. To make it easier, I've created a couple of reference tables (Tables 1 and 2). They will help you get up to speed quickly by not having to memorize everything up front or dig through the documentation every time you make a change.
Table 1. Objects Available in Each Subroutine (X = available)

|             | client | server | req | bereq | beresp | resp | obj |
|-------------|--------|--------|-----|-------|--------|------|-----|
| vcl_recv    |   X    |   X    |  X  |       |        |      |     |
| vcl_pass    |   X    |   X    |  X  |   X   |        |      |     |
| vcl_miss    |   X    |   X    |  X  |   X   |        |      |     |
| vcl_hit     |   X    |   X    |  X  |       |        |      |  X  |
| vcl_fetch   |   X    |   X    |  X  |   X   |   X    |      |     |
| vcl_deliver |   X    |   X    |  X  |       |        |  X   |     |
Table 2. Legal Return Values for Each Subroutine (X = allowed)

|             | pass | lookup | error | restart | deliver | fetch | pipe | hit_for_pass |
|-------------|------|--------|-------|---------|---------|-------|------|--------------|
| vcl_recv    |  X   |   X    |   X   |         |         |       |  X   |              |
| vcl_pass    |  X   |        |   X   |    X    |         |       |      |              |
| vcl_miss    |  X   |        |   X   |         |         |   X   |      |              |
| vcl_hit     |  X   |        |   X   |    X    |    X    |       |      |              |
| vcl_fetch   |      |        |   X   |    X    |    X    |       |      |      X       |
| vcl_deliver |      |        |   X   |    X    |    X    |       |      |              |
Be sure to read the full explanation of VCL, available subroutines, return values and objects in the vcl(7) man page.
Let's put it all together by looking at some examples.
Normalizing the request's Host header:
sub vcl_recv {
    if (req.http.host ~ "^www\.example\.com") {
        set req.http.host = "example.com";
    }
}
Notice you access the request's host header by using req.http.host. You have full access to all of the request's headers by putting the header name after req.http. The ~ operator is the match operator. That is followed by a regular expression. If you match, you then use the set keyword and the assignment operator (=) to normalize the hostname to simply "example.com". A really good reason to normalize the hostname is to keep Varnish from caching duplicate responses. Varnish looks at the hostname and the URL to determine if there's a match, so the hostnames should be normalized if possible.
Here's a snippet from the default vcl_recv subroutine:
sub vcl_recv {
    if (req.request != "GET" && req.request != "HEAD") {
        return (pass);
    }
    return (lookup);
}
You can see that if it's not a GET or HEAD request, Varnish returns pass and won't cache the response. If it is a GET or HEAD request, Varnish looks it up in the cache.
Removing the request's cookies if the URL matches:
sub vcl_recv {
    if (req.url ~ "^/images") {
        unset req.http.cookie;
    }
}
That's an example from the Varnish Web site. It removes cookies from the request if the URL starts with "/images". This makes sense when you recall that Varnish won't cache a request with a cookie. By removing the cookie, you allow Varnish to cache the response.
Removing response cookies for image files:
sub vcl_fetch {
    if (req.url ~ "\.(png|gif|jpg)$") {
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 1h;
    }
}
That's another example from Varnish's Web site. Here you're in the vcl_fetch subroutine, which happens after fetching a fresh response from the back end. Recall that the response is held in the beresp object. Notice that here you're accessing both the request (req) and the response (beresp). If the request is for an image, you remove the Set-Cookie header set by the server and override the cached response's TTL to one hour. Again, you do this because Varnish won't cache responses with the Set-Cookie header.
Now, let's say you want to add a header to the response called X-Hit. The value should be 1 for a cache hit and 0 for a miss. The easiest way to detect a hit is from within the vcl_hit subroutine. Recall that vcl_hit will be executed only when a cache hit occurs. Ideally, you'd set the response header from within vcl_hit, but looking at Table 1 in this article, you see that neither of the response objects (beresp and resp) is available within vcl_hit. One way around this is to set a temporary header in the request, then later set the response header. Let's take a look at how to solve this.
Adding an X-Hit response header:
sub vcl_hit {
    set req.http.tempheader = "1";
}

sub vcl_miss {
    set req.http.tempheader = "0";
}

sub vcl_deliver {
    set resp.http.X-Hit = "0";
    if (req.http.tempheader) {
        set resp.http.X-Hit = req.http.tempheader;
        unset req.http.tempheader;
    }
}
The code in vcl_hit and vcl_miss is straightforward—set a value in a temporary request header to indicate a cache hit or miss. The interesting bit is in vcl_deliver. First, I set a default value for X-Hit to 0, indicating a miss. Next, I detect whether the request's tempheader was set, and if so, set the response's X-Hit header to match the temporary header set earlier. I then delete the tempheader to keep things tidy, and I'm all done. The reason I chose the vcl_deliver subroutine is because the response object that will be sent back to the client (resp) is available only within vcl_deliver.
Let's explore a similar solution that doesn't work as expected.
Adding an X-Hit response header—the wrong way:
sub vcl_hit {
    set req.http.tempheader = "1";
}

sub vcl_miss {
    set req.http.tempheader = "0";
}

sub vcl_fetch {
    set beresp.http.X-Hit = "0";
    if (req.http.tempheader) {
        set beresp.http.X-Hit = req.http.tempheader;
        unset req.http.tempheader;
    }
}
Notice that within vcl_fetch, I'm now altering the back end's response (beresp), not the final response sent to the client. This code appears to work as expected, but it has a major bug. The first request is a miss, so the response is fetched from the back end, has X-Hit set to "0" and is then cached. Subsequent requests result in a cache hit and never enter the vcl_fetch subroutine. The result is that all cache hits continue having X-Hit set to "0". These are the types of mistakes to look out for when working with Varnish.
The easiest way to avoid these mistakes is to keep those reference tables handy, remember when each subroutine is executed in Varnish's workflow, and always test the results.
Let's look at a simple way to tell Varnish to cache everything for one hour. This is shown only as an example and isn't recommended for a real server.
Cache all responses for one hour:
sub vcl_recv {
    return (lookup);
}

sub vcl_fetch {
    set beresp.ttl = 1h;
    return (deliver);
}
Here, I'm overriding two default subroutines with my own. If I hadn't returned "deliver" from vcl_fetch, Varnish still would have executed its default vcl_fetch subroutine looking for a return value, and this would not have worked as expected.
Once you get Varnish to implement your caching policy, you should run some benchmarks to see if there is any improvement. The benchmarking tool I use here is the Apache benchmark tool, known as ab. You can install this tool as part of the Apache Web server or as a separate package—depending on your system's package manager. You can read about the various options available to ab in either the man page or at the Apache Web site.
In the benchmark examples below, I have a stock Apache 2.2 installation listening on port 80, and Varnish listening on port 6081. The page I'm testing is a very basic Perl CGI script I wrote that just outputs a one-liner HTML page. It's important to benchmark the same URL against both the Web server and Varnish so you can make a direct comparison. I run the benchmark from the same machine that Apache and Varnish are running on in order to eliminate the network as a factor. The ab options I use are fairly straightforward. Feel free to experiment with different ab options and see what happens.
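The test script itself isn't shown here, but a minimal Perl CGI script along the lines described might look like this (a sketch; the markup is a placeholder):

#!/usr/bin/perl
# test -- a basic CGI script that outputs a one-liner HTML page
print "Content-Type: text/html\r\n\r\n";
print "<html><body>Hello from the back end!</body></html>\n";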
Let's start with 1000 total requests (-n 1000) and a concurrency of 1 (-c 1).
Benchmarking Apache with ab:
ab -c 1 -n 1000 http://localhost/cgi-bin/test
Figure 4. Output from ab Command (Apache)
Benchmarking Varnish with ab:
ab -c 1 -n 1000 http://localhost:6081/cgi-bin/test
Figure 5. Output from ab Command (Varnish)
As you can see, the ab command provides a lot of useful output. The metrics I'm looking at here are "Time per request" and "Requests per second" (rps). You can see that Apache came in at just over 1ms per request (780 rps), while Varnish came in at 0.1ms (7336 rps)—nearly ten times faster than Apache. This shows that Varnish is faster, at least based on the current setup and isolated testing. It's a good idea to run ab with various options to get a feel for performance—particularly by changing the concurrency values and seeing what impact that has on your system.
System Load and %iowait

System load is a measure of how much load is being placed on your CPU(s). As a general rule, you want the number to stay below 1.0 per CPU or core on your system. That means if you have a four-core system, as in the machine I'm benchmarking here, you want your system's load to stay below 4.0.

%iowait is the percentage of CPU time spent waiting on input/output. A high %iowait indicates your system is disk-bound, performing so many disk I/O operations that the system slows down. For example, if your server had to retrieve 100 files or more for each request, it likely would drive %iowait very high, indicating that the disk is a bottleneck.
The goal is to not only improve response times, but also to do so with as little impact on system resources as possible. Let's compare how a prolonged traffic surge affects system resources. Two good measures of system performance are the load average and the %iowait. The load average can be seen with the top utility, and the %iowait can be seen with the iostat command. You're going to want to keep an eye on both top and iostat during the prolonged load test to see how the numbers change. Let's fire up top and iostat, each on separate terminals.
Starting iostat with a two-second update interval:
iostat -c 2
Starting top:
/usr/bin/top
Now you're ready to run the benchmark. You want ab to run long enough to see the impact on system performance. This typically means anywhere from one minute to ten minutes. Let's re-run ab with a lot more total requests and a higher concurrency.
Load testing Apache with ab:
ab -c 50 -n 100000 http://localhost/cgi-bin/test
Figure 6. System Load Impact of Traffic Surge on Apache
Load testing Varnish with ab:
ab -c 50 -n 1000000 http://localhost:6081/cgi-bin/test
Figure 7. System Load Impact of Traffic Surge on Varnish
First let's compare response times. Although you can't see it in the screenshots, which were taken just before ab finished, Apache came in at 23ms per request (2097 rps), and Varnish clocked in at 4ms per request (12099 rps). The most drastic difference can be seen in the load averages in top. While Apache brought the system load all the way up to 12, Varnish kept the system load near 0 at 0.4. I did have to wait several minutes for the machine's load averages to go back down after the Apache load test before load testing Varnish. It's also best to run these tests on a non-production system that is mostly idle.
Although everyone's servers and Web sites have different requirements and configurations, Varnish may be able to improve your site's performance drastically while simultaneously reducing the load on the server.