Published on: Sun Oct 09 2022
HTTP caching is a standard that many software tools are built on, including browsers, content delivery networks (CDNs), proxy caches, gateways, and more.
Because the standard is so widely used, these tools either support it outright or are at least compatible with it.
That is exactly why it is worth understanding: so much of the web's tooling depends on it.
Illustration of common software tools built on the HTTP caching standard
After going through this guide, you should have a better understanding of how HTTP caching works!
It’s going to save you hours of reading and researching!
There are many details that you can learn about HTTP caching but of all of them, I think these 6 concepts would give you enough to build a solid foundation.
It would be like learning the 80/20 of HTTP caching!
Here are the 6 concepts:
- Cache-Control headers
- Validation
- Vary
- Request collapsing
- Response staleness
- Deleting from the cache
Let’s dive right in!
The `Cache-Control` header provides a way for the server to give instructions to the client or a shared cache about the caching behaviour of a particular resource.
Here is an example:
Cache-Control: max-age=3600 // 60 minutes
`max-age` is one of the most common directives to use. It controls caching by letting us specify how long to cache a particular response.
This is not the only directive; there are several others you can use.
Here are some of the more common ones:
- `max-age` - how long (in seconds) the response stays fresh
- `no-store` - do not cache the response at all
- `no-cache` - cache it, but revalidate with the server before each reuse
- `public` / `private` - whether shared caches may store the response
- `must-revalidate` - once stale, revalidate before reuse
For a full list of directives, check out MDN - Cache Control.
💡 Tip: You can also add several directives by separating them with a comma.
Here is an example:
Cache-Control: public, max-age=3600
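To make the directive syntax concrete, here is a small Python sketch (illustrative only, not a spec-complete parser) that composes and reads a `Cache-Control` value:

```python
def build_cache_control(**directives):
    """Join directives into a single Cache-Control header value."""
    parts = []
    for name, value in directives.items():
        name = name.replace("_", "-")  # max_age -> max-age
        parts.append(name if value is True else f"{name}={value}")
    return ", ".join(parts)

def parse_cache_control(value):
    """Split a Cache-Control value back into a dict of directives."""
    result = {}
    for part in value.split(","):
        part = part.strip()
        if "=" in part:
            name, _, val = part.partition("=")
            result[name] = val
        else:
            result[part] = True    # value-less directives like "public"
    return result

header = build_cache_control(public=True, max_age=3600)
print(header)  # public, max-age=3600
```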
Of all the concepts, this is probably the most important one: validation.
When a resource in a cache becomes stale, the cache does not immediately remove it.
Rather, it “validates” the resource with the server by making a conditional request with the `If-None-Match` (or `If-Modified-Since`) header.
In a later section, we’ll also discuss the different strategies for handling stale responses.
ETags are typically a checksum (a hash or a version number) generated by the server to check whether or not a resource has changed.
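As an illustration, one common approach is to hash the response body (servers may also derive ETags from modification times or version numbers); a minimal Python sketch:

```python
import hashlib

def make_etag(body: bytes) -> str:
    """Derive a strong ETag from the response body.
    Hashing the body is one common approach, not the only one."""
    return '"' + hashlib.sha256(body).hexdigest()[:16] + '"'

v1 = make_etag(b"hello world")
v2 = make_etag(b"hello world!")
print(v1 != v2)  # True: different content produces a different ETag
```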
Let’s see how this fits in with the validation request.
To perform a validation for a cached response, the client sends a conditional request to the server with the appropriate details, including the `If-None-Match` header with the cached response’s `ETag` as the value.
Illustration of a validation request: client validating a response with server
There are only two possible outcomes from the validation request.
Resource is not modified - The server responds with an HTTP `304 Not Modified`, and the cached response remains valid
New resource is available - The server responds with an HTTP `200 OK`, and returns the latest version of the response
Illustration of the possible outcomes
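The server side of this decision can be sketched in a few lines of Python (a simplified model that handles only `If-None-Match`):

```python
def validate(request_headers: dict, current_etag: str, body: bytes):
    """Return the (status, body) a server would send for a conditional request.
    Simplified sketch: real servers also handle If-Modified-Since, weak ETags, etc."""
    if request_headers.get("If-None-Match") == current_etag:
        return 304, b""     # Not Modified: the client may keep using its copy
    return 200, body        # Changed (or no validator): send the full response

etag = '"abc123"'
print(validate({"If-None-Match": '"abc123"'}, etag, b"data"))  # (304, b'')
print(validate({"If-None-Match": '"old"'}, etag, b"data"))     # (200, b'data')
```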
Now, what happens if we have variations of the resource but the same URL?
How do we go about caching that? That’s where the `Vary` header comes in.
When a response comes back from the server, it is typically cached based on its URL.
However, there are times when the same URL can produce many variations (e.g. different languages or compression formats).
Some examples include:
- A different language or localization
- The type of compression used for transferring between the client & origin server
So, what we can do in this scenario is provide a `Vary` header in the response from the server to alter the cache key.
Example: Caching when using the `Vary: Accept-Language` header
💡 Tip: When using the `Vary` header, you are not limited to just one header.
You can provide one or many of them in a comma-separated format (e.g. `Vary: Accept-Language, Accept-Encoding`); those values will be used to create the cache key for storing the cached response.
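A simplified Python sketch of how a cache might derive its key from the URL plus the headers listed in `Vary` (real caches also normalize header values, which this skips):

```python
def cache_key(url: str, request_headers: dict, vary: str = "") -> tuple:
    """Build a cache key from the URL plus any request headers named in Vary."""
    varied = tuple(
        (name.strip(), request_headers.get(name.strip(), ""))
        for name in vary.split(",") if name.strip()
    )
    return (url, varied)

key_en = cache_key("/home", {"Accept-Language": "en"}, vary="Accept-Language")
key_fr = cache_key("/home", {"Accept-Language": "fr"}, vary="Accept-Language")
print(key_en != key_fr)  # True: same URL, but separate cache entries
```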
Another concept to understand with HTTP caching is request collapsing.
When there is a shared cache and multiple clients request the same resource, the cache can reduce the number of requests forwarded to the origin server by collapsing them into one.
Let’s go through an example.
When a request comes in for a resource, and assuming a response is not in the cache, the shared cache would forward the request to the origin server.
During this time, let’s say more requests arrive for the same resource.
In this case, the shared cache would not forward any more requests to the server; rather, it would wait for the first request to complete.
Illustration of the request collapsing from multiple clients
Then, when that first request returns, the cache can use the same response to serve all the clients who requested the resource.
Illustration of shared response to serve multiple clients
⚠️ Important: Note that this only applies to responses that can be shared across clients.
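For illustration, here is a small threaded Python sketch of request collapsing: five concurrent requests for the same key result in a single fetch to the "origin". (The class and names are my own, not from any real cache.)

```python
import threading

class CollapsingCache:
    """Sketch of request collapsing: concurrent misses for the same key
    trigger a single forwarded fetch; the other callers wait and reuse it."""

    def __init__(self, fetch):
        self._fetch = fetch          # function that contacts the "origin"
        self._lock = threading.Lock()
        self._inflight = {}          # key -> Event set when the fetch finishes
        self._store = {}             # key -> cached response

    def get(self, key):
        with self._lock:
            if key in self._store:   # cache hit
                return self._store[key]
            event = self._inflight.get(key)
            leader = event is None
            if leader:               # first requester forwards to the origin
                event = threading.Event()
                self._inflight[key] = event
        if leader:
            self._store[key] = self._fetch(key)
            with self._lock:
                del self._inflight[key]
            event.set()              # wake the collapsed requesters
        else:
            event.wait()             # followers wait instead of forwarding
        return self._store[key]

calls = []
def fetch(key):
    calls.append(key)                # count trips to the origin
    return f"body-for-{key}"

cache = CollapsingCache(fetch)
threads = [threading.Thread(target=cache.get, args=("/a",)) for _ in range(5)]
for t in threads: t.start()
for t in threads: t.join()
print(len(calls))  # 1 -> only one request reached the origin
```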
When working with cached responses, it is also important to talk about how to handle situations when they become stale.
There are two common directives available that can be used to manage responses that have expired.
The strategy you choose to use will depend on the type of resource you are serving, and the experience you wish to provide.
The two options are:
Serve stale, while revalidating (`stale-while-revalidate`)
Revalidate before reuse (`must-revalidate`)
Illustration of the different options to managing stale responses
Let’s go through these options.
When the response expires, this option would serve the stale response. Then it would revalidate with the server in the background.
This means that some clients will temporarily get a stale response while the cache revalidates in the background. Just keep this in mind.
In some cases, you may want to always serve the latest response. So, this may or may not work depending on your use case.
When the response expires, this option will revalidate with the server before reusing the response.
The cache checks whether the stored response is still up-to-date, then serves either the existing response (if it hasn’t changed) or the new response returned by the server.
There is also `proxy-revalidate`, which is identical to `must-revalidate` but applies only to shared caches.
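To summarise the two strategies, here is a simplified Python decision function (real caches compute the age from the `Age` and `Date` headers; the string labels here are my own):

```python
def on_lookup(age: int, max_age: int, policy: str) -> str:
    """Sketch: what a cache does when a stored response is consulted.
    `policy` is the directive the response was stored with (assumed values)."""
    if age <= max_age:
        return "serve-fresh"                             # still within max-age
    if policy == "stale-while-revalidate":
        return "serve-stale-and-revalidate-in-background"
    # must-revalidate (or proxy-revalidate on a shared cache): block first
    return "revalidate-with-origin-before-serving"

print(on_lookup(30, 60, "must-revalidate"))          # serve-fresh
print(on_lookup(90, 60, "stale-while-revalidate"))   # serve-stale-and-revalidate-in-background
```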
This is not directly related to stale responses but since we are discussing the validation request, it may be worth mentioning.
Upon receiving an error response from the origin server, the default behaviour of a shared cache varies.
Most shared caches will try to serve the stale response (if available) — but keep in mind that this is not always the case.
If you want to guarantee this behaviour, you can use the `stale-if-error` directive.
This translates to: if the shared cache receives an error response, then serve the stale response for the defined duration.
Illustration of using stale-if-error directive
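A minimal Python sketch of that decision (the function name and result labels are illustrative, not from any cache implementation):

```python
def after_revalidation(origin_status: int, stored_age: int,
                       max_age: int, stale_if_error: int) -> str:
    """Sketch: what a shared cache serves after revalidating a stale response,
    when the stored response carried a stale-if-error directive."""
    within_window = stored_age <= max_age + stale_if_error
    if origin_status >= 500:
        # Origin failed: fall back to the stale copy while inside the window.
        return "serve-stale" if within_window else "forward-error"
    return "serve-origin-response"

# stale for 60s, stale-if-error window of 300s -> stale copy is still usable
print(after_revalidation(500, stored_age=120, max_age=60, stale_if_error=300))  # serve-stale
```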
Ok, we talked about managing stale responses, but what about cached responses that are no longer valid?
Sometimes you may want to remove a stored response from the cache.
When working with a client or a shared cache, there are differences in how this is handled.
When working with the client cache (browser cache), there isn’t a direct way to delete responses once they have been stored.
That means you would have to wait until the cached response expires before it can be replaced.
There is a proposed HTTP header, `Clear-Site-Data`, which can be set to clear the cache in the browser.
However, it may not be supported by all browsers, so just keep that in mind!
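For example, a logout endpoint might attach the header like this (a sketch; browser support for `Clear-Site-Data` varies, and the helper name is my own):

```python
def clear_cache_response() -> dict:
    """Headers for a response asking the browser to clear its cache.
    Clear-Site-Data directive values are quoted strings per the spec."""
    return {
        "Clear-Site-Data": '"cache"',   # target only the browser cache
        "Cache-Control": "no-store",    # don't cache this response itself
    }

print(clear_cache_response()["Clear-Site-Data"])  # "cache"
```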
When working with a shared cache, you typically have more control over the items in the cache (if it is a self managed service).
Most shared caches (e.g. CDNs, proxy caches, gateways) will provide an API to delete (or invalidate) items in the cache.
Ultimately, this means that when responses are stored in the client (browser cache), you have less control over them.
When devising a caching strategy, you should take that into consideration.
Like I mentioned in the introduction, this guide is not meant to cover everything.
Rather, it covers the major elements (the 6 core concepts) of HTTP caching that will help you better understand how it works.
Once you understand these elements, it should be just a matter of looking up the other details to fill in the gaps.
Let’s do a recap.
Cache-Control headers - This is the HTTP header that controls the caching behaviour, and you do so via directives
Validation - When a cached response goes stale, validation is when the cache reaches out to the server to confirm whether or not this response is up-to-date ⭐️
Vary - When working with variations of the response from the same URL, you can adjust the cache key based on other properties by using the `Vary` header
Request Collapsing - When multiple clients request the same resource, the shared cache will combine the forwarded requests into one, then return the same server response to all the clients (assuming it is a shareable response)
Response staleness - There are two common strategies for managing stale responses
stale-while-revalidate - Serve the stale response while the cache revalidates in the background
must-revalidate (proxy-revalidate for shared caches) - The cache must revalidate with the server before serving the response
Deleting from the cache - Keep in mind that if you are caching responses using browser cache, there isn’t an easy way to invalidate or delete the cached responses
That’s it! I hope this guide was helpful.
If you’d like to keep learning about HTTP caching, here are a few more resources I recommend:
If you found this helpful or learned something new, please share this article with a friend or co-worker 🙏🧡 (Thanks!)
Then consider signing up to get notified when new content arrives!