Optimizing for faster page loads
07 Jun 2015
Performance is an important consideration in user experience. So what can we do to make pages load more quickly, perform more efficiently, and feel fast and responsive to users?
Before making optimizations, it's a good idea to analyze current load time, and identify any bottlenecks. In the Chrome Developer Tools, you can do this with the Timeline tab. It shows every step the browser takes to load a page, how long each step takes, which processes block other processes, and which ones happen concurrently. (For a more high-level analysis, tools like YSlow and PageSpeed Insights will test a page and suggest specific optimizations for you.)
If a page is loading slowly, the cause could be a single request that's taking a long time, an inefficient sequence of requests, or processes blocking resources unnecessarily.
There are three main areas we can potentially optimize:
- Defining critical page resources
- Preparing those resources
- Fetching them from the server
Defining critical resources
So the first set of optimizations we can make involves minimizing the number of critical resources a page has to request.
Start by making sure every (internal and external) library or framework is actually necessary, and that the performance cost of requesting it is worth the benefit it's providing to your users. Then you can move onto the other optimizations.
For CSS, you can minimize critical resources by splitting your code into separate files based on purpose. So styles that only apply to very large screens could go in one file, styles that only apply to print in another, and so on. Then when importing with link tags, you specify a rule with the media attribute.
This way, resources not needed for the current environment will still be downloaded (at a lower priority), but won't block page rendering.
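For example, the media attribute approach might look like this (the file names and breakpoint are illustrative):

```html
<!-- Only stylesheets matching the current environment block rendering;
     the others are downloaded at a lower priority. -->
<link rel="stylesheet" href="base.css">
<link rel="stylesheet" href="large-screens.css" media="(min-width: 1200px)">
<link rel="stylesheet" href="print.css" media="print">
```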
It's also a good idea to include scripts not responsible for the layout of your page just before the closing body tag. This is because when the browser hits a script tag there, you know the previous page elements have already been parsed. You could also defer non-essential scripts until after the page has loaded, to stop them impacting the initial render.
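A sketch of both techniques (the script names here are made up for illustration):

```html
<body>
  <!-- ...page content... -->

  <!-- Deferred script: downloaded in parallel, run only after parsing finishes -->
  <script src="analytics.js" defer></script>

  <!-- Placed just before the closing body tag, so everything above it
       has already been parsed by the time it runs -->
  <script src="widgets.js"></script>
</body>
```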
Preparing your resources
Now we're not wasting time being blocked by requests for unnecessary resources, but we can save even more time by making sure the resources we do need are as small as possible. Fewer bytes over the network means faster transfers and quicker responses.
There are a lot of minifiers and optimizers to choose from, and plugins to include the popular ones in build processes (with Gulp, Grunt, Webpack etc.) so you can automate these tasks.
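As one example, minification could be wired into npm scripts like this (the package choices and file paths are assumptions, not from the original setup):

```json
{
  "scripts": {
    "build:js": "uglifyjs src/app.js --compress --mangle -o dist/app.min.js",
    "build:css": "cleancss -o dist/styles.min.css src/styles.css"
  }
}
```

Running these as part of the build keeps the minified output up to date without any manual steps.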
Fetching your resources
So we've minimized the number of critical resources, and made them as small as possible, but HTTP requests are still an expensive part of loading a webpage. There are several things we can do to ensure we're making them efficiently.
First, we need to look at the order in which we're requesting resources. The most important requests should happen first, so things like essential CSS aren't being blocked by some less important process. This is why link elements are usually placed at the top of a page, inside the head.
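Put together, a typical head might be ordered something like this:

```html
<head>
  <meta charset="utf-8">
  <title>Example page</title>
  <!-- Essential CSS requested as early as possible -->
  <link rel="stylesheet" href="styles.css">
</head>
```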
Finally, we can cut requests on subsequent visits to a page by saving certain resources locally. This is known as HTTP caching. If a resource hasn't changed since the last visit, why request it again? Caching is controlled by the server response rather than the request, so it has to be set server-side. In the response headers, you can specify things like whether to allow caching, and how long the resource should be cached for. Once a resource has been downloaded and cached, when the client needs it a second time, the browser will first check the cache. If the resource is there, and hasn't expired, that's one less network round trip the browser has to make. Resources that don't change very often (e.g. libraries), or code that is the same for all users, are the best candidates for caching, whereas code that's regularly updated is better cached for little or no time.
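In practice this means setting response headers like the following (the max-age value is illustrative):

```
Cache-Control: public, max-age=31536000
```

This tells the browser it can reuse the resource for up to a year without asking the server again, while something like `Cache-Control: no-cache` makes the browser revalidate a frequently updated resource with the server on every use.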
Even with these techniques, optimizing page load is still a juggling act. Is including that library worth it for the amount we're using it? Is inlining that CSS to save a request a fair trade-off for how much it increases the size of our HTML file? It takes careful analysis and decision making to find the right balance between developer convenience and user experience.