This blog post is more of an explanation for our clients, and anyone interested in what we mean when we say we are optimizing a website, than one of our more traditional posts. This post is about how we achieve some of what we do, but more importantly, why we end up making the decisions that we do. I had a conversation with a client recently that really reminded me how opaque these decisions can be to someone who isn’t right in the midst of it.
Site optimization is the process of taking whatever the designer or client has handed you and making it reasonable. But let’s walk that back a little bit and define what I mean when I say “reasonable.” I am not a designer. I am not even really a normal user of websites. I know too much about the network costs associated with getting content from our server to your browser, and too much about all the different types of devices that might want to consume the information we’re providing, to focus on what is aesthetically pleasing. My primary focus is making the information easy to consume, with as little cost to the network and your machine as possible, and available on the largest number of devices.
To put this in context, if I ran the entire internet, everything would look like this section of the article: a system font (no additional font download from our server), a dark off-black background with off-white text (to reduce eye strain for users without the ability or knowledge to adjust their screen brightness, while still delivering high contrast to make reading easier), and a large font size. Few, if any, images, compressed to the point where they are really only worth looking at at the size you see them on the page. (Why waste data on detail you won’t see?)
So, when I’m handed something handcrafted by a designer to be a work of visual art, optimization becomes a matter of compromise. It’s a sort of puzzle: how do I take this thing that is honestly trying way too hard to be a print medium and carve it down to something that still feels like the original design but is at least functional enough not to be a major burden on a semi-modern mobile phone using a 3G or slower data connection?
The answer is almost always a combination of clever choices and hard cuts. An easy, clever choice is compression. Most resources—images, stylesheets, even the raw HTML—can be compressed through a number of techniques. Text resources of any kind can be made smaller by removing developer comments and any whitespace that isn’t needed for the text to be understood by the computer. This process is called minification, and there aren’t any hard and fast standards for exactly how it’s done, but it can usually reduce resources by at least a few kilobytes if it is applied liberally. We can also use browser/server compression techniques like gzip, which adds additional compression in transit in a way that your browser can seamlessly decompress on the other end without you noticing. But this, too, usually only buys us savings on the lower end of kilobytes.
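To make those two steps concrete, here’s a toy sketch in Python (not our actual build tooling; the CSS snippet and the regex rules are purely illustrative) that minifies a stylesheet and then applies gzip-style compression on top:

```python
import gzip
import re

# A hypothetical CSS snippet with comments and extra whitespace.
css = """
/* Main navigation styles -- written by the design team */
.nav {
    background-color: #1a1a1a;   /* dark off-black */
    color:            #f5f5f0;   /* off-white */
    font-size:        1.25rem;
}
"""

def minify_css(text: str) -> str:
    """Strip comments and collapse whitespace the browser doesn't need."""
    text = re.sub(r"/\*.*?\*/", "", text, flags=re.DOTALL)  # remove comments
    text = re.sub(r"\s+", " ", text)                        # collapse whitespace runs
    text = re.sub(r"\s*([{}:;])\s*", r"\1", text)           # trim around punctuation
    return text.strip()

minified = minify_css(css)
compressed = gzip.compress(minified.encode("utf-8"))

print(len(css), len(minified), len(compressed))
```

On a snippet this tiny, gzip’s fixed header overhead can actually outweigh its savings; it earns its keep on real-sized files, which is why we layer it on top of minification rather than relying on it alone.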
Images are where we can compress most aggressively. If we have unlimited time, there are a number of techniques that allow us to create images for multiple breakpoints, serving a different image based on the screen size of the browser requesting it. That is the ideal, but in my experience, we don’t always have unlimited time, so we have to use the other method, which is calculating (roughly) the maximum size an image will ever appear based on the design of the website and the most common resolution for browsers at the time. We then compress the image so that it looks best at that size. This allows us to make choices about color and detail to make sure that we’re not wasting data on things that can’t be seen during normal browsing of the website. This can usually shed megabytes of data. And as a side note to any designers out there: the web is not print. We do not need print-quality DPI on all of the images. If it’s a tiny image that’s never going to need to scale, I don’t need it to be 4580 x 3000 pixels. I’m just going to chunk it down to 77 x 50 pixels anyway based on the elements of your design . . .
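That “maximum size an image will ever appear” calculation is simple arithmetic. A minimal sketch (the numbers and the 2x device-pixel-ratio ceiling are assumptions for illustration, not fixed rules):

```python
def target_pixels(css_width: int, css_height: int, max_dpr: float = 2.0) -> tuple[int, int]:
    """Largest raster size an image slot will ever need: the CSS size of the
    slot in the design, multiplied by the highest device-pixel-ratio we
    choose to serve (here assumed to be a 2x "retina" screen)."""
    return (round(css_width * max_dpr), round(css_height * max_dpr))

# That 77 x 50 thumbnail from the design never needs more than 154 x 100
# real pixels -- nowhere near the 4580 x 3000 original.
print(target_pixels(77, 50))  # (154, 100)
```

Everything beyond that target size is data the visitor downloads but can never see, which is why the savings here are measured in megabytes rather than kilobytes.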
Next, we come to fonts. This is an area where I know I’ve made both designers and some clients unhappy, but I promise this is in your best interest. People don’t need and don’t want to download a 4MB font to read your website about lawn care. I know you think it really expresses the identity of your company or design, but page load time is a huge factor in bounce rates (the share of visitors who leave your website within the first few seconds, sometimes before all the resources are loaded). If they have to wait for 4MB of a font to download on top of all the other resources, you are choosing to hack off most of your potential views over a stupid font. Like I said, if I had my way, we wouldn’t load any custom fonts at all; we’d just use one of the web-safe system fonts available on all operating systems. They all share two benefits that are hard to overlook: 1) they’re already installed, so there is no additional download before the content can render; 2) all of these fonts are reasonably readable, so you can be sure that your content can be read without users having to fight the font to do so.

If I can’t have my way, then at the very least, we’re going to send them as small a font as possible. We can shrink the font a number of ways: only packing the weights and symbols we’ll actually use is pretty common and very effective. Probably the most effective method I have for fighting font bloat is replacing fonts with lookalikes. Not all fonts are created equal, and sometimes the flourishes on a font can cost us a lot of data. The biggest offenders seem to be handwriting, illumination, and sketchy fonts. All of these have a lot of small details that are mostly lost on a webpage anyway. If I can find something that conveys the same basic shape and feeling without all of the incidental details, I’m going to do it.
This might mean that your sketchy font is replaced with a block font that has a similar silhouette. It’s not a choice we make to be rude or mean, but every one of those extra lines or details costs precious data to send to the user, and we have to make a hard choice about whether that’s worth it.
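The “only packing the symbols we’ll actually use” trick works by scanning the characters that actually appear on the site and cutting the font file down to just those glyphs. A hedged sketch of one piece of that process: turning the characters a page uses into the compact range notation that CSS `@font-face` (`unicode-range`) and common subsetting tools accept. The function name is my own; real subsetting is done with dedicated tooling.

```python
def unicode_ranges(text: str) -> str:
    """Collapse the set of characters a page actually uses into a compact
    unicode-range string, so a font can be subset to just those glyphs."""
    points = sorted({ord(c) for c in text})
    ranges = []
    start = prev = points[0]
    for cp in points[1:]:
        if cp == prev + 1:          # still contiguous, extend the range
            prev = cp
            continue
        ranges.append((start, prev))  # gap found, close the current range
        start = prev = cp
    ranges.append((start, prev))
    return ", ".join(
        f"U+{a:04X}" if a == b else f"U+{a:04X}-{b:04X}" for a, b in ranges
    )

print(unicode_ranges("abcdef XYZ"))  # U+0020, U+0058-005A, U+0061-0066
```

A site that only ever renders basic Latin text has no reason to ship the Cyrillic, Greek, and dingbat tables that pad a 4MB font file.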
Wrapping Things Up
Ultimately, our goal with all of this is to reduce each page, and all of the resources it depends on, to as close to 1MB as possible. If we can get below that, even better, because honestly 1MB of data per page is huge. The text portion of this blog post will probably come out to around 14KB once all the markup is added. That leaves me 1,010KB of leeway for things like navigation, CSS, images, fonts, and the like. But more than that, our goal is to take everything in that 1MB that can be shared between pages and configure the website so that data is cached for subsequent loads. That means we send you 1MB of data the first time you visit a page, but every time you visit another page on the website, you might only download the 740KB or 200KB that is actually different.
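The arithmetic behind that budget is worth spelling out. Here’s a sketch with invented per-resource sizes (the file names and kilobyte figures are hypothetical, chosen only to show how caching shrinks repeat visits):

```python
# Hypothetical per-resource sizes for two pages on the same site, in KB.
page_one = {"html": 14, "css": 90, "nav_images": 300, "hero_image": 500, "font": 120}
page_two = {"html": 12, "css": 90, "nav_images": 300, "photo": 186, "font": 120}

# Shared assets the browser is told it can keep after the first visit.
CACHEABLE = {"css", "nav_images", "font"}

first_visit = sum(page_one.values())
repeat_visit = sum(kb for name, kb in page_two.items() if name not in CACHEABLE)

print(first_visit, repeat_visit)  # 1024 198
```

The first page costs the full 1,024KB (1MB) budget, but the second page only transfers the roughly 200KB that is actually new; the shared CSS, navigation images, and font come out of the browser’s cache for free.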
What I’ve covered here is really only a small subset of our approach to optimization, but it should at least give you an idea of what our concerns are when we approach a project like this, and a little insight into why we make certain choices. As I see it, a developer’s job during the optimization stage is to take the design or vision of a website and make it functional for the largest number of people the time and budget allow. That can sometimes mean choosing speed over a design element, but if your developer is doing their job well, the trade-off will mean a significant increase in your website’s potential for leads and conversions.