Giving your future self a little credit with progressive enhancement

The web is complex, and the things you build are going to fail. With progressive enhancement, though, you can deliver the best possible experience to each and every user even when things go wrong.

The web is complex. Even setting aside all of your servers, APIs, and clouds for a moment, the front-end of your website is complex all by itself. We might not have to worry quite so much about browser compatibility as we once did (though even that’s not entirely gone, is it, Safari?), but now we have this huge collection of frameworks to consider. The thing about complex systems, though, is that they fail. The more complex they are, the more things there are that can fail. For any complex system, the question isn’t whether or not it will fail (it will, don’t worry, that’s just a matter of time), but how you’re going to handle it when that inevitably happens.

Graceful degradation vs. progressive enhancement

One approach is called “graceful degradation.” This is when you provide a fallback state for when things fail, but how confident are you that you’ve really accounted for every possible failure state in your system? It is complex, after all.

The alternative is progressive enhancement. With this approach, you start with the most basic possible kernel of your website’s experience — for almost all of us, that means static HTML — and then you approach everything else as an enhancement that users might (or might not) get. Images might load, and if they do, the user gets an enhanced experience. Your CSS might load, parse, and render, and if it does, the user gets an enhanced experience. Your JavaScript might load, parse, and execute, and if it does, the user gets an enhanced experience. But crucially, if one or more (or all) of those things fail, the user still gets the most useful, enhanced experience that she can under her current circumstances. In a way, you can think of responsive design as progressive enhancement, too (considering a big, wide screen as an enhancement that a user might or might not get).
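As a sketch of what that layering can look like in practice (the URL and markup here are illustrative, not from any real site), consider a search form whose base behavior is plain HTML, with JavaScript layered on top only if it actually runs:

```html
<!-- Base experience: a plain form that works with HTML alone. -->
<form action="/search" method="get">
  <label>Search <input type="search" name="q"></label>
  <button>Search</button>
</form>

<script>
  // Enhancement: if this script loads, parses, and executes, and the
  // browser supports fetch, intercept the submit and update the page
  // in place. If any of that fails, the form still submits normally.
  var form = document.querySelector('form');
  if (form && 'fetch' in window) {
    form.addEventListener('submit', function (event) {
      event.preventDefault();
      fetch(form.action + '?q=' + encodeURIComponent(form.elements.q.value))
        .then(function (response) { return response.text(); })
        .then(function (html) {
          /* swap the results into the page without a full reload */
        });
    });
  }
</script>
```

Either way, the user can search; the JavaScript only decides how smooth the experience is.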

But really, when we talk about progressive enhancement, we’re almost always talking about JavaScript — probably because most of those other layers already enhance progressively pretty easily. It’s a concept that meshes well with the cascade in cascading style sheets, and with standard best practices for HTML and images, like providing good alt text. Making JavaScript enhance progressively isn’t harder, but it does require us to approach it a little differently. Some JavaScript developers say they shouldn’t have to do that, though. After all, everyone has JavaScript, right?


This isn’t about people who “opt out of the modern web”

Back in 2013, the United Kingdom’s Government Digital Services (GDS) ran a clever little study to see how many of their users were missing out on JavaScript enhancements to their websites. They found something unexpected: while only 0.2% of users had disabled JavaScript, there was another 0.9% of users for whom JavaScript simply failed to load, bringing the total up to 1.1%.
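The measurement itself is simple enough to sketch in a few lines. The GDS team used a variation on a technique like this (the beacon paths here are hypothetical): three tiny image requests, and the server-side request counts tell you who got what:

```html
<!-- 1. Requested by everyone whose browser loads images: -->
<img src="/beacon/base.gif" alt="">

<!-- 2. Requested only when JavaScript is deliberately off or unsupported: -->
<noscript><img src="/beacon/no-js.gif" alt=""></noscript>

<script>
  // 3. Requested only if JavaScript actually loaded, parsed, and ran:
  new Image().src = '/beacon/js-ok.gif';
</script>
```

Everyone in group 1 who shows up in neither group 2 nor group 3 wanted JavaScript but didn’t receive it — that’s the 0.9%.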

I’ve had the chance to repeat this experiment on other websites in recent years, and I can confirm the oft-cited statistic: for today’s audiences based mostly in North America, that number is closer to 3% (on one website I tested, it was closer to 10%!).

“Well, so what?” the usual rejoinder goes. “If three people in 100 decide to opt out of the modern web, that’s on them. We should focus on the 97 that don’t.”

They’re picturing something like this:


97 happy people and 3 frustrated people

We’re not talking about 3% of users, but 3% of visits

But remember, we’re not talking about 3% of users, but 3% of visits. In the original GDS study, they found only a minority of those who didn’t receive JavaScript had turned it off. Most of them wanted to receive JavaScript, but something happened. If you’ve ever been waiting for a page to load on your phone, and finally stopped it and reloaded it only to see it load immediately, then you’ve experienced this. It happens to all of us sometimes — and that’s the point. This isn’t about people who “opt out of the modern web,” it’s about the failures that happen in any complex system. It doesn’t look like the 97 smiling faces above; it looks more like this:

100 people, mostly happy, but there’s always a few getting frustrated

In fact, that’s still a too-cheery depiction of what’s going on. Think back to those times when a page was slow, or you had to reload it, or it just never finished loading at all. Did that affect how you saw that website? Did you start to think of it as difficult to use, or slow? Did that start to change your perception of the organization that produced it? How many frustrating experiences can you have before you just give up and stop going there? We all have different limits, and getting something valuable from a website will make us willing to put up with more frustration from it, but we all have a limit somewhere. And that means it really looks more like this:

100 people, mostly happy, but there’s always a few getting frustrated, and if they get frustrated enough times, they stop coming back

(These animations were inspired by similar ones presented by Stuart Langridge in his 2019 presentation at GOTO Copenhagen, “You Really Don’t Need All that JavaScript, I Promise.”)

This doesn’t usually show up in surveys. It’s only a particularly self-aware user who notices that they’re feeling increasingly frustrated because the site is slow, or that a sudden content shift while they’re trying to read has irritated them. Most of us summarize our experiences, even to ourselves, with simple verdicts: “it’s unintuitive,” or “it’s not user-friendly,” or “it’s slow,” or “it’s a bit janky.” That can make it difficult to connect the problems users report to the specific things that will actually solve them, but if you dig into those statements just a little, you’ll find that JavaScript failure may well be the single biggest problem on the modern web.

Am I telling you that with progressive enhancement, your JavaScript will never fail? No, Neo — I’m telling you that with progressive enhancement, it won’t matter.

Building websites that never break

“An escalator can never break,” the late, great Mitch Hedberg sagely told us. “It can only become stairs.” This is the most overused joke in progressive enhancement, but for a pretty good reason. If you build a website with great content, properly marked up with solid, semantic HTML, and that’s the only thing your user is able to load, you’re still likely to have a happy user. They found the thing they were looking for. If they’re also able to load images and CSS, all the better!

Unless you take concerted effort to contain it, JavaScript is usually an all-or-nothing affair. If one thing breaks, it will probably stop any other JavaScript on the page from executing, too. If you’re following the standard practices for, say, a React app — a root div that client-side code builds the whole page into — then if anything goes wrong (as it will about 3% of the time), what you get is a blank white page. There are ways to ameliorate this problem, like server-side rendering (SSR), but SSR alone can still leave you with an inoperable website if you’re not paying attention, throughout the development process, to what happens when things fail. And if you chose React because it would be easy to pull components from npm, you might find that kind of scrutiny undermines the whole reason you chose React in the first place.
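One way to contain it is to treat each enhancement as an independent, guarded unit. The helper below is a hypothetical sketch, not from any particular framework: it checks an enhancement’s prerequisites before running it, and catches its failures so the rest of the page’s JavaScript keeps going:

```javascript
// Hypothetical helper: apply one enhancement in isolation.
// `check` verifies prerequisites (the target element exists, the
// browser API is supported); `enhance` performs the actual upgrade.
function tryEnhance(name, check, enhance) {
  try {
    if (!check()) return false; // prerequisites missing: keep the base experience
    enhance();
    return true;
  } catch (err) {
    // One broken enhancement should not take the others down with it.
    console.error('enhancement "' + name + '" failed:', err);
    return false;
  }
}

// Usage sketch: three independent enhancements (names are illustrative).
const applied = [
  tryEnhance('lazy-images', () => true, () => { /* upgrade images */ }),
  tryEnhance('live-search', () => false, () => { /* never runs */ }),
  tryEnhance('carousel', () => true, () => { throw new Error('boom'); }),
];
// applied is [true, false, false]; the page still works either way.
```

The carousel here fails outright, but the failure is logged and contained: the other enhancements still apply, and the base HTML experience underneath is untouched.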

There are websites where this is a perfectly acceptable trade-off, of course. What would a core experience for Figma even look like if JavaScript failed? But most of us aren’t working on Figma. Most of us are working on documents that our users read — the core user experience that the web is built for. Most of the web comes with progressive enhancement built in for free; all we need to do is be careful not to break it.

We often talk about technical debt — the costs we’ll need to pay in the future when we make short-term compromises. Progressive enhancement is the opposite of that — a sort of technical credit that will make things easier for us in the future.

When the Apple Watch came out in 2015, I was working on a website for a client who came to me in a bluster, saying that we needed to be ready for this new format. We were already deep into our development cycle; how could we possibly make this work? I was able to tell our client to relax: because we’d built the website on progressive enhancement, it was already ready for the Apple Watch without our having to do anything. It was future-proof.

In 2018, I helped launch a website that was built to rely on progressive enhancement. We provided a rich front-end experience, with no shortage of JavaScript enhancements — but they were enhancements, not requirements. Six months after launch, we discovered a bug that we’d missed in our JavaScript. It only affected users on an older browser, which is how we missed it, but this particular website still had a substantial number of daily visitors using that browser. No one ever reported a problem, though; it was only our own QA team that eventually found it. For those users, the page loaded and everything worked. They were missing some of the enhancements that visitors on other browsers were getting, but how many of them ever checked with a different browser to find out? Had we built the website without progressive enhancement, this would have been a critical bug; because we did, it went completely unnoticed for six months, because it never got in anyone’s way. The escalator didn’t break; it just became stairs.

“Sorry for the convenience.”

Always be enhancing

“Minimum viable product” is one of those phrases that’s as sure to start a fight as politics or religion, and more often than not (in my own humble experience) either because someone has forgotten the word minimum or because someone has forgotten the word viable. You can release an MVP or not — that’s a business decision — but crucially, it marks the point where you have something that’s actually working. Usage is like oxygen for ideas, so until you have something that real people can use, your product is dying. People are terrible at making predictions, especially about the future, and especially about how they’ll feel about something. You can’t start to really learn how people are going to use your product until they start using it. This is why prototyping is so crucial.

If we’re delivering a minimum viable product, then we can focus on the core experience, without any of the other enhancements. After all, if our aim is to delight users, then that has to start with giving them something that they want. If the core experience can’t stand on its own, then all of the enhancements that we can add to it won’t change that. Getting it into the hands of real users will also tell us which enhancements are critical, which are beneficial, which are optional, and which aren’t particularly helpful at all, allowing us to focus our work on what will actually be most effective.

But progressive enhancement also forces us to think critically about what really makes up the core experience (in its entirety) and what are enhancements; that is exactly the discussion at the heart of defining a minimum viable product. There’s a tension in those first two words that can be incredibly productive, but all too often it becomes dysfunctional — usually for fear that once an MVP is minted, none of the promised future iterations will actually happen. And the longer it takes to produce an MVP, the more likely that is to happen.

Done right, an MVP workflow leans heavily into the idea that digital products, like art, are never finished, only abandoned. New enhancements have their costs, but with an existing product in hand, it’s a lot easier to talk about the expected return on such an investment. At some point you’ll probably want to stop adding enhancements, but by breaking the work into smaller slices, we can make better estimates and better business decisions.

Scrum is built around this very feedback loop: deliver the next most important increment of working software, get feedback on how people are using it in the real world, reevaluate what’s most important now, and then deliver the next most important increment. This takes as a given that an MVP will be delivered quickly, so that the feedback loop can begin. Progressive enhancement helps us define MVPs that can be delivered quickly enough to support iterative processes like these. It also helps us break down, understand, and prioritize the work that follows as enhancements. When we know what the most important components are, and real-world feedback tells us where enhancements are most needed, we can tell how much those enhancements will cost to develop and have a pretty good idea of how much value they’ll bring to our customers.

Progressive enhancement = more confidence in your website

When you work with Pixo to develop your website, we use progressive enhancement to lay a foundation for a bullet-proof, future-proof experience. We focus first on a core experience that delivers what your users are looking for, and then we start adding enhancements that will delight them. When we hand it off to you, you’ll be able to continue that cycle, because you’ll have a solid foundation that you can keep iterating on for as long as you like.

Trends in web design come and go, but the fundamentals for building an accessible, usable, delightful web remain the same. The web is already complex enough — don’t we owe ourselves a little technical credit?