I’ve talked at length about the benefits and virtues of progressive enhancement. I won’t spend too much time explaining it today, but it boils down to this: provide at least a working experience to the most users possible, and jazz things up for users whose browsers and devices can support those enhancements (jazz hands 🙌).
To me, it’s easy enough to nod along, but in practice we fail all the time. And it’s no surprise. After all, as creators it’s enticing to build novel, engaging experiences using the latest technology. And without prior experience it’s hard to know what quirks we need to be aware of.
What’s the problem?
To paint a more realistic picture, let’s look at a real-life example that I’ve dealt with.
A while back, I was on a website that looked really interesting and I thought, “Yeah, why don’t I sign up?”. I clicked the “Register” button, and you know what happened? Nothing.
Naturally, I clicked it five more times, then popped open my dev tools and saw the big block of red text in the JavaScript console.
The website was using Sentry’s error-tracking script to catch any JavaScript errors (smart thinking). The problem: I was using a browser extension that blocks third-party trackers. The JavaScript on the website relied on Sentry’s code being present. When it was blocked, everything blew up and I could not sign up for the service (presumably, the most important thing).
While the solution may have been to be more careful managing JavaScript dependencies, this story highlights a missed opportunity for practicing progressive enhancement.
This is one of the most prevalent scenarios where I see applications fail: relying on JavaScript to send data back and forth between a browser and a server (often as JSON).
For example, consider a <button> on the page that, when clicked, triggers an HTTP request with the Fetch API. It might look like this:

document.querySelector('button').addEventListener('click', () => {
  fetch('https://someapi.com', {
    method: 'POST',
    body: someBodyData
  })
})
This makes for an elegant and effective user experience. They click a button, and the data flies off to the mother ship.
But there’s a problem with relying entirely on JavaScript to send this data. JavaScript may not be available for your app in the user’s browser.
Whenever I mention this, the response is invariably:
Who turns off JavaScript!?
And that completely misses the point. Yes, some users may actually disable JavaScript, but I’m not too worried about them. They knew what they were signing up for.
Still, JavaScript can run into issues for other users (see Everyone has JavaScript, right?). Here’s a list of possible ways it can fail:
- Users may have JavaScript disabled.
- Browsers may not recognize the JavaScript syntax (maybe they’re old (the browser, not the user)).
- Browser extensions block scripts from running (<- hey, that’s me).
- Users may have a slow connection that times out (mobile data).
- Users may have intermittent connection (on a train).
- The device may be behind a firewall.
This is not the full list of ways that JavaScript could fail, but if any of this happens, you can kiss that sweet user experience goodbye. The user might see a button, but it wouldn’t do anything.
In other words, if your application only works with JavaScript, there are a lot of scenarios where it won’t work. Not only are you doing a disservice to your users, you may be negatively impacting your goals.
So what if we just don’t use JavaScript?
For decades now, HTML has been able to send HTTP requests using <a> and <form>, but I’ll be focusing on just <form>. Here’s a very minimal example:
<form>
  <label for="input-id">Label</label>
  <input id="input-id" name="key" />
  <button type="submit">Submit</button>
</form>
If you were to open this HTML in a browser, you would see a familiar input element with the label “Label”, followed by a “Submit” button. Clicking that button will reload the browser to the same URL the form lives on, but appending the name and value from the input to the URL as a query string (technically, this is a navigation, not a reload).
We could also send the data to a different URL by providing the action attribute, and send the data within the request body by setting the method attribute to ‘POST’.

<form action="https://some-url.com" method="post">
It’s reliable, but the user experience is meh. The browser navigates to the target URL, causing a whole page refresh. It works, but it’s not very sexy.
We’ve all become accustomed to interactions happening without the browser refreshing. So asking folks to go back to only making HTTP requests with <form> is not going to happen.
What’s the solution?
The good news is that we don’t have to choose between HTML and JavaScript. We can use both!
Let’s build a pizza ordering form. When it’s submitted, we’ll want to send the data in the request body to the URL “https://api.pizza.com”. In the request body, we’ll include a name, email address, and preferred toppings.
Start with HTML
This is going to be the most straightforward part. After all, this is how things have worked for decades, so there isn’t any sort of hand-waving we need to do to make it work. It just works. That’s the beauty of HTML.
<form method="POST" action="https://api.pizza.com">
  <label for="name">Name</label>
  <input id="name" name="name" />

  <label for="email">Email</label>
  <input id="email" name="email" type="email" />

  <fieldset>
    <legend>Pizza toppings</legend>
    <input id="cheese" name="toppings" value="cheese" type="checkbox" />
    <label for="cheese">Cheese</label>
    <input id="pepperoni" name="toppings" value="pepperoni" type="checkbox" />
    <label for="pepperoni">Pepperoni</label>
    <input id="pineapple" name="toppings" value="pineapple" type="checkbox" />
    <label for="pineapple">Pineapple</label>
    <input id="ham" name="toppings" value="ham" type="checkbox" />
    <label for="ham">Ham</label>
  </fieldset>

  <button type="submit">Submit</button>
</form>
We tell the form where to send the data, and to use the POST method. Then inside the form, each input gets its own <label> and a name attribute.

Labels are necessary for accessibility and are associated with their respective inputs via the for attribute and the input’s id attribute.

The name attribute is necessary for functional reasons. It tells the form how to reference that bit of data. Some of the inputs also share the same name, which is important to note because it allows the same data property to have multiple values.
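To make that multiple-values behavior concrete, here’s a small standalone sketch using the URLSearchParams API (which comes up again later in this article). The topping values are hypothetical:

```javascript
// Two checkboxes sharing name="toppings" serialize as repeated keys,
// so one data property ends up holding multiple values.
const data = new URLSearchParams();
data.append('toppings', 'cheese');
data.append('toppings', 'pepperoni');

data.toString();         // 'toppings=cheese&toppings=pepperoni'
data.getAll('toppings'); // ['cheese', 'pepperoni']
```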
In addition to doing the thing we want (sending data to a URL), using HTML forms also gives us some advantages built into the browser. The browser can relay important semantic/accessibility information to users relying on assistive technology, and we even get client-side form validation for free.
It’s not the best validation tool, but it doesn’t cost the user anything to download it. And we can also progressively enhance the validation experience, but that is beyond the scope of this article.
Match feature parity with JavaScript (a lil’ tricky)
Next, we’ll use JavaScript to attach an event listener to the “submit” event. In the event handler, we can prevent the normal HTML submit event from running, and replace the same functionality with JavaScript.
The tricky part is to make sure that requests made with JavaScript work the same as with HTML. So the browser determines what we need to rebuild with JavaScript in order to maintain feature parity. In other words, we don’t want to make some things better at the cost of making other things worse.
Let’s break it down, step by step. To attach an event handler to a form, we need a form DOM node. We can use document.querySelector for that. Once we have the DOM node, we can attach the event handler with node.addEventListener().
document.querySelector('form').addEventListener('submit', (event) => {
  // Event handler goes in here
})
We want to make the HTTP request using the fetch API. To do so, we’ll need to know the URL, the data, and optionally, the request method. For GET requests, we can send all the data in the URL. For POST requests, we’ll need to pass an Object with the method and body properties.
Conveniently, if the HTML is done properly we can derive all the information we need.
// ...
const form = event.target;
const url = new URL(form.action || window.location.href);
const formData = new FormData(form);
const options = {
  method: form.method,
};
// ...
- The <form> DOM node is available as the event’s target.
- The URL comes from the form.action attribute. If it’s not defined, the default HTML behavior is to use the current URL, so we can default to window.location.href. We’ll use a URL object to make modifications later on a bit simpler.
- The FormData API makes it easy to capture data from any form (as long as the inputs have name attributes).
- The request method is available from the form.method property. It defaults to 'get'. We store this in an object to make it easy to pass to the fetch call.
Next, we need to determine how to actually send the data. If the request should use the POST method, we want to add a “body” to the request in fetch’s options object. Otherwise, we’ll want to send the data in the URL as a query string.

This is trickier than it sounds because on POST requests, we can’t just assign a FormData object as the request body. Doing so will actually change the request’s Content-Type header to 'multipart/form-data', which could break your HTTP request (more on that shortly).
Fortunately, the web platform has another handy tool in URLSearchParams (this honestly may be the star of the show). We can use a URLSearchParams object as the request body without modifying the headers. We can also use it to construct the query string for a GET request. Handy!
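Here’s a small standalone sketch of URLSearchParams doing the query-string job (the pizza-order values are hypothetical, just for illustration):

```javascript
// The same pairs a form would produce, built by hand for this sketch.
const data = new URLSearchParams([
  ['name', 'Sam'],
  ['toppings', 'cheese'],
]);

// For GET requests, assign it to the URL's search to build the query string.
const url = new URL('https://api.pizza.com');
url.search = data;
// url.href is now 'https://api.pizza.com/?name=Sam&toppings=cheese'
```

For POST requests, the same object can instead be assigned as the fetch body, which keeps the default urlencoded Content-Type intact.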
Ok, more code…
// ...
if (options.method === 'post') {
  options.body =
    form.enctype === 'multipart/form-data' ? formData : new URLSearchParams(formData);
} else {
  url.search = new URLSearchParams(formData);
}
// ...
- For POST requests, we’ll send the data in the request body.
  - If the form explicitly sets the enctype to 'multipart/form-data', it’s safe to use FormData in the body.
  - Otherwise, we can fall back to URLSearchParams.
- For GET requests, we’ll send the data in the request URL with the URLSearchParams object.
Once again, we need to be especially careful about not modifying the default browser behavior, particularly around the request body. HTML forms can modify the Content-Type request header by assigning the enctype attribute. The default is 'application/x-www-form-urlencoded', but if you ever need to send files in a form, you have to use 'multipart/form-data'.

This is important because many backend frameworks do not support 'multipart/form-data' by default. So unless you are sending files, it’s probably best to stick to the default.
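To see why the default encoding is so convenient, here’s a sketch of the server side of that default: a urlencoded body is just a string that URLSearchParams can parse straight back into data (the body contents here are hypothetical):

```javascript
// What a default-encoded form body looks like on the wire.
const rawBody = 'name=Sam&toppings=cheese&toppings=ham';

// URLSearchParams parses it back into structured data, repeated keys and all.
const parsed = new URLSearchParams(rawBody);

parsed.get('name');        // 'Sam'
parsed.getAll('toppings'); // ['cheese', 'ham']
```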
On to the home stretch.
We have all the data and configuration we need. The last part is to execute the fetch request.
// ...
fetch(url, options)
event.preventDefault();
// ...
- With our URL and options defined above, we can pass them to the fetch API.
- Execute the event.preventDefault method to prevent the HTML <form> from also submitting and reloading the page.
You may have seen other tutorials and wondered why we are waiting until the last minute to call the preventDefault method. Even that is a careful consideration.
Consider the possibility that there could be a JavaScript error hidden in our event handler. If we called preventDefault on the very first line, and the error occurred before our fetch call, the script would break and the HTTP request would never be sent. By waiting until all the previous JavaScript has executed, we can make sure there are no errors before preventing the “submit” event. Or, in the event of an error, the browser will still fall back to the default behavior of submitting the form the old-fashioned way.
The complete script might look like this:
document.querySelector('form').addEventListener('submit', (event) => {
  const form = event.target;
  const url = new URL(form.action || window.location.href);
  const formData = new FormData(form);
  const searchParameters = new URLSearchParams(formData);
  const options = {
    method: form.method,
  };

  if (options.method === 'post') {
    // Modify request body to include form data
    options.body =
      form.enctype === 'multipart/form-data' ? formData : searchParameters;
  } else {
    // Modify URL to include form data
    url.search = searchParameters;
  }

  fetch(url, options);
  event.preventDefault();
})
It’s a bit underwhelming considering how much thought and effort went into it. But I guess that’s a good thing because it means that with just a bit of JavaScript we can add a nicer user experience. And because the HTML declaratively provides all the information the JavaScript needs, this same improvement can be applied to all forms without any modification.
We give users a better experience when JavaScript is enabled and a minimal working experience when JavaScript is disabled or something goes wrong.
Progressive enhancement FTW!!!
(we can do the same with client-side validation as well, but it’s a bit involved to get it right)
But we’re still not done. So far, we’ve only covered the data sending side of things, but what about the data receiving part?
Feature improvements on the server
If all we were doing was sending data to the server, we could call it a day, but in most cases we want to show the user some update. Either a bit of data has changed on the page, or we want to notify the user their request was successful.
In an HTML-only world, the browser would navigate either to a new page or to the same URL. Either way, that navigation event would rebuild the HTML on the server and send it back to the user as the response. So showing the updated state of the application is easy enough.
With JavaScript, we don’t get the benefit of a full page re-render. Technically, the browser could still respond with the HTML for the page and we could use JavaScript to repaint the page, or we could even trigger a page reload manually, but then what’s the point?
It’s much more common (and preferable) to respond with JSON. But that also introduces its own dilemma. If we respond with JSON, the default HTML form submissions will reload the page and show the user a bunch of nonsense.
What if we could respond to HTML requests with HTML and respond to JavaScript requests with JSON?
Well, we can!
When an HTTP request is sent from the client to a server, there’s some additional data that tags along for the ride without the developer or user needing to do anything. Part of this data is the HTTP Headers.
The cool thing is that in most modern browsers, there is a header called Sec-Fetch-Mode which tells the server the Request.mode. Interestingly, for requests made with JavaScript, the value is set to cors, and for requests made with HTML, the value is navigate.
The bad news is that it’s not supported in IE 11 or Safari. Boo!
The good news is we can still detect what the response type should be by asking JavaScript developers to do just a little bit more leg work.
When we create a fetch request, the second parameter is a configuration object. Inside of that configuration, we can customize the request headers. Unfortunately, we can’t customize the Sec-Fetch-Mode header from here (browsers don’t allow that), but we can set the Accept header.
This handy little header lets us explicitly tell the server what kind of response we would like. The default value is */* (like, whatever dude), but we can set it to application/json (JSON please!). We would need to manually add this to every request, which is kind of annoying, but I think it’s worth it.
So here’s what our new fetch request could look like:

fetch(url, {
  method: requestMethod,
  body: bodyData,
  headers: new Headers({
    Accept: 'application/json'
  })
})
The first parameter is still just the URL. For POST requests, the second (init) parameter should already exist, so we only need to add or modify the headers property. For GET requests, the second parameter may not already be defined, so we may need to include it with the headers property. And note that although I’m using the Headers constructor here, you could just as well use a regular Object.
If you make a lot of HTTP requests in your application, manually adding this to every single one might get old. So I would recommend using a wrapper or curried function around fetch that automates this for you.
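One possible shape for such a wrapper, as a sketch (the function name is my own, and the fetch implementation is injectable here only so the example stays self-contained):

```javascript
// A hypothetical wrapper that stamps the Accept header onto every request
// while preserving any other headers and options the caller passes in.
function createApiFetch(fetchImpl = globalThis.fetch) {
  return (url, options = {}) => {
    const headers = new Headers(options.headers);
    headers.set('Accept', 'application/json');
    return fetchImpl(url, { ...options, headers });
  };
}

// Usage would look like:
// const apiFetch = createApiFetch();
// apiFetch('https://api.pizza.com', { method: 'POST', body: data });
```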
At this point, the request is sending all the data that the server needs. Now, the server needs to handle the request and determine how to respond. For JSON responses, I’ll leave that up to you. For HTML responses we can return the same page, or simply respond with a redirect.
You can determine where to redirect users based on the user request and your application logic. Or if you just want to redirect users back from whence they came, we have one more HTTP header we can use: Referer. The Referer header contains the URL from which the request was made, is automatically set by the browser, cannot be modified by JavaScript, and is available on all requests. It’s perfect in every way.
Example time!
Here I’m using fastify, but the concepts should apply across any language or framework:
// ...
server.post('/zhu-li/do-the-thing', (request, reply) => {
  // Do stuff with the request...

  // Once all the work is done, respond accordingly
  const accept = request.headers.accept
  const secFetchMode = request.headers['sec-fetch-mode']
  const referer = request.headers.referer

  if (accept.includes('application/json') || secFetchMode === 'cors') {
    return someJson // I dunno, you tell me
  }

  return reply.redirect(303, referer); // Or respond with HTML
});
// ...
- Create a server route that accepts GET or POST requests.
- I skipped the spicy business logic, but that should go before your response (obviously).
- Grab the Accept, Sec-Fetch-Mode, and Referer headers.
- Determine if the response should be JSON. If so, respond with JSON. Note that the early return will prevent the rest of the execution.
- Otherwise, either respond with HTML, redirect to a new URL, or redirect back to where the request came from. In this case, I did the latter.
- Note that the request handler has to accept urlencoded data (the HTML default encoding). It may optionally accept JSON, but if you only accept JSON, then the payload has a hard requirement of being created with JavaScript and therefore makes supporting HTML kind of pointless.
This works really well in my testing, and if you wanted to, you could even create a plugin (or middleware) to add this logic to every route. That way, you don’t have to manually add it every time.
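The detection logic itself is framework-agnostic, so one way to share it across routes is to pull it into a tiny helper like this sketch (the header names are standard; the function name is just my own):

```javascript
// Decide whether a request wants a JSON response: either it explicitly
// asked for JSON via Accept, or it was made with JavaScript (cors mode).
function wantsJson(headers) {
  const accept = headers.accept || '';
  return accept.includes('application/json') || headers['sec-fetch-mode'] === 'cors';
}

wantsJson({ accept: 'application/json' });                        // true
wantsJson({ accept: '*/*', 'sec-fetch-mode': 'cors' });           // true
wantsJson({ accept: 'text/html', 'sec-fetch-mode': 'navigate' }); // false
```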
One downside to all of this (maybe you already caught it) is that if the goal is to support HTML forms, you can only support GET and POST request methods.
It’s a bummer because PUT, PATCH, and DELETE are very handy methods for handling CRUD operations. The good news is that with a few changes to the URL patterns, we can accomplish the same thing, almost as nicely.
Here’s what an API might look like using any method we want:
server.post('/api/kitten', create);
server.get('/api/kitten', readCollection);
server.get('/api/kitten/:id', readSingle);
server.patch('/api/kitten/:id', updateSingle);
server.delete('/api/kitten/:id', deleteSingle);
Here’s that same API using only GET and POST methods:
server.post('/api/kitten', create);
server.get('/api/kitten', readCollection);
server.get('/api/kitten/:id', readSingle);
server.post('/api/kitten/:id/update', updateSingle);
server.post('/api/kitten/:id/delete', deleteSingle);
- The GET and POST routes don’t change.
- The PATCH and DELETE routes become POST.
- We append the “methods” /update and /delete to their respective routes.
I’ll admit that I don’t love this tradeoff, but it’s a small annoyance and I think I’ll survive. After all, the benefits (as you’ll hopefully see) are so worth it.
Takeaways from this example
This has been just one example of progressive enhancement as it applies to making HTTP requests, but I thought it was an important one. Here are some things I think you should keep in mind:
- Forms can only send GET and POST requests. If you do not specify attributes, they default to sending a GET request to the current URL.
- By default, data is sent to the server as a URL-encoded string (text=hello&number=1) unless you change the enctype. For GET requests, it goes in the request URL. For POST requests, it goes in the request body.
- Multiple inputs can have the same name, and form data can have multiple values for the same data property.
- When you submit a form, most text-style inputs will send their data as empty strings if left blank. There are some exceptions:
  - The default value for checkbox and radio inputs is ‘on’. If the input is not selected, its data is not included at all.
  - The default value for range is ‘50’.
  - The default value for select is the value of the first selected <option>. If that option does not have a value, the contents of the tag are used instead. You can avoid sending default data by omitting or disabling all options.
  - The file input sends the file name by default. To send the actual file as binary data, the request’s Content-Type must be multipart/form-data, which you can set with the enctype attribute.
  - The default value for color inputs is '#000000'.
- For JavaScript submissions, FormData and URLSearchParams are awesome. Both can be used in the request body, but using FormData will change the default Content-Type to multipart/form-data.
- To include extra data in your request, you can use an input with a name, a value, and the type of “hidden”.
Applying what we’ve learned to the very first example, a button that sends data when clicked, we can accomplish the same thing more reliably with a bit of a change to the HTML:
<form action="https://someapi.com" method="POST">
  <input type="hidden" name="data-key" value="data-value">
  <button>Send data</button>
</form>
Sprinkle our JavaScript form submitter on top of that and we’ll have a modern user experience that still works if the page experiences some issue with JavaScript.
The major caveat
No solution is perfect and it would be dishonest of me to say that what I recommend above is without flaws. In fact, there is a really big one.
Relying on HTML forms to construct data means you can’t use nested Objects.
I mean, maybe you could create inputs with dot or bracket notation in the name (<input name="user.first-name"> <input name="user.last-name">), then do some logic on the server to construct a nested Object. But I haven’t done it myself, and I’m not convinced it’s worth it, so I’m not going to try to convince you. Anyway, if you start with a flat data model in mind from the beginning, this should not be a problem.
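For the curious, here’s a hypothetical sketch of what that server-side expansion could look like. To be clear, this is my own illustration of the dot-notation idea, not something I’d necessarily recommend:

```javascript
// Expand dot-notation keys from flat form data into a nested object.
function expandDotted(flat) {
  const result = {};
  for (const [key, value] of Object.entries(flat)) {
    const parts = key.split('.');
    let node = result;
    // Walk (creating as needed) every segment except the last.
    for (const part of parts.slice(0, -1)) {
      node = node[part] = node[part] || {};
    }
    node[parts[parts.length - 1]] = value;
  }
  return result;
}

expandDotted({ 'user.first-name': 'Ada', 'user.last-name': 'Lovelace' });
// { user: { 'first-name': 'Ada', 'last-name': 'Lovelace' } }
```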
Since nested Objects are out of the question, that also means you can’t make GraphQL queries.
Having used GraphQL on the client and the server, with and without libraries, I’ll say it’s super cool and innovative but I don’t like it. It adds more complexity than it’s worth and the costs outweigh the benefits for most projects.
There are, however, a few projects where there’s enough data and HTTP requests flying around that GraphQL makes sense. For those cases, it’s worth it.
(Hopefully I didn’t make a completely biased ass of myself)
The $10,000 question
Having made my case, I’ll pose this question to you. If you have a checkout form that only works with JavaScript, is it worth a rewrite so that it falls back to HTML if JavaScript fails? Well, it depends, right?
If your company earns a million dollars through that form, and the form fails 1% of the time due to JavaScript issues, that fix could be worth ten thousand dollars.
Why 1%? According to research by GOV.UK, JavaScript will fail for 1% of users. That’s all users (or page views), not only the edge cases where someone turns off JavaScript or is using an outdated browser on an old mobile phone. There’s an excellent explainer called Why Availability Matters that goes into more detail, but now I’ve digressed.
Getting back to the original question: is $10,000 enough for you to make some changes to that form? It would be for me, and that decision is based on a single form handling $1,000,000. That’s actually not that much revenue for a company, and most websites that have forms usually have more than one.
Closing thoughts
I’ve used the same example of HTTP requests in HTML vs. JavaScript to drive home the concept, but it’s important to point out that not all progressive enhancement is a zero sum game.
It’s not always about working or being broken. Sometimes it’s about working great or working just OK. Sometimes it relates to security, performance, accessibility, or design. Regardless of the scenario or technology, progressive enhancement is about building with resilient fallbacks in mind.
There are a lot of wonderful features available to newer browsers, but it’s important to begin with something that will work for most users, and provide an enhanced experience for the folks that can support it. With CSS, for example, you could consult caniuse.com to check for browser compatibility, write styles for older browsers, and add modern enhancements within an @supports at-rule that detects support for that feature.
Lastly, I’ll leave you with this quote.
This code is both backwards and forwards compatible. Browsers being evergreen and JavaScript being forced to backwards compat by the legacy of the web is a HUGE feature… many dev teams blow many hours of cycles to keep functionality they already have because they’re chasing dependency updates. This technique is both forwards and backwards compatible without chasing updates compounding your investment of time. Write once, run forever is a very unique feature to JS that many ‘modern’ frameworks leave behind.
Brian LeRoux
If you enjoyed this article and want to check out more things I’ve created along the same lines, here are a few:
- A long series on building HTML forms that covers semantics, accessibility, styling, user experience, and security.
- Two episodes of The Function Call. One on progressive enhancement and another on forms.
- A talk called “Building Super Powered HTML Forms with JavaScript” for Conf42.
- A YouTube playlist on progressive enhance forms that was taken from some of my Twitch streams.
- The GitHub repo corresponding to that Twitch stream
Here are some of the resources I found particularly interesting:
- Why we use progressive enhancement to build GOV.UK
- Why Availability Matters
- Progressive enhancement is still important by Jake Archibald
[Update: I listed ways JavaScript could fail, some of which involve network conditions, and then went into an example of progressively enhanced form submissions. It’s worth calling out that the list of ways JS can fail is a good general thing to keep in mind, but if it errors due to network conditions, it’s possible that an HTML form will fail anyway.]
Thank you so much for reading. If you liked this article, and want to support me, the best ways to do so are to share it, sign up for my newsletter, and follow me on Twitter.
Originally published on austingil.com.