What is Edge Compute? It’s kind of like knitting dog hats

Edge compute is the new frontier in computing technology. This article explains what it is and why it's awesome, all along with cute pictures of dogs.


I’ve been trying to come up with a good explanation of what exactly “edge compute” is and for reasons I don’t need to justify to you, I’ve landed on the analogy: it’s like selling knitted hats for dogs.

Why knitted dog hats? Because they’re hilarious!

[Image: Yorkie wearing a tiny, maroon, knitted hat and looking up.]

And they make an OK analogy, but before we get there, let’s define each part of “edge compute”.

We’ll start with the latter.

(Note that “edge compute” is also sometimes referred to as “edge functions” or “edge workers”)

What is “compute”?

Compute is what happens any time you ask a machine to do something for you. For example, when you ask a calculator for the product of 5 x 7 (and while you question what all those years in math class were good for), the calculator will do some beeps and boops and respond with 35.

That calculator is a computer, and those beeps and boops are the time and processing energy it needs to calculate the result, also known as “compute”.

In the context of web development, compute can be used to generate several different types of products: HTML, JSON, machine learning data models, selfies of you and your friends with filters making you look like cute anime characters, etc.

For the sake of simplicity, I’ll mostly focus on generating HTML.

And for the sake of our analogy, we can think of “compute” as the time and energy it takes to knit a hat for a dog.

Hence the dog hats.

Where does “compute” take place?

Here is where things get a little more complicated. Some folks may tell you there are two places where compute can occur: on the server or in the browser (on a user’s computer).

While that’s not wrong, it’s a bit oversimplified these days because both options can be broken into smaller categories with distinctly different characteristics.

To handle that nuance I want to cover this in 4 parts:

  • Traditional Servers
  • Clients (Browsers)
  • Static-Site-Generators
  • Cloud Functions

Feel free to skip these sections if you’re already familiar, but you’ll be missing out on my whole analogy thing.

Traditional Servers

With a traditional server, a computer runs software you selected, executing code you wrote to return HTML whenever a request comes in. Using a server to generate HTML is commonly referred to as Server-Side-Rendering (SSR).
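To make that concrete, here’s a minimal sketch of SSR using Node.js (the language and framework don’t matter; this is just an illustration):

```javascript
// server.js: a minimal sketch of server-side rendering with Node.js.
// Real apps usually reach for a framework (Express, Fastify, etc.).
const http = require("node:http");

const server = http.createServer((request, response) => {
  // The "compute" happens here, once per incoming request.
  const html = `<!DOCTYPE html>
    <html>
      <body>
        <h1>Dog Hat Emporium</h1>
        <p>Freshly knitted at ${new Date().toISOString()}</p>
      </body>
    </html>`;

  response.writeHead(200, { "Content-Type": "text/html" });
  response.end(html);
});

// The server runs continuously, waiting for requests (24x7, ideally).
server.listen(3000);
```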

The computer may be a local (or “on-premise”) machine that you own, housed in your building, or, as is very common these days, a machine in the “cloud”, which is basically renting a computer someone else owns, housed in their building.

These servers run 24×7 (ideally) and are ready to receive traffic at any time. You can also set up separate long-running tasks or scheduled tasks with a cron job.

This is handy, but there are some downsides:

  • You pay for the server even when it’s just sitting there.
  • High traffic could exhaust the resources (memory/CPU) and cause it to crash.
  • Scaling up/down requires planning and calculating performance vs cost.
  • Users that are far away from your server experience higher latency (slower responses).

One last point I want to highlight in particular is that when you use traditional servers, you are responsible for the business logic code, the server software, and the state of the computer. This can be a good thing because you have all the flexibility and control to do with it whatever you want, but it comes at a cost of maintenance. Security, upgrades, and maintenance are all on you to take care of.

[Image: Black pug with a knitted red beret to match its red, black, and white striped shirt.]

Servers are like commercial workspace 🏭

For our analogy, we can think of servers kind of like the building where we make dog hats. We might be renting the space, or we might have flat-out purchased it, but we have a physical place where folks can come and request a hat for their dogs.

It’s a beautiful office with exposed brick and lots of natural light. We can paint it how we want and modify it as needed. But there are some downsides.

Some people have to travel a long way to get to our building. We also have to pay the bills (rent, electricity, internet) regardless of how many dog hats we sell (I know we’re going to sell, like, a bajillion, but still). And when someone brings their dog by to get a new hat and the dog poops on the grass on the way out, guess who’s going to have to clean it up.

Clients

When we say the word “client” most folks think of a customer. For example, “I’m going to have a billion clients when this dog hat business takes off.” In the case of web development, a “client” is the user’s browser.

After a user requests our website, we can instruct the browser to download some JavaScript, and when this JavaScript executes it can inject some HTML onto the page. In fact, we can even use JavaScript to create the entire application.

This is commonly referred to as Client-Side-Rendering (CSR).
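As a tiny, illustrative sketch (the /api/dog-hats endpoint is made up, and we’re assuming the HTML shell contains an empty <div id="app">):

```javascript
// client.js: downloaded and executed by the browser.
// The "compute" now happens on the user's device.
const app = document.getElementById("app");

fetch("/api/dog-hats") // hypothetical JSON endpoint
  .then((response) => response.json())
  .then((hats) => {
    // Build the HTML on the client and inject it into the page.
    app.innerHTML = `
      <h1>Dog Hat Emporium</h1>
      <ul>${hats.map((hat) => `<li>${hat.name}</li>`).join("")}</ul>
    `;
  });
```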

Generating HTML on the client side is great because it can create more dynamic interactions that feel faster, since you don’t need to wait for pages to reload.

We can even utilize tools like Service Workers or WebAssembly to make that compute feel less impactful.

Moving compute to the client also means that we can do less work on our own servers, which could ultimately save us some money. But that compute still has to happen, and now the cost falls on the user.

Here’s how I see the downsides:

  • Users must download more data (JavaScript).
  • We can’t have secrets like API keys because source code is accessible.
  • Performance is greatly impacted by the user’s device.
  • What we can do relies on the user’s device and browser.

For these reasons, as well as Search Engine Optimization, Accessibility, and others, I think we’re seeing more of the industry move away from client-side rendering.

[Image: A bunch of crocheting supplies with a partially finished item.]

Client-side rendering is like DIY sewing kits 🧶💉

To drive the idea home, client side rendering is a lot like giving customers a DIY sewing kit. We can provide them with all the instructions and materials to make their own dog hats, but the work needs to be done by them. And although this can save us some time and energy, it comes at the cost of the customer.

It can be a good fit for some folks, but is not right for everyone.

Static-Site-Generators

Static-Site-Generators (SSG) are interesting because instead of building a web page on demand as requests come in, they pre-build all the pages of a website ahead of time. The result is a collection of static folders and files (HTML, CSS, JavaScript) representing the website.

Once you have all the static files for the website, you can deploy them to any host you like.

This approach technically falls into the SSR bucket because you are not using a browser to do the compute. You are using some programming language to build the pages ahead of time on a computer you control (your laptop, a build pipeline, etc).

Technically, the end result isn’t much different than if you were to write all those HTML pages by hand, but using an SSG is probably faster and easier to work with in the end.
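The build step can be surprisingly simple. Here’s a rough sketch (the posts array stands in for whatever markdown files or CMS content a real SSG would read):

```javascript
// build.js: runs once at build time, not on every request.
const fs = require("node:fs");

// Stand-in content; a real SSG would read markdown files or query a CMS.
const posts = [
  { slug: "yorkie-beret", title: "A Tiny Beret for a Yorkie" },
  { slug: "pug-beanie", title: "Beanies That Fit a Pug" },
];

fs.mkdirSync("dist", { recursive: true });

for (const post of posts) {
  const html = `<!DOCTYPE html>
    <html>
      <body><h1>${post.title}</h1></body>
    </html>`;
  fs.writeFileSync(`dist/${post.slug}.html`, html);
}

// Deploy the dist/ folder to any static host (or better yet, a CDN).
```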

There are a few advantages to using SSGs. By generating the HTML ahead of time, you are removing that compute time from the user’s request. This can speed up response times because they only need to wait for the server to respond with the static HTML file. No time is spent building it, and that can be significant.

Since you’re only dealing with static files that don’t change with every request, SSGs also make a great pairing with Content Delivery Networks. I’ll cover those more in a moment, but the result is even faster responses because you can remove most of the latency.

Static websites are also very very easy to host. Because they are only serving static files and there’s no need for compute, you can host your own server with very limited resources and handle tons of traffic without a problem. This also makes them very cheap to host. In fact, there are plenty of services available that will let you host a static site for free.

The last big benefit I’ll point out is that when dealing with static sites, there is no need to deal with runtime scripting languages or databases. This makes them incredibly secure. You can’t really hack a static web page, so unless you’re literally sharing private information publicly, you shouldn’t have much to worry about.

Now this all might sound great, but it comes with some significant downsides. Primarily, static HTML cannot have dynamic content (unless you use client-side compute). For some sites where content doesn’t change often, this is fine; blogs, brochure sites, documentation. This lack of dynamic data also means that the experience cannot be personalized for each user.

While you can add dynamic content to a static site with JavaScript, it introduces added complexity and inherent downsides (see CSR above).

One other fault of SSGs is that it takes time to build each page. If you have tens or hundreds of thousands of pages to generate, this can take a long time. And when you publish new content or change existing content, you may need to rebuild everything. This could be a non-starter.

[Image: Black lab wearing a blue knitted hat with a puff ball on top, outside in the fall.]

Static-Site-Generators are like pre-made dog hats 🐶🎩

If I were to compare it to selling knitted dog hats, SSG is like selling pre-made hats instead of knitting them on demand. When a customer finds one they want, they can simply grab it off the shelf and checkout; no waiting for someone to knit it.

But what if they want something that is personalized, like a tailored fit or in their favorite color? I may not have one available. Some savvy business folks might say to forget them and only make pre-knitted hats due to margins and labor (I have no idea what I’m talking about). Other advisors may think it’s worth it to add more employees (complexity) to support knitting on demand as well as making and stocking pre-knitted hats.

In the end, it depends on your use case. Pre-knitted hats (SSG) may be great, knitting on demand (SSR) may be better, or perhaps you support both.

Cloud Functions

In addition to traditional servers, several cloud compute providers offer cloud functions. These work by allowing you to upload files containing functions designed to handle network requests. The platform takes care of deploying your functions and routing traffic to them, and in return, it provides you with the URL where each function will run.

Note that in this system, you do not have to provision, deploy, maintain, or upgrade any server. This is why these are also referred to as “serverless functions” or just “serverless” (they are also sometimes called “lambda functions”).

Despite the “serverless” nature, there is still a server involved. It’s just someone else’s server. This puts it in the realm of SSR.

For these functions to work, you often have to follow the platform’s conventions: file names, folder structures, which functions to export, what parameters they receive, and what to return.
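The details vary by provider, but the shape is usually something like this sketch (loosely modeled on AWS Lambda’s handler convention; other platforms differ):

```javascript
// A cloud function sketch, loosely modeled on AWS Lambda's handler.
// The platform invokes this for each request and turns the return
// value into an HTTP response.
exports.handler = async (event) => {
  // No server to manage; the platform runs this wherever it likes.
  const name = event.queryStringParameters?.name ?? "friend";

  return {
    statusCode: 200,
    headers: { "Content-Type": "text/html" },
    body: `<h1>One custom dog hat for ${name}, coming right up!</h1>`,
  };
};
```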

One gotcha is that your functions must be stateless, meaning they will not share context across invocations. You cannot rely on persisted memory or file systems because the same function may run on completely different machines.

While that can take some getting used to, this whole approach has some benefits.

  • They are very easy to provision which is nice for dynamic teams or for migrating functionality.
  • They can scale up or down automatically based on traffic.
  • You usually only pay for the time they run which can save money.
  • No more hardware or servers to manage. You just write your functions.

Those are some compelling reasons to consider cloud functions, but it’s also worth noting the limitations. Firstly, since you don’t maintain the servers yourself, it’s up to the service providers to determine which languages are supported.

In addition to language limitations, you’ll probably notice limitations on the compute resources (CPU, memory) available and most providers have relatively short timeouts. These services are intended for short lived operations.

And once again, platforms will probably ask you which region you want your functions to run in. If users access the function URL from far away, this latency can impact the overall speed of their experience.

[Image: Terrier wearing an orange knitted hat and scarf combo with some built-in antlers.]

Cloud functions are like robots trained to knit dog-hats 🤖🧶💉🐶🎩

With all those considerations, I like to think of cloud functions like robots you can train to knit dog hats. When no one is around, the robots are turned off and not costing you anything, but as customers start showing up with demands, the robots can power up to handle it.

Unlike the demand for pre-knitted dog hats, these customers want custom hats with their favorite sportsball mascots. So whether you prefer the Denver Dachshunds, the Pittsbulls, the New Yorkies, the Golden Gate Retrievers, or the Chicago-huahuas, custom requests on demand are no problem for robo-knitters (cloud functions).

What is “edge”?

Before describing what the “edge” even means let’s look at the problem it’s trying to solve. Sometimes users are really far away from our compute (servers) and as a result, they have to wait longer periods of time while their request travels to and from that server.

To solve this latency problem, very smart folks came up with the idea of deploying multiple copies of a program and distributing it around the world. When a user makes a request, it can be handled by the closest copy, thus reducing the distance traveled and the time spent in transit.

Here’s where things get a little fuzzy. Does the “edge” have to consist of web servers, or can your smartphone count as a node in the network? Aren’t IoT devices also “edge”? What is the minimum number of nodes you need before you can call a network “edge”? Two? Does a network have to cover a specific region to qualify for “edge” status?

I can’t answer those questions. Sorry. But I don’t think we need to, as long as we understand that the goal is to reduce latency by reducing the distance between users and endpoints. Therefore, the more distributed the devices, the better.

Let’s look at a less nebulous example of what the “edge” can be.

Content Delivery Networks

A Content Delivery Network (CDN) is a network of globally distributed servers designed to deliver static assets like CSS, JavaScript, images, fonts, etc. There could be thousands of servers, each with its own copy of your assets.

When a request is made for an asset, like a photo of my dog Nugget, the CDN figures out where the nearest server is and sends the request to be handled there. The image is sent back to the user lickety-split. This applies to any of the static assets they request, and it’s a fantastic way to improve performance.

CDNs have been around for a long time and they make a great pairing with things like SSG above. You could pre-generate your website and serve the whole thing from a CDN and it would be super fast.

[Image: Small brown dog wearing a knitted, pointed cap that is blue with some clouds in the design.]

CDNs are like convenience stores 🏪

Remember the analogy above where we discussed pre-knitting dog hats so they would be readily available whenever someone came to our store? Now imagine we do the same thing, but we also distribute those hats to several stores all over.

CDNs work in much the same way. Instead of people having to drive across town to come in and grab a dog hat, they can just walk a couple blocks to the nearest convenience store where we’ve already stocked up for the demand.

It’s very quick and convenient for them.

(Ok, the analogy isn’t perfect because technically CDNs won’t run out of stock, per se, but the main point is about latency)

Users experience life in 3-D

Where the heck am I going with this? Stick with me for a moment.

The main recurring theme of this whole discussion is performance, and when it comes to speed, there are three major factors:

  • Distance a request and response has to travel (aka, latency).
  • Download size for a response to be parsed and executed.
  • Device capabilities based on the hardware, software, and available resources.

And this leads me to my next tip. When making a point, alliteration is better than coherence 🔥🔥🔥

But seriously, these three factors really impact the speed of our applications. And our job as developers is to figure out where to weigh costs and benefits and find the best place to do compute.

A very real dilemma today is that although client-side rendering is low-latency, the actual renders are slow. And although server-side rendering is fast, it can suffer from high latency.

For a perfect, 3-D experience, we would:

  • Move things closer to users (like a CDN)
  • Do work on servers (like cloud servers/functions)
  • Send smaller assets (pweez 🥺)

That last point is highly subjective based on your applications, so I can’t speak much about it for you specifically, but we can talk about the first two.

Which finally brings us to answering the main question.

What is “edge compute”?

Edge compute is a programmable runtime (like cloud functions) that is globally distributed (like a CDN). That’s awesome because it can give us dynamic server-side functionality that executes as close to users as possible.

As an added benefit, many edge compute platforms can provide information about where the request is being handled. With traditional servers or cloud functions, you already know where the server is because you selected the region to deploy it. It’s never going to change and is not very interesting information.

But it’s useful in the context of edge compute because we know these servers are as close to the user as possible, often in the same city. With this information, we could apply logic in our applications based on the user location.

Of course, browsers can provide user location through the Geolocation API, but it requires user interaction and the user can always deny access. Having a close-to-the-user location option is convenient, privacy-friendly, and may be good enough that you never even need to ask for more details.
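Every platform exposes this differently, so here’s a hypothetical sketch of location-based logic at the edge (the request.geo object is made up for illustration; check your platform’s docs for the real property):

```javascript
// Hypothetical edge function. Most edge platforms attach some location
// data to the incoming request; `request.geo` is invented for this demo.
export default async function handleRequest(request) {
  const city = request.geo?.city ?? "your city";

  return new Response(`<h1>Free shipping on dog hats to ${city}!</h1>`, {
    headers: { "Content-Type": "text/html" },
  });
}
```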

Benefits

I see the benefits of edge compute breaking down into three different groups.

For users:

  • Less latency compared to servers/cloud functions.
  • Less to download compared to client-side rendering.
  • Keeps work off the device, preserving battery life and resources for other apps.

For developers:

  • Low barriers for creating proof-of-concepts.
  • Consistent execution environments (unlike browsers).
  • Teams own their respective responsibilities.
  • Location-based logic.
  • No servers/infrastructure to manage.
  • Secrets stay secret (compared to client-side).

For stakeholders:

  • Reduced load on origin servers improves the origin’s performance, reliability, and cost.
  • Automatic scaling improves overall performance and reliability.
  • Only pay for what you use.

Limitations

So we’ve established that edge compute is awesome, but it’s not without its own rough edges ( ͡° ͜ʖ ͡°)

Right now, most platforms support JavaScript in the form of custom runtimes (V8 isolates). So although the language features are all there, you may only have access to a very limited set of platform features. It may not support all the same features you’d find in the browser or Node.js.

In addition to the limited platform features, you may find that edge compute is also more limited on the amount of compute resources or time available for compute compared to things like cloud functions. So you’d have less time to do work, and less power to do it.

When you dig into what is actually going on, these limitations make sense. If you’re going to deploy servers to tens or hundreds of thousands of locations around the world, they need to be as lightweight and fast as possible, and since compute costs money, platform providers have to put some limitations on resources and time.

When should I use edge compute?

Deciding where to do your compute is already difficult. You have to account for latency, download size, device capabilities, etc. before making a decision. And as we’ve outlined above, each offering has its own pros and cons.

So you might be asking yourself where edge compute fits in.

Firstly, we should think of edge compute as an addition to the arsenal, and not a replacement for any one piece.

Where in the past we had:

Client-side JS -> Client-side service worker -> Cloud functions -> Traditional servers

We now have:

Client-side JS -> Client-side service worker -> Edge compute -> Cloud functions -> Traditional servers

Let’s see if I can help you decide.

Signs you have a good edge compute use case:

  • Stateless (doesn’t require persisted memory or files).
  • Doesn’t take a long time.
  • Latency-sensitive.
  • Hyper-local.

Signs you have a bad edge compute use case:

  • Stateful (requires persisted memory or file system).
  • Requires a lot of computational resources.
  • Long-running operations.
  • Sequential/waterfall requests (may add latency).

(note that the stateless/stateful points above do not pertain to external sources like databases)

Some of the common use cases:

  • Geolocation
  • Fast auto-suggest / type-ahead
  • Modify request/response (see the sketch after this list)
  • Redirect management
  • Token-based personalization (A/B testing, feature flags)
  • Stateless auth (JSON Web Tokens)
  • API proxy / orchestration
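To make one of those concrete, here’s a hypothetical sketch of the modify request/response case: pass the request through to the origin, then tweak the response at the edge before it reaches the user (again, exact APIs vary by platform):

```javascript
// Hypothetical edge function: proxy the request to the origin,
// then decorate the response on its way back to the user.
export default async function handleRequest(request) {
  const originResponse = await fetch(request); // forward to the origin

  // Copy the response so we can modify it (responses are often
  // immutable as received).
  const response = new Response(originResponse.body, originResponse);
  response.headers.set("x-knitted-at", "the-edge");

  return response;
}
```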

Why should I care?

And now we arrive at the if-it-aint-broke-dont-fix-it part of the show. If you have been building websites just fine without edge compute, why even worry about it?

The answer comes back to performance.

I outlined some things that impact performance above, and the relevant thing is that no matter how much we, as a society, can improve websites (faster networks, better devices, smaller applications) there will always be one problem we won’t be able to solve:

The speed of light problem

Over time, technology improves; computers get faster, storage gets bigger, and networks can handle more data.

If I write a program to calculate just how much I love my dog, it might take today’s computers 10 years to compute. In 10 years, computers may only need 10 milliseconds. In both cases, however, the time it takes to tell my dog I love him will depend on how far away he is.

So until we can figure out how to send “I love you THIS much” through wormholes, we will never be able to send messages faster than the speed of light. It’s a universal constant.

So what can we do? Simple. Move computers closer to my dog (or the user). This reduces distance, which reduces latency, which reduces time spent waiting. Hence, the reason for edge compute. It’s about reducing latency.

Here’s an example from my post “Optimizing Content Migrations With Edge Compute”. It shows how edge compute can reduce lookup times for redirects.

Without edge compute:

[Diagram: a long arrow goes from the user to the first server, back to the user, then to the second server, and finally back to the user. The user requests the old URL, the old server responds with redirect instructions to the new URL, the browser redirects the request to the new URL, and the response is finally sent to the user.]

With edge compute:

[Diagram: a short arrow goes from the user to an edge server and back, then a long arrow goes to the origin server and finally back to the user. The user requests the old URL, the edge server closest to the user responds with redirect instructions, the browser redirects the request to the new URL, and the response is finally sent to the user.]
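In code, that edge-side shortcut might look something like this sketch (the redirects map is made up; on a real platform it might come from configuration or an edge key-value store):

```javascript
// Hypothetical edge function: answer known redirects at the edge so the
// user never travels all the way to the origin just to be turned around.
const redirects = new Map([
  ["/old-dog-hats", "/dog-hats"],
  ["/winter-catalog", "/catalog/winter"],
]);

export default async function handleRequest(request) {
  const url = new URL(request.url);
  const destination = redirects.get(url.pathname);

  if (destination) {
    // The redirect response comes from the server nearest the user.
    return Response.redirect(new URL(destination, url.origin), 301);
  }

  // Everything else continues on to the origin.
  return fetch(request);
}
```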

Did you notice how I used the length of the arrows in those diagrams to represent physical distance? Smart! And I hope it helps get the point across, but outside of arrows on an image, how much time could we realistically save in the worst-case scenario on a round-the-world trip…?
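Time for some napkin math (assuming ideal conditions): light moves through fiber at roughly 200,000 km/s, and the far side of the planet is about 20,000 km away. A full round trip is 40,000 km, which works out to 40,000 ÷ 200,000 = 0.2 seconds, before we add routing hops, handshakes, or any repeat round trips.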

Maybe around 300ms.

Which brings me to my existential crisis:

  • Is all this really just about speed?
  • How much does 300 milliseconds matter?
  • Is it worth the complexity?
  • Do dogs even like wearing hats?

The short answer is: yes, it depends, sometimes, probably not but they just look so darn cute.

Compulsory block of stats

Whenever someone starts talking about performance, it’s invariably joined by a bunch of stats that support their message from…some study.

This article is no different (perhaps another universal constant?). People like numbers, so here goes:

In 2017, Akamai released their Online Retail Performance Report which found the following:

  • A 100ms delay led to a 7% drop in sales
  • A 2s delay increased bounce rates by 103%
  • 53% of smartphone users don’t convert if load time is over 3s
  • Optimal loading for the most sales is 1.8-2.7s
  • 28% of users won’t return to slow sites
  • Webpages with the most sales loaded 26% faster than competitors

Want more stats? You got it.

Walmart found that for every second they shaved off load times, their sales increased by 2% (source). Consider that Walmart made $500 billion in 2021. Two percent of that is $10 billion. Which means they could hire 133,000 developers to improve loading times by just 1 second and they’d still make a profit (based on the 2020 average salary of $75k).

Performance impacts revenue, perception, brand loyalty, and engagement. For some companies, it will be more important than others.

It’s not just about money

I always feel a little gross when I say people should do something because of the money. Money is great for data because it’s easy to quantify, people understand the value, and more people care. But are there other reasons to care?

I did some thinking and although it’s more fuzzy, here are my thoughts.

Edge compute is about speed as well as reliability, which made me think about the importance of access to information. Information is increasingly important during times of crisis, and we have plenty of examples of that today.

I’m fortunate enough to be writing a mostly light-hearted blog post from my office, but in other parts of the world, people are scared for their lives and they need sources for fast, reliable information.

It might be booking vaccine appointments, updates on a war, or letting their loved ones know they are OK. Speed and reliability are critical, and it has nothing to do with money at all.

Closing thoughts

Hopefully this has helped explain a bit more on what edge compute is, and why it matters. If you need one last analogy, you can think of it like this.

[Image: Boxer dog with a pink knitted hat that has ears and a unicorn horn.]

Like robots trained to knit dog-hats at convenience stores 🤖🧶💉🐶🎩+🏪

Pretty clear, right?

I really do see edge compute as the next phase of web development. Yes, there are limitations, yes, they add complexity, and yes, the benefits mostly boil down to shaving hundreds of milliseconds, but that’s just today’s picture.

I believe technology will continue to advance and platforms will reduce limitations. And I believe framework authors will add more support for edge compute, thus removing some of the complexity. We’re already seeing that happen today.

It’s exciting!

Dog hats for everyone!!!

Is it worth it?

It depends 💩

But I think it’s cool and hope you give it try.

There are several platforms available, but I’d encourage you to check out Akamai EdgeWorkers. It’s not an entirely unbiased suggestion because I work at Akamai, but my bias comes from knowing the quality of the platform and the talented folks behind it. I don’t have that internal knowledge of other platforms, so it’s hard to compare without bias.

The whole point of edge compute is speed and reliability. Akamai has over 250,000 edge servers, making it the largest edge network in the world, which is probably why the largest companies in the world choose Akamai. Edge compute may not be the right solution for everyone. But for the folks where those 300ms really matter, definitely check out Akamai.

You can find more info about Akamai EdgeWorkers on the Akamai website.

Thank you so much for reading. If you liked this article, and want to support me, the best ways to do so are to share it, sign up for my newsletter, and follow me on Twitter.


Originally published on austingil.com.
