Whether it's good ol' servers, Docker containers or Lambda functions in the cloud, the code must run on physical machines in the end. Nevertheless, their natures are very different, each with its own advantages and drawbacks. So let's cover the basics and quickly summarize the ups and downs of each.
Good ol' servers
Traditional servers should not be underestimated. By servers, we mean either physical hardware or cloud VMs. Even on cheap hardware or VMs, serving up to millions of requests per minute is commonplace. That's already a lot of users. Moreover, if the "stack" is local, with database and application logic on the same machine, there is no network communication overhead, which maximizes performance. Servers are a very cost-effective solution, but on the other hand, they require more administration effort and adequate Linux know-how.
Moreover, achieving redundancy and rolling updates requires attention. This is commonly solved by placing at least two servers behind a load balancer to avoid any downtime. Regarding software updates, the usual way is to stop, update and restart the application on one server at a time; the load balancer takes care of routing traffic away from the temporarily unresponsive server. While this is typically done manually in the beginning, it can be automated with the right tools later on when the need arises, as can scaling up to more servers.
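To make this concrete, here is a minimal sketch of such a sequential rolling update as a small TypeScript script. The host names, the `myapp` service and the `update.sh` script are hypothetical placeholders; the actual commands depend entirely on your setup.

```typescript
// Rolling update sketch: update one server at a time, so the load balancer
// always has at least one healthy server left to route traffic to.
import { execSync } from "node:child_process";

const servers = ["app1.example.com", "app2.example.com"]; // placeholder hosts

for (const host of servers) {
  // Stop, update and restart the application on this server only.
  // "myapp" is a placeholder systemd service; update.sh is hypothetical.
  execSync(
    `ssh ${host} "systemctl stop myapp && ./update.sh && systemctl start myapp"`
  );
  // Meanwhile, the load balancer's health checks notice the downtime and
  // send requests to the remaining servers.
  console.log(`${host} updated`);
}
```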
Docker images
These images are run in containers that are typically spread across larger, distributed infrastructures. Containers are also considered "volatile": they are typically started and destroyed quickly. For example, if the software crashes or misbehaves, the container can simply be destroyed and a new one created on the fly from the same image, effectively restoring it to its original state in a short time.
Using Docker in production involves two levels: the software "images" and the "orchestrator" responsible for running these images on various servers. The latter, like Kubernetes, is typically offered as a service. With Docker, scalability is achieved by simply launching "one more container" on the underlying infrastructure.
In typical Docker "stacks", the database, APIs and UIs run in separate containers. While this is not an ironclad rule, it is regarded as best practice. It makes it possible to update and scale each part independently according to usage; on the other hand, it adds some overhead for network communication between containers. Some effort is also required to configure how all the containers are distributed and "linked together" to interact properly.
The usual way to perform updates is to simply launch new containers running the new image and remove the obsolete ones afterwards. This is straightforward if images are independent, but if an update requires a coordinated update of multiple images at once, it can get trickier.
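As an illustration, here is a sketch of this replace-then-remove flow using the dockerode Node.js client; the image and container names are placeholders, and in practice an orchestrator like Kubernetes automates exactly this.

```typescript
// Sketch: launch a container with the new image, then remove the old one.
import Docker from "dockerode";

const docker = new Docker(); // talks to the local Docker daemon

async function rollContainer(): Promise<void> {
  // Start a container running the new image version (placeholder names).
  const fresh = await docker.createContainer({
    Image: "myapp:2.0",
    name: "myapp-v2",
  });
  await fresh.start();

  // Once the new container is serving traffic, remove the obsolete one.
  const obsolete = docker.getContainer("myapp-v1");
  await obsolete.stop();
  await obsolete.remove();
}

rollContainer().catch(console.error);
```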
Serverless
The idea here is that you only write the code, and it is invoked "on demand". You don't manage the infrastructure at all; the provider takes care of it. This comfort comes with a strong constraint: the code must be completely stateless. There is no persistence guarantee between requests, since the code can run "anywhere". This requires planning ahead, usually by coupling the functions with database-as-a-service offerings and "object storage" services to persist files.
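For example, instead of keeping anything in memory between requests, all state has to live in an external service. A minimal sketch, with a hypothetical key-value client standing in for whatever database-as-a-service you use:

```typescript
// Hypothetical key-value client interface standing in for a real
// database-as-a-service SDK (the names here are illustrative only).
interface KV {
  get(key: string): Promise<number | null>;
  put(key: string, value: number): Promise<void>;
}

// No module-level state: two requests from the same user may be handled
// by different instances anywhere in the world, so nothing kept in memory
// can be trusted to survive between requests.
export async function handleRequest(kv: KV, userId: string): Promise<string> {
  const visits = (await kv.get(`visits:${userId}`)) ?? 0; // read external state
  await kv.put(`visits:${userId}`, visits + 1); // write it back
  return `Visit number ${visits + 1}`;
}
```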
This "serverless" paradigm comes in two flavours, related to their underlying technology.
Like AWS Lambda
This launches a minimal container, processes the request, reuses the container for further requests and destroys it after some idle time. This offers great freedom regarding programming languages and frameworks, and puts almost no constraints on execution. However, there is a downside: the very first request takes a long time, the so-called "cold start", since it involves starting a whole environment. For the right price, it is possible to leave them "hot".
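This reuse is visible in how a Lambda-style handler is structured: everything outside the handler runs once per cold start and is then reused while the container stays hot. A minimal Node.js/TypeScript sketch, where the expensive setup is purely illustrative:

```typescript
// Code at module level runs once, during the "cold start" of the container,
// and is then reused for every further request this container serves.
const startedAt = Date.now();
// e.g. const db = await connectToDatabase(); // hypothetical expensive setup

// The handler itself runs on every invocation; warm requests skip the
// setup above, which is why only the very first request is slow.
export const handler = async (event: unknown) => {
  return {
    statusCode: 200,
    body: `This container has been alive for ${Date.now() - startedAt} ms`,
  };
};
```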
If you leave them hot, they are expensive; if not, your users often suffer from high latencies. As such, they are best suited for on-demand background jobs or long-running tasks IMHO.
Like Cloudflare Workers
This runs JS/TS functions directly in a "Chrome-based" JavaScript engine. This is cheap, fast and easy to run in a distributed manner, making it perfect for small functions running "at the edge" of the network. Every time a request comes in, the function's code is loaded and executed on the fly to produce the response. Async code invoking other services is also possible.
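A minimal Cloudflare Worker looks like this (module syntax); the exported fetch handler is invoked for every request hitting the edge:

```typescript
// Minimal Cloudflare Worker: the "fetch" handler runs for each request.
export default {
  async fetch(request: Request): Promise<Response> {
    // Async calls to other services are possible here, e.g.:
    // const data = await fetch("https://api.example.com/...");
    const url = new URL(request.url);
    return new Response(`Hello from the edge! You requested ${url.pathname}`);
  },
};
```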
These functions should be as self-contained as possible, and providers usually put clear constraints on the overall code size and maximum processing time. While this is great for small functions, it becomes counter-productive for larger programs offering complex functionality.
What should I pick?
Scaling can be achieved with any of these, and for most workloads there is no clear-cut winner. It's rather a balancing of advantages and disadvantages, weighed against the team's expertise, experience and preferences.
What follows are not iron-clad rules but rather rules of thumb to stay in the comfort zone.
| | Servers/VMs | Docker | Serverless (Lambda-like) | Serverless (Workers-like) |
| --- | --- | --- | --- | --- |
| Low amount of code | | | 🙂 | 🙂 |
| High amount of code | 🙂 | 🙂 | | |
| No Linux know-how | | | 🙂 | 🙂 |
| Cheap | 🙂 | | | 🙂 |
| Lots of "services" | | 🙂 | | |
| Keep large stuff in memory | 🙂 | | | |
| Computation intensive | 🙂 | 🙂 | 🙂 | |
| Background jobs "on demand" | | 🙂 | 🙂 | |
| Super low latencies | | | | 🙂 |
Like I said, the smileys are not a hard requirement. For example, you can have simple code on servers or complex code on serverless too; it's just that you will likely leave the comfort zone and lose some of its benefits.
If the functionality is simple, go serverless with Cloudflare Workers-like offerings! It is free to start with at most providers, scales well, offers top latencies worldwide and has excellent pricing even at large scale. The ecosystem might not be very mature, but simple functionality means low complexity, where an ecosystem is not that critical.
If you require large amounts of memory or GPUs, physical servers or VMs can be advantageous for pricing alone, as well as for efficiency, since there are no abstraction layers in between.
If you have a complex software landscape with many web services, use Docker. Web services can easily be packaged as Docker images, almost every open-source software already provides one, and it is simplest to use them directly.
If it's something like background jobs running once in a while, use AWS Lambda-like functions. They can process complex workloads and run longer, and cold starts or latency are not a concern for such jobs.
Some use cases
Passwordless.ID
Here, the API could be divided into individual, largely decoupled functions of reasonable complexity. That made Cloudflare Workers a perfect candidate. Thanks to that, the code runs in datacentres distributed worldwide, with optimal latency. Perfect for manageability, scalability and pricing.
KeyValue.Rocks
This is a data-hungry beast. It stores large amounts of data in memory for fast access and high throughput. As such, VMs were used: they are cheaper and less complex than orchestrating a Docker deployment. Since the project sees little activity and only rare updates, this is adequate.