In my case, switching to an AWS API Gateway + Lambda stack means I have zero-downtime canary deployments that take less than 5 minutes to deploy from version control. API Gateway is configured from the same Swagger doc that autogenerates my request/response models (go-swagger), which (almost) completely removes routing, request logging, throttling and authentication concerns from the request handlers. Combined with a statically hosted front-end and SNS/SQS + Lambda pub-sub for out-of-process workers, I never have to worry about auto-scaling, load balancing or cache invalidation, and we only pay for what we use. It may not suit every use case, but we have bursty, relatively low-volume traffic, and the hosting bill for the public-facing site/service that generates most of the main business revenue is a rounding error next to our BI service bill.
We use Go Lambdas; the binaries are built in our CI pipeline. The build stage takes ~10 seconds and the tests (integration + unit) take ~30 seconds. We use AWS SAM to generate our CFN templates, and we package and deploy with the AWS CloudFormation CLI, which takes the remaining 3-4 minutes.
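For anyone curious what the handler side of this looks like, here's a minimal sketch of a Go Lambda behind API Gateway's proxy integration, assuming the standard github.com/aws/aws-lambda-go library (the function name and response are invented for illustration, not taken from our actual service):

    package main

    import (
        "context"
        "encoding/json"

        "github.com/aws/aws-lambda-go/events"
        "github.com/aws/aws-lambda-go/lambda"
    )

    // pingHandler is the whole entry point: by the time a request reaches it,
    // API Gateway has already dealt with routing, throttling and auth.
    func pingHandler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
        body, _ := json.Marshal(map[string]string{"message": "pong", "path": req.Path})
        return events.APIGatewayProxyResponse{
            StatusCode: 200,
            Headers:    map[string]string{"Content-Type": "application/json"},
            Body:       string(body),
        }, nil
    }

    func main() {
        lambda.Start(pingHandler)
    }

lambda.Start is the entire wiring; everything else (routing, auth, throttling, request validation) lives in the API Gateway/Swagger config rather than in the handler.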
I didn't include the post-deployment end-to-end tests in the 5-minute figure, but technically speaking, we do deploy that quickly.
We have a fair number of endpoints, so due to CloudFormation's 200-resource limit per stack, we end up creating about 10 different stacks that frankenstein themselves onto a main API Gateway stack.
Try deploying changes to Google Cloud load balancers. The update itself completes within a few seconds, but the changes take several minutes to actually be applied. The first time, I was scratching my head wondering why my changes didn't work as expected...
This (pay for what you use, far fewer scalability issues) is so big that it can, by itself, give you a competitive advantage over anyone who isn't doing it, which is almost everyone.
Maybe I'm overstating it, but I don't think I am...
I think you're overstating it. Why do people care so much about scalability issues, anyway? Given that (a) a stateless server plus an SQLite instance is much, much easier to set up than the proprietary, poorly documented mess that is Lambda, (b) that server can easily be scaled horizontally, and the SQLite instance can be swapped out for any other SQL database with some effort, and (c) a single server with SQLite will easily handle up to 100K connections, it doesn't seem like scaling was ever an issue for most websites.
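For the sake of argument, here's roughly what that setup looks like: a sketch of a stateless HTTP server backed by SQLite, written in Go since that's the language used upthread, assuming the github.com/mattn/go-sqlite3 driver (the schema and route are invented for illustration):

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "net/http"

        _ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
    )

    func main() {
        // One file on disk is the whole persistence layer.
        db, err := sql.Open("sqlite3", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS hits (path TEXT)`); err != nil {
            log.Fatal(err)
        }

        // The handler itself is stateless: every request reads/writes the DB.
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            if _, err := db.Exec(`INSERT INTO hits (path) VALUES (?)`, r.URL.Path); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            var count int
            if err := db.QueryRow(`SELECT COUNT(*) FROM hits`).Scan(&count); err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            fmt.Fprintf(w, "hit %d\n", count)
        })

        log.Fatal(http.ListenAndServe(":8080", nil))
    }

And because everything goes through database/sql, swapping SQLite for another SQL database is mostly a matter of changing the driver import and the sql.Open arguments, plus whatever dialect-specific SQL you've written.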
I don't think you've used the tools that can work with Lambda or had to actually scale something in production, based on your response...
It's all a lot harder than you make it out to be, but at least with Lambda (and something like Zappa) you don't have to figure anything out beyond how to get your first environment up. There's just no second step, and that's huge.
With a scripting language like Python or Node, it literally is just adding one function that takes a JSON event and a context object as your entry point.