When discussing lambda/serverless/<whatever flavor of pay-per-request> setups, people rarely seem to stop and think about the usage/access patterns and the associated costs and performance.
I’ve seen such setups recommended for APIs with predictable and fairly constant load, where you’re a lot better off running an actual set of long-lived processes that can be reused. On Google that could be App Engine; on AWS, Elastic Beanstalk. It’s a question of the right tool for the job.
One tech that I haven’t played with but find really interesting is Knative, where you run underlying infra with predictable cost/performance but allocate it like a lambda, per request. Per-request performance may still lag a more traditional setup, though.
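To make that concrete, here’s a rough sketch of what I mean (the service name and image are made up, but the annotations are Knative’s standard autoscaling knobs): you pin a floor of warm instances for the steady baseline and cap the ceiling so costs stay bounded, while Knative still scales within that band per request.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-api                # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Keep a couple of pods always warm so steady traffic
        # doesn't eat cold starts like a pure scale-to-zero lambda would.
        autoscaling.knative.dev/minScale: "2"
        # Cap scale-out so the cost ceiling stays predictable.
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: gcr.io/my-project/my-api:latest  # hypothetical image
```

With minScale at 0 you're back to classic serverless economics; raising it is basically buying back the "set of running processes" tradeoff on the same platform.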