
But that is the whole point of using cloud services that are tightly integrated with each other. "I can't do it as efficiently as Amazon myself" can't be called "proprietary lock-in".


Said efficiencies are not due to Amazon; they come from the services being colocated in the same facility.

If I put the node service and a database on the same box I'd get the same performance, and actually probably better since Amazon would still have them on separate physical hardware.


It’s not about performance; it’s that otherwise I have to support all of that infrastructure myself.


The infrastructure, or rather the interfaces, is where the lock-in comes in. Each non-portable interface adds another binding, so as the OP pointed out, once you've been absorbed into the ecosystem of non-portable interfaces it's not as easy to swap out the provider. You have to abstract each service out to be able to swap providers (a sketch of what that looks like is below).

If you use open source interfaces, or even proprietary interfaces that are portable, it's easier to take your app with you to the new hosting provider.

The non-portable interfaces are the crux of the matter. If you could run Lambda on Google, Azure, or your own metal, folks wouldn't feel so locked in.
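
(For illustration only, a minimal sketch of such an abstraction layer in Python; the BlobStore name and both implementations are hypothetical, not anything from this thread. The app codes against one interface, so swapping providers means writing one new implementation rather than touching every call site.)

    # Hypothetical portability layer: the application depends only on
    # BlobStore, so moving off S3 means writing one new implementation.
    from abc import ABC, abstractmethod
    from pathlib import Path

    import boto3  # the AWS-specific dependency stays inside one class

    class BlobStore(ABC):
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...

        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class S3BlobStore(BlobStore):
        def __init__(self, bucket: str):
            self._s3 = boto3.client("s3")
            self._bucket = bucket

        def put(self, key: str, data: bytes) -> None:
            self._s3.put_object(Bucket=self._bucket, Key=key, Body=data)

        def get(self, key: str) -> bytes:
            return self._s3.get_object(Bucket=self._bucket, Key=key)["Body"].read()

    class LocalBlobStore(BlobStore):
        # Drop-in replacement backed by the local filesystem.
        def __init__(self, root: str):
            self._root = Path(root)

        def put(self, key: str, data: bytes) -> None:
            path = self._root / key
            path.parent.mkdir(parents=True, exist_ok=True)
            path.write_bytes(data)

        def get(self, key: str) -> bytes:
            return (self._root / key).read_bytes()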


As I said, I can run the Node/Express Lambda anywhere without changing code.

But I could still take advantage of hosted Aurora (MySQL/Postgres), DocumentDB (Mongo), ElastiCache (Memcached/Redis), or even Redshift (Postgres-compatible interface) without any of the dreaded “lock-in”.
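
(That portability is concrete: because Aurora speaks the stock Postgres wire protocol, a plain open-source driver works unchanged. A minimal sketch with psycopg2; the endpoint and credentials are placeholders.)

    # A stock open-source Postgres driver; nothing here is AWS-specific.
    # Only the connection parameters would change on another host.
    import psycopg2

    conn = psycopg2.connect(
        host="my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # placeholder
        port=5432,
        dbname="app",
        user="app_user",
        password="change-me",  # placeholder; use a secrets manager in practice
    )
    with conn, conn.cursor() as cur:
        cur.execute("SELECT version()")
        print(cur.fetchone())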


It sounds like you have a preference for choosing portable interfaces when it comes to storage. And you've abstracted out the non-portable lambda interface.

My position isn't "don't use AWS as a hosting provider"; it's that you ought to avoid being locked into a proprietary, non-portable interface when possible.


Not really. My company has plenty of business risks. Out of those, a dependency on AWS is the least of them.


Vendor lock in isn't really a problem initially. It's something that creeps up on you over time.


Over time, we will have an “exit strategy” that makes it “someone else’s problem” and then we will be well enough capitalized to migrate if needed.

Or the Twitter model: very bad architecture that always crashed, find “product market fit”, and then get funding to fix any issues.

Or the company goes out of business, I put X years of AWS experience on my resume and make out like a bandit as an overpriced consultant.

I don’t see the downside....


The downside could be going for a new round and not getting that valuation because projected costs prevent scaling.


I don't really see cloud-provider competition lessening or hardware getting more expensive and less efficient or the VMs getting worse at micro-slicing in the next 5 years. So why would I be worried about rising costs?



I think spending one of the newly-raised millions over a year or so can help there, including hiring senior engineers talented enough to fix the shitty architecture that got you to product-market-fit. This isn’t an inherently bad thing, it just makes certain business strategies incompatible with certain engineering strategies. Luckily for startups, most intermediate engineers can get you to PMF if you keep them from employing too much abstraction.


Isn’t employing too many abstractions just what many here are advocating - putting a layer of abstraction over the SDK’s abstractions of the API? I would much rather come into a code base that just uses Python + Boto3 (AWS’s Python SDK) than one that uses Python + “SQSManager”/“S3Manager” + Boto3.
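
(To make the contrast concrete, a minimal sketch of the direct style; the queue URL is a placeholder. The Boto3 calls are the real interface, so the official docs describe exactly what the code does, with no in-house “Manager” layer to learn first.)

    # Plain Boto3: the SQS calls are visible at the call site.
    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"  # placeholder

    sqs.send_message(QueueUrl=queue_url, MessageBody="hello")

    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1)
    for msg in resp.get("Messages", []):
        print(msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])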


That is indeed what many here are advocating. There are only so many possible interfaces or implementations, and usually abstracting over one or the other is an effort in reinventing the wheel, or the saw, or the water clock, and not doing the job as well as some standard parts glued together until quite far into the endeavor.


Stop scare-quoting "lock-in". Lock-in means development effort to get out of a system, regardless of how trivial you think it is.

If writing code to be able to move to a different cloud isn't considered lock-in, then nothing is since anyone can write code to do anything themselves.


Lock-in is an economic concept; it’s not just about code but about “switching costs”. Ecosystem benefits, data gravity, etc. all come into play.

There are two kinds of lock-in. The first is a high switching cost because no competitor does as good a job - this is good lock-in, and trying to avoid it just means you’re not building the thing optimally in the first place.

The second is a high switching cost because of unique interface and implementation requirements that don’t add any value over a more interoperable standard. This is the kind that’s worth avoiding if you can.


I'm talking about his statement:

"Connecting to AWS managed services (s3, kinesis, dynamodb, sns) don't have this overhead so you can actually perform some task that involves reading/writing data."

That is due to network and colocation efficiencies. The overhead of managing such services yourself is another matter.


It’s not just the network overhead; it’s also the maintenance and setup overhead. I can spin up an entire full stack in multiple accounts just by creating a CloudFormation template.
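
(A rough sketch of that workflow with Boto3; the stack name and template file are made up. The whole environment comes up from one API call and goes away with another.)

    # Spin up a whole stack from a template; tearing it down is one call.
    import boto3

    cfn = boto3.client("cloudformation")

    with open("stack.yaml") as f:  # your CloudFormation template
        template = f.read()

    cfn.create_stack(
        StackName="stress-test-env",
        TemplateBody=template,
        Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM roles
    )
    cfn.get_waiter("stack_create_complete").wait(StackName="stress-test-env")

    # ...run the tests, then throw the environment away:
    cfn.delete_stack(StackName="stress-test-env")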

I’ve done stress testing by spinning up and tearing down multiple VMs, played with different-size databases, autoscaled read replicas for performance, ran a spot fleet, etc.

When you need things now, you don’t have time to requisition hardware and get it sent to your colo.


As far as spinning up and down goes, a lot of this is solved with Docker, which also has the benefit of being relatively platform-independent.
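
(For instance, a minimal sketch using the Docker SDK for Python; the image tag, password, and port mapping are placeholders. The same container definition runs on any host with a Docker daemon, AWS or otherwise.)

    # The container definition carries no provider-specific bits.
    import docker

    client = docker.from_env()
    db = client.containers.run(
        "mysql:8",  # stock image from a public registry
        detach=True,
        environment={"MYSQL_ROOT_PASSWORD": "change-me"},  # placeholder
        ports={"3306/tcp": 3306},
    )
    print(db.status)  # spin down later with db.stop(); db.remove()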


So Docker allows me to scale up MySQL read replicas instantaneously? And I still have to manage the infrastructure.


Well, you can use a container service or use EC2 still.


And then you still have more stuff to manage, all on the slim chance that one day, years down the road, you might rip out your entire multi-AZ redundant infrastructure, your databases with all of their read replicas, etc., and move to another provider....

And this doesn’t count all of the third party hosted services.

Aurora (MySQL) redundantly writes your data to six different storage devices across multiple availability zones. The read replicas read from the same disks, so as soon as you bring up a read replica, the data is already there. You can’t do that with a standard MySQL read replica.
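
(To make that concrete, a sketch with Boto3 and made-up identifiers: an Aurora reader is added to an existing cluster and attaches to the shared storage volume, so there is no data-seeding step as with a classic MySQL replica.)

    # Adding an Aurora reader: the new instance attaches to the cluster's
    # shared storage, so there is no copy of the data to wait for.
    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="app-db-reader-2",  # made-up name
        DBClusterIdentifier="app-db-cluster",    # existing Aurora cluster
        Engine="aurora-mysql",
        DBInstanceClass="db.r5.large",
    )
    rds.get_waiter("db_instance_available").wait(
        DBInstanceIdentifier="app-db-reader-2"
    )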



