Don't get me wrong, I celebrate any competitor of AWS, even though I use it massively, but Redis is a tricky one.
For a start, you want Redis to be as close as possible to your application. It's often used as a cache, and it makes no sense to have long latencies to your cache layer (Redis is sometimes even deployed in the same pod as the app using it, precisely because you want it nearby).
And if your infrastructure is already in AWS (why would you choose ElastiCache otherwise?), you would be paying egress on all the data going from AWS to your external Redis-as-a-service provider, and that might cost much more than what you expected to save in the first place.
To be honest, AWS ElastiCache is not even an expensive service (t3.micro instances work just great and allow no-upfront reservations, which you failed to use in your comparison for obvious reasons).
Really, I don't think Redis is a problem that needs solving, and I'd put my money on someone offering cheaper DocumentDB alternatives, or Redshift, or managed ClickHouse services, etc. Those are the real killers!
Anyhow, sorry for being a bummer, and I wish you the best of luck!!!
100% agree, we migrated from Redis Labs -> Elasticache and it was cheaper, more performant, and closer to the rest of our infrastructure. We lost "support", but the service is so brain-dead simple that I can't imagine what the need for that would really be. I wish it auto-scaled like Dynamo/Aurora, however it's just a few clicks to up the instance size, so not a super-pressing need.
The egress costs this would incur with AWS Lambda did give me immediate pause when looking at the product. It seems like it would be a good idea to add some information on that for prospective users.
(Sven from Lambda Store) We did not use t3.micro instances in our tests because their network was not very reliable. Also, their maximum memory/data size is 500MB, which rules out many use cases. Even a t3.micro is $7.40 monthly, whereas Lambda Store is $4.15 for the specific example in the blog. And with ElastiCache you have to pay even when your database is idle.
One of our applications receives more than 10M hits a day through Kong, which uses Redis for its rate-limiting plugin. We use a t3.micro for that and have never had any issue.
In reality, during our performance tests we reached much higher volumes and it always worked fine. What kind of network issue did you encounter? Micro, small, and medium should have the same network capacity.
So yes, t3.micros cost $7.40, but if you do just a 1-year no-upfront reservation it should be around $4.90 (the saving is usually about a third). I think the value proposition is too weak to even consider it, at least for me.
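The back-of-the-envelope math here (using the thread's $7.40 on-demand figure and the rough one-third reserved discount, both taken from these comments rather than current AWS pricing) works out like this:

```python
# Rough reserved-vs-on-demand comparison for a t3.micro cache node.
# Figures are the ones quoted in this thread, not current AWS prices.
on_demand_monthly = 7.40       # on-demand monthly cost cited above
reserved_discount = 1 / 3      # typical 1-year no-upfront saving, per the comment
reserved_monthly = on_demand_monthly * (1 - reserved_discount)
print(f"${reserved_monthly:.2f}/month")  # about $4.93, close to the $4.90 quoted
```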
However, as someone else also mentioned, we need alternatives to other AWS services, those that are expensive enough to run them on prem.
The day you offer cheap Elasticsearch, ClickHouse, DocumentDB, etc., I'll kill my Hetzner machines and come to you, sir.
This is not a selling point to me. When it comes to storage, I prefer experience over motivation, just like I prefer durability over speed (cough, MongoDB).
Otherwise though this looks pretty cool.
I do wonder what their scaling story looks like. How can they maintain a profit at such low prices and handle sudden load spikes?
We are not using Redis OSS; instead we implemented our own Redis server, because it's not really possible to scale Redis OSS without increasing costs, and it would be very challenging to adapt it to serverless use cases.
Unlike Redis OSS, we use a tiered storage model that keeps hot entries in memory and gradually offloads colder ones. We can also migrate a Redis DB to a bigger machine in a few seconds when a database needs more resources. We will share more technical information about our architecture on the Lambda Store blog.
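The tiered model described here can be sketched as a bounded hot tier that spills least-recently-used entries to a cold tier. This is a toy illustration with hypothetical names, not Lambda Store's actual implementation:

```python
from collections import OrderedDict

class TieredStore:
    """Toy tiered key-value store: a bounded in-memory hot tier that spills
    least-recently-used entries to a cold tier (a dict standing in for disk).
    Purely illustrative of the idea, not Lambda Store's real design."""

    def __init__(self, hot_capacity):
        self.hot = OrderedDict()   # recency-ordered hot tier
        self.cold = {}             # stand-in for slower, cheaper storage
        self.hot_capacity = hot_capacity

    def set(self, key, value):
        self.hot[key] = value
        self.hot.move_to_end(key)                 # mark as most recently used
        while len(self.hot) > self.hot_capacity:
            k, v = self.hot.popitem(last=False)   # evict least recently used
            self.cold[k] = v

    def get(self, key):
        if key in self.hot:
            self.hot.move_to_end(key)
            return self.hot[key]
        if key in self.cold:                      # promote cold entry on access
            value = self.cold.pop(key)
            self.set(key, value)
            return value
        return None
```

Accessing a cold key promotes it back into memory, possibly evicting something else, which is roughly what "gradually offloading colder entries" implies.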
If you're not paying by the hour then I'm guessing the business model is that you're sitting on a shared redis instance with other consumers, which sounds a bit scary for many reasons. Just guessing though.
We are not using shared Redis instances. We have our own Redis server implementation with a tiered storage model that keeps hot entries in memory to better utilize it. This lets us use smaller instances most of the time and migrate a Redis DB to a bigger machine in a few seconds when needed.
This is a great solution to an important problem which IaaS companies are not trying to solve.
This looks very similar to Cloudflare's KV (in terms of offering) https://www.cloudflare.com/products/workers-kv/
Currently, the service looks limited to a few AWS regions, so it may not be a good fit for those outside the AWS ecosystem. The main reason users deploy cache stores is to remove the latency of hitting a database or recomputing results, and that may not work out for those on DigitalOcean or Azure endpoints.
Also, within the AWS ecosystem, power users may not use it since it bypasses IAM, gives no insight into failures caused by connection limits, and has no SSL in the free tier.
A note about accessibility on the page: table data is presented as an image, and the Twitter screenshots should be embeds. Otherwise, not all users can read the content.
Nice offering for hobbyists but not usable for any private data handling or enterprise applications.
I would like to use something like this, but with no SOC 2 or any other security certification, storing customers' data outside your cloud provider is a no-no for me.
On top of that, there are no ACLs or fine-grained access control, and no audit log, SSO, or similar features.
So I wonder what market is this trying to capture? If you're not happy with DynamoDB and you need even better latency, it means that you're running this in production and probably handling sensitive data. Would this service be viable for you? Just curious what the HN crowd thinks.
EDIT: A comment below me said this:
> TOS says not to upload data that "contains personally sensitive information" so that may limit some use cases
That, for me, concludes this is aimed at casual use.
Hey, Dave from Lambda Store here. In the short term, we plan to support the ACLs of Redis 6. In the longer term, we are also planning a version that will run in the customer's VPC for enterprise customers with high security requirements.
We are applying GDPR practices. We are also planning to obtain security certifications in the future, and then we can revisit our terms. Also note that "personally sensitive information" requires extra procedures under GDPR.
Congrats on launching. It must have been a lot of work to make Redis serverless. If you feel comfortable it would be interesting to know more about the technical underpinnings.
From a customer's perspective, "serverless" means consumption-based pricing as opposed to reservation-based, regardless of the actual technology being used.
"Serverless" + "obviously stateful thing" feels click-baity to me. Does this link get right down to brass tacks? (it didn't for me). Is there a "ctrl-f" thing I can search for that explains?
Serverless by AWS standards doesn’t mean “stateless”. It means that you don’t have to deal with the underlying servers and worry about scaling them up and scaling them down.
Classic Aurora for instance is not “stateless”. You still have to appropriately size the underlying server, you pay for it whether you are using it or not, you might need to reboot it, etc.
DynamoDB on the other hand is considered Serverless by AWS because you don’t size the “DynamoDB server”, and it can scale write and read capacity automatically.
Sure. I didn't find the detail of how this is like Aurora...called from Lambda, not implemented in Lambda. Asking for that. Why is "Lambda" in the name?
> Serverless by AWS standards doesn’t mean “stateless”
Willing to be proved wrong, but what I've seen so far does equate serverless with stateless, at least at the compute tier. Aurora has no "Lambda" or "serverless" branding, right? Any stateful stuff is sort of called out as an integration.
Not sure I get the difference. This is sort of the core issue. At some point, some service is stateful. Why all the hoopla and nuance around naming? What, exactly, is new from the 1960's on? Stateful and stateless is as old as computing.
It’s not the data it’s the compute. In the 60s, you couldn’t just bring in an IBM mainframe when you needed it and not pay for it when you didn’t based on the server load.
With regular Aurora, whether you are using it or not, you’re always paying for both the server and the storage and you have to provision the server for peak workload. With Aurora Serverless, if you don’t connect to the database, you only pay for storage.
With lambda and to a lesser extent Fargate, you don’t pay for the underlying server at all until you actually need to run something and then with lambda you can scale up to as many instances as your account allows (a soft limit you can ask for more anytime) and pay nothing when you don’t.
With EC2 you have a server sitting there listening for events whether or not anything is sending you an event.
Interesting example. Mainframes were one of the first to deploy lots of capacity to your data center floor you hadn't yet paid for. Upgrades were often just a phone call and a remote (often, zero down time) re-config.
It's just a Redis cloud service that has a slightly different pricing model than other Redis cloud services. Nothing to do with AWS Lambda besides sharing a name.
I really like the idea because Redis could make a really good lightweight data store for all sorts of things (especially where longevity is not required or the data isn't urgent) but the pricing of Redis services is oddly high (due to the memory requirements, I assume).
I'm going to give it a go! The only thing that jumped out at me, though, was in the TOS which says not to upload data that "contains personally sensitive information" so that may limit some use cases.
Update: I've created a basic free database for now. Using it with redis-cli with no real issues so far but not stressed it yet.
Hey, Dave from lambda.store here. Thanks for trying our service; it's appreciated. We would like to hear more feedback if you play with our product further. On your TOS note: we are applying GDPR practices, and storing personally sensitive information requires extra procedures, which is why it is not allowed for now. In the future we plan to obtain security certifications (like SOC 2), and then we can revisit our terms. Again, thanks for your comment!
That's really cool. Would definitely switch to this for hobby projects like my A/B testing backend[0].
The thing is, however, as someone else touched on, the pricing of Redis Labs is still reasonable, and despite feeling outdated, it's also stable and a safer bet... So I don't really know how many organizations are willing to trade cost-saving/coolness for higher risk, at least when it's still new and not well established.
(Disclaimer: I work for Lambda Store) The problem with Redis Labs is that we see them moving more and more into the enterprise space. As an example, they still do not support TLS on their paid Essentials plans. And their pricing is per memory reserved, so you have to pay even if you do not actually use the database. I am aware we are new, but we believe that in a short time the quality and stability of our service will earn our users' trust.
Neat! Are you taking requests for Azure regions? : )
FYI, all of the mid-level doc links are returning 403, like:
https://docs.lambda.store/docs
https://docs.lambda.store/help
It would be nice if those headers on the left side of the nav were links, I kept wanting to click on them. And then when I tried to go up one level from the leafs in the address bar, I ran into the 403s. Cheers!
(Disclaimer: I work for lambda store) Azure, not right now. We are planning to support Azure and GCP.
Thanks for reporting the docs issue; we'll fix it ASAP.
Thanks! The product pricing seems off to me. If you have a service that averages a continuous 10 req/sec, that would be like $100 a month plus storage costs?
That's the storage cost per month: if your data is 10GB, it will be $1.50 per month.
When you have steady traffic, it makes more sense to move to reserved pricing. Currently the reserved pricing plans start from 500 req/sec; we may need more plans to cover smaller-throughput cases.
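The numbers in this exchange can be sanity-checked with some quick arithmetic. Note that the per-million-request rate below is merely implied by the commenter's ~$100/month estimate, not a published price:

```python
# Back-of-the-envelope check of the ~$100/month figure for a steady 10 req/sec.
req_per_sec = 10
requests_per_month = req_per_sec * 60 * 60 * 24 * 30   # 25,920,000 requests
implied_per_million = 100 / (requests_per_month / 1_000_000)
print(f"{requests_per_month:,} requests, ~${implied_per_million:.2f}/million implied")

# And the storage figure: 10GB at $1.50/month implies $0.15 per GB-month.
print(f"${1.50 / 10:.2f} per GB per month")
```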
Great to see some new entrants in the IaaS market! Marketing feedback: for me, using the word "serverless" in this context is a huge turnoff. It signals either a lack of understanding, or knowing full well that it's a misnomer that can easily be interpreted as something other than what it is. Especially as a headline or tagline. It's a buzzword that was dead right around the time it was hot.
I've seen distaste for the word "serverless" emerge a few times on HN and I really don't get it. Is it just excessive literalism? Of course I know that there are servers somewhere, the selling point of serverless is that as the user I generally don't have to think about them. My wireless vacuum presumably has wires in it, but I don't find that to be false advertising. The point is that the wires (and servers) don't get in my way :)
I do appreciate the "wireless" analogy, however even if you did have a vacuum cleaner without any internal wires at all, it wouldn't really make any difference for you as a user.
Whereas there is such a thing as fully peer-to-peer systems without actual servers. It's almost as if the people who coined and marketed the term want to consolidate the idea that servers/backends are an inherent property of information systems.
It should be said that I am both a believer that the words we use shape the way we conceptualize and reason about things, and a proponent of less infrastructure centralization.
Now you might say, "we already have the word peer-to-peer with that meaning". IMO peer-to-peer has been diluted to the point of being practically meaningless, commonly used to describe systems such as Google Hangouts.
I don't believe there is any kind of conspiracy or conscious effort on the side of vendors or providers to do this, but effectively we don't have a word to describe fully decentralized/distributed/peer-to-peer/serverless software today because all those words have either been diluted or have a different meaning. It gives me some 1984 doublespeak vibes.
Besides all the above: My original comment was meant as honest advice that they should reconsider using the term this way, given both that:
* I'm not the only one who feels this way, so it will likely put off other potential customers as well
* There is disagreement on what the term should mean even as commonly used (just look at other comments here arguing that nothing with state between invocations can be considered "serverless")
I think the intended meaning will come across much clearer by instead calling it "Fully managed Redis" or, if it's crucial to get across the billing model, "Fully managed, pay-as-you go Redis".
It arguably has a more specific definition, like "you only interact with it on a per-invocation level" or maybe "you don't manage the servers or the runtime environment". With other systems that are not considered Serverless (like Heroku or Kubernetes) it can also be true that you don't manage the servers.
"Fully managed", "pay-as-you-go (PAYG)". This is the language used by AWS to market their services such as Lambda, for example, and IMO is a lot clearer.
Since we are not using the OSS Redis code (it would be very difficult to adapt it to the serverless model), we cannot support Redis modules directly. But after completing the missing Redis commands, we can work on module support too.