mj2718's comments | Hacker News

I hit a limit in a CMS that then wanted to charge $300 a month. So I got on the beers and vibed my own CMS with Cursor in one afternoon.

Agents are now coding better than most of us, but you still need to know wtf you're doing.

The prompt history is under .specstory/

Use at your own risk. I'm using it for my dad's business now, saving myself the headache of WordPress and the cost of an "enterprise" CMS.


My laptop running OOTB Postgres in Docker does 100k requests per second with ~5ms latency. How are you getting 20x slower?
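
Rough sanity check I'd run (single connection, psycopg2 and the connection string are assumptions; real throughput numbers need many parallel clients, e.g. pgbench, this just gives a floor on per-request latency):

    import time
    import psycopg2  # assumed driver

    # Hypothetical local Docker Postgres; adjust credentials as needed.
    conn = psycopg2.connect("host=localhost dbname=postgres user=postgres password=postgres")
    cur = conn.cursor()

    N = 10_000
    start = time.perf_counter()
    for _ in range(N):
        cur.execute("SELECT 1")
        cur.fetchone()
    elapsed = time.perf_counter() - start

    # Single connection, trivial query: per-request latency floor;
    # aggregate req/s comes from running many clients in parallel.
    print(f"{N / elapsed:,.0f} queries/s, {elapsed / N * 1000:.3f} ms avg")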


If you look at his lab [0], it seems his NAS is separate from his Kubernetes nodes. If he hasn't tuned his networking and NAS to the maximum, network storage may in fact add a LOT of latency to each I/O. It could be the difference between fractions of a millisecond and actual milliseconds. If the DB load is mostly random reads, this can really hurt performance. Just hypothesizing here, though, since it is not clear whether his DB storage is actually on the NAS.

[0]: https://dizzy.zone/2025/03/10/State-of-my-Homelab-2025/


Sorry, but if he's using a setup that's 20x worse than a regular laptop, then I'm not really interested in his setup.

To be fair, I asked the question and you found the answer - lol, my bad.

Yes, I agree: using a NAS that adds latency would reduce the TPS and explain his results. Little's law.
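
Back-of-the-envelope version, with illustrative numbers (not from the article): in-flight requests = throughput x latency, so for a fixed number of busy connections, extra storage latency directly caps TPS.

    # Little's law: L = lambda * W  (in-flight = throughput * latency)
    in_flight = 500                   # busy client connections (illustrative)

    local_latency = 0.005             # ~5 ms/request on local disk
    nas_latency = 0.050               # ~50 ms/request if the NAS adds I/O latency (assumed)

    print(in_flight / local_latency)  # 100,000 req/s
    print(in_flight / nas_latency)    #  10,000 req/s -- a 10x drop from latency alone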


That's what I'm struggling with too. Redis can also serve roughly 500k-1M QPS using just ~4-8 cores, so on two cores it should manage at least 100k-200k.


Yep… this is what I expect as a baseline.

This is also why I rarely use Redis: Postgres at 100k TPS is perfectly fine for all my use cases, including high-usage apps.


Of course it always depends, but my experience is that people reach for distributed systems when they could simply run a single node.

If you can run a single node, and it meets all your business requirements, then I’d argue that is simpler than having to manage the very challenging problems inherent in distributed systems.

A simple web app can run 100k+ connections per second with latency in the 100s of microseconds. This is 2 orders of magnitude better than most apps need.

If you want durability you can use a service like RDS which takes care of replication and backups for you.

That’s my advice anyway.


I've always found the first method, keeping bookmarks to positions in a table, to be the best one. I don't think the three problems given are really an issue:

- just loop at whatever interval your users can accept as delay

- querying a database 10 times a second is nothing, even 100 times; this won't cause resource limitations

- the scaling concern made no sense to me, it seems a bit arbitrary

Cool to know this is an option, but I much prefer not relying on database internals like the WAL; that hurts scaling more, IMO.

The only thing you need to worry about with the table method is auto-increment IDs becoming visible out of order, which is always possible in a transactional database. But there are many solutions for this.
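
For reference, a minimal sketch of the bookmark-in-a-table loop (Postgres/psycopg2 assumed, and the events/bookmarks tables and column names are hypothetical; note this naive version still has the out-of-order-ID caveat above):

    import time
    import psycopg2  # assumed driver

    conn = psycopg2.connect("dbname=app")  # hypothetical connection string

    def handle(payload):
        print(payload)  # placeholder for real processing

    def poll_once(last_seen_id):
        # Read everything past the bookmark, process it, then persist the bookmark.
        with conn:
            with conn.cursor() as cur:
                cur.execute(
                    "SELECT id, payload FROM events WHERE id > %s ORDER BY id",
                    (last_seen_id,),
                )
                for row_id, payload in cur.fetchall():
                    handle(payload)
                    last_seen_id = row_id
                cur.execute(
                    "UPDATE bookmarks SET last_id = %s WHERE consumer = %s",
                    (last_seen_id, "reporting"),
                )
        return last_seen_id

    bookmark = 0  # in practice, load this from the bookmarks table at startup
    while True:
        bookmark = poll_once(bookmark)
        time.sleep(0.1)  # 10 polls per second, as discussed above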


I really like this solution and agree "bookmarks" are a lot better than an outbox, but it is hard to guard against races where the reader advances the bookmark before an active transaction commits an ID at an earlier position, i.e. when transactions commit in a different order than their IDs were allocated.

Perhaps this is what you mean by "out of order increment"... but what are the "many" solutions to this?

I struggled for a long time to find a good way of doing this in Microsoft SQL Server and am still not perfectly happy with the solution we found.


Do you have a few pointers to the solutions regarding out of order IDs? I'm thinking of keeping track of gaps (yet unseen IDs) in another column and retrieving them in the next poll.
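
Rough sketch of what I mean (Postgres/psycopg2 flavour, hypothetical table/column names; gaps left by rolled-back transactions never fill, so in practice they'd need a timeout):

    def handle(payload):
        print(payload)  # placeholder for real processing

    def poll(cur, bookmark, pending_gaps):
        # Fetch new rows past the bookmark plus any previously missing IDs (gaps).
        cur.execute(
            "SELECT id, payload FROM events "
            "WHERE id > %s OR id = ANY(%s::bigint[]) ORDER BY id",
            (bookmark, list(pending_gaps)),
        )
        rows = cur.fetchall()
        seen = {row_id for row_id, _ in rows}

        for row_id, payload in rows:
            handle(payload)

        if rows:
            new_max = max(max(seen), bookmark)
            # Any ID between the old and new bookmark that we did not see may
            # belong to an in-flight transaction; remember it and retry next poll.
            pending_gaps = (pending_gaps | set(range(bookmark + 1, new_max + 1))) - seen
            bookmark = new_max
        return bookmark, pending_gaps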


Not OP, but there is an approach here of using a dedicated loop worker to assign a post-commit ID sequence, i.e. using the outbox pattern once, simply to assign a post-commit ID.

https://github.com/vippsas/mssql-changefeed/blob/main/MOTIVA...

I wish DBs had this built in; it seems like a critical feature of a DB these days, and the commit log already has very similar sequence numbers internally...
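
A minimal sketch of what such a sequencer worker could look like (Python/Postgres flavour here; the linked repo targets MSSQL, and the events table and commit_seq column are hypothetical). Because only this one worker assigns commit_seq and it only ever sees committed rows, readers can bookmark against commit_seq instead of the insert ID without the race discussed above.

    import time
    import psycopg2  # assumed driver

    conn = psycopg2.connect("dbname=app")  # hypothetical connection string

    def assign_commit_order():
        with conn:  # one transaction per batch
            with conn.cursor() as cur:
                cur.execute("SELECT coalesce(max(commit_seq), 0) FROM events")
                next_seq = cur.fetchone()[0] + 1
                # Rows visible here have already committed; number them in one pass.
                cur.execute("SELECT id FROM events WHERE commit_seq IS NULL ORDER BY id")
                for (row_id,) in cur.fetchall():
                    cur.execute(
                        "UPDATE events SET commit_seq = %s WHERE id = %s",
                        (next_seq, row_id),
                    )
                    next_seq += 1

    while True:
        assign_commit_order()
        time.sleep(0.1)  # sequencing lag you can tolerate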


or just use a proper eventstore


1) Well, assuming the data is stored in SQL as the transactional store, how do you move it safely from SQL to the event store? You at least need the outbox pattern. It is not clear to me that the outbox pattern is less hacky than listening to a SQL change feed.

E.g., with Azure Cosmos DB you would not use the outbox pattern: you listen to the DB change feed, since that is provided and easily accessible.

2) In our case the schemas in SQL are mostly event-based already (we prefer insert over update... even if we do business logic/flexible querying on the data). So using an event store would mainly duplicate the data.

An event store is a database too. What exactly is it about a database that makes it a "proper event store"?

I honestly think the focus on duplicating data you already have in your DB into a separate event-store DB may be something of a fad that will pass in a while, similar to NoSQL. It's needed for volumes larger than what a SQL DB can handle, but if you don't need such large volumes, why introduce the extra component? Event sourcing architecture is great, but such thinking at the architecture level is really orthogonal to the storage and transport chosen.


I wrote a blog post on this. We were using event sourcing and went with the locking + batching method described. For other tables we wanted to read and move to a data warehouse, we used the txid check.

A bit hacky, but on AWS RDS we didn't want to deal with WAL intricacies.

https://mattjames.dev/auto-increment-ids-are-not-strictly-mo...

