Hacker News | osbre's comments

"Local LLMs" sounds expensive compared to a 20$ subscription. You'd have to pay for years of usage upfront by purchasing those GPUs.


Sounds like something Dokku's official K3S scheduler is meant to solve:

- https://dokku.com/docs/deployment/schedulers/k3s

- https://dokku.com/tutorials/other/deploying-to-k3s


Many website-blocking or "focus" extensions show pop-ups and do all sorts of things. I just wanted something simple: a set-and-forget kind of thing.

This extension blocks distracting homepages but still lets you through to pages you reach from search results or links.


That is why I use Laravel btw


> More concurrency is not always ideal

Is this due to increased memory usage? Would the same apply to Sidekiq if it were powered by Fibers?


Really depends on the job. But generally, yes, the same applies to Sidekiq. I think there's a queue for Ruby called Sneakers that uses Fibers?

If you're making API calls out to external systems, you can use all of the concurrency that you want because the outside systems are doing all of the work.

If you're making queries to your database, then depending on the efficiency of the query you could stress your database without any real benefit to the overall response time.

If you're doing memory-intensive work on the same system, then it can create problems for the server and garbage collection.

If you're doing CPU-intensive work, you risk starving other concurrent processes of the CPU (infinite loops, etc.).
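To make the I/O-bound case concrete, here's a minimal Ruby sketch (plain threads rather than Fibers, with `sleep` standing in for a real API call) of why outbound calls can overlap almost for free: the remote system does the waiting, so five 0.2s calls finish in roughly 0.2s of wall time, not 1s.

```ruby
require "benchmark"

# Simulate five slow external API calls; each "call" just sleeps,
# the way a real call would mostly wait on the network.
def fake_api_call(id)
  sleep 0.2 # stand-in for network latency
  "response-#{id}"
end

elapsed = Benchmark.realtime do
  threads = (1..5).map { |i| Thread.new { fake_api_call(i) } }
  @results = threads.map(&:value) # join each thread and collect its result
end

puts @results.inspect
puts "took ~#{elapsed.round(2)}s despite 5 x 0.2s of waiting"
```

The same shape applies to CPU-bound work, except there the threads contend for the CPU instead of overlapping, which is exactly the starvation risk above.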

Something like the BEAM is set up for this. Each process has its own heap that's immediately reclaimed when the process ends, without a global garbage collector. The BEAM scheduler actively intervenes to switch which process is executing after a certain amount of CPU time, meaning that an infinite loop or other intensive process wouldn't negatively impact anything else, only itself. It's one of the reasons it typically doesn't perform as well in a straight-line benchmark, too.

Even on the BEAM you still have to be cautious of stressing the DB, but really you have to worry about that no matter what type of system you're on.


But wouldn't using a connection pool solve this problem of "stressing out the database"? I assumed a single connection from the pool would be considered "occupied" until we hear back from the database.

Or are you saying that processing lots of requests/tasks in Rails while waiting for the database would quickly eat up all the CPU? That seems like a good thing: "resource utilization" means servers should do work whenever possible rather than sit idle. Although, now that I think about it, you'd only want maximum resource utilization if your database is on a separate server.
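The "occupied until we hear back" intuition is right, and a pool does bound how many queries hit the database at once. A toy sketch of that behavior (hypothetical class names, not Rails' actual pool, with a thread-safe `Queue` holding the connections):

```ruby
# A toy connection pool: checkout blocks when every connection is
# busy, so at most `size` queries are in flight at any moment.
class TinyPool
  def initialize(size)
    @conns = Queue.new
    size.times { |i| @conns << "conn-#{i}" }
  end

  def with
    conn = @conns.pop # blocks if every connection is checked out
    yield conn
  ensure
    @conns << conn if conn # connection becomes free again
  end
end

pool = TinyPool.new(2)
peak = 0
active = 0
lock = Mutex.new

threads = (1..6).map do
  Thread.new do
    pool.with do
      lock.synchronize { active += 1; peak = [peak, active].max }
      sleep 0.05 # stand-in for waiting on the DB
      lock.synchronize { active -= 1 }
    end
  end
end
threads.each(&:join)

puts "peak in-flight queries: #{peak}" # capped at the pool size, 2
```

So the pool caps concurrent queries, but it doesn't make the queries themselves cheaper: two expensive queries at a time can still hurt the database.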


With the database it depends on specific queries. Ideally, you can hammer it and it will be fine.

If you have inefficient queries, N+1 problems, competing locks, full table scans, temp tables being generated, etc., then more concurrency will amplify the problem. That's all I meant.
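To illustrate the N+1 case with a toy model (a counter standing in for queries sent to the database; the function names are illustrative, not a real ORM API): the naive loop issues one query per row, while a batched `IN (...)` query stays at two regardless of row count, and every concurrent job multiplies whichever count you have.

```ruby
# Count the queries a toy "ORM" would send in each style.
$queries = 0

def fetch_posts
  $queries += 1
  [{ id: 1 }, { id: 2 }, { id: 3 }]
end

def fetch_comments_for(post_id)
  $queries += 1 # one query per post: the N+1 pattern
  ["comment for post #{post_id}"]
end

def fetch_comments_for_all(post_ids)
  $queries += 1 # one batched IN (...) query
  post_ids.to_h { |id| [id, ["comment for post #{id}"]] }
end

# N+1 version: one query for the posts, then one per post.
posts = fetch_posts
posts.each { |p| fetch_comments_for(p[:id]) }
n_plus_one = $queries # 1 + N queries

# Batched version: two queries no matter how many posts there are.
$queries = 0
posts = fetch_posts
fetch_comments_for_all(posts.map { |p| p[:id] })
batched = $queries

puts "N+1: #{n_plus_one} queries, batched: #{batched} queries"
```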


There are multiple cities/towns with the same name in my country, but unfortunately, it's impossible to specify which one is mine at meet.hn. What a shame.


Sorry about this! You should be able to now.


Thank you! We'll add it on our roadmap


Thank you! We do have one - `runAndWait`. I will shortly update the docs and I agree that using SSE would be more efficient than polling. Will add that next!


Long polling would be cool!


I have a copy of it from July 11th, since I used to contribute.

https://github.com/osbre/wappalyzer (with the original commit history)

Weird and sad to see this extension go private; it was receiving a lot of contributions on GitHub.

