
I also second that rendezvous hashing suggestion. The article mentions that it has O(n) time, where n is the number of nodes. I made a library[1] which makes rendezvous hashing more practical for a larger number of nodes (or weight shares), bringing it to O(1) amortized running time with a bit of a tradeoff: distributed elements are pre-aggregated into clusters (slots) before being passed through HRW.

[1]: https://pkg.go.dev/github.com/SenseUnit/ahrw
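
For a rough feel of the approach, here is a toy Go sketch of the idea (not the actual ahrw API; the names, the FNV hashes, and the 1024-slot count are all just illustrative): keys are hashed into a fixed number of slots, the plain O(n) HRW scan runs at most once per slot, and the slot-to-node assignment is cached.

  package main

  import (
      "fmt"
      "hash/fnv"
  )

  type slottedHRW struct {
      nodes []string
      slots []string // lazily filled slot -> node cache
  }

  func newSlottedHRW(nodes []string, slotCount int) *slottedHRW {
      return &slottedHRW{nodes: nodes, slots: make([]string, slotCount)}
  }

  // score is the classic HRW weight of a node for a given slot key.
  func score(node, slotKey string) uint64 {
      h := fnv.New64a()
      h.Write([]byte(node))
      h.Write([]byte{0})
      h.Write([]byte(slotKey))
      return h.Sum64()
  }

  // Node maps a key to its slot, then returns the cached HRW winner for that slot.
  func (s *slottedHRW) Node(key string) string {
      h := fnv.New32a()
      h.Write([]byte(key))
      slot := int(h.Sum32() % uint32(len(s.slots)))
      if s.slots[slot] == "" {
          // O(n) highest-random-weight scan, done at most once per slot.
          slotKey := fmt.Sprintf("slot-%d", slot)
          best, bestScore := "", uint64(0)
          for _, n := range s.nodes {
              if sc := score(n, slotKey); best == "" || sc > bestScore {
                  best, bestScore = n, sc
              }
          }
          s.slots[slot] = best
      }
      return s.slots[slot]
  }

  func main() {
      r := newSlottedHRW([]string{"node-a", "node-b", "node-c"}, 1024)
      fmt.Println(r.Node("user:42"), r.Node("user:1337"))
  }

The tradeoff, as far as I understand it, is that remapping now happens at slot granularity rather than per key, so the slot count has to be chosen with the expected spread of load in mind.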


Does it really matter? Here, n is a very small number, practically a constant. I'd assume iterating over the n nodes is negligible compared to the other parts of a request to a node.


Yes, different applications have different trade-offs.


chrome --headless --disable-gpu --print-to-pdf https://example.com


same: google-chrome --headless --disable-gpu --no-pdf-header-footer --hide-scrollbars --print-to-pdf-margins="0,0,0,0" --print-to-pdf --window-size=1280,720 https://example.com

Ended up using headless Chrome specifically to make sure JavaScript things rendered properly.


Used this, sigh of relief, thank you


Can Chromium do this?

Edit: it appears so: https://news.ycombinator.com/item?id=15131840


Yes, routinely works for me.


Can Firefox do this?

with an elaborate script that relies on xdotool


Yes, kind of...

/path/to/firefox --window-size 1700 --headless -screenshot myfile.png file://myfile.html

Easy, right?

Used this for many years... but beware:

- caveat 1: this is (or was) a more or less undocumented feature, and a few years ago it just disappeared, only to come back in a later release.

- caveat 2: even though you can convert local files, it does require internet access, as any references to icons, style sheets, fonts, and tracker pixels cause Firefox to attempt to retrieve them without any (sensible) timeout. So running this on a server without internet access will make the process hang forever.


Last time I explored this, Firefox rendered thin lines in subtly bordered tables as thick lines, so I had to use Chromium. But back then Chrome did worse at pagination than Firefox.

So I used Firefox for multi-page documents and Chromium for single-page invoices.

I spent a lot of time with different versions of both browsers, and the numerous quirks made for a very unpleasant experience.

Eventually I settled on Chromium (Ungoogled), which I use nowadays for invoices.


Why, Firefox has a headless mode. But it can't just print a document via a simple CLI command; you have to go for Selenium (or maybe Playwright, I didn't try it in that capacity). Foxdriver would work, but its development has ceased.


I've made a couple of modules which allow canonicalizing maps and slices of comparable elements, making their canonical handles also comparable:

* https://github.com/Snawoot/uniqueslice

* https://github.com/Snawoot/uniquemap
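
To give a rough idea of what canonicalization buys you, here's a toy sketch of the concept (not the actual API of either module; Handle, Make, and the string-based intern key are all made up for illustration): equal slices get interned into one canonical copy, and the small handle wrapping it can be compared with == or used as a map key.

  package main

  import "fmt"

  // Handle wraps a canonical copy of a slice; handles themselves are comparable.
  type Handle[T comparable] struct{ elems *[]T }

  // interned holds one canonical slice per distinct content key (not thread-safe).
  var interned = map[string]any{}

  // Make returns the canonical handle for s; equal slices yield equal handles.
  func Make[T comparable](s []T) Handle[T] {
      key := fmt.Sprintf("%T:%v", s, s) // crude content key; collisions possible for exotic values, fine for a sketch
      if v, ok := interned[key]; ok {
          return v.(Handle[T])
      }
      c := make([]T, len(s))
      copy(c, s)
      h := Handle[T]{elems: &c}
      interned[key] = h
      return h
  }

  // Value returns the canonical slice behind the handle (callers must not mutate it).
  func (h Handle[T]) Value() []T { return *h.elems }

  func main() {
      a := Make([]int{1, 2, 3})
      b := Make([]int{1, 2, 3})
      fmt.Println(a == b)       // true: same canonical copy, so handles compare equal
      fmt.Println(a.Value()[2]) // 3
  }

In practice you'd want a collision-proof key and proper synchronization, but the comparable-handle property is the point here.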


I think I found a solution for arenas in Go: https://pkg.go.dev/github.com/Snawoot/freelist

Has some shortcomings, but I think it should work.


How do docker contexts help with transferring an image between hosts?


I assume OP meant something like this: building the image on the remote host directly, using a docker context (which is different from a build context):

  docker context create my-awesome-remote-context --docker "host=ssh://user@remote-host"

  docker --context my-awesome-remote-context build . -t my-image:latest

This way you end up with `my-image:latest` on the remote host too. It has the advantage of not transferring the entire image, only the build context; the actual image is built on the remote host.


This is exactly what I do: make a context pointing to the remote host, then use docker compose build / up to launch it on the remote system.


> What ever happened to providing a good service?

I'm getting the impression it's just not profitable enough. For many years I've had the feeling that a business is considered sound only if it is superprofitable (not exactly the right term, but still), in order to cover all the losses.

Probably it's because of the marketing effort required just to get noticed in a competitive market. Some companies spend more on marketing than on R&D, production, and operations combined. Maybe we got ourselves into a situation where everyone is competing for low-hanging fruit, or trying to make customers believe this is the service they need, while none of it really overlaps with what society actually needs.


Side note: redirecting .onion domains to a Tor proxy is the example used in the dumbproxy docs to illustrate proxy routing with a JS script: https://github.com/SenseUnit/dumbproxy?tab=readme-ov-file#up...


Cuckoo filters are a more efficient alternative. The displacement algorithm is also very notable: it shows how random swaps can be used to place data in cells close to optimally.
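
If it helps to picture the displacement step, here's a toy sketch in Go (not any particular library's code; the 4-slot buckets, 1024-bucket table, and 500-kick limit are made-up parameters): each fingerprint has two candidate buckets linked by an XOR with the fingerprint's hash, and when both are full a random resident gets kicked into its alternate bucket.

  package main

  import (
      "fmt"
      "hash/fnv"
      "math/rand"
  )

  const (
      numBuckets = 1 << 10 // must be a power of two for the masks below
      bucketSize = 4
      maxKicks   = 500
  )

  type filter struct{ buckets [numBuckets][bucketSize]uint16 }

  func fingerprint(key string) uint16 {
      h := fnv.New64a()
      h.Write([]byte(key))
      fp := uint16(h.Sum64() & 0xffff)
      if fp == 0 {
          fp = 1 // reserve 0 as the "empty slot" marker
      }
      return fp
  }

  func index1(key string) uint32 {
      h := fnv.New32a()
      h.Write([]byte(key))
      return h.Sum32() & (numBuckets - 1)
  }

  // altIndex is an involution: applying it twice returns the original bucket,
  // so either bucket can recover its partner knowing only the fingerprint.
  func altIndex(i uint32, fp uint16) uint32 {
      h := fnv.New32a()
      h.Write([]byte{byte(fp), byte(fp >> 8)})
      return (i ^ h.Sum32()) & (numBuckets - 1)
  }

  // tryPut stores fp in the first empty slot of bucket i, if any.
  func (f *filter) tryPut(i uint32, fp uint16) bool {
      for s := range f.buckets[i] {
          if f.buckets[i][s] == 0 {
              f.buckets[i][s] = fp
              return true
          }
      }
      return false
  }

  func (f *filter) Insert(key string) bool {
      fp := fingerprint(key)
      i1 := index1(key)
      i2 := altIndex(i1, fp)
      if f.tryPut(i1, fp) || f.tryPut(i2, fp) {
          return true
      }
      // Both buckets full: displace a random victim into its alternate bucket,
      // repeating until something fits or we give up.
      i := i1
      for k := 0; k < maxKicks; k++ {
          s := rand.Intn(bucketSize)
          fp, f.buckets[i][s] = f.buckets[i][s], fp
          i = altIndex(i, fp)
          if f.tryPut(i, fp) {
              return true
          }
      }
      return false // filter considered full
  }

  func main() {
      var f filter
      fmt.Println(f.Insert("hello"), f.Insert("world"))
  }

The trick that makes the kicks cheap is that altIndex is an XOR involution, so a bucket can compute its partner from the stored fingerprint alone, without keeping the original key around.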


I made an attempt to improve memory allocation performance with the use of arenas in Go, and I chose a freelist data structure[1].

It barely uses unsafe, except for one line to cast pointer types. I measured a practical performance boost with the "container/list" implementation hooked up to my allocator. All in all, it performs 2-5 times faster, or up to 10 times faster if we can get rid[2] of `any` and the allocations its use implies.

In the end, heap allocations don't have to be that bad at all if you approach them from another angle.

[1]: https://github.com/Snawoot/freelist

[2]: https://github.com/Snawoot/list
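
For anyone curious what the general technique looks like, here is a bare-bones Go sketch of the freelist idea (not the API of the package above; node, freeList, and the 256-node chunk size are just illustrative): nodes are allocated in chunks and recycled through a free list, so steady-state churn costs no heap allocations at all.

  package main

  import "fmt"

  type node struct {
      value int
      next  *node // reused as the free-list link while the node is unused
  }

  type freeList struct {
      free      *node
      chunkSize int
  }

  // get hands out a recycled node, allocating a fresh chunk only when empty.
  func (f *freeList) get() *node {
      if f.free == nil {
          chunk := make([]node, f.chunkSize) // one allocation amortized over chunkSize nodes
          for i := range chunk {
              chunk[i].next = f.free
              f.free = &chunk[i]
          }
      }
      n := f.free
      f.free = n.next
      *n = node{} // hand back a zeroed node
      return n
  }

  // put returns a node to the pool instead of leaving it for the GC.
  func (f *freeList) put(n *node) {
      n.next = f.free
      f.free = n
  }

  func main() {
      fl := &freeList{chunkSize: 256}
      a := fl.get()
      a.value = 42
      fmt.Println(a.value)
      fl.put(a)
      b := fl.get() // reuses the node we just returned
      fmt.Println(a == b)
  }

The obvious caveat of this sketch is that a whole chunk stays alive as long as any one of its nodes is referenced, which is part of the usual freelist/arena tradeoff.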


In the same vein, a crude but truly vendor-independent recipe: https://gist.github.com/Snawoot/b7065addf014d90f858dbd185d51...

