Hacker News | nenaoki's comments

The pin would just be for coordination, not encryption.


Ah ok. How is the encryption key, if there is one, established then?


I think they just use the encryption and key exchange that WebRTC has baked in: https://datatracker.ietf.org/doc/html/rfc5764


tl;dr: One peer generates a self-signed certificate and sends the fingerprint of that over the signalling channel; the other connects to it as a "client".

The resulting DTLS keying material is subsequently used for SRTP encryption (for media) and SCTP over DTLS (for the data channel, which is presumably what's being used here).
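The fingerprint-binding idea can be sketched in a few lines. This is not WebRTC's actual implementation (the real thing happens inside the DTLS stack, and the fingerprint travels in SDP as an `a=fingerprint:sha-256 ...` attribute); it's just a toy illustration of how the signalled hash authenticates the self-signed certificate, with stand-in bytes playing the role of a DER-encoded cert:

```python
import hashlib

def sdp_fingerprint(cert_der: bytes) -> str:
    """Hash the DER-encoded certificate and format it the way SDP does:
    colon-separated uppercase hex pairs, e.g. 'AB:CD:...'."""
    digest = hashlib.sha256(cert_der).hexdigest().upper()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

def verify_peer_cert(cert_der_from_dtls: bytes, signalled_fp: str) -> bool:
    """The 'client' side checks that the certificate presented during the
    DTLS handshake matches the fingerprint received over signalling."""
    return sdp_fingerprint(cert_der_from_dtls) == signalled_fp

# Hypothetical stand-in for a self-signed certificate in DER form.
fake_cert = b"not-a-real-certificate"
fp = sdp_fingerprint(fake_cert)
assert verify_peer_cert(fake_cert, fp)
assert not verify_peer_cert(b"attacker-cert", fp)
```

The security of the whole scheme rests on the signalling channel: whoever controls it can swap the fingerprint and sit in the middle, which is why the PIN-as-coordination question above matters.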


You're right, thank you for answering!


it's sad to compare.

apart from that: the word "mark" comes from a root for "boundary" or "border", and really it doesn't need to be about that; we're all in this together.


your shrewd idea might make a fine layer back up the Tower of Babel


The Wasm Constant Time proposal was just moved to inactive 4 days ago[0].

From what I can tell the bulk of the work for it was done in 2018[1], but it needs updating to account for SIMD, plus the legwork of moving it along as a proper spec extension.

Until someone picks up this valuable work and lands this much-needed feature in Wasm, we're extremely vulnerable to timing attacks in all Wasm crypto.

[0] https://github.com/WebAssembly/proposals/blob/9fc7a85e/inact...

[1] https://github.com/PLSysSec/ct-wasm
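For context, the classic pattern the proposal wants to guarantee looks like this (shown here in Python only as an illustration; the ct-wasm point is precisely that source-level tricks like this offer no guarantee once a compiler or JIT is free to reintroduce branches):

```python
import hmac

def leaky_compare(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: runtime depends on where the first
    # mismatching byte is, which a timing attack can exploit.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_compare(a: bytes, b: bytes) -> bool:
    # Accumulate differences with XOR/OR so every byte is always
    # examined regardless of where mismatches occur.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

# In real Python code you'd reach for the stdlib primitive instead:
assert ct_compare(b"secret", b"secret") == hmac.compare_digest(b"secret", b"secret")
```

A constant-time Wasm extension would let compiled crypto code express the `ct_compare` intent in a way the engine is obligated to preserve.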


Python devs needing to be sheltered from mature C++ themes?

Checks out I guess.


Well, yeah, or else they'll release software with memory leaks, which could become a dependency of some big project, bring down some important things, and have real effects on other people.

Or, yeah, because if my child can access streaming without a child lock, they may not recover from what they see.

Looks like we should protect each other as much as we can, using UI!


Just based on the stage of the game I'd say it's not likely, but the possibilities are there:

https://news.ycombinator.com/item?id=43121383

It would have to be from unsupervised tool usage or accepting backdoored code, not traditional remote execution from merely inferencing the weights.


My understanding is that models are currently undertrained and not very "dense", so Q4 doesn't hurt very much now, but it may in future, denser models.


That may well be true. I know that earlier models like Llama 1 65B could tolerate more aggressive quantization, which supports that idea.
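The intuition is easy to see with a toy symmetric 4-bit quantizer (a simplification; real Q4 formats such as llama.cpp's Q4 variants use small per-block scales and sometimes offsets). With only 16 representable levels, every weight gets snapped to within half a quantization step, and the denser the useful information in the weights, the less slack there is to absorb that error:

```python
def quantize_q4(xs):
    """Symmetric 4-bit quantization: map floats to integers in [-8, 7]
    using a single scale derived from the largest magnitude."""
    scale = max(abs(x) for x in xs) / 7 or 1.0
    q = [max(-8, min(7, round(x / scale))) for x in xs]
    return q, scale

def dequantize_q4(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.98, -0.07, 0.44]
q, s = quantize_q4(weights)
restored = dequantize_q4(q, s)
# Each restored weight differs from the original by at most half a step.
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(weights, restored))
```

An undertrained model can tolerate that rounding because many weights aren't carrying precise information yet; a model trained closer to capacity presumably has less room for it.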


Pretty absurd conflation you've got there.

The package manager just offers a common interface for interop; you can still build without dependencies.


An LLM would have to be two-faced in a sense to surreptitiously mess with code alone and be normal otherwise.

It's interesting and a headline-worthy result, but I think the research they were doing when they accidentally found that line of questioning is slightly more interesting: does an LLM trained on a behavior have self-awareness of that behavior? https://x.com/betleyjan/status/1894481241136607412


(NB "Two-faced LLMs" are apparently trivial: https://news.ycombinator.com/item?id=43121383)


These are a bit mythical; finding one for sale is no small feat.

I guess adding memory to some cards is a matter of completely reworking the PCB, not just swapping DRAM chips. From what I can find, both chip swaps and full PCB reworks have been done; the results just aren't easy to buy.

Software support is of course another consideration.

