happa's comments (Hacker News)

This may just be bad recollection on my part, but hasn't Google reported that its search business is currently the most profitable it has ever been?


Fixing the seed wouldn't necessarily make LLMs deterministic. LLMs perform many computations in parallel, and the order in which those computations are combined is often nondeterministic, which can lead to different final results.
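The root cause is that floating-point addition is not associative, so a parallel reduction that combines partial sums in a different order can produce a different answer even with identical inputs. A minimal sketch (the values are illustrative, not from any real model):

```python
# Floating-point addition is not associative: summing the same numbers
# in a different order can give a different result, which is why
# nondeterministic parallel reduction order changes LLM outputs.
vals = [1e16, 1.0, -1e16, 1.0]

# Sequential left-to-right reduction: the 1.0 is lost below the
# precision of 1e16, so it never contributes.
left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]   # 1.0

# A different pairing, as a parallel tree reduction might do:
pairwise = (vals[0] + vals[2]) + (vals[1] + vals[3])        # 2.0

print(left_to_right, pairwise)  # 1.0 2.0 — same inputs, different sums
```

With a fixed seed the sampled token ids would still diverge once any intermediate logit differs like this.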


Yep. And to answer the question about randomness: it's absolutely vital to have a good source of noise to obscure the underlying pattern and prevent the secret information from leaking, but the mathematical part that transforms that noise into the encrypted output has to be precise. That's the distinction being made here about probability.
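A toy illustration of that split is the one-time pad: the key must be unpredictable noise, but the operation applied to it (XOR) is exact and perfectly reversible. This is a sketch for intuition only, not production cryptography:

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # The "precise math" half: XOR is exact and self-inverse.
    return bytes(x ^ y for x, y in zip(a, b))

msg = b"attack at dawn"
# The "good noise" half: the key must come from a cryptographically
# strong random source, or the pattern of the message leaks.
key = secrets.token_bytes(len(msg))

ciphertext = xor_bytes(msg, key)
recovered = xor_bytes(ciphertext, key)
assert recovered == msg  # decryption recovers the message exactly
```

Real ciphers are far more involved, but they keep the same shape: randomness (keys, nonces) supplies the unpredictability, while the transformation itself is fully deterministic.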

Disclaimer: not a crypto expert, just someone who likes reading about it. Check actual sources for better insight. It's very interesting technology, and the much smarter people working in this field deserve a lot of praise.


Who caved?



It's Kinugawa, and Akihabara is written as 秋葉原.


Yeah, I mixed up my input and chose simplified Chinese.


For human-generated logic puzzles that you can solve in your browser, I can recommend the following site:

https://puzsq.logicpuzzle.app


Quoting Judge Alsup from his recent ruling in Bartz v. Anthropic.

> Instead, Authors contend generically that training LLMs will result in an explosion of works competing with their works — such as by creating alternative summaries of factual events, alternative examples of compelling writing about fictional events, and so on. This order assumes that is so (Opp. 22–23 (citing, e.g., Opp. Exh. 38)). But Authors’ complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works. This is not the kind of competitive or creative displacement that concerns the Copyright Act. The Act seeks to advance original works of authorship, not to protect authors against competition.


That's unrelated to the reasoning that I provided.


Another great source is https://imabi.org/


LLMs don't really need more training data than they already have. They just need to start using it more efficiently.


Exactly. Smart humans do better with far less training data.


propakistani.pk doesn't sound like an impartial source.


Farmers won't stop getting forecasts; they'll simply get them from someone other than NOAA.


At what price? Privatization always turns into either a monopoly or a cartel, which means price inflation. Those who believe privatization will also let them pay lower taxes either don't know how it works or are part of the plan.

