ThereIsNoWorry's comments | Hacker News

That's a lot of unproven assumptions based on the fact that LLMs are just correlation printers.


If you run out of (solvable!) problems in your given logic space, just start branching out your space, until you find yourself in spheres so esoteric that not even your best math co-researcher knows what's happening anymore, and vice versa.


Well, that is what mathematicians do, which is why I said "interesting" problems. I mean, I can think of several problems, like "classify projective modules over rings of global dimension 4 that have no zero divisors".

Even popular problems in Langlands like "explicitly find a trace formula for theta groups" will only appeal to the fifteen people in the world who can understand what I'm even talking about.


Does this coincide with the interest rate hikes and the layoffs and conservative hiring practices they induced? No, really? What a surprise!


You mean, the high-interest landscape made corpos and investors alike cry out in loud panic while, coincidentally, people figured out they could scale up deep learning, and thus a new Jesus Christ was born for scammers, giving them a reason to scam stupid investors with the argument that we only need 100000x more compute and then we can replace all expensive labour with one tiny box in the cloud?

Nah, surely Nvidia's market cap as the main shovel-seller in the 2022 - 2026(?) gold-rush being bigger than the whole French economy is well-reasoned and has a fundamentally solid basis.


It couldn’t have been a better-designed grift. At least when you mine bitcoin you get something you can sell. I’d be interested to see what profit, if any, even a large corporation has seen from burning compute on LLMs. Notice I’m explicitly leaving out use cases like ads ranking, which almost certainly do not use LLMs even if they do run on GPUs.


> It's real and it's only a matter of time before it's better than humans in most circumstances ...

I heard that argument for many years for many deep learning applications of significant value. Any day now, right?


Almost a decade ago I was a hyped-up HS graduate, fully spoon-fed the AI hype bubble (after 2012, the first "deep" learning breakthroughs for image classification started hyping the game up). I studied at a top-5 university for CS and specialised in deep learning. Three years ago I finished, rejected a (some would call "prestigious") PhD offer, and was thoroughly let down by how "stupid" AI is.

For the last 2-ish years, companies have found a way to throw supercomputers at a preprocessed internet-dictionary dataset, and the media gulped it up like nothing, because on the surface it looks shiny and fancy; but when you pry it open, it's utterly stupid and flawed, with very limited uses for actual products.

Anything that requires any amount of precision, accountability, reproducibility?

Yeah, good luck trusting a system that inherently just learns statistics from data and will thus fundamentally always have an unacceptable margin of error. Imagine using a function that gives you several different answers for the same input, in analytical applications that need a single correct answer. I don't know anyone in SWE who uses AI as more than a glorified autocomplete, which needs to be proof-read and corrected more often than not, to the point of often being counterproductive.
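
A minimal sketch in Python of why that happens (no particular model or vendor API assumed): sampled decoding draws from a temperature-scaled softmax, so the very same input yields different outputs across calls.

    # Toy sampler: same logits in, different token out on every run.
    import math
    import random

    def sample_token(logits, temperature=0.8):
        """Sample one token index from a temperature-scaled softmax."""
        scaled = [l / temperature for l in logits]
        m = max(scaled)                                # subtract max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        return random.choices(range(len(logits)), weights=weights, k=1)[0]

    logits = [2.0, 1.8, 0.5, 0.1]                      # one fixed "input"
    print([sample_token(logits) for _ in range(10)])   # e.g. [0, 1, 0, 0, 2, ...] - varies per run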

tl;dr: it is not in the least surprising that FSD doesn't work, and it will not work with the current underlying basis (deep learning). The irony is that the people with the power to allocate billions of dollars have no technical understanding and just trust the obviously fake marketing slides. Right, Devin?


Or perhaps statistics is reasoning generalised? E. T. Jaynes's classic on this makes the case; I'd encourage reading the first chapter - https://www.amazon.com/Probability-Theory-Science-T-Jaynes/d....


What are you using as the definition of 'doesn't work'? Are you taking the author's claims?

Does something have to be perfect to work? If that's the case we shouldn't have bridges because sometimes they fail?


You wouldn't believe how often I have to fight for UUIDs instead of sequences. UUIDs are great. For all practical purposes there is zero possibility of collision; you can use one as an ID in a global system; it just makes so much fucking sense.

But the default is still a natural number sequence. As if it matters for 99% of all cases that stuff is "ordered" and "easily identifiable" by a natural number.

But then you want to merge datasets or reuse the data somewhere else, and suddenly you have a huge problem: the ID isn't unique anymore and you need more information to identify a record.

Guess what: a UUID does that job for you. Across multiple databases and distributed systems, a UUID is still unique with 99.9999% probability.

The one collision every 10 years can be handled manually.
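
For the sceptics, a rough birthday-bound sketch in Python, assuming properly random version-4 UUIDs (122 random bits):

    # Birthday-bound approximation for random UUIDv4 collisions:
    # P(collision) ~= 1 - exp(-n^2 / (2 * 2**122)) for n generated IDs.
    import math

    def collision_probability(n_ids, bits=122):
        space = 2.0 ** bits
        return 1.0 - math.exp(-(n_ids ** 2) / (2.0 * space))

    for n in (10**9, 10**12, 10**15):
        print(f"{n:.0e} IDs -> P(collision) ~ {collision_probability(n):.3e}")
    # Even a quadrillion IDs gives a collision probability on the order of 1e-7.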


As far as I know the only practical downside of UUIDs in the modern age - unless you like using keys for ordering by creation time, or you have such enormous volumes that storage is a consideration - is that they are cumbersome for humans to read and compare, e.g. scanning log files.

And in any case, the German tank problem means that you often can't use incrementing numbers as surrogate keys if they are ever exposed to the public, e.g. in URLs.
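
For reference, the estimator behind the German tank problem as a small Python sketch: an observer who sees a few sequential IDs in URLs can estimate how many records exist in total (the scraped IDs below are hypothetical).

    # German tank estimator: N ~= m * (1 + 1/k) - 1,
    # where m is the largest observed ID and k the number of samples.
    def estimate_total(observed_ids):
        k = len(observed_ids)
        m = max(observed_ids)
        return m * (1 + 1 / k) - 1

    # Hypothetical order IDs scraped from public URLs.
    print(estimate_total([112, 4031, 9788, 15003, 18240]))  # ~21887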


> they are cumbersome for humans to read and compare, e.g. scanning log files.

I think the positive still outweighs the negative here: you can search your whole company's logs for a UUID and you won't get false positives like you would with serial integers.
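
A toy Python illustration of the false-positive problem (the log lines are made up):

    import uuid

    user_id = "42"
    request_id = str(uuid.uuid4())

    logs = [
        "payment of $42 processed",         # substring match, wrong record
        "user 142 logged in",               # substring match, wrong record
        f"user {user_id} logged in",        # the hit we actually wanted
        f"request {request_id} completed",  # unambiguous hit for the UUID
    ]

    print([line for line in logs if user_id in line])     # three matches, two of them wrong
    print([line for line in logs if request_id in line])  # exactly one match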


I've been using UUIDv7 a lot lately and I've been quite happy with the results.
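
For anyone wondering what v7 buys you over v4: the leading 48 bits are a millisecond Unix timestamp, so the IDs sort by creation time. A hand-rolled Python sketch along the lines of RFC 9562, for when your Python version or library doesn't ship a v7 generator; a maintained implementation is preferable in real code, and this sketch doesn't guarantee monotonicity within a single millisecond.

    import os
    import time
    import uuid

    def uuidv7():
        """Rough UUIDv7: 48-bit ms timestamp, then version, variant and random bits."""
        ts_ms = time.time_ns() // 1_000_000
        value = (ts_ms & (2**48 - 1)) << 80 | int.from_bytes(os.urandom(10), "big")
        value = (value & ~(0xF << 76)) | (0x7 << 76)  # version = 7
        value = (value & ~(0x3 << 62)) | (0x2 << 62)  # RFC 9562 variant
        return uuid.UUID(int=value)

    a = uuidv7()
    time.sleep(0.002)            # ensure a later timestamp
    b = uuidv7()
    print(a, b, a < b)           # True: v7 UUIDs order by creation time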


Imagine gatekeeping what and how you can ask questions because of some fear of redundancy. Who the fuck cares if the same question comes up each month; that means it's an important, fundamental topic and can be handled differently than just shooting down the poster. The same goes for answers. It's a horrible website full of elitists. I never really used it often, but I did roll my eyes every time I saw a shut-down thread that coincided with the question I wanted to find an answer to.


I just quit a company that was unable to negotiate.

I had an offer from a Silicon Valley company that would have paid a lot better, but it wasn't full remote (or even hybrid) and I would have had to relocate. So I went with the less well-paid no-name company with full remote. I tried to negotiate at the beginning with the very real argument of a better-paid offer, but they declined. Well, whatever, I guess; I tried, and full remote was more important to me (still is).

1.5 years later, I realized the environment was a bit too dysfunctional and toxic for my liking and that I had learned all I wanted there. I fished up a new job quite easily a couple of weeks ago (several offers), with a bit better pay and a nicer project where I think I can make a bigger impact, in a nicer environment.

Before I quit I talked with some people and tried to see if they might offer me a better salary. They wanted to keep me and were not at all happy about me quitting, but they were absolutely unable to offer me even just 10k more. What was a bit shocking to me is that HR tried to blame the workers and pointed to a "broken market" for SWEs. I was so taken aback by this delusion that I had no regrets quitting afterwards; I will care even less about "doing it for the team", and I will certainly consider hopping more often again.

So, whether you can negotiate depends highly on the company. However, if you cannot get even the smallest of gestures in your direction, that's a pretty big red flag. I don't have regrets. With just 2 years of experience, I'm happy I learned these lessons early on.

My ultimate goal is to start my own company, and I'm actively working on that on the side, so hopefully this farce of pretending to care about making profits for other people will end soon.


That's one of those misguided, deep-sounding statements that are rooted in a fundamental lack of knowledge. What you, or this guy at the CS department, said is complete bullshit, as evidenced by simple statistics on memory safety across languages. There is a reason many big companies are adopting Rust, and it's not just because "they feel like it" or because they lack coders who "can program in C". E.g. take a look at: https://security.googleblog.com/2022/12/memory-safe-language...


Rust works because the Rust type system and development environment are tutors. They train the developer up, like magic nannies, until they learn how to correctly manage memory.

So the power of Rust comes from developers cooperating with the compiler and development environment, and being willing to work through their severe instruction.

Developers who are unwilling to learn how to manage memory generally walk away from Rust-based dev teams.


OK, there are still Java, Kotlin, Go and other memory-safe languages, in the sense that they are statistically far less likely to have memory-related bugs than C, C++ or similar low-level languages without GC or reference counting. So the original arguments - that "good developers" don't write memory-related bugs, and that "Java is as unsafe as C after a certain size" - are utter bullshit.

