
Personally, I use LLMs to write code that I would have never bothered writing in the first place. For example, I hate web front-end development. I'm not a web dev, but sometimes it's cool to show visual demos or websites. Without LLMs, I wouldn't have bothered creating those, because I wouldn't have had the time anyway, so in that case, it's a net positive.

I don't use LLMs for my main pieces of work exactly due to the issues described by the author of the blogpost.


Not true. VAT does way less economic damage. Tariffs introduce massive distortions in the market.


As an anecdotal data point, I’m happier than in my early and mid 20s, and I have hope for my 30s.


Would you be willing to share what is causing your unhappiness, and what has caused the changes you mention and optimism for the future?


Edited


This is not what hallucination means in the pre-LLM machine learning literature.


It’s used incorrectly. Hallucination has (or used to have) a very specific meaning in machine learning. All hallucinations are errors but not all errors are hallucinations.


What do you mean by “true” RL?


True RL is not limited by being tethered to human-annotated data, and it is able to create novel approaches to solve problems. True RL requires a very clear objective function (such as the rules of Go, or Starcraft, or Taboo!) that the model can evaluate itself against.

Andrej Karpathy talks about the difference between RLHF and "true" RL here:

https://www.youtube.com/watch?v=c3b-JASoPi0&t=1618s

> The other thing is that we're doing reinforcement learning from human feedback (RLHF), but that's like a super weak form of reinforcement learning. I think... what is the equivalent in AlphaGo for RLHF? What is the reward model? What I call it is a "vibe check". Imagine if you wanted to train AlphaGo with RLHF, it would be giving two people two boards and asking: "Which one do you prefer?" -- and then you would take those labels and you would train the model and then you would RL against that. What are the issues with that? It's like, number one -- that's just vibes of the board. That's what you're training against. Number two, if it's a reward model that's a neural net, then it's very easy to overfit to that reward model for the model you're optimizing over, and it's going to find all these spurious ways of hacking that massive model, which is the problem.

> AlphaGo gets around these problems because they have a very clear objective function, and you can RL against it.

> So RLHF is nowhere near [true] RL -- it's silly. And the other thing is that imitation is super-silly. RLHF is a nice improvement, but it's still silly, and I think people need to look for better ways of training these models so that it's in the loop with itself and its own psychology, and I think there will probably be unlocks in that direction.

In contrast, something like true RL would look like the Multi-Agent Hide-And-Seek training loop: https://www.youtube.com/watch?v=kopoLzvh5jY
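To make the contrast concrete, here's a toy sketch (not from the video; the "reward model" and its weights are made up for illustration): a programmatic objective you can RL against directly, versus an RLHF-style learned "vibe check" that a policy can over-optimize.

    def true_reward(board):
        # Clear objective: in Go this would be "did you win under the rules?".
        # Stand-in here: only the exact winning state gets credit.
        return 1.0 if board == "WIN" else 0.0

    # Hypothetical stand-in for a reward model trained on human preference
    # labels ("which board do you prefer?") -- it scores vibes, not outcomes.
    preference_weights = {"W": 0.5, "I": 0.3, "N": 0.2}

    def learned_reward(board):
        return sum(preference_weights.get(ch, 0.0) for ch in board)

    # A policy optimizing the learned reward finds spurious high scorers
    # ("reward hacking") that the true objective gives zero credit for.
    for board in ["WIN", "LOSS", "WWWWWWWW"]:
        print(board, true_reward(board), learned_reward(board))

Under the true objective only "WIN" scores, while the preference-style scorer happily hands out a huge reward for "WWWWWWWW" -- which is the overfitting/hacking failure mode Karpathy describes.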


The way I see it, hallucinations and factual errors are particular cases of incorrect answers.



Rust is great, but one thing I’d like to see is an interpreted, dynamic, less strict version of it that could be used for prototyping and then gradually typed until it compiles as regular Rust code. In other words, a new programming language doing to Rust the reverse of what Mojo is trying to do to Python.


Have you ever heard of Rune? Sounds like it might be what you're looking for.

https://rune-rs.github.io/posts/rune-0-13-0/


Cool! Thanks!


I'm sure Chapel has its merits, but one of the main selling points of Mojo is the aspiration to be part of the Python ecosystem, and so far I haven't seen any other programming language offering a similar promise, other than Python itself coupled with DSLs or other extensions for high performance.


Those interested in the intersection between Python, HPC, and data science may want to take a look at Arkouda, which is a Python package for data science at massive scales (TB of memory) at interactive rates (seconds), powered by Chapel:

* https://github.com/Bears-R-Us/arkouda

* https://twitter.com/ChapelLanguage/status/168858897773200179...
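For a sense of the workflow, here's a rough sketch assuming a running arkouda_server on localhost; Arkouda's API is NumPy-like, but exact signatures may differ between releases:

    import arkouda as ak

    ak.connect()                      # attach to the Chapel-backed server
    a = ak.randint(0, 2**32, 10**8)   # ~100M-element array lives server-side
    perm = ak.argsort(a)              # the heavy lifting runs in Chapel
    print(a[perm][:10])               # only small results come back to Python
    ak.disconnect()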


> to be part of the Python ecosystem

I'd rather use Python if I'm in the Python ecosystem. So many attempts were made in the past to make a new language compatible with the Python ecosystem (look up Hylang and Coconut -- https://github.com/evhub/coconut). But at the end of the day, I'd come back to Python, because if there's one thing I've learnt in recent years it's this:

    minimize dependencies at all costs.


I don't think those fill the same niche. They're nice-to-haves on top of Python. The promise of Mojo is that it's for when Python isn't good enough and you need to go deeper, but you want the Python ecosystem and don't want to write C.


I believe the main Mojo use cases are scenarios in which you'd need dependencies anyway. Code that you can't write in Python due to performance concerns, so you'd need to call C/C++/Rust/CUDA/Triton/etc anyway.
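That's the pattern today: dropping out of Python into compiled code. A minimal sketch using the stdlib's ctypes, assuming a Unix-like system where libm is available (in practice this is where the C/C++/Rust/CUDA dependencies come in):

    import ctypes
    import ctypes.util

    # Load the C math library and describe sqrt's signature so ctypes
    # marshals the double correctly.
    libm = ctypes.CDLL(ctypes.util.find_library("m"))
    libm.sqrt.restype = ctypes.c_double
    libm.sqrt.argtypes = [ctypes.c_double]

    print(libm.sqrt(2.0))  # 1.4142135623730951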


Honestly, that is the main thing that makes me pretty sure Mojo will fail. Right now, the things it doesn't support include keyword arguments and lists of lists. The place where Python compatibility really matters is C API compatibility, and they are hilariously far away from that for now.

