
As a mostly LLM-skeptic I reluctantly agree this is something AI actually does well. When approaching unfamiliar territory, LLMs (1) use simple language (an improvement over academia, but also over much professional literature that is intentionally obfuscated), (2) use the right abstraction (they seem good at "zooming out" to the big picture of things), and (3) let you move laterally between topics and "zoom in" quickly. Another way of putting it is "picking the brain" of an expert in order to build a rough mental model.

Its downsides, such as hallucinations and lack of reasoning (yeah), aren't very problematic here. Once you're familiar enough you can switch to better tools and know what to look for.



My experience is instead that LLMs (those I used) can be helpful where solutions are quite well known (e.g. a standard task in some technology used by many), and terrible where the problem has not been tackled much by the public.

About language (point (1)), I get a lot of "hypnotism for salesmen to non-technical managers" and roundabout comments (e.g. "Which wire should I cut? I have a red one and a blue one." // "It is mission critical to cut the right wire; in order to decide which wire to cut, we must first get acquainted with the idea that cutting the wrong wire will make the device explode..." // "Yes, which one?" // "Cutting the wrong one can have critical consequences...").


> and terrible where the problem has not been tackled much by the public

Very much so (I should have added this as a downside in the original comment). Before I even ask a question I ask myself "does it have training data on this?". Also, having a bad answer is only one failure mode. More commonly, I find that it drifts towards the "center of gravity", i.e. the mainstream or most popular school of thought, which is like talking to someone with a strong status-quo bias. However, before you've familiarized yourself with a new domain, the "current state of things" is a pretty good bargain to learn fast, at least for my brain.


> My experience is instead that LLMs (those I used) can be helpful where solutions are quite well known

Yes, that's a necessary condition. If there isn't some well known solution, LLMs won't give you anything useful.

The point, though, is that the solution was not well known to the GP. That's where LLMs shine: they "understand" what you are trying to say and give you the answer you need, even when you don't know the applicable jargon.


Agreed. LLMs pull you towards the average knowledge, and they suck when you're trying to find a creative solution that challenges the status quo.


Yes. LLMs are the perfect learning assistant.

You can now do literally anything. Literally.

Going to take a while for everyone to figure this out, but they will, given time.


I'm old enough to remember when they first said that about the Internet. We were going to enter a new enlightened age of information, giving everyone access to the sum total of human knowledge; no need to get a fancy degree, universities would be obsolete, expertise would be democratized... See how that turned out.


The motivated will excel even further, for the less motivated nothing will change. The gap is just going to increase between high-agency individuals and everyone else.


> I'm old enough to remember when they first said that about the Internet.

(Shrug) It was pretty much true. But it's like what Linus says in an old Peanuts cartoon: https://www.gocomics.com/peanuts/1969/07/20


I’d suggest we are much closer to that reality now than we were in the 90s, in large part thanks to the internet.


Since the Internet became ubiquitous, more people believe the moon landing, climate change, and vaccines are hoaxes.


Is it really that much worse today? When I was a kid, my great aunt died of skin cancer. She was a Christian Scientist and rejected medical treatment in favour of prayer.

As a teenager, I remember being annoyed that the newspapers had positive articles on the rejuvenating properties of nonsense like cupping and reiki. At least a few of my friends' parents had healing crystals.

People have always believed in whatever nonsense they want to believe.


It feels like the friction to get unscientific content has been completely removed now. Before, the content lagged and you had to physically fetch it somehow. Now you can have it in the palm of your hand 24/7, with the bonus that the content is designed to enrage you and get you sucked in.


LLMs and the internet both make it easier for us to access more information, which also means we can reach dumber conclusions quicker. It does go both ways though.


As an aside, "internet" hasn't officially been a proper noun since 2016 or so.


> You can now do literally anything. Literally.

In theory.

In practice, not so much. Not in my experience. I have a drive littered with failed AI projects.

And by that I mean projects where I have diligently tried to work with the AI (ChatGPT, mostly, in my case) to get something accomplished, and after hours of work spread over days, the projects don't work. I shelve them and treat them like cryogenic heads: "Sometime in the future I'll try again."

It’s most successful with “stuff I don’t want to RTFM over”. How to git. How to curl. A working example for a library more specific to my needs.
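
For instance, the kind of throwaway snippet I mean (the endpoint and payload here are made up, but this is the shape of what it hands back):

  # POST a JSON payload and check the response (endpoint is hypothetical).
  import requests

  resp = requests.post(
      "https://api.example.com/v1/items",
      json={"name": "test"},
      timeout=10,
  )
  resp.raise_for_status()
  print(resp.json())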

But higher than that, no, I’ve not had success with it.

It's also nice as a general-purpose wizard code generator. But that's just rote work.

YMMV


You just aren’t delving deep enough.

For every problem that stops you, ask the LLM. With enough context it’ll give you at least a mediocre way to get around your problem.

It's still a lot of hard work. But the only person who can stop you is you. (Which it looks like you've done.)

List the reasons you’ve stopped below and I’ll give you prompts to get around them.


It's true that once you have learned enough to tell the LLM exactly what answer you want, it can repeat it back to you verbatim. The question is how far short of that you should stop because the LLM is no longer an efficient way to make progress.


From a knowledge standpoint, an LLM can give you pointers at any point.

There's no way it will "fall short".

You just have to improve your prompt. In the worst case scenario you can say "please list out all the different research angles I should proceed from here and which of these might most likely yield a useful result for me"


My skepticism flares up with sentences like "There's no way it will 'fall short'." Especially in the face of so many first-hand examples of LLMs being wrong, getting stuck, or falling short.


I feel actively annoyed by the amount of public gaslighting I see about AI. It may get there in the future, but there is nothing more frustrating than seeing utter bullshit being spouted as truth.


First, rote work is the kind I hate most and so having AI do it is a huge win. It’s also really good for finding bugs, albeit with guidance. It follows complicated logic like a boss.

Maybe you are running into the problem I did early on. I told it what I wanted; now I tell it what I want done. I use Claude Code and have it do things one at a time, and for each, I tell it the goal and then the steps I want it to take. I treat it as if it were a high-level programming language. Since I became more procedural with it, I get pretty good results.
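
For example, the shape of prompt I mean (the task, file, and function names here are made up):

  Goal: add retry logic to fetch_users() in api.py.
  Steps:
    1. Wrap the request in a loop with at most 3 attempts.
    2. Sleep 2^attempt seconds between failed attempts.
    3. After the last failure, re-raise the original exception.
    4. Run the existing tests and show me the diff.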

I hope that helps.


They seem pretty good with human language learning. I used ChatGPT to practice reading and writing responses in French. After a few weeks I felt pretty comfortable reading a lot of common written French. My grammar is awful but that was never my goal.


I don't know. I wouldn't trust a brain surgeon who has up till now only been messing around with LLMs.

Edit: and for that matter I also would not trust a brain surgeon who had only read about brain surgery in medical texts.


Practical knowledge is the most important.

Weirdly, you'll get a lot of useful experience as you analyze yourself through 80 years.


I spent a couple of weekends trying to reimplement Microsoft's inferencing for phi4 multimodal in Rust. I had zero experience messing with ONNX before. Claude produced a believably good first pass, but it ended up being too much work and I've put it down for the moment.

I spent a lot of time fixing Claude's misunderstanding of the `ort` library, mainly because of Claude's knowledge cutoff. In the end, the draft just wasn't complete enough to get working without diving in really deep. I also kind of learned that ONNX probably isn't the best way to approach these things anymore; most of the mindshare is around the Python code and torch APIs.
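
For contrast, here's roughly how little the Python side asks of you; a minimal onnxruntime sketch (the model path and the "input_ids" feed are illustrative, not phi4's actual graph):

  # Minimal ONNX Runtime inference in Python. The model path and the
  # "input_ids" feed below are illustrative, not phi4's real inputs.
  import numpy as np
  import onnxruntime

  session = onnxruntime.InferenceSession(
      "model.onnx", providers=["CPUExecutionProvider"]
  )
  print([(i.name, i.shape) for i in session.get_inputs()])  # inspect graph inputs

  outputs = session.run(
      None,  # None means: return every output of the graph
      {"input_ids": np.zeros((1, 16), dtype=np.int64)},
  )
  print([o.shape for o in outputs])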


This is interesting.

AI leads to more useless dives into the internets.


LLMs don't reason the way we do, but there are similarities at the cognitive pre-conscious level.

I made a challenge to various lawyers and the Stanford Codex (no one has taken the bait yet) to find critical mistakes in the "reasoning" of our Legal AI. One former attorney general told us that he likes how it balances the intent of the law. Sample output (scroll and click on the stats and the donuts on the second slide):

Samples: https://labs.sunami.ai/feed

I built the AI using an inference-time-scaling approach that I evolved over a year's time. It is based on Llama for now, but that could be replaced with any major foundational model.

Presentation: https://prezi.com/view/g2CZCqnn56NAKKbyO3P5/

8-minute video: https://www.youtube.com/watch?v=3rib4gU1HW8&t=233s

info sunami ai


"One former attorney general told us that he likes how it balances the intent of the law."

In a common law system you generally want actionable legal advice based on predictions of how a judge would rule in a case, not "balancing the intent of the law", whatever the heck that means.




The sensitivity can be turned up or down. It's why we are asking for input. If you're talking about the Disney EULA, it has the context that it is a browsewrap agreement. The setting for material omission is very greedy right now, and we could find a happy middle.


A former attorney general, one of the top 100 lawyers in the US, is taking it for a spin and has said great things about it so far. HN has turned into a pit of hate. WTF is all this hate for? People just seem really angry at AI. JFC, grow up.


[flagged]


I know you're being disparaging by using language like "bake into their identity", but everyone is "something" about "something".

I’m “indifferent” about “roller coasters” and “passionate” about “board games”.

To answer the question (though I'm not OP): I'm skeptical about LLMs. "These words are often near each other" vastly exceeds my expectations at being fairly convincing that the machine "knows" something, but it's dangerously confident when it's hilariously incorrect.

I'll be less skeptical about whatever we call the next technological leap, where there's actual knowledge and not just "word statistics".


Your framing is extrapolative and mendacious, and it adds what could charitably be called your interpersonal problems to a statement which is perfectly neutral, intended as an admission against general inclination in order to lend credibility to the observation that follows.

Someone uncharitable would say things about your cognitive abilities and character that are likely true but not useful.


They didn’t say that they were invested in it.


> invested

Very probably not somebody who blindly picked a position, easily somebody who is quite wary of the downsides of the current state of the technology, as expressed already explicitly in the post:

> Its downsides, such as hallucinations and lack of reasoning


Probably all the hype and bs.



