Hacker News | tovej's comments

If you can run ITER for 20 minutes you've essentially proved the Tokamak concept is viable for commercial use.

No, you don't. Commercial use means it makes economic sense. When you have to spend more on maintenance (plus recycling/dumping contaminated wall material, and somehow sourcing the fuel) than you take in, you can never hope to make any profit.

A running ITER with positive energy output for 20 minutes just proves that the concept can actually work. From there to commercial use is still a long way, if it can ever compete at all, except in niches like deep space.

(I would rather bet on the stellarator design.)


I'm not saying ITER would be a commercial machine, I'm saying the Tokamak design would be viable.

Stellarators are interesting, but have been studied much less in comparison.


If you're not sure, maybe you should look up the term "expert system"?

It was a polite way of saying "that's kinda bull".

And yes, I know what an expert system is.

Do you know that a neural network (or set of matrices, same thing really) can approximate anything else? https://en.wikipedia.org/wiki/Universal_approximation_theore...

How do you know that inside the black box, they don't approximate expert systems?


I'm not sure you do, because expert systems are constraint solvers and LLMs are not. They literally deal in encoded facts, which is what the original comment was about.

The universal approximation theorem is not relevant. You would first have to try to train the neural network to approximate a constraint solver (which is not what LLMs are trained to do), and in practice these are exactly the kinds of systems that neural networks are bad at approximating.

The universal approximation theorem also says nothing about feasibility; it only talks about theoretical existence as a mathematical object, not about whether the object can actually be constructed in the real world.
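To be concrete about what the theorem gives you: the existence of an approximator, which for simple 1-D functions you can even write down by hand, as in the toy Python sketch below (illustrative only; the function, knots, and construction are my own choices and have nothing to do with how LLMs are trained):

    import math

    def relu(x):
        return max(0.0, x)

    def build_approximator(f, knots):
        # One hidden layer of ReLU units, with weights chosen by hand so the
        # "network" reproduces the piecewise-linear interpolant of f at the knots.
        slopes = [(f(knots[i + 1]) - f(knots[i])) / (knots[i + 1] - knots[i])
                  for i in range(len(knots) - 1)]
        def g(x):
            y = f(knots[0]) + slopes[0] * relu(x - knots[0])
            for i in range(1, len(slopes)):
                y += (slopes[i] - slopes[i - 1]) * relu(x - knots[i])
            return y
        return g

    approx_sin = build_approximator(math.sin, [i / 10 for i in range(32)])
    print(approx_sin(1.5), math.sin(1.5))  # close, by construction

The existence of such a network says nothing about whether gradient descent on a text corpus would ever find one that behaves like a constraint solver.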

I'll remind you that the expert system would have to be created and updated by humans; it would have had to exist before any neural network could be trained to approximate it in the first place.


The original point, that LLMs plagiarise their inputs, is a very common and commonsense opinion.

There are court cases addressing this right now, and if you think about how LLMs operate, a reasonable person will typically conclude that it looks an awful lot like plagiarism.

If you want to claim it is not plagiarism, that requires a good argument, because it is unclear that LLMs can produce novelty, since they're literally trying to recreate the input data as faithfully as possible.


I need you to prove to me that it's not plagiarism when you write code that uses a library after reading documentation, I guess.

> since they're literally trying to recreate the input data as faithfully as possible.

Is that how they are able to produce unique code based on libraries that didn't exist in their training set? Or that they themselves wrote? Is that how you can give them the documentation for an API and have them write code that uses it? Your desire to make LLMs "not special" has made you completely blind to reality. Come back to us.


What?

The LLM is trained on a corpus of text, and when it is given a sequence of tokens, it finds the tokens that, when appended, make the resulting sequence most like the text in that corpus.

If it is given a sequence of tokens that is unlike anything in its corpus, all bets are off and it produces garbage, just like machine learning models in general: if the input is outside the learned distribution, quality goes downhill fast.

The fact that they've added a Monte Carlo element to the sequence generation, which sometimes selects a token other than the closest match in the corpus, does not change this.

LLMs are fuzzy lookup tables for existing text that hallucinate for out-of-distribution queries.

This is LLM 101.
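A rough sketch of the sampling step I mean (in Python; the temperature value is arbitrary, and real decoders add top-k/top-p filtering and other details):

    import numpy as np

    def sample_next_token(logits, temperature=0.8, rng=None):
        # Instead of always taking the argmax (greedy decoding), scale the
        # logits and draw a token from the resulting distribution -- this is
        # the "Monte Carlo" element.
        rng = rng or np.random.default_rng()
        scaled = logits / temperature
        probs = np.exp(scaled - np.max(scaled))
        probs /= probs.sum()
        return rng.choice(len(logits), p=probs)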

If the LLM were trained only on documentation, there would be no problem: it would have to generate a design, look at the documentation, understand the semantics of both, and translate the design into code using the documentation as a guide.

But that's not how it works. It has open-source repositories in its corpus, which it then recreates by chaining together examples in the stochastic-parrot manner I described above.



No, you need to prove that it is not plagiarism when you use an LLM to produce a piece of code that you then claim as yours.

You have the whole burden of proof thing backwards.


Oh wild, I was operating under the assumption that the law requires you to prove that a law was broken, but it turns out you need to prove it wasn't. Thanks!

You don't?

If you reproduce something, you usually have to look up the earlier implementation and copy it over, which inevitably requires you to look at the license and author of that code.

Assuming, of course, we're talking about nontrivial functionality; obviously trivial one-liners etc. don't count.


I recently asked an LLM to give me one of the most basic and well-documented algorithms in the world: a blocked matrix multiply. It's essentially a few nested loops and some constants for the block size.

It failed massively, spitting out garbage code whose comments claimed to use a blocked access pattern while the code did not actually use one at all.
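For reference, the structure I expected is roughly this (a sketch in plain Python; the block size and loop order are just illustrative, and a real kernel would be tuned and vectorized):

    BLOCK = 64  # illustrative block size

    def blocked_matmul(A, B, C, n):
        # C += A @ B for n x n matrices stored as lists of lists,
        # processed block by block to improve cache reuse.
        for ii in range(0, n, BLOCK):
            for jj in range(0, n, BLOCK):
                for kk in range(0, n, BLOCK):
                    for i in range(ii, min(ii + BLOCK, n)):
                        for k in range(kk, min(kk + BLOCK, n)):
                            a = A[i][k]
                            for j in range(jj, min(jj + BLOCK, n)):
                                C[i][j] += a * B[k][j]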

LLMs are, frankly, nearly useless for programming. They may solve a problem every once in a while, but once you look at the code, you notice it's either directly plagiarized or of bad quality (or, I suppose, both).


von Neumann did not invent the von Neumann architecture. Not even a little bit.

If you want to argue that the hardware is at fault, you should be blaming the Eckert-Mauchly architecture.


Internet-scale companies seem to be having more outages recently. Is the exposed surface area increasing, or is the quality of service suffering?

C++ is not that complex, and honestly it's one of the best-documented languages out there. The semantics are very clear, and you can easily decide to stick to a smaller subset of C++ if you don't like advanced features like concepts, template metaprogramming, and class hierarchies (in general I would advise against OOP in C++, just as I would in any other language).

Ruby does a lot of magic to help beginners, which means the semantics are unclear. IMO this is similar to how Apple optimizes UI/UX for first impressions to drive sales: the journeyman user is neglected, simple things are easy to do, but the most powerful features are missing for journeyman and advanced users.

I'm not saying Ruby is a bad language, just that I have the opposite view. I too love to learn, but Ruby did not help me learn; it actively got in my way.

You can make a simple language without confusing semantics; see Go, C, or Python.


> C++ is not that complex

C++ is so complex that you can take 10 C++ devs, put them in the same room, and none of them will be able to read the others' code, because each has written theirs in a mutually exclusive C++ feature set.


Have you written C++? There are no "mutually exclusive" feature sets. The only deprecated language features I can think of are auto pointers and trigraphs, and I have never seen them in the wild.

I was making a hyperbolic joke. But "mutually exclusive feature sets" means two feature sets without overlapping features.

It doesn't really work as a joke if there's no truth to it.

I understand what mutually exclusive means. There are no two people writing C++ with no overlapping language features. I struggle to understand what you might even mean.

Non-equal subsets of the full language feature set? Yes, that will happen with any nontrivial language.


Nevermind then, it's okay if you don't get the joke.

> C++ is not that complex

Relative to what, exactly?

It is very hard to take a statement like yours seriously when even veteran developers continue to ship software with memory bugs that exfiltrate data and crash systems to this day.


When I said it's not complex, I did not mean it's perfect or that it is easy to write flawless code in C++. And obviously C++ is most relevant within its own niche: video games, scientific computing, and performance-critical software. The issues you mention are trade-offs C++ makes in exchange for the higher degree of control it gives the developer. They're real, and that's a valid point.

What I meant was that if you wish to fully understand how the language works, I bet C++ offers a clearer path to get there than Ruby does. The documentation and the surrounding ecosystem of conference talks, content creators, and longer texts on new features are excellent, not to mention public online communities like the IRC channels on libera.chat.

The original poster I answered to was saying that C++ requires a lot of dedication to understand, but I would say this is true of every language, and C++ is very good at getting you there.


> Relative to what, exactly?

Life


What do you mean? Programming languages all have different strengths and weaknesses that are completely orthogonal to LLMs.

Even if you vibe-code an entire system, a human will eventually have to read and modify the vibe code, most likely in order to refactor the whole thing; but even if by some miracle the overall quality is alright, somebody will still have to review the code and fix bugs. The programming language and its ecosystem will always be a factor in that case.


Yeah, but I'd say those perceived strengths and weaknesses more often than not end up being non-issues, i.e. popular chatter about whether a language is good and how it is actually used in real life pretty much never line up.

And my guess is that this "disparity" only widens with AI.

I'm not saying discussions like this aren't theoretically interesting or that people who are into them shouldn't have them. But my guess is they overwhelmingly won't matter at large scale.


That is a naive view to have. Languages have massive differences which directly impact how software is developed, built, distributed, and executed at runtime, not to mention how it is used and maintained.

And I've yet to see an LLM have any impact on making any of these differences disappear. The one thing I have seen LLMs do is generate more work for senior developers who have to fix vibe-coded spaghetti. There, the language matters a lot.


This reads as if it were written with ChatGPT, with every em dash find-replaced by an ellipsis. Nearly every paragraph ends in a "that's not X, that's Y"-type statement.

If this isn't AI slop, it's certainly badly written.

