
Iron and fiber are durable and last for decades. Data centers (where the current flows for inference) consist of hardware that becomes obsolete within 5 to 10 years.

The question is whether improvements in hardware, in both cost and performance, can outpace the increasing demands of LLMs and their future derivatives.


Well said. This also aptly describes the emergence of “script kiddies” in the early 2000s (now comically referred to as “engineers”): amateurish developers combining libraries they don’t understand into mostly poorly implemented solutions thrust upon the end users. Corporations loved the lower salaries of web developers and the efficiency of utilizing open source libraries, thereby devaluing the skilled developers whose original intent was to share their knowledge via those OSS libraries. Web development was the most affected by this trend initially, and, as we see now, is also the most impacted by the emergence of LLMs.

Isn’t Europe full of neoliberal ghouls stoking war with Russia, blowing up pipelines, and cheering on rogue nations like Israel?


Russia is the one stoking war.


> Isn’t Europe full of neoliberal ghouls

There are some, but not as many as you imagine.


Perpetual war is great for neoliberalism because it forces public spending on private, consumable goods. No matter how many bombs you buy today, you're going to need more tomorrow; it's in the nature of bombs. Then, if there's ever a break in the destruction, the bombmakers can invest their windfall profits in construction companies, thus winning in both directions.


Won’t flooding the market with a large supply of bonds sold all at once cause the price of the bonds to drop, resulting in losses for the sellers?

Meanwhile, the bondholders that don’t sell can wait it out until the bond matures or the selling mania stops and the price returns to equilibrium.
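
For intuition, here's a minimal sketch in Python (assuming a simple zero-coupon bond; real Treasuries pay coupons, but the mechanics are the same): forced selling pushes the market yield up and the price down, yet a holder who waits still collects face value at maturity.

    # Price of a zero-coupon bond: face value discounted at the market yield.
    def zero_coupon_price(face: float, annual_yield: float, years: float) -> float:
        return face / (1 + annual_yield) ** years

    face = 1000.0
    # Forced selling pushes the market yield up, so the price drops...
    print(zero_coupon_price(face, 0.04, 10))  # ~675.56 before the sell-off
    print(zero_coupon_price(face, 0.06, 10))  # ~558.39 after yields spike
    # ...but a holder who waits to maturity still collects full face value.
    print(face)  # 1000.0 at maturity, regardless of interim prices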


That's kind of the point. Crash the bonds market and with it the US government.


That won't work... it will crash the price of [new] bonds, and more capitalized nations will buy the dip. You are operating under the assumption that the humans controlling a quantity of wealth large enough to [quote] "crash the bonds market" make decisions based on principles. They don't - and if you compare history with a long-term bond price chart, it becomes apparent.


Now, pray tell, which other nations are more capitalized than the world's third largest economy? Who would buy the dip? China? I doubt it. And nobody else has the scale.


Except at that point the dollar will be so devalued that you are still losing in real terms.


So they can educate more students? Many university classes are lecture-only, with 200+ students in the class and no direct interaction with profs. Those courses might as well be online.


A good example is the many users looking to ditch Windows for Linux due to AI integrations and a generally worse user experience. Is this the year of the Linux desktop?


Why unlimited? Populations are shrinking and there is only so much debt these economies can handle.


Given the products that the software industry is largely focused on building (predatory marketing for the attention economy and surveillance), this unfortunately may be the case.


But not an int, int32, or int64


“write a set of unit-tests against a set of abstract classes used as arguments of such unit-tests.”

An exhaustive set of test cases to validate vibe-coded, AI-generated apps would be an app in itself. Experienced developers know what subsets of tests are critical, avoiding much work.
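
One way to read the quoted idea, as a hypothetical Python sketch (the Stack classes here are invented for illustration): write the unit-tests once against an abstract class, then feed each concrete implementation to them as an argument.

    import abc
    import pytest

    class Stack(abc.ABC):
        """Abstract class the unit-tests are written against."""
        @abc.abstractmethod
        def push(self, item): ...
        @abc.abstractmethod
        def pop(self): ...

    class ListStack(Stack):
        def __init__(self):
            self._items = []
        def push(self, item):
            self._items.append(item)
        def pop(self):
            return self._items.pop()

    # Each concrete implementation is fed as an argument to the same tests.
    @pytest.fixture(params=[ListStack])
    def stack(request):
        return request.param()

    def test_push_then_pop_returns_item(stack):
        stack.push(42)
        assert stack.pop() == 42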


> Experienced developers know what subsets of tests are critical, avoiding much work.

And they do know this for programs written by other experienced developers, because they know where to expect "linearity" and where to expect steps in the output function. (Testing 0, 1, 127, 128, 255 is important; 89 and 90 likely not, unless that's part of the domain knowledge.) This is not necessarily correct for statistically derived algorithm descriptions.
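
To make the "steps in the output function" point concrete, a small Python sketch (the clamp function is invented for illustration): the valuable tests sit on the byte boundaries where behaviour changes, not on interior values like 89 or 90.

    def clamp_to_byte(x: int) -> int:
        """Clamp an integer into the unsigned byte range [0, 255]."""
        return max(0, min(255, x))

    # Boundary values are where the output function "steps"; interior
    # values add little information unless the domain says otherwise.
    for x, expected in [(-1, 0), (0, 0), (1, 1), (127, 127),
                        (128, 128), (255, 255), (256, 255)]:
        assert clamp_to_byte(x) == expected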


That depends a bit on whether you view and use unit-tests as:

a) testing that the spec is implemented correctly, OR

b) the spec itself, or part of it.

I know people have different views on this, but if unit-tests are not the spec, or part of it, then we must formalize the spec in some other way.

If the spec is not written in some formal way, then I don't think we can automatically verify whether the implementation implements it. (That's what the cartoon was about.)
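
As a sketch of view (b), a unit-test can state a spec clause directly as executable code (this sorting example is invented for illustration):

    def test_sort_spec():
        """Executable spec: output is ordered and preserves the input multiset."""
        data = [3, 1, 2, 2]
        out = sorted(data)
        assert all(a <= b for a, b in zip(out, out[1:]))  # clause 1: ordered
        assert sorted(out) == sorted(data)                # clause 2: same elements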


> then we must formalize the spec in some other way.

For most projects, the spec is formalized in formal natural language (like any other spec in other professions) and that is mostly fine.

If you want your unit tests to be the spec, as I wrote in https://news.ycombinator.com/item?id=46667964, there would be quite A LOT of them needed. I'd rather learn to write proofs than try to exhaustively list all possible combinations from a (near) infinite number of input/output pairs. Unit-tests are simply the wrong tool, because they amount to taking excerpts from the library of all possible books. I don't think that is what people mean by e.g. TDD.
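
The scale problem is easy to quantify; a rough back-of-the-envelope in Python for a function taking just two 32-bit arguments:

    # Two 32-bit inputs give 2**64 input combinations. Even at a
    # generous billion test cases per second, enumeration is hopeless.
    combinations = 2 ** 64
    per_second = 10 ** 9
    years = combinations / per_second / (60 * 60 * 24 * 365)
    print(f"{combinations:.3e} cases, ~{years:.0f} years")  # ~585 years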

What the cartoon is about is that any formal(-enough) way to describe program behaviour will just be yet another programming tool/language. If you have some novel way of program specification, someone will write a compiler and then we might use it, but it will still be programming and LLMs ain't that.


What is "formal natural language"? Can it be checked for validity by computers?


I agree (?) that using AI vibe-coding can be a good way to produce a prototype for stakeholders, to see if the AI output is actually something they want.

The problem I see is how to evolve such a prototype to more correct specs, or changed specs in the future, because AI output is non-deterministic -- and "vibes" are ambiguous.

Giving the AI more specs, or modified specs, means it will have to re-interpret them, and since its output is non-deterministic it can re-interpret vibey specs differently and thus diverge in a new direction.

Using unit-tests as (at least part of) the spec would be a way to keep the specs stable and unambiguous. If the AI is re-interpreting vibey, ambiguous specs, then the specs are unstable, which means the final output has a hard time converging to a stable state.

I've asked this before, not knowing much about AI sw-development: is there an LLM that, given a set of unit-tests, will generate an implementation that passes those unit-tests? And is such a practice commonly used in the community, and if not, why not?
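
For what it's worth, the loop such a practice implies would look roughly like this hypothetical Python sketch (generate_code stands in for whatever LLM call is used; it is not a real API):

    import pathlib
    import subprocess

    def generate_code(prompt: str) -> str:
        """Hypothetical stand-in for an LLM call; not a real API."""
        raise NotImplementedError

    def tests_as_spec_loop(test_file: str, max_rounds: int = 5) -> str | None:
        """Regenerate an implementation until the fixed unit-test spec passes."""
        spec = pathlib.Path(test_file).read_text()
        feedback = ""
        for _ in range(max_rounds):
            code = generate_code(f"Write impl.py passing these tests:\n{spec}\n{feedback}")
            pathlib.Path("impl.py").write_text(code)
            result = subprocess.run(["pytest", test_file], capture_output=True, text=True)
            if result.returncode == 0:
                return code  # the unit-tests (the spec) are satisfied
            feedback = "Previous attempt failed:\n" + result.stdout[-2000:]
        return None  # did not converge on the spec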

