Hacker News | delegate's comments

It's very tempting to agree with the 'gambling' part, given that both a jackpot and progress towards the goal in your project will give you a hit of dopamine.

The difference is that in gambling 'the house always wins', but in our case we do make progress towards our goal of conquering the world with our newly minted apps.

The situation where this comparison holds is when vibe coding leads nowhere and you don't accomplish anything but just burn through tokens.


> The difference is that in gambling 'the house always wins', but in our case we do make progress towards our goal of conquering the world with our newly minted apps.

What? Your vibe coded slop is just going to be competing with someone else's vibe coded slop.


The motivations for wanting to make the slop could be commercial profit, or it could be simply you trying to solve a problem for yourself. In either case, the slop is the goal and, if the agent isn't giving you complete trash, you should be converging towards your goal. The gambling analogy doesn't work.



Sounds like you've had too much TV. It really does rot your brain, this is obvious to anybody who doesn't watch TV, but completely imperceptible to those who do.

> It really does rot your brain, this is obvious to anybody who doesn't watch TV, but completely imperceptible to those who do.

how do you block video on your PC? or do you literally mean audiovisual information broadcast onto actual television sets is the evil?


When you watch television, or television on your computer screen (it makes no difference), you get hypnotized by the tube into a passive state of consumption. Watch people when they watch TV. Watch their slack-jawed faces when the commercials come on and their attention stays glued to the advertisements pitching Alzheimer's drugs. Critical thought suspended, minds off in space.

In short, read a book.


what you said is true about books, and people made the exact same arguments when the printing press hit the scene

- "you get hypnotized by the tube into a passive state of consumption"

- "Watch their slack jawed faces....and their attention stays glued"

both statements apply equally to books. read here if you don't believe me.

https://engines.egr.uh.edu/talks/what-people-said-about-book...

you've got a case of the feelies my friend


Books are a net positive for you; slop smooths your brain IF you completely outsource your thinking to it. It's not rocket science.

unless the book contains instructions on how to do things... then you're just outsourcing thinking to the book, right? people have to remember less with the printed word, full stop. so what's the difference?

Great work! Obviously the goal of this is not to replace SQLite, but to show that agents can do this today. That said, I'm a lot more curious about the harness part (Bootstrap_Prompt, Agent_Prompt, etc.) than in what the agents have accomplished. E.g., how can I repeat this myself? I couldn't find that in the repo...

hello, thanks! all of the harnessing is in this repo: https://github.com/kiankyars/parallel-ralph/

Bottlenecks. Yes. Company structures these days are not compatible with efficient use of these new AI models.

Software engineers work on Jira tickets, created by product managers and several layers of middle managers.

But the power of recent models is not in working on cogs, their true power is in working on the entire mechanism.

When talking about a piece of software that a company produces, I'll use the analogy of a puzzle.

A human hierarchy (read: company) works on designing the big puzzle at the top and delegating the individual pieces to human engineers. This process goes back and forth between levels in the hierarchy until the whole puzzle slowly emerges. Until recently, AI could only help on improving the pieces of the puzzle.

Latest models got really good at working on the entire puzzle - big picture and pieces.

This makes human hierarchy obsolete and a bottleneck.

The future seems to be one operator working on the entire puzzle, minus the hierarchy of people.

Of course, it's not just about the software, but streams of information - customer support, bug tickets, testing, changing customer requirements.. but all of these can be handled by AI even today. And it will only get better.

This means different things depending on which angle you look at it - yes, it will mean companies will become obsolete, but also that each employee can become a company.


Yeah I’m very much seeing this right now.

I’m a pretty big generalist professionally. I’ve done software engineering in a broad category of fields (Game engines, SaaS, OSS, distributed systems, highly polished UX and consumer products), while also having the experience of growing and managing Product and Design teams. I’ve worn a lot of hats over the years.

In my most recent role I'm working on a net new product for the company and have basically been given full agency over this product: technical, budget, team, process, marketing, branding and positioning.

Give someone experienced like me capital, AI and freedom and you absolutely can build high quality software at a pretty blinding pace.

I'm starting to get the feeling that many folks' struggles with adopting or embracing AI for their job have more to do with their job/company than with AI.


This gives me a lot of hope for a decentralized future for all kinds of service industries. Why would you go to a big-name accounting firm where the small number of humans can only give you a sliver of attention, when you can go to a one-man shop and get much more of the one human's attention? Especially if you know that the "work" will be done by the same tools? So many of the barriers to entry in various services - law, accounting, financial advising, etc. - come down to needing a team to run even the smallest operation that can generate enough revenue to put food on your table. Perhaps that won't be the case for long - and the folks that used to be that "team" can branch out and be the captains of their own ships, too.

If every person is now a captain, with their own ship, the harbor may become rather crowded.

> The future seems to be one operator working on the entire puzzle, minus the hierarchy of people.

Given the rest of your argument that makes no sense. Why should that one operator exist? If AI is good at big picture and the entire puzzle, I don’t see why that operator shouldn’t be automated away by the AI [company] itself?


That's what I think will happen with a lot of SaaS and software platforms. They'll become the Sears to the future Amazons being built.

It's worth remembering that this is all happening because of video games!

It is highly unlikely that the hardware which makes LLMs possible would have been developed otherwise.

Isn't that amazing?

Just like internet grew because of p*rn, AI grew because of video games. Of course, that's just a funny angle.

The way I see it, AI isn't accidental. Its inception lies in the first chips, the Internet, Open Source, GitHub... AI is not just the neural networks - it's also the data used to train it, the OSes, the APIs, cloud computing, the data centers, the scalable architectures... everything we've been working on over the last decades was inevitably leading us to this. And even before the chips, it was the maths, the physics...

The singularity, it seems, is inevitable, and it has been inevitable for longer than we can remember.


Remember that games are just simulations. Physics, light, sound, object boundaries - it's not real, just a rough simulation of the real thing.

You can say that ML/AI/LLM's are also just very distilled simulations. Except they simulate text, speech, images, and some other niche models. It is still very rough around the edges - meaning that even though it seems intelligent, we know it doesn't really have intelligence, emotions and intentions.

Just as game simulations are 100% biased towards what the game developers, writers and artists had in mind, AI is also constrained to the dataset they were trained on.


I think it's a bit hard to say that this is definitively true: people have always been interested in running linear algebra on computers. In the absence of NVIDIA some other company would likely have found a different industry and sold linear algebra processing hardware to them!

Almost certainly not at the scale of the consumer gaming industry, however!

Google is making millions of TPUs per year. Nvidia ships more gaming GPUs, but it's not like multiple orders of magnitude off.

I'm willing to bet TPUs wouldn't be nearly as successful or sophisticated without the decades of GPU design and manufacturing that came before them.

Current manufacturing numbers are a small part of the story of the overall lineage.


It's pretty interesting that consumer GPUs started to really be a thing in the early 90s and the first Bitcoin GPU miner was around 2011. That's only 20 years. That caused a GPU and ASIC gold rush. The major breakthroughs around LLMs started to snowball in the academic scene right around that time. It's been a crazy and relatively quick ride in the grand scheme of things. Even this silicon shortage will pass and we'll look back on this time as quaint.

Of course you are right, but in addition they wouldn't have even made them if GPUs hadn't made ML on CPU so relatively incapable. Competition drives a lot of these decisions, not just raw performance.

You are missing his point. They very likely don't start building TPUs if there were no GPUs.

I'm not missing the point. If you recall your computer architecture class, there are many vector processing architectures out there. Long before there was Nvidia, the world's largest and most expensive computers were vector processors. It's inaccurate to say "gaming built SIMD".

You are missing the point - it's an economic point. Very little R&D was put into said processors. The scale wasn't there. The software stack wasn't there (because the scale wasn't there).

No one is suggesting gaming chips were the first time someone thought of such an architecture or built a chip with such an architecture. They are suggesting the gaming industry produced the required scale to actually do all the work which led to that hardware and software being really good, and useful for other purposes. In chip world, scale matters a lot.


The Cray-1, which produced half a billion USD in revenue in today's dollars, at a time when computing was still science fiction, did not demonstrate scale? I just can't take you in good faith because there has never been a time when large scale SIMD computing was not advanced by commercial interests.

In this context scale = enough units/revenue to spread fixed costs.

I'll take your word on lifetime revenue numbers for Cray 1.

So yes, in today's dollars, $500 million of lifetime revenue - maybe $60-70 million per year, today's dollars - is not even close to the scale we are seeing today. Even 10 years ago Nvidia was doing ~$5 billion per year (almost 100x your number) and AMD a few billion (another 60-70x ish).

Even if you meant $500m in annual (instead of lifetime) revenue, Nvidia was 10x that in 2015, and AMD's GPU revenue was a few billion that year, so combined it's more like 17x.

That's a large difference in scale: at the low end 17x, at the high end 170x. Gaming drove that scale. Gaming drove Nvidia to have enough to spend on CUDA. Gaming drove Nvidia to have enough to produce chip designs optimized for other types of workloads. CUDA enabled ML work that wasn't possible before. That drove Google to realize they needed to move away from ML on CPU if they wanted to be competitive.

You don't need any faith, just understand the history and how competition drives behavior.
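The ratios above can be sanity-checked with some back-of-envelope arithmetic. A minimal sketch, using the thread's own rough figures (the ~8-year spread for the Cray-1's run is an assumption for illustration, not a sourced number):

```python
# Back-of-envelope check of the scale ratios discussed above.
# All figures are the thread's rough numbers, not authoritative data.
cray1_lifetime = 500e6             # Cray-1 lifetime revenue, today's dollars
cray1_annual = cray1_lifetime / 8  # assumed ~8-year run -> ~$60M/yr

nvidia_2015 = 5e9    # Nvidia ~2015 annual revenue, per the comment
amd_gpu_2015 = 3e9   # AMD GPU revenue, "a few billion"

# Reading the $500M as *annual* revenue (the low-end comparison):
combined_vs_annual = (nvidia_2015 + amd_gpu_2015) / cray1_lifetime
# Reading it as lifetime revenue spread over the machine's run (high end):
nvidia_vs_yearly = nvidia_2015 / cray1_annual

print(round(combined_vs_annual), round(nvidia_vs_yearly))  # roughly 16x vs 80x
```

Depending on which reading you take, the gap lands somewhere between the ~17x and ~100x figures quoted in the thread.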


Google DeepMind can trace part of its evolution back to a playtester for the video game Syndicate who saw an opportunity to improve the AI of game NPCs.

Type the word porn. Self censorship is wrong.

Sci-fi ideas of robots have been around for ages and work on AI and the term singularity kicked off around 1950, so a while ago - well before chips or me being born.

what a load of utter tripe

One thing that could happen is that someone might decide to vibe code a Discord clone, without all the extra crap. I'm sure there are people out there doing this already.

There's this interesting arc of growth for apps which are successful. At first users love it, company grows, founders get rich, they hire expensive people to develop the product and increase revenue until eventually the initial culture and mission is replaced by internal politics and processes.

Software starts getting features which users don't want or need, side effects of the company size and their Q4 roadmap to 'optimize' revenue|engagement|profits|growth|...

Users become tools in the hands of the app they initially used as a tool. This model worked well so far and built some of the biggest companies in history.

AI could make this business model less effective. Once a piece of software becomes successful and veers off into crap territory, people will start cloning it, keeping only the features that made that software successful initially. Companies who try to strong arm their users will see users jump ship, or rather, de-board on islands.

At least I hope this will be the case.


There's some irony in the fact that LLMs are in large part possible because of open source software.

From the tools which were used to design and develop the models (programming languages, libraries) to the operating systems running them to the databases used for storing training data .. plus of course they were trained mostly on open source code.

If OSS didn't exist, it's highly unlikely that LLMs would have been built.


Turns out that it's only in myth that a snake can eat its own tail without dying.


> If OSS didn't exist, it's highly unlikely that LLMs would have been built.

would anyone want SlopHub Copilot if it had been trained exclusively on Microsoft's code?

(rhetorical question)


Not really. This DB allows traversing the (deeply nested) data structures without loading them into memory. E.g., in Clojure you can do `(get-in db [:people "john" :address :city])`

Where `:people` is a key in a huge (larger-than-memory) map. This database will only touch the referenced nodes when traversing, without loading the whole thing into memory.

So the 'query language' is actually your programming language. To the programmer this database looks like an in-memory data structure, when in fact it's efficiently reading data from the disk. Plus immutability of course (meaning you can go back in history).
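A rough sketch of the idea in Python (all names here are illustrative, not this database's actual API): values are stored as node references, and only the nodes along the access path get read.

```python
# Sketch of path-based lazy traversal: only nodes on the access path are
# "read from disk". STORE is a plain dict standing in for an on-disk node
# file; the node ids and data are hypothetical examples.

STORE = {
    0: {"people": ("ref", 1)},                    # root node
    1: {"john": ("ref", 2), "jane": ("ref", 3)},  # the :people map
    2: {"address": ("ref", 4)},
    3: {"address": ("ref", 5)},
    4: {"city": ("val", "Berlin")},
    5: {"city": ("val", "Paris")},
}

reads = []  # track which nodes were actually touched

def load(node_id):
    """Stand-in for a disk read of one node."""
    reads.append(node_id)
    return STORE[node_id]

def get_in(root_id, path):
    """Walk `path` from the root, loading exactly one node per step."""
    node = load(root_id)
    for key in path:
        kind, payload = node[key]
        if kind == "val":
            return payload
        node = load(payload)
    return node

city = get_in(0, ["people", "john", "address", "city"])
print(city, reads)  # nodes 0, 1, 2, 4 loaded; "jane"'s subtree untouched
```

The point is the access pattern: the map can be arbitrarily large, but a lookup only pulls in the handful of nodes on the path, which is what makes the "looks like an in-memory data structure" illusion cheap.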


I wonder who the managers are going to manage..


I share the vision of the author.

People use software for specific features, but most software has lots of features people never use or need. A lot of modern software is designed to handle lots of users, so it needs to be scalable, deployable, etc.

I don't need any of that. I just need the tool to do the thing I want it to do. I'm not thinking about end users, I just need to solve my specific problem. Sure there might be better pieces of software out there, which do more things. But the vibe coded thing works quite well for me and I can always fix it by prompting the model.

For example, I've vibe coded a tool where I upload an audio file, the tool transcribes it and splits it into 'scenes' which I can sync to audio via a simple UI and then I can generate images for each scene. Then it exports the video. It's simple, a bit buggy, lacks some features, but it does the job.

It would have taken me weeks to get to where I am now, and I haven't written a single line of code by hand.

I need the generated videos, not the software. I might eventually turn it into a product which others can use, but I don't focus on that yet, I'm solving my problem. Which simplifies the software a lot.

After I'm finished with this one, I might generate another one, now that I know exactly what I want it to do and what pitfalls to avoid. But yeah, the age of industrial software is upon us. We'll have to adapt.


How about you try vibe coding a banking app, or a tax filing or payroll app?

Most commercial software nowadays is integrated into the real world in ways that can't be replicated by code alone. Software which isn't like this can be easily replaced, yes, but that kind of software already had free alternatives.


I wonder if the team at id considered this when they released Doom: In 30 years rats will be forced to play it in exchange for sugar water.


I don't think they considered it, but I'm positive they would have found it absolutely hilarious

