throwaway48540's comments | Hacker News

It's not just non-technical people. Even programmers around me are worried. Some think AI will replace programmers soon, some think only partially, but almost everybody thinks that a lot of stuff will change soon.

I am personally not worried. Any outcome is great. Either I keep a well paid job and it becomes even more well paid, or AI is so good that there aren't any jobs at all anymore, which sounds good too.

Let's define soon - I understand it as "within a decade or so" in this context.


This is my belief as well. One of three things happens (in order of most likely to least likely):

1. AI replaces software engineers in the same way that "autopilot" in planes replaced pilots. Augmentation of boilerplate (particularly DevOps CI/CD and deployment tedium) sounds pretty appealing to me.

2. AI genuinely replaces software engineers. If that's true, there's no reason to believe it wouldn't soon disrupt other engineering domains as well. At that point we're basically closing in on AGI, which, doomsday predictions aside, I would like to hope leads to radical progression of technology and the evolution of mankind.

3. AI is completely banned/suppressed Dune style. This only strengthens my position in the market.


> AI genuinely replaces software engineers. If that's true, there's no reason to believe it wouldn't soon disrupt other engineering domains as well.

If that’s true, then…

How? Most engineering is physical and blueprint work, which cannot be converted into embeddings and completed fill-in-the-middle style. Or if you're assuming that something LLM/VLM/SD-like will appear for this domain, that's a pretty wild guess. I even doubt that transformers are more suitable for that than now-classic boring ML. If you hope that the language will somehow "emerge itself" into building, schematics, industrial, etc. behaviors, that's also quite a guess.

Make sure you’re not coping, because transformers generalize over (fuzzy) painting and text, the only data we have in abundance. These are common attributes of programmers, writers and graphics artists and barely anyone else, and the latter two are already screwed. We don’t have readily accessible logs for abstract thoughts, professional conversations at work or inner monologues. Maybe in ten years China will realize it and start to collect it at scale.


This seems like a big assumption to me personally.

To be clear, I happen to be of the opinion that AI/ML/LLMs/whatever cannot truly replace software engineers without the true advent of AGI.

But to play devil's advocate. I'm an embedded SWE and work with circuit designers/have dabbled in some PCB design myself. Taking circuit design as an example:

From an abstract logic standpoint, PCB design is not that different from software development. It essentially boils down to taking some input and feeding it through a cascade of discrete transformations until you have some desired output. The fact that PCB designs are captured through schematics is irrelevant, as a schematic is just a visual representation of a netlist. There are even DSLs that allow for the design of circuits through code alone.

The mechanisms that make LLMs work show some level of adaptability to domains outside of traditional language. It is totally conceivable today to finetune an LLM on netlists (or even on images of schematics with the right encoder model) and have it generate circuit designs. The training data exists, though it's not as plentiful as, say, code or English text. I'm not an ML expert, but I believe it's totally possible to have something where, for example, the text "555 timer-based LED-blinking circuit" maps in embedding space to a netlist that describes said circuit. There are in fact companies working on this exact thing today (Flux.ai).
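As a loose illustration of what a text-to-netlist fine-tuning pair might look like (the component values and the NE555 subcircuit pin order below are assumptions for the sketch, not a verified design or any company's actual format):

    # Hypothetical fine-tuning record mapping a text prompt to a SPICE-style netlist.
    # Component values and the NE555 pin order are illustrative assumptions only.
    example = {
        "prompt": "555 timer-based LED-blinking circuit",
        "completion": "\n".join([
            "* astable 555 driving an LED (illustrative only)",
            "XU1 GND THR OUT VCC CTRL THR DIS VCC NE555",  # trigger/threshold share net THR, reset tied high
            "R1 VCC DIS 10k",    # charge resistor
            "R2 DIS THR 100k",   # discharge/timing resistor
            "C1 THR GND 10u",    # timing capacitor
            "R3 OUT LEDA 330",   # LED current-limit resistor
            "D1 LEDA GND DLED",
            "V1 VCC GND DC 9",
        ]),
    }
    print(example["completion"])

A model fine-tuned on enough pairs like this is, in principle, doing the same next-token prediction it does on code; the open question is how far the comparatively small corpus of netlists can take it.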

I don't see why this wouldn't work for other engineering disciplines as well. There's lots of public OpenSCAD code for various 3D models. It's totally conceivable to build a model trained on scad code, or even raw 3D object file geometry, and text embeddings to generate mechanical designs. Lots of research is going into generative meshes, which could be seen as an early form of this.

My point is that the hard part of engineering, whatever the discipline, is taking a fuzzy set of desires from a customer and a fuzzy set of constraints and using the hard math and science to create and iteratively refine a product/system/whatever that makes all stakeholders happy. LLMs can't really emulate this process today.


This is a great point, one I'd not considered.

People realise that AI capability is limited by the quality of the data used for training, so they work out ways to collect better-quality data and keep pushing on the LLM architecture as a path to AGI.


Let's keep physical engineering aside for a moment: if AI arrives at a level to replace software engineers, then we can safely assume it can replace thousands of other knowledge jobs that are way less syntactically complex, from will writers to attorneys, doctors in general medicine, 100% of Excel/Word/PowerPoint white collars, business consultants, middle managers, senior managers, venture capitalists, investment bankers, etc., basically destroying modern society. Not gonna happen.


I'm pretty sure that LLMs will never replace programmers. It's way too hard to stitch together code snippets. Also, when you think about all the misunderstandings that arise from pseudocode, free-form text input to an LLM has to be even more ambiguous. I don't care whether someone calls themselves a "prompt engineer".

But AI will replace programmers just like it superseded chess players.

We are putting the finishing touches to an AI that writes code. And not in an LLM kind of way. It allows you to specify exactly what you want without needing to specify the logic. Check out Ac28R.com

We are excited about it, but it makes my heart ache when I think about the impact.

The world is speeding up. We are the cause but we will struggle to avoid getting swept up in it.


> I am personally not worried. Any outcome is great. Either I keep a well paid job and it becomes even more well paid, or AI is so good there aren't any jobs at all anymore, which sounds good too.

You certainly have a rose-tinted magical way of viewing the world.

You haven't thought through the consequences of this at all. I'll break this down a bit further for you hopefully in a way you can understand.

You work to provide value in exchange for a store of value; currency. You do this to spend that currency on food and other basic necessities which are dependencies for your survival, first, and then discretionary spending second.

Producers hire you to distribute labor and produce so they can make a profit. They stop producing when that's no longer possible. It's an agreement between both parties which enables the distribution of labor and the exchange of goods flowing through an economy. Without either part, no exchange occurs; there may be a brief stalling period, but once it's stalled everyone's goose is cooked. It is a very fine balance, a supercritical state where deviations can cause exit conditions. Sound similar to an n-body problem? That's because it is, along with mathematical chaos (small changes in inputs create dramatic, unpredictable changes in outputs).

When there are no jobs, you can't earn currency, and you suddenly can't feed yourself. You have nothing to do, no way to differentiate yourself either.

Worse, no one will listen to you because you no longer provide value; you are now a nameless, worthless slave. The only thing you had was trading your time and education for money, and this is now gone. Your children will be worse off because the cost of education will no longer provide a benefit.

This will worsen as population grows.

Eventually, because resources are inherently scarce, someone somewhere with power will decide it's just better to reduce the number of mouths to free up resources; and they'll decide for you that you and your entire family meet that criterion, along with many others, but not the majority who voted those people into power. It will be a genocide of the rational and intelligent, powered by big data and your tax dollars.

It won't be the old people, because they hold the most power, having started accumulating it first, but they will eventually be on the chopping block as well.

Along with this, production systems will fail, either as a result of vandalism or growing corruption, all causing shortages, which cause unrest and violence. With no market or medium of exchange (inflationary economies fail when debts exceed production), order wanes.

The rational and intelligent people will fight back weakly, trying to organize to survive by destroying the system of bondage and slavery that is imposed on them without their consent. Eventually everything fails as systemic issues cascade.

Because intelligent people have the capability of great harm, they will be eliminated first favoring others instead.

Are you worried yet? You should be; every single thing here logically follows from the previous one with just a little educated background in economics and history.

The world you say sounds so good is in fact a hellscape of spiraling madness and intolerable suffering, with no one capable of the first step (recognition) that could stop the death march forward. Anyone who could will have been killed long before recognition could happen, in ways that don't attract attention and only show up in the actuarial tables under mortality.

People will stop having children when they can no longer afford to.

Old will crowd out the young as medical technology improves, and then there will be a great dying and collapse.

Those that remain will not have been prepared, having developed in a disadvantaged environment, and so these collapses may end in the annihilation of the civilization as a whole.

By the time you recognize the problem (if at all), it's too late to do anything about it, and your head's on the chopping block along with your family and friends, because you had the intelligence to do the job in the first place, but not enough to foresee the consequences of your choices.

You and others like you are so concerned with whether you can do something that you never stopped to think about whether you should.

Older generations called this type of blindness and destruction evil, and viewed it as a curable malady at the turn of the century; but not a cure anyone would willingly choose.

Currently, we are on track for economic collapse by 2029, though the more they print the sooner it occurs.

Mises has a full breakdown on how centralized systems fail, written in the 1930s. It's why socialism is considered a failed economic system by rational people.


To use your own phrase: "You haven't thought through the consequences of this at all. I'll break this down a bit further for you hopefully in a way you can understand."

Don't even have to go further than "You work to provide value in exchange for a store of value; currency."

Work assumes that there is work to be done. If everything is abundant, there is no work to be done because no currency is needed. Who makes food? Machines. Who makes the physical infrastructure? Machines. Who generates the power? Machines.

I'm not saying that this is going to happen (I don't think anything you're saying is going to happen either), and I think the GP was being facetious, but if you are going to take that statement seriously, then I think you do not understand what abundance truly means.


You are blind, and supporting something that is both incredibly evil and destructive.

If you can, take a look at what abundance does to people in the real world, objectively, do some actual research. Look at mortality for lottery winners, and business tycoon heirs; read their horror stories.

When there is no work, there is no purpose, no growth, no value, and no life; nothing new happens.

It is death, either quick and self-inflicted, or slow until madness from suffering takes over, where you can't notice it along the way like a person suffering from a progression of Alzheimer's. That is the abundance you seek.


I said "I'm not saying that this is going to happen" so I wouldn't say I am supporting it. Was just making a point that you looked at it too black and white and didn't think through everything. More of a response to what I felt was unnessecarry pompusness

Your argument feels more like an anarcho-primitivism/Unabomber idea of abundance or technological progress reducing humanity's satisfaction, so we should stay where we are, i.e., always have tasks (work) rather than pursue fulfilment.


That is probably not what he meant.

1. The weaknesses of the human condition might be the cause of our destruction. That's why I am not surprised when people give these anti-doomer takes. History keeps repeating itself again and again, and people haven't really learned from their mistakes.

2. The solution to this problem would be to fix or greatly diminish human greed and selfishness or anything else that I might be missing. We would then have greater chances of heading towards a utopia.

We will end up with either a utopia or a dystopia.


I don't disagree. I don't really have too much of an opinion on which one we end up with. I was more pointing out that, at the limit, it's not certain that we will end up in a dystopia through abundance, more that it is extremely uncertain which one it will be, and to extrapolate as the GP did is to not fully take abundance to its limit.


I would assume we are moving towards a dystopia. GP is correct on the dangers, but his perspective is incomplete and takes you in the wrong direction, as you may have said.

It would require a lot of people to understand the dangers and take action to solve the necessary problems before it's too late.

Governance and the other systems we have must fundamentally change. (All over the world)

We must acknowledge our flaws and frailty, which cause us to be selfish, greedy, and to seek dominance and control over the rest. This is animalistic behaviour.

People must recognize that we are running out of time.

We are seeing a race whose direction is difficult to predict but the magnitude will be out of bounds.

If you are useless, you won't matter. Have you thought about how pests are treated?

This is relevant today and as time goes by, it's not difficult to predict.

I think we are heading towards a dystopian world.

I hope peace and goodwill prevails.


You sound like those annoying religious nuts screaming that the end is near and we're all going to hell.


I understand what you have said, but I have an argument for why it can happen before 2029. How can I contact you securely? I ask this since you know the dangers; you might be extremely worried.


I'm aware of the severity but I don't have a secure way of being contacted at the moment. I don't know when I'll have it up and running.

Most of my devices have been repeatedly compromised on a fairly consistent and continuing basis (bricked at the firmware level; routers, modems, phones). HFC head-end services appear compromised on the ISP side; reports have been made, tickets opened, then closed 30 days later with no correction and no record on future calls, so it's all BS.

I have been bogged down in the process of recovering from backups. It's slagged enough hardware that I now keep SPI firmware and parallel memory chip backup images for my important systems prior to connecting them to a network, and it's a real chore accessing the hardware to revert to known-good working images for quite a lot of these component interfaces; often they are buried destructively under pads/thermal heat sinks.

I don't have an ETA; it's minute, detailed work, and working directly with hardware really isn't my forte.

> you might be extremely worried.

That's been fairly consistent since this came to my attention a few years ago, not for myself but for family and friends that might survive this.

In fairness, I think it's likely I'll be dead from vaccine injury by the time this rolls around. Nerve-function and muscle loss are progressive and worsening, it's stumped every expert, and symptoms worsened after reintroduction of a booster and a later infection (from the grocery store, C19), so the odds aren't good.

I've done what I can, and dwelling on the things you can't control doesn't do anything but create misery.

I'm aware there are a number of things that could accelerate the timetable, some outlier scenarios as early as late 2025, but those aren't likely to happen.

2029 is the point at which the economic system fails into socialism/communism with no action needed from anyone; just business as usual. From there, it's a steady death march to full collapse, and most don't have a clue.


Do you need to run all experts at once?


No, but different ones are invoked for each token, so typically they'd all be preloaded into VRAM. Also, because of batching, different users would use different experts concurrently.

There's no detailed information about how GPT-4 is hosted, but we can guess from competing models and the leaked info.
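For intuition, here's a minimal top-k expert-routing sketch in PyTorch (a toy mixture-of-experts layer with assumed sizes, not anything known about GPT-4's actual hosting). Each token picks its own experts, so across a batch essentially every expert gets touched, which is why they are all kept resident:

    import torch
    import torch.nn as nn

    class TinyMoE(nn.Module):
        def __init__(self, d_model=64, n_experts=8, top_k=2):
            super().__init__()
            self.router = nn.Linear(d_model, n_experts)   # scores each expert per token
            self.experts = nn.ModuleList(
                [nn.Linear(d_model, d_model) for _ in range(n_experts)]
            )                                             # all experts live in memory
            self.top_k = top_k

        def forward(self, x):                             # x: (tokens, d_model)
            weights, idx = self.router(x).softmax(-1).topk(self.top_k, dim=-1)
            out = torch.zeros_like(x)
            for e, expert in enumerate(self.experts):
                tok, slot = (idx == e).nonzero(as_tuple=True)   # tokens routed to expert e
                if tok.numel():
                    out[tok] += weights[tok, slot, None] * expert(x[tok])
            return out

    moe = TinyMoE()
    tokens = torch.randn(16, 64)   # e.g. a small batch mixing several users' tokens
    print(moe(tokens).shape)       # torch.Size([16, 64])

Run it a few times and you'll see the routing indices spread across most experts even for a small batch, which is the point the comment makes about batching.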


Yes, I do.


But that's most likely 1% of the market. They can have their own phones. The rest of us just want water resistance for accidental contact with water and easily replaceable batteries.


I think you’re in a techie bubble. I would wager more people care about better water resistance (for example, because they want to use their phone in the shower) than about easily replaceable batteries.

The overwhelming majority of people will never even contemplate trying to replace their own battery no matter how easy it is (unless it’s literally 90s/2000s-style snap on).


We have our own phone: it's the iPhone. I paid the money for it because I wanted the full package. You can buy your own kind of phone that's not the full package; many different vendors are making those.


The iPhone is not waterproof; it's IP68-rated, meaning it's water-resistant. Swimming with your phone is absolutely not recommended, and I don't know a single person who does this (unless we count people using special waterproof cases for filming).

So no, the average smartphone buyer does not swim with their phone. Manufacturers had other incentives to make changing batteries harder, and there was no pressure from customers to increase the IP rating. In fact, all you hear is people ranting about how much it sucks that batteries are so hard to change these days.


Is that all you hear? I don't hear that at all. People around me didn't change the battery when it was easy either. I always felt like an alien when I suggested it.


I remember when the iPhone first came out, non-tech friends had recurring nightmares about forgetting to take their phone out of their pocket before swimming.

Phones being waterproof is a huge QOL improvement.


Who is "us"? Likely a similarly sized segment that wants to tinker and are Louis Rossmann fans.


Use USB-C to connect a hub with display and mouse, copy data over internet.


Could USB-C also connect an Ethernet adapter and/or a flash drive?


Lol, I really want to see a 150-year-old EURO6 200 kW diesel engine coupled with a 150-year-old 9-speed dual-clutch automatic, allowing me to drive on 150-year-old roads at 4 liters per 100 km at German highway speeds.

BTW, electric cars are older technology than ICE cars.


The main innovation in modern electric cars isn't just about swapping fuel for electricity, but eliminating all the junk in between that wasted power only to overcome ICE's inherent flaws. Simpler electronics aside, the clutch, gearbox, and driveshaft are all gone, and they contributed to a huge waste of power, which burned a lot of fuel for nothing. This is so bad that, where possible and economically feasible (trains and ships), mixed diesel-electric pairs are used, with the diesel engine working as a generator so that it always runs at maximum torque, not being forced to start from zero rpm and not having its speed tied to the train's or ship's speed. The electric motor is indeed older than the internal combustion engine; however, what makes it a better option for cars as well required modern development, especially in batteries and rare-earth magnets, which would have been unthinkable only a few decades ago.
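A rough back-of-envelope sketch of the "losses in between" argument; every efficiency figure below is an assumed ballpark for illustration, not a measurement of any particular vehicle:

    # Chain a few assumed stage efficiencies to compare drivetrains.
    def chain(*stages):
        eff = 1.0
        for s in stages:
            eff *= s
        return eff

    # ICE car: engine thermal efficiency ~0.30, gearbox/clutch/driveshaft ~0.85 combined (assumed)
    ice_tank_to_wheel = chain(0.30, 0.85)

    # EV: charger/battery round trip ~0.90, inverter + motor ~0.90, single reduction gear ~0.97 (assumed)
    ev_plug_to_wheel = chain(0.90, 0.90, 0.97)

    print(f"ICE tank-to-wheel ~ {ice_tank_to_wheel:.0%}")   # roughly a quarter of the fuel energy reaches the wheels
    print(f"EV plug-to-wheel  ~ {ev_plug_to_wheel:.0%}")    # most of the plug energy reaches the wheels

The exact numbers are debatable, but the structure of the argument (multiplying stage efficiencies) is what the comment is getting at.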


> clutch, gearbox, then driveshaft are all gone, and they contributed to a huge waste of power which burned a lot of fuel for nothing.

Practically, they are replaced by even more "junk" in the form of literal tons of batteries weighing more than the parts they replaced, with the electric motor having to expend more energy to carry this extra weight around.

> what makes it a better option also for cars required modern development, especially in batteries and rare earth magnets which would have been unthinkable only a few decades ago.

Is it, though? I'd like hard facts on that. From what I can find, battery chemistry hasn't seen any breakthrough; only the cost has steadily decreased, in large part thanks to economies of scale and to overlooked externalities, mostly in the extraction and refining supply chain. Similarly, high-efficiency electric motors have been a solved problem for a century. I think the whole charger infrastructure, and finally getting past the chicken-and-egg conundrum, has more to do with EVs becoming a thing than some new tech having enabled them.


That's not entirely true; most electric cars sold today have a driveshaft, clutch, and gearbox.


I'd be very nervous if somebody told me they will host my business data or otherwise provide a service to me forever for a one-time payment. Maybe that's what makes your customers go away.


Consumes? As in, the water disappears?

Also, it's pretty incredible. A very comparable job can be done with a local model on my MacBook Air in a few minutes. How many bottles of water is that? That's such a gigantic, immeasurable difference in efficiency that I can't help but think the article is totally wrong.

BTW, I used ChatGPT today. If I hadn't, I would have had to spend a week doing menial work. Instead, the job was done in a few hours. How many bottles of water did it save from being pissed away by me? None, I still piss the same amount of water, but now I have a better ratio of job done vs. water pissed.
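For what it's worth, this is roughly how "bottles per query" figures get computed; every number below is an assumption plugged in for illustration, and the result is entirely driven by those assumptions:

    # Convert compute energy into "bottles of water" via an assumed cooling-water factor.
    # None of these values are measured figures for ChatGPT or any specific model.
    LITERS_PER_KWH = 1.8        # assumed on-site cooling water per kWh of IT energy
    BOTTLE_L = 0.5              # half-liter bottles

    def bottles(energy_wh):
        return energy_wh / 1000 * LITERS_PER_KWH / BOTTLE_L

    laptop_wh = 30 * (5 / 60)   # assumed: ~30 W laptop running a local model for ~5 minutes
    query_wh = 3.0              # assumed: a few Wh per hosted chatbot query, amortized

    print(f"local run: {bottles(laptop_wh):.3f} bottles")
    print(f"one query: {bottles(query_wh):.3f} bottles")

Different assumed energy-per-query and water-per-kWh factors swing the answer by orders of magnitude, which is part of why these comparisons read so strangely.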

What a weird way to measure something...


What is shady about them? Are they criminals?


Their agenda is to frame the self-serving interests of a very narrow group as the economic interests of everyone. That's shady.


Depending on the specifics, that sounds congruent with some of Adam Smith's economic theories so it's not surprising to see such arguments used, especially if some of the authors are from the Adam Smith Institute.

His whole invisible hand metaphor is about society advancing as a whole from agents acting in their self interest.

It's more likely a sincerely held belief than an attempt at misdirection.


Adam Smith is probably spinning in his grave.

> His whole invisible hand metaphor is about society advancing as a whole from agents acting in their self interest.

That theory is predicated on the market being composed of buyers and sellers of broadly similar financial power. This is no longer the case in most markets, certainly not in the UK as far as ordinary people or most small companies are concerned.


Yes. Markets pursue the interest of wealth-weighted humans, not evenly-weighted humans. Disguising "wealth weighted value" under the concept of "value" (which, colloquially, has a more even weight) is the propaganda coup of the century.


I doubt Adam Smith would agree with anything the Adam Smith Institute says.


As rjsw says, they refuse to disclose who funds them. See https://www.monbiot.com/2022/10/07/thinktanking-the-country/ for some background.


If you consider tanking the UK economy to the tune of hundreds of millions if not billions of pounds a crime, then yes, absolutely.


How could they possibly do that?


They are referring to the Liz Truss budget.


What happened wasn't primarily due to her budget. It was due to a meltdown in the pension sector triggered by over-leverage, arguably caused by the BoE not doing its regulatory duties correctly.

Don't get me wrong, a budget that cuts taxes without cutting spending is no good. But the idea that what happened was a direct consequence of that doesn't make much sense as it had been telegraphed a long way in advance, giving the markets plenty of time to adjust. The central bank changed monetary policy a day before the mini-budget, and changes in that are kept secret until the moment of announcement. Additionally, UK spending has since blown through what the mini-budget would have created without any sudden market turmoil.

https://www.ft.com/content/4701b6ac-851e-43fd-a2b2-b38dd07c7...

> Regulators failed to anticipate the dangers that borrowing by pension schemes posed to the stability of the UK’s financial system, according to a parliamentary report into the turmoil that hit the gilt markets following Liz Truss’s disastrous “mini” Budget in September last year.

> Pension schemes suffered multibillion-pound losses after they were forced to sell assets to ensure that complex derivative-linked strategies — known as liability driven investments (LDI) — did not implode when gilt yields jumped as investors rejected the then prime minister’s economic strategy.
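As a purely illustrative sketch of the mechanism the report describes (the positions, leverage, and yield move below are assumed numbers, not actual fund data), using the standard duration approximation dP/P ≈ -duration × Δyield:

    # Why a jump in gilt yields squeezes a leveraged LDI position (illustrative only).
    duration = 20          # long-dated gilts, years (assumed)
    d_yield = 0.013        # ~130 bps yield jump (assumed)
    assets = 300.0         # GBP millions of gilts held (assumed)
    own_capital = 100.0    # the scheme's own money; the rest is financed via repo/derivatives

    price_drop = duration * d_yield           # ~26% fall in gilt prices
    loss = assets * price_drop                # ~78m mark-to-market loss
    remaining_capital = own_capital - loss    # ~22m left backing 300m of exposure

    print(f"price drop ~ {price_drop:.0%}, loss ~ {loss:.0f}m, capital left ~ {remaining_capital:.0f}m")
    # Collateral calls on the financed portion force gilt sales, which push yields
    # higher still -- the feedback loop the report describes.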

Also, Truss is basically correct that the UK needs more growth. Disagreeing on her tactics is reasonable, disagreeing on her goals isn't. She was unfortunately attempting to create growth from a position of weakness: in a party that didn't want to do anything hard like cutting spending, and with a fragile/over-leveraged financial sector.


Pull the other one. Only some of the tax cuts had been telegraphed in advance. Abolishing the top rate of income tax and cutting the basic rate early hadn’t been. They ignored all the warnings they were given and sidelined the OBR. The Truss/Kwarteng budget directly crashed the economy and trying to pretend otherwise is some serious revisionism.

https://www.bbc.co.uk/news/63229204


You're arguing with the conclusions of the official investigation, not me.


The committee report only looked at the regulator. To be sure, the regulator and the pensions industry should have done more. But the idea that Truss and Kwarteng are blameless innocents in the whole fiasco is ridiculous.


So it is the elected representatives who are responsible for choosing them and implementing whatever they suggested?


Being elected doesn't remove your obligations to follow sane, moral policies.


That's my point. Crazy economists are not responsible for the damage; it's the elected people who listened to them and followed through. To create an intentionally extreme parallel, imagine they had chosen a group of, idk, satanists. Would you blame the satanists for being satanists, or the people in charge of a country who contacted them for advice?


Surely you blame both? Being consistently evil and living up (or down?) to one's reputation for evil doesn't make one less evil.


They refuse to say who funds them.


Led By Donkeys' exposure of the Truss mini-budget and the Adam Smith Institute involvement is enlightening:

https://www.youtube.com/watch?v=IRDLIOME47c


Hats. The fedoras they wear shade their faces from view, m'lady.


What exactly is the problem? That they're using any energy at all? Well, I don't like that people have lights on after 7 PM...


Yeah. The only way the sentiment "I don't like that ___ is using energy, even though the energy is completely clean" makes sense is if they've already made up their minds that ___ is bad, and thus any energy usage is bad. The energy usage argument is a red herring; it's actually just a value judgment on AI itself.

> If you're an AI hater, it's frustrating to see what you consider a useless technology growing to take up more and more of our energy mix, eating into climate gains being made from the immense growth of renewable energy. If you're an AI maximalist, on the other hand, the significant energy use projected for AI is a small price to pay for a technology that you think will revolutionize our lives much more than technologies like air conditioning, refrigeration, or the automobile ever did.

> The answer probably lies somewhere in the middle. In the long run, AI's energy use will likely level off at a significant but not grid-melting level that's roughly commensurate with the collective economic value we as a society get from it. Whether the trade-offs inherent in that shift are "worth it" involves a lot of value judgements that go well beyond how much electricity a bunch of servers are using.

Source: https://arstechnica.com/ai/2024/06/is-generative-ai-really-g...


You're claiming it would be terrible for the rest of us, without supporting that assumption in any way. It's not a fact; pseudo-sci-fi action movies don't count as facts.


> You're claiming it would be terrible for the rest of us, without supporting that assumption in any way.

I literally explained it. I straightforwardly applied the technology to our existing social/economic structure.

And changing the social/economic structure is probably harder than developing the technology and requires precisely the kind of power that a successful AGI technology would remove (e.g. workers can't strike to keep their jobs when the boss is planning to lay them all off).

> It's not a fact, pseudo scifi action movies don't count as facts.

Honestly, the "AGI will be so great/everything will be fine" assumption relies less on facts and more on sci-fi fantasy than anything I said.


You did not explain anything; you just said that something would happen, without any reasoning about why, or why no countervailing effects would happen.

Yes, both sides of the debate use scifi as facts, I agree. I don't think the other side does it more than you do, though.


> You did not explain anything, you just said that something would happen, without any reasoning why or why no counter effects would happen.

If you think that, the problem's on your end of the connection. The most charitable read of your comment is you're expecting a level of exposition that is not actually required, especially given the common context of what exists now.

Personally, I think you're actually doing more of what you're accusing me of, for instance in your sibling comment:

> The economic system is not set in stone. If everyone is irrelevant to it, the economic system becomes irrelevant to everyone, and a parallel system gradually replaces it.

You're basically hand-waving a future and saying "everything will be fine." And you're also misunderstanding some significant things in a kind of black and white way. E.g. I never said "everyone [would be] irrelevant [to the economic system]," I said labor would be. That's a lot of people, but not everyone.


He's talking about the economic consequences.

AGI in an internet connected world is capitalism end-game. Once you have AGI, labour (both physical and intellectual) becomes redundant, humans have a "value to the system" approaching zero.

Our economic system is built on a series of assumptions that fundamentally cannot survive AGI, and nobody is really even trying to grapple with that fact.


The economic system is not set in stone. If everyone is irrelevant to it, the economic system becomes irrelevant to everyone, and a parallel system gradually replaces it.


IMO we've had the EXACT same economic system since we were living in caves; it's called "supply and demand".

What do you do when "demand" for human labour drops to zero and "supply" stays at >8 billion?

No amount of tinkering at the edges is going to fix that. We're in a much deeper, more fundamental problem than you seem to think we are.


Sadly "earning a living" goes beyond economy, it is also deeply involved in societal and cultural values.


It really isn't. Just 100 years ago, "earning a living" meant, for the majority of humans, growing their own food in the fields behind the house.

