
Reading stuff like this, there's one thing I can't stop wondering:

If AI can be trusted to do all the trivial tasks, and if non-trivial skill is built on a scaffold of trivial practice, where are we going to keep finding people qualified enough to actually do the non-trivial stuff?



This is what worries me the most when it comes to job losses and downward pressure on wages.

If we hit the point where a senior engineer reviewing the output of an AI can effectively do the work of a team of one senior engineer and 5 junior engineers, you’re right that it isn’t wise to simply replace all those junior engineers with the AI. Unless you’re confident that the AI will be able to replace the senior engineer soon, you _need_ junior engineers who can build up the experience needed to step into that role once the senior engineer moves on.

But you _can_ get rid of 3 or 4 of those junior engineers. The remaining junior engineer(s) will still need mentorship from the senior engineer and will still do “trivial” work that needs oversight, but they will be able to pump that out at a high enough rate to replace a few of their pre-AI peers.

Basically, I’d imagine the org chart at most companies will look pretty similar to how it does now, except with fewer people at each level.


You won't need people for that once AI systems start improving themselves.


So who will then certify them?


Other AI systems, presumably.


It's AI all the way down...


Certify them? What for?


Important work. It's a typical quality-control architecture in the modern world to have an independent party evaluate whether someone can perform an important task. Typical examples would be SE vs. PE licensing in structural engineering, or the various bar exams.



