Yeah but the purpose of a company is not to employ people, it is to make money. The employment is a means to an end. If the money continues to flow without the people, that is better.
I can imagine creating a system designed to allocate profits to a broader set of stakeholders, but that's not the system we have.
> Canning people when you do well is just a way to milk the cow that others raised for you.
I don't think people should reasonably expect to be employed if a company doesn't need them for its future plans.
> Plus it shows a blatant lack of imagination and foresight.
It seems to me Block has tried a whole bunch of different things. Imagination isn't their problem. I'm not deeply familiar with their business, but my hunch is that it's more that they're giving up on some of their pie in the sky ideas and consolidating on what's working.
Because the type system gives you correctness properties, and gives fast feedback to the coding agent. Much faster to type check the code than let say write and run unit tests.
One possible disadvantage of static types is that it can make the code more verbose, but agents really don't care, quite the opposite.
Funnily enough, when programming with agents in statically typed languages I always find myself in need of reminding the agent to check for type errors from the LSP. Seems like it's something they're not so fond of.
Self-conscious efforts to formalize and concentrate information in systems controlled by firm management, known as "scientific management" by its proponents and "Taylorism" by many of its detractors, are a century old[1]. It has proven to be a constantly receding horizon.
This is too broad of a statement to possibly be true. I agree with aspects of the piece. But it's also true that not every aspect of the work offloaded to AI is some font of potential creativity.
To take coding, to the extent that hand coding leads to creative thoughts, it is possible that some of those thoughts will be lost if I delegate this to agents. But it's also very possibly that I now have the opportunity to think creatively on other aspects of my work.
We have to make strategic decisions on where we want our attention to linger, because those are the places where we likely experience inspiration. I do think this article is valuable in that we have to be conscious of this first before we can take agency.
It's weird being on here and seeing so much naysaying, because I see a radical change already happening in software development. The future is here, it's just not equally distributed.
In the past 6 months, I've gone from Copilot to Cursor to Conductor. It's really the shift to Conductor that convinced me that I crossed into a new reality of software work. It is now possible to code at a scale dramatically higher than before.
This has not yet translated into shipping at far higher magnitude. There are still big friction points and bottlenecks. Some will need to be resolved with technology, others will need organizational solutions.
But this is crystal clear to me: there is a clear path to companies getting software value to the end customer much more rapidly.
I would compare the ongoing revolution to the advent of the Web for software delivery. When features didn't have to be scheduled for release in physical shipments, it unlocked radically different approaches to product development, most clearly illustrated in The Agile Manifesto. You could also do real-time experiments to optimize product outcomes.
I'm not here to say that this is all going to be OK. It won't be for a lot of people. Some companies are going to make tremendous mistakes and generate tremendous waste. Many of the concerns around GenAI are deadly,serious.
But I also have zero doubt that the companies that most effectively embrace the new possibilities are going to run circles around their competition.
It's a weird feeling when people argue against me in this, because I've seen too much. It's like arguing with flat-earthers. I've never personally circumnavigated Antarctica, but me being wrong would invalidate so many facts my frame of reality depends on.
To me, the question isn't about the capabilities of the technology. It's whether we actually want the future it unlocks. That's the discussion I wish we were having. Even if it's hard for me to see what choice there is. Capitalism and geopolitical competition are incredible forces to reckon with, and AI is being driven hard by both.
Fair point. What it really does for me is give me a better UX for having a bunch of parallel workstreams. I could achieve a similar effect thing with scripting, and maybe some clever ways of getting something like the sidebar for seeing the status of everything on a single pane. But Conductor packaged it up in a way that I found much improved over multiple Cursor or VSCode windows.
They are not similar. A LLM is a complex statistical machine. A brain is a highly complex neural network. A brain, is more similar the perceptron of some AMD CPUs that to a LLM.
I am not having the exact same experience as the author--Opus 4.6 and Codex 5.3 seem more incremental to me than what he is describing--but if we're on an exponential curve, the difference is a rounding error.
4 months ago, I tried to build an application mostly vibe-coded. I got impressively far for what I thought was possible, but it bogged down. This past weekend, my friend had OpenClaw build an application of similar complexity in a weekend. The difference is vast.
At work, I wouldn't say I'm one-shotting tasks, but the first shot is doing what used to be a week's work in about an hour, and then the next few hours are polish. Most of the delay in the polish phase is due to the speed of the tooling (e.g. feature branch environment spin up and CI) and the human review at the end of the process.
The side effects people report of lower quality code hitting review are real, but I think that is a matter of training, process and work harness. I see no reason that won't significantly improve.
As I said in another thread a couple days ago, AI is the first technology where everyone is literally having a different experience. Even within my company, there are divergent experiences. But I think we're in world where very soon, companies will be demanding their engineering departments converge to the lived experience of the people who are seeing something like the author. And if they can find people who can actuate that reality, the folks who can't are going to see their options contract precipitously.
> But I think we're in world where very soon, companies will be demanding their engineering departments converge to the lived experience of the people who are seeing something like the author.
I think this part is very real.
If you’re in this thread saying “I don’t get it” you are in danger much faster than your coworker who is using it every day and succeeding at getting around AI’s quirks to be productive.
My wife manages 70 software developers. Her boss, the CIO, who has no practical programming experiece, is demanding her and her peers cut 50% of their staff in the next year.
But here's the thing I don't get. I can see the argument for AI endangering our jobs. But why does this also mean that rapid adoption of AI on a personal level of is so important? In an AI world, there could be a new bell curve of talent and a fight to stay ahead. So ... adopt early so you're the one not left behind. (and implicitly, most other people are left behind?)
If AI tightens down the job market I just don't see why there would need to be this frantic urgency to adopt it. Getting a small head start might not mean very much once the dust has settled. Employers will still be cutting, and there will still be new blood who will adapt to new technology faster than you can.
> The real danger is if management sees this as acceptable. If so best of luck to everyone.
Already happening. It's just an extension of the "move fast and break stuff" mantra, only faster. I think the jury is still out on if more or less things will break, but it's starting to look like not enough to pump the brakes.
> Be careful here. I have more coworkers contributing slop and causing production issues than 10x’ing themselves.
Sure, many such cases. We'll all have work for a while, if only so that management has someone to yell at when things break in prod. And break they will -- the technology is not perfected and many are now moving faster than they can actually vet the results. There is obvious risk here.
But the curve we're on is also obvious now. I'm seeing massive improvements in reliability with every model drop. And the model drops are happening faster now. There is less of an excuse than ever for not using the tools to improve your productivity.
I think the near future is going to be something like a high-speed drag race. Going slow isn't an option. Everyone will have to go fast. Many will crash. Some won't and they will win.
> I think the near future is going to be something like a high-speed drag race. Going slow isn't an option. Everyone will have to go fast. Many will crash. Some won't and they will win.
I think this is right. This is what we as engineers have to wrap our minds around. This is the game we're in now, like it or not.
> Many will crash.
Aside from alignment, and some of these bigger picture concerns, prompt injection looms large. It's an astoundingly large, possibly unsolvable vector for all sorts of mayhem. But many people are making the judgment that there's too much to be gained before the shocks hit them. So far, they're right.
If a company lets faulty code get to production, that's an issue no matter how it is produced. Agentic coding can produce code at much higher volumes, but I think we're still in the early days of figuring out how to scale quality and the other nonfunctional requirements. (I do believe that we're literally talking about days, though, when it comes to some facets of some of these problems.)
But there's nothing inherent about agentic coding to lead to slop outcomes. If you're steering it as a human, you can tweak the output, by hand or agentically, until it matches your expectations. It's not currently a silver bullet.
That said, my experience is that the compressing of the research, initial draft process, and revision--all which used to be the bulk of my job--is radical.
Yes, for me --moved past AI coding accelerating you to 80-90% then living in the valley of infinite tweaks. This past month with with right thinking working with say Opus 4.6 has moved past that blocker.
> But I think we're in world where very soon, companies will be demanding their engineering departments converge to the lived experience of the people who are seeing something like the author.
We already live in that world. It's called "Hey Siri", "Hey Google", and "Alexa". It seems that no amount of executive tantrum has caused any of these tools to give a convergent experience.
reply