
The problem isn't the abstraction level; it's the loss of the incubation period. Moving from Assembly to C didn't remove the need to think through data structures. Prompt engineering, however, encourages skipping the reflection stage entirely. Constantly managing bots leaves zero bandwidth for the "are we building garbage?" question. AI scales typing speed, not the speed of architectural decision-making.

That's the hidden price of fast development.

Validation is always harder than generation. Writing code yourself means building context brick by brick. Reviewing AI means reconstructing someone else's - often broken - logic from zero. Juggling three projects in parallel fragments mental context like a 90s hard drive. After a few hours, more energy goes into task switching and hallucination hunting than actual engineering. Deep work has been replaced by high-speed bot micromanagement.

Character matters, but so does having people around you who are willing to call it early, before you've rationalized yourself into ignoring it.

But it's less about personal brilliance and more about how social power actually works when money, status, and weak accountability intersect.

But it might've changed one decision, one meeting, one normalization step.

I'd rephrase it as: nobody should be trusted with unchecked power, especially when it's exercised quietly and indirectly.

Sometimes (sometimes) it just means that someone sent an email, got ignored, and left a paper trail behind.

Just being named in the files doesn’t mean you are guilty. In this case, being named gave him an opportunity to demonstrate high moral character: “I turned down his money because he was scummy.”

Yup. There are a few people like that in the files. But a distressingly large number of named people had ongoing correspondence.

All of this speedrunning hits a wall at the context window. As long as the project fits into 200k tokens, you’re flying. The moment it outgrows that, productivity doesn’t drop by 20% - it drops to zero. You start spending hours explaining to the agent what you changed in another file that it has already forgotten. Large organizations win in the long run precisely because they rely on processes that don’t depend on the memory of a single brain - even an electronic one.

This reads as if written by someone who has never used these tools before. No one ever tries to "fit" the entire project into a single context window. Successfully using coding LLMs involves context management (some of which is now done by the models themselves) so that you can isolate the issues you're currently working on and get enough context to work effectively. Working on enormous codebases over the past two months, I have never had to remind the model what it changed in another file, because 1) it has access to git and can easily see what has changed, and 2) I work with the model to break down projects into pieces that can be worked on sequentially. And keep in mind, this is the worst this technology will ever be - it will only get larger context windows and better memory from here.
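
To make the "it has access to git" point concrete: below is a minimal sketch of handing the model the actual diff instead of re-explaining changes from memory. It's plain Python with git on PATH; `changed_context` and the commented-out `run_model` call are hypothetical names, not part of any particular tool.

    import subprocess

    def changed_context(base: str = "HEAD~1") -> str:
        # Ask git what changed, instead of describing it by hand
        files = subprocess.run(
            ["git", "diff", "--name-only", base],
            capture_output=True, text=True, check=True,
        ).stdout
        diff = subprocess.run(
            ["git", "diff", base],
            capture_output=True, text=True, check=True,
        ).stdout
        return "Changed files:\n" + files + "\nDiff:\n" + diff

    # e.g. prepend this to the prompt instead of re-narrating your edits:
    # run_model(changed_context() + "\n\nNow update the call sites accordingly.")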

What are the SOTA methods for context management, assuming the agent runs through its tool calls without a break? Do you flush tokens from GPU memory / adjust the KV cache when you need to compress context by summarizing or logging part of it?

Everyone I know who is using AI effectively has solved the context window problem in their process. You use design, planning, and task documents to bootstrap fresh contexts as the agents move through the task. Using these approaches you can have the agents address bigger and bigger problems. And you can get them to split the work into easily reviewable chunks, which is where the bottleneck is these days.

Plus, the highest-end models now don’t go so brain-dead at compaction. I suspect that passing context well through compaction will be part of the next wave of model improvements.
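
For anyone wondering what "bootstrap fresh contexts" looks like mechanically, here is a minimal sketch, assuming a generic `complete()` chat call and a `step()` function for one agent/tool-call cycle; both are hypothetical, as are the TASK_STATE.md file name and the 4-chars-per-token heuristic.

    TOKEN_BUDGET = 150_000  # assumed headroom below a 200k window

    def rough_tokens(text: str) -> int:
        # crude heuristic: roughly 4 characters per token
        return len(text) // 4

    def compact(history: list[dict], complete) -> list[dict]:
        # Summarize the transcript into a task document, persist it,
        # and return a fresh context seeded from that document
        transcript = "\n".join(m["content"] for m in history)
        summary = complete(
            "Summarize this coding task's state: decisions made, "
            "files touched, remaining TODOs:\n\n" + transcript
        )
        with open("TASK_STATE.md", "w") as f:  # hypothetical task doc
            f.write(summary)
        return [{"role": "user", "content": "Task state so far:\n" + summary}]

    def run_agent(task: str, complete, step):
        # `step` runs one model/tool cycle and returns (done, history)
        history = [{"role": "user", "content": task}]
        done = False
        while not done:
            if sum(rough_tokens(m["content"]) for m in history) > TOKEN_BUDGET:
                history = compact(history, complete)
            done, history = step(history)
        return history

Writing the summary to disk is the point: a brand-new session can be seeded from the file, which is the same trick the design/planning/task documents pull at larger scale.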


My point wasn't that it's impossible, but that it creates a massive layer of overhead. Previously, the project "state" lived in the developer's head; now, for the agent to be effective, you have to constantly offload that state into design docs and task files. The job shifts from "writing code" to "managing external memory" for the agent. Those who have mastered this (like you) are indeed flying, but those trying to just chat with the code without this discipline are hitting a wall.

This is the birth of Shadow AI, and it’s going to be bigger than Shadow IT ever was in the 2000s.

Back then, employees were secretly installing Excel macros and Dropbox just to get work done faster. Now they’re quietly running Claude Code in the terminal because the official Copilot can’t even format a CSV properly.

CISOs are terrified right now, and that’s understandable. Non-technical people with root access and agents that write code are a security nightmare. But trying to ban this outright will only push your most effective employees to places where they’re allowed to "fly".


"they’re quietly running Claude Code" ... with their tokens or even worse full on usernames and passwords that have write/execute privileges.
