My take on the real reason behind the OpenClaw acquisition:
OpenClaw isn't a chatbot; it's a 24/7 autonomous system that connects to your email, calendar, messaging platforms, and web browser, chaining multi-step workflows together with persistent memory across sessions. Every one of those operations consumes API tokens; the architecture ensures that consumption is extraordinary.
Interesting idea, but I found that AI is pretty inconsistent in how it structures or breaks down commits, highly dependent on the model and/or the user's prompting. So there's a chance you might only get to replay single-move commits, or commits large enough that it's hard to know what's happening.
Watching how a lot of people are using and deploying AI and agentic coding made me think about The Tommyknockers, a book by Stephen King; there is one quote in particular that really fits.
"As I say, we've never been very good understanders. We're not a race of super-Einsteins. Thomas Edison in space would be closer, I think."
-- Bobbi Anderson, The Tommyknockers (Stephen King, 1987)
I don't disagree with you entirely here. I probably wasn't clear enough on what I was trying to convey.
Right now AI / agentic coding doesn't seem to be a train we're going to be able to stop; and at the end of the day it's a tool like any other. Most of what seems to be happening is people letting AI fully take the wheel: not enough specs, not enough testing, not enough direction.
I keep experimenting and tweaking how much direction to give AI in order to produce less fuckery and more productive code.
Sorry for coming off combative - I'm mostly fatigued from "criti-hype" pieces we've been deluged with the last week. For what it's worth I think you're right about the inevitability but I also think it's worth pushing a bit against the pre-emptive shaping of the Overton window. I appreciate the comment.
I don't know how to encourage the kind of review that AI code generation seems to require. Historically we've been able to rely on the fact that (bluntly) programming is "g-loaded": smart programmers probably wrote better code, with clearer comments, formatted better, and documented better. Now, results that look great are a prompt away in each category, which breaks some subconscious indicators reviewers pick up on.
I also think that there is probably a sweet spot for automation that does one or two simple things and fails noisily outside the confidence zone (aviation metaphor: an autopilot that holds heading and barometric altitude and beeps loudly and shakes the stick when it can't maintain those conditions), and a sweet spot for "perfect" automation (aviation metaphor: uh, a drone that autonomously flies from point A to point B using GPS, radar, LIDAR, etc...?). In between I'm afraid there be dragons.
@_dwt don't worry, you didn't. I appreciate good discussion and criticism. The publication is new and I'm still trying to calibrate my voice and style for it.
>I don't know how to encourage the kind of review that AI code generation seems to require. Historically we've been able to rely on the fact that (bluntly) programming is "g-loaded": smart programmers probably wrote better code, with clearer comments, formatted better, and documented better. Now, results that look great are a prompt away in each category, which breaks some subconscious indicators reviewers pick up on.
I don't think anyone knows for sure; we're all in the same boat trying to figure out how to best work with AI, and the pace of change is making it incredibly difficult to keep up or try things. I'm trying a bunch of stuff at the same time:
- https://structpr.dev/ - to try to rethink how we approach PR reading and organizing review (dog-fooding it right now, so it's mostly alpha)
- I have an article scheduled next week talking about StrongDM's software factory; there are some interesting ideas there, like test holdouts
- Some experiments in the Elixir stack for code generation and verification that go beyond "it looks great". AI can definitely create code that _looks_ great, but there is plenty of research showing that a lot of AI-generated code and tests can carry a high degree of false confidence.
Author here; I think it can be a useful metric depending on the circumstance and use. The reason I decided to write that article is that I'm starting to hear of more and more CTOs using it as the sole metric for their teams; I know of at least one instance where a CTO is pushing for agentic coding only and measuring each dev based on LoC output.
There is also the x.com crowd that is bragging about their OpenClaw agents pushing 10k lines of code every day.
You are stuck on "if it's not my brand of liberal, then they are MAGA and worse than Hitler." There is nuance in political opinions; people can be conservative and still condemn and disagree with this war/administration/president.
Western society is turning into this tribalistic bullshit. It's very visible in the US, but to my surprise it's now also in full force in Canada.