It has also made me nervous for years that there's no schema against which one can validate HTML. "You want to validate? Paste your URL into the online validation tool."
Just remove "parent" and "account" from the mix entirely. Tie the screen to the human and most of these challenges go away. This is what these laws are trying to achieve, so we may as well institute it that way.
Right, we have studies, but they're 30-page documents that only academics read, so they get poorly summarized by articles that try to make them sound more noteworthy than they are -- usually by saying "X linked to Y" without establishing causality, or even whether that link is a significant risk (is it a 0.1% increase in risk?).
When it's a drug more than 10% of the US population uses, we can immediately say the risk increase can't really be that big or we'd have noticed it by now.
Edit: after looking at the paper, it looks like prevalence in the weed group is roughly twice as high -- so instead of 1/100 having psychotic issues, it'd be 2/100... and that's for people who used when they were 13-17 years old, which is underage in every state.
So you could frame that as doubling the risk OMG, or as a 1 percentage point increase in risk, or it could all just be self-medicating; we really don't know much. Probably still safer than alcohol.
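The framing difference above is just relative vs. absolute risk. A minimal sketch, using the rough numbers from the comment (a ~1% baseline prevalence and ~2% in the exposed group -- both assumed for illustration, not taken from the paper's actual tables):

```python
# Assumed illustrative numbers: ~1% baseline prevalence of
# psychotic issues, roughly twice that in the exposed group.
baseline = 0.01
exposed = 0.02

# Relative risk: "doubles the risk" framing.
relative_risk = exposed / baseline

# Absolute increase: "1 percentage point" framing.
absolute_increase = exposed - baseline

print(f"relative risk: {relative_risk:.1f}x")           # 2.0x
print(f"absolute increase: {absolute_increase:.2%}")    # 1.00%
```

Same data, two honest-sounding headlines; which one a write-up leads with changes how scary it reads.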
Personally I agree with the alternative opinion that it will be a golden age. I'm embarking on a project that involves refactoring something I did 18 years ago. I'm assuming that it'll take 1/10 the time to make a much better modern version with the assistance of LLMs.
I think that AI will do the vetting of repos - just as humans do that now. Perhaps AI will do a better job. The only way we're gonna fight AI slop is with AI.
And I've been at this long enough to go through that whole progression -- actually from the earlier step of writing machine code. It's been, and continues to be, a fun journey, which is why I'm still working.
It's after we come down from the Vibe coding high that we realize we still need to ship working, high-quality code. The lessons are the same, but our muscle memory has to be re-oriented. How do we create estimates when AI is involved? In what ways do we redefine the information flow between Product and Engineering?
It always feels like I'm in a fever dream when I hear about AI workflows. A lot of it is stuff I've read in software engineering books and articles.
I'm currently having Claude help me reverse engineer the wire protocol of a moderately expensive hardware device, where I have very little data about how it works. You better believe "we" do it by the book. Large, detailed plan md file laying out exactly what it will do, what it will try, what it will not try, guardrails, and so on. And a "knowledge base" md file that documents everything discovered about how the device works. Facts only. The knowledge base md file is 10x the size of the code at this point, and when I ask it to try something, I ask Claude to prove to me that our past findings support the plan.
Claude is like an intern coder-bro, eager to start crushin' it. But, you definitely can bring Claude "down to earth," have it follow actual engineering best practices, and ask it to prove to you that each step is the correct one. It requires careful, documented guardrails, and on top of it, I occasionally prompt it to show me with evidence how the previous N actions conformed to the written plan and didn't deviate.
If I were to anthropomorphize Claude, I'd say it doesn't "like" working this way--the responses I get from Claude seem to indicate impatience and a desire to "move forward and let's try it." Obviously an LLM can't be impatient and want to move fast, but its training data seem to be biased towards that.
Be careful of attention collapse. Details in a large governance file can get "forgotten" by the LLM. It'll be extremely apologetic when you discover it has failed to follow some guardrail you specified, but it can still happen.