I have mixed feelings. On the one hand, I totally feel what this author is saying. On the other hand, I love that I am now able to push into areas I could never have touched before, and complete successful projects in them.
This is the fracture in the industry I don't think we are talking about enough.
It overwhelms everyone's ability to keep track of what it's doing. Some people are just no longer keeping track.
I have no idea if people are just doing this to toy projects, or real actual production things. I am getting the sneaking suspicion it's both at this point.
Orchestration buys parallelism, not coherence. More agents means more drift between assumptions. Past a point you're just generating merge conflicts with extra steps.
I created this Claude plugin that lets you set a target percentage of human-generated code for your project; when you fall below that target, it hard-blocks Claude from writing code.
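For anyone curious how the blocking could work, here's a minimal sketch. It assumes a PreToolUse-style hook (the mechanism Claude Code exposes, where exiting with code 2 rejects the pending tool call) and guesses at the attribution logic via git blame; the author tag, the target knob, and the file filter are all placeholders, not the plugin's actual internals:

```python
#!/usr/bin/env python3
# Hypothetical sketch of a pre-write hook that blocks agent edits when the
# human-written share drops below a target. Assumes agent commits carry a
# distinguishing author name and that exit code 2 rejects the pending tool
# call (Claude Code's PreToolUse convention). None of the names or
# thresholds here come from the actual plugin.
import subprocess
import sys

TARGET = 0.50            # assumed knob: minimum share of human-written lines
AGENT_AUTHOR = "claude"  # assumed author tag on agent-made commits

def human_fraction() -> float:
    """Fraction of tracked lines last touched by a human, per git blame."""
    files = subprocess.run(["git", "ls-files", "*.py"],
                           capture_output=True, text=True).stdout.split()
    human = total = 0
    for path in files:
        blame = subprocess.run(["git", "blame", "--line-porcelain", path],
                               capture_output=True, text=True).stdout
        for line in blame.splitlines():
            if line.startswith("author "):  # one per source line in porcelain output
                total += 1
                if AGENT_AUTHOR not in line.lower():
                    human += 1
    return human / total if total else 1.0

if __name__ == "__main__":
    share = human_fraction()
    if share < TARGET:
        print(f"human-written share {share:.0%} below target {TARGET:.0%}; "
              "blocking this write", file=sys.stderr)
        sys.exit(2)  # hook runner treats exit code 2 as "reject the tool call"
    sys.exit(0)
```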
Like many people, I am simultaneously excited about the power that AI gives me, and also fearful of what will happen if I give up coding altogether, both to my competence and to my personal satisfaction. I find that using this plugin allows me to set a balance that falls between pure vibe coding and writing everything by hand.
If this resonates with you, I hope you find some use from this plugin.
You're trained on a massive dataset that includes tens of thousands of practice exams with nearly the exact same questions and answers, just with the wording and templates varied slightly.
The MBE exam is mostly multiple choice elimination, and these multiple choices are filled with legal jargon and case law. GPT-4 picks the best predicted answer and can mimic reasons for the choice. This ability to mimic reasoning is good enough to receive a passing score on the MBE and many other exams.
And there are often common indicators of which answers to exclude, so just eliminating enough of them is already quite effective.
Also, the questions are unlikely to be very creative. I think it would be possible to train someone with a good enough memory using only existing tests.
Exams are designed to filter out entities that are already assumed to be able to reason based on their knowledge of some specific domain. A hypothetical entity with no ability to reason but a great ability to remember facts could conceivably pass such an exam.
That's a high bar. Let's shut down malls, competitive sports, grades in schools, hell, schools themselves, teen magazines, television, arcades, even suicide hotlines, etc., because they all made at least one person feel more suicidal.
And then you could say, well, maybe some of those places didn't do the research. In which case, isn't that worse? If they are making people more suicidal and they don't even care enough to research and find out, how are they possibly going to get better? I would much rather an institution research the harms (and benefits) that it may be causing than to just turn a blind eye.
While we're at it, we should start tearing down any large or particularly beautiful bridges and condemning their architects and engineers; there's a ton of research showing how those things increase suicides.
I wasn't aware there was any question that foreign groups (state supported or otherwise) actively coordinate on Facebook to influence US political campaigns.
Most of the research and claims you see about bots manipulating people on social media fall apart when examined. For example, they often rely on a badly trained ML model that labelled nearly half of Congress as "bots". This sort of thing is never admitted in the media; if you don't double-check for yourself, you'd never realize.
Another version of this is extremely sloppy methodology where users are labeled "influencer bots" for merely having certain foreign IP ranges and being active in discussions about certain topics.
Twitter in particular has been banning tens of thousands of accounts based on that flimsy and circular reasoning.
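To make the circularity concrete, here's a toy version of that heuristic. The IP range, the topic list, and the function name are all invented for illustration; they aren't taken from any actual study or from Twitter's systems:

```python
import ipaddress

# All values below are made up to illustrate the shape of the heuristic.
FOREIGN_RANGES = [ipaddress.ip_network("203.0.113.0/24")]  # hypothetical "suspect" block
HOT_TOPICS = {"election", "vaccine", "sanctions"}          # hypothetical watch-list

def looks_like_influencer_bot(user_ip: str, recent_posts: list[str]) -> bool:
    ip = ipaddress.ip_address(user_ip)
    in_suspect_range = any(ip in net for net in FOREIGN_RANGES)
    posts_on_topic = any(topic in post.lower()
                         for post in recent_posts for topic in HOT_TOPICS)
    # Two weak, easily satisfied signals conjoined into a verdict: no
    # behavioral evidence, no ground truth, no appeal for the person labeled.
    return in_suspect_range and posts_on_topic
```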
And because the affected people are locked out of the only system that would realistically allow them to draw attention to the problem, they are out of luck in making their situation known.
That's a good example of the problem. Well, it's not about bots, but the same definitional and logic problems are evident. The story defines "troll farms" as "professionalized groups that work in a coordinated fashion to post provocative content, often propaganda, to social networks". That description is so vague it could describe almost all news outlets and political parties, along with many charities. But, they aren't going to classify CNN, PETA or the White House itself as a "troll farm" although it would be easy to argue otherwise.
Facebook obviously has big problems with internal activists who are trying to convince the company to pursue an ever-spiralling purge against their ideological enemies, and good evidence of that is the unfalsifiable nature of the descriptions of the enemy.
I think the facts on the ground preclude the moral panic angle. Skyrocketing teen depression since 2012, a genocide in Myanmar, ethnic violence in India, a riot/insurrection borne out of fake news on election integrity, woke cancellation mobs empowered by Twitter and the power of wokeness over institutions, large amounts of vax hesitancy. All of these nasty things are circumstantially tied back to social media in one way or another.
I hate how "conspiracy" has been turned into a way to dismiss inconvenient truths.
There is a long and well-established history of interference and manipulation in foreign elections (often by the US). It predates social media and can't just be blamed on Facebook. Pretending this isn't happening is just burying your head in the sand.
It isn't a "conspiracy" to think that the most influential elections in the world draw more attention and are more heavily influenced.