That's one theory. Another one I can think of is that sharp edges are scary, and most distress calls are high-pitched.
Also, the link between high frequencies and sharp edges leads to a contradiction: babies are rounder than adults yet produce higher-pitched sounds, and this is almost universal across species.
There are other tentative explanations, such as how the vocal tract acts when producing these sounds, with "bouba" sounds being the result of smoother movement more reminiscent of a round shape.
"kiki" is not just higher pitched, it is also "shaped" differently if you look at the sound envelope, with, as expected, sharper transitions.
So to me, the mystery is still there. It is the kind of thing that sounds obvious, in the same way that kiki sounds obviously sharper than bouba, but is not.
Because it is not how computers work. It doesn't matter much for high-level languages like Lua, where you rarely manipulate raw bytes and pointers, but in systems programming languages like Zig, it matters.
To use the terminology from the article, with 0-based indexing, offset = index * node_size. With 1-based indexing, you would have offset = (index - 1) * node_size, an extra subtraction on every access.
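To make the arithmetic concrete, here is a minimal C sketch (the Node struct and the values are made up for illustration) showing the byte-offset computation both ways:

    #include <stdio.h>
    #include <stdint.h>

    /* Hypothetical node type, just to give the elements a size. */
    typedef struct { int32_t value; int32_t next; } Node;

    int main(void) {
        Node nodes[4] = {{10, 1}, {20, 2}, {30, 3}, {40, -1}};
        uint8_t *base = (uint8_t *)nodes;

        /* 0-based: the byte offset is simply index * node_size. */
        size_t index0 = 2;
        Node *p0 = (Node *)(base + index0 * sizeof(Node));

        /* 1-based: the same element is index 3, and every access
           pays for the extra subtraction: (index - 1) * node_size. */
        size_t index1 = 3;
        Node *p1 = (Node *)(base + (index1 - 1) * sizeof(Node));

        printf("%d %d\n", p0->value, p1->value); /* both print 30 */
        return 0;
    }

With 0-based indexing the mapping from index to address is just a multiply and an add, which is why it became the convention at the machine level.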
And it became a convention even for high-level languages, because no matter what you prefer, inconsistency is even worse. An interesting case is Perl, which, in classic Perl fashion, lets you choose by setting the $[ variable. Most people, even Perl programmers, consider it a terrible feature, and 0-based indexing is used by default.
Everything you describe besides the environmental stressors is a symptom of depression. So, to solve depression, you must first solve depression.
And even the environmental stressors are something we could work on as a society through welfare, environmental regulations, etc. But how people react to them is also a symptom. There are depressive people in the best of situations, and people who enjoy life in the worst of situations. What often strikes me in documentaries about warzones, places of repression, extreme poverty, or crippling diseases is how "normal" people seem to be; they enjoy themselves as if everything were fine. So, while reducing environmental stressors helps, it is not the end of it.
The take may be that treating the symptoms of depression could help treat the root cause, a positive feedback loop. But if done through lifestyle changes, the treatment is unfortunately coercion: you can't rely on the willpower of people who have none because of the disease.
> In the end it will be the users sculpting formal systems like playdoh.
And unless the user is a competent programmer, at least in spirit, it will look like the creation of the 3-year-old next door, not like Wallace and Gromit.
It may be fine, but the difference is that one is loved only by the parents, while the other gets millions of people to go to the theater.
Play-Doh gave the power of sculpting to everyone, including small children, but if you don't want to make an ugly mess, you have to be a competent sculptor to begin with, and that involves some fundamentals that do not depend on the material. There is a reason why clay animators are skilled professionals.
The quality of vibe coded software is generally proportional to the programming skills of the vibe coder as well as the effort put into it, like with all software.
It really depends what kind of time frame we're talking about.
As for today's models, they are best understood as tools to be used by humans. They're only replacements for humans insofar as individual developers can accomplish more with the help of an AI than they could alone, so a smaller team can accomplish what used to require a bigger team. Due to the Jevons paradox, this is probably a good thing for developer salaries: their skills are now that much more in demand.
But you have to consider the trajectory we're on. GPT went from an interesting curiosity to absolutely groundbreaking in less than five years. What will the next five years bring? Do you expect development to speed up, slow down, stay the course, or go off in an entirely different direction?
Obviously, the correct answer to that question is "Nobody knows for sure." We could be approaching the top of a sigmoid type curve where progress slows down after all the easy parts are worked out. Or maybe we're just approaching the base of the real inflection point where all white collar work can be accomplished better and more cheaply by a pile of GPUs.
Since the future is uncertain, a reasonable course of action is probably to keep your own coding skills up to date, but also get comfortable leveraging AI and learning its (current) strengths and weaknesses.
I don't expect exponential growth to continue indefinitely... I don't think the current line of LLM based tech will lead to AGI, but that it might inspire what does.
That doesn't mean it isn't and won't continue to be disruptive. Looking at generated film clips, it's beyond impressive... and despite limitations, it's going to lead to a lot of creativity. That doesn't mean someone making something longer won't have to work that much harder to get something consistent... I've enjoyed a lot of the Star Wars fan films that have been made, but there are a lot of improvements needed in the voice acting, sets, characters, etc. before it's something I'd pay to rent or see in a theater.
Ironically, the push towards modern progressivism and division from Hollywood has largely been a shortfall... If they really wanted to make money, they'd lean into pop-culture fun and rah rah 'Merica, imo. Even with the new He-Man movie, the biggest critique is they bothered to try to integrate real world Earth as a grounding point. Let it be fantasy. For that matter, extend the delay from theater to PPV even. "Only in theaters for 2026" might actually be just enough push to get butts in seats.
I used to go to the movies a few times a month; now it's been at least a year since I've thought of going. I actually might for He-Man or the Spider-Man movies... Mixed on The Mandalorian.
For AI and coding... I've started using it more over the past couple of months... I can't imagine being a less experienced dev with it. Even in how I've used it, I predict, catch, and handle so many issues. The thought of vibe-coded apps in the wild is shocking to terrifying, and I wouldn't want my money anywhere near them. It takes a lot of iteration, curation, and baby-sitting, after creating a good level of pre-documentation/specifications to follow. That said, I'd say I'm at least 5x more productive with it.
It is more of a cultural thing. Package managers encourage lots of dependencies, while programmers using languages without package managers often pride themselves on having as few dependencies as possible. When you consider the complete dependency graph, the effect compounds exponentially: if every package pulls in a handful of direct dependencies, and each of those does the same, the transitive set quickly runs into the hundreds.
It is also common in languages without package managers to rely on the distro to provide the package, which adds a level of scrutiny.
The only thing that you seem to argue about is the naming of the branches.
If you call the git-flow "develop" branch "master" and the "master" branch "release-tags", it will be exactly as you describe. The names of the branches don't really matter in practice, so much so that they could just decide to use "main" instead of "master" by default without much trouble.
Maybe what bothers you is that you have a branch for tags. Yes, that's an extra level of indirection, but it lets you separate user-facing information in the master branch commits from developer-facing information in the release branch commits.
Having the master (default) branch only contain releases lets users who pull the project without knowledge of the process get a release version and not a possibly broken development version, which I think is nice.
Anyway, these are just details; I don't think the "git gods" (Linus) care about how you organize your project. There is only one sacred rule I am aware of: don't destroy other people's history. Public branches you pushed that others have pulled are other people's history.
> Maybe what bothers you is that you have a branch for tags. Yes, that's an extra level of indirection, but it lets you separate user-facing information in the master branch commits from developer-facing information in the release branch commits.
That's such a marginal niche use case to build your entire organization around… why would you make this the default approach?
And what's wrong with not wanting to write functions yourself? It is a perfectly reasonable thing, and in some cases (e.g. crypto), rolling your own is strongly discouraged. That's the reason libraries exist; you don't want to implement your own associative array every time your work needs one, do you?
As for plagiarism, it is not something to even consider when writing code, unless your code is an art project. If someone else's code does the job better than yours, that's the code you should use; you are not trying to be original, you are trying to make a working product. There is the problem of intellectual property laws, but it is narrower than plagiarism. For instance, writing an open source drop-in replacement for some proprietary software is common practice; it is legal and often celebrated as long as it doesn't contain the original software's code. In art, it would be plagiarism.
Copyright laundering is a problem though, and AI is very resource intensive for a result of dubious quality sometimes. But that just shows that it is not a good enough "plagiarism machine", not that using a "plagiarism machine" is wrong.
If I use a package for crypto stuff, it will generally be listed as part of the project, in an include or similar, so you can see who actually wrote the code. If you get an LLM to create it, it will write some "new original code" for you, with no ability to tell you any of the names of the people whose code went into it, people who did not give their consent for it to be mangled into the algorithm.
If I copy work from someone else, whether that be a paragraph of writing, a code block, or art, and do not credit them, passing it off as my own creation, that's plagiarism. If the plagiarism machine can give proper attribution and context, it's not a plagiarism machine anymore, but given the incredibly lossy nature of LLMs, I don't foresee that happening. A search engine is different, as it provides attribution for the content it's giving you (ignoring the "AI summary" that is often included now). If you go to my website and copy code from me, you know where the code came from, because you got it from my website.
Modern society seems to assume any work by a person is due to that person alone, and credits that person only. But we know that is not the case. Any work by an author is the culmination of a series of contributions, perhaps not to the work directly, but often to the author, giving them the proper background and environment to do the work. The author is simply one that built upon the aggregate knowledge in the world and added a small bit of their own ideas.
I think it is bad taste to pass another's work as your own, and I believe people should be economically compensated for creating art and generating ideas, but I do not believe people are entitled to claim any "ownership" of ideas. IMHO, it is grossly egoistic.
Sure, you can't claim ownership of ideas, but if you verbatim repeat other people's content as if it is your own, and are unable to attribute it to its original creator, is that not a bit shitty? That's what LLMs are doing.
If a human learns to code by reading other people's code, and then writes their own new code, should they have to attribute all the code they ever read?
Plagiarism is a concept from academia because in academia you rise through the ranks by publishing papers and getting citations. Using someone else's work but not citing them breaks that system.
The real world doesn't work like that: your value to the world is how much you improve it. It would not help the world if everyone were forced to account for all the shoulders they have stood on like academics do. Rather, it's sufficient to merely attribute your most substantial influences and leave it at that.
If a human copies someone else's code verbatim, they should attribute the source, yes. If they learn from it and write original code, no, they don't have to cite every single piece of code they've ever read.
Yes, you've stated the current social and legal rule we have to follow.
But I don't think you've given any moral justification for the rule, and in particular, why LLMs (who are not humans and have no legal rights or obligations) have to follow it.
And it is the kind of thing a (cautious) human would say.
For example, this could be my reasoning: it sounds like a stupid question, but the guy looked serious, so maybe there are some types of car washes that don't require you to bring your car. Maybe you hand over the keys and they pick up your car, wash it, and put it back in its parking spot while you are doing your groceries or something. I am going to say "most" just to be safe.
Of course, if I expected trick questions, I would have reacted accordingly, but LLMs are most likely trained to take everything at face value, as it is more useful this way. Usually, when people ask questions to LLMs they want a factual answer, not a witty one. Furthermore, LLMs are known to hallucinate very convincingly, and hedged answers may be a way to counteract this.
Microsoft also gets millions of dollars from both hospitals, probably. There is a good chance the hospitals have computers running Windows and MS Office. Microsoft also works closely with the Pentagon and whatever "evil" organizations, selling Windows licenses, cloud services, etc.
Same idea here. Hospitals need some data analytics, which was probably done in Excel before but wasn't sufficient, so they turned to Palantir, because that is what they do.
I wish they turned to solutions that would make better use of public money, and I also wish they didn't use Microsoft software.