I could write a similar article claiming that most people problems are technical problems.
This article hits on a pet peeve of mine.
Many companies and individuals can benefit from better processes, communication skills, and training.
Also, people who proclaim "Most technical problems are people problems" and "It's not what you know, it's who you know" are disproportionately those trying to convince others that "my skillset is more valuable than your skillset." The people who believe the opposite are heads-down building.
The truth is that nearly all problems are multifactorial and involve long chains of causality. They can be patched at multiple points.
And so while there are standard stories about "If you do the 5 Whys and trace back causality, the issue becomes a deeper human problem," you can nearly always follow a different branch of the chain and find an alternative technical solution.
The standard story goes: "This thing failed because this other thing crashed, because this thing was misconfigured, because the deployment script was run incorrectly, because no one trained Bob how to use it." See, the human problem is the deepest one, right?
But you can find an alternate technical fix: why was it possible to run the deployment script incorrectly?
Or you can ping-pong it back into a technical problem: he wasn't trained because everyone is stressed with no time because things keep breaking because of bad architecture and no CI. So actually the technical problem is deepest.
But no, because that only happened because the CEO hired the wrong CTO because he didn't know anyone who could evaluate it properly....
...which only happened because they didn't use <startup that helps you evaluate engineers> (technical problem)
...which only happened because said startup didn't have good enough marketing (human problem)
...which only happened because their own tech debt made them too slow to build and they didn't have the money (technical problem...)
And so on. Ping, pong.
The article says: we had too much tech debt because humans weren't trained enough.
One can also say: we had too much tech debt because we didn't have good enough linters and clone detectors to keep out the common problems, and also we had made some poor technical choices that required hiring a much larger team to begin with.
If you have a pet problem, you can always convince yourself that it's responsible for all woes. That just keeps you from seeing the holistic view and finding the best solution.
I've worked in about 40 languages and have a Ph.D. in the subject. Every language has problems; some I like, some I'm not fond of.
There is only one language that I have an active hatred for, and that is Julia.
Imagine you try to move a definition from one file to another. Sounds like a trivial piece of organization, right?
In Julia, this is a hard problem, and you can wind up getting crashes deep in someone else's code.
The reason is that this causes modules that don't import the new file to have different implementations of the same generic function in scope. Julia features the ability to run libraries on data types they were never designed for. But unlike civilized languages such as C++, this is done by randomly overriding a bunch of functions to do things they were not designed to do, and then hoping the library uses them in a way that produces the result you want. There is no way to guarantee this without reading the library in detail. There is also no kind of semantic versioning that can tell you whether the library has made a breaking change, because almost any change becomes a potentially-breaking change when you code like this.
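To make the mechanism concrete, here is a minimal sketch (Lib, sumtwice, and Sym are made-up names, not any real package): a library written with numbers in mind runs on a custom type purely because that type added methods to the generic functions the library happens to call today.

```julia
# A tiny stand-in for a library that was written with numbers in mind.
module Lib
sumtwice(xs) = sum(xs) + sum(xs)   # internally leans on the generic functions sum and +
end

# Your own type, which Lib has never heard of.
struct Sym
    name::String
end

# Make Lib "work" on Sym by adding methods to the generic functions it
# happens to call. Nothing checks that these do what Lib actually assumes.
Base.:+(a::Sym, b::Sym) = Sym("($(a.name) + $(b.name))")

Lib.sumtwice([Sym("x"), Sym("y")])   # Sym("((x + y) + (x + y))") -- works, for now
```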
This is a problem unique to Julia.
I brought up with the Julia creators that methods of the same interface should share common properties. This is a very basic principle of generic programming.
One of them responded with personal insults.
I'm not the only one with such experiences. Dan Luu wrote this piece 10 years ago, but the appendix shows the concerns have not been addressed: https://danluu.com/julialang/
Overriding internal functions is discouraged, so one often only needs to monitor the public API changes of packages, as in every other programming language. In my experience, package updates rarely broke stuff like that. Updates to Base can sometimes cause issues like that, but those are thoroughly tested against the most popular registered packages before a new Julia version is released.
Interfaces could be good intermediaries, and it is always great to hear JuliaCon talks every year on the best ways to implement them.
> Imagine you try to move a definition from one file to another. Sounds like a trivial piece of organization, right?
In my experience it's mostly trivial. I guess your pain points may have come from making each file its own module, adding methods for your own types in a different module, and then moving things around, which is error prone. The remedy is sometimes to not make internal modules. However, the best solution here is to write integration tests, which is good software development practice anyway.
The [] is as well, for things that have no shared interface. (E.g.: Int64[1][1] contains two calls to the [] generic function, which have no interface in common.)
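You can check this from the REPL (assuming I'm remembering the lowering rules correctly; @which lives in InteractiveUtils, which the REPL loads for you):

```julia
using InteractiveUtils   # for @which outside the REPL

v = Int64[1]   # these brackets lower to getindex(Int64, 1):
               # a method that *constructs* a one-element Vector{Int64}
x = v[1]       # these brackets lower to getindex(v, 1):
               # a method that *reads* element 1 of that vector

@which Int64[1]   # both resolve to methods of the same generic function,
@which v[1]       # Base.getindex, with nothing in common semantically
```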
Plenty of definitions of (+) are not commutative. E.g.: for strings.
There is some package which uses the (+) generic function internally, meant to be used on numbers. You call it instead with some kind of symbolic expression type. And it just works. Yay! This is the kind of stuff Julia aficionados preach.
Then suddenly the package gets updated so that it uses (+) in a way which assumes commutativity.
Your code breaks.
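A sketch of that failure mode, with made-up names (NumLib, total, and MyExpr are hypothetical; the "update" is the commented-out v1.1 line):

```julia
# Hypothetical package, written and tested with numbers in mind.
module NumLib
# v1.0: folds left to right, so it happens to work for any type with a +
total(xs) = foldl((acc, x) -> acc + x, xs)
# v1.1, an "internal cleanup": swaps the operand order. Identical for numbers,
# where + is commutative -- but that assumption was never part of any declared interface.
# total(xs) = foldl((acc, x) -> x + acc, xs)
end

# Your symbolic type with a perfectly legal, non-commutative +
struct MyExpr
    s::String
end
Base.:+(a::MyExpr, b::MyExpr) = MyExpr(a.s * b.s)   # concatenation-like

using .NumLib: total
total([MyExpr("a"), MyExpr("b"), MyExpr("c")])
# v1.0: MyExpr("abc")
# v1.1: MyExpr("cba") -- silently wrong for you, or an error deeper inside a real package
```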
In your world, how would you be notified of a change that some internal use of the (+) function now assumes commutativity?
Or when Julia aficionados preach the amazingness of being able to just throw a differential operator or matrix or symbolic whatever into a place, are they just overselling something and should stop?
I had a quick skim of the link you gave -- it looks to be the same as it was 6 years ago when I experienced this issue.
So basically:
Generic function F is defined in file A, and has a method definition DEF in file B.
File C imports files A and B, and then calls function G that internally uses generic function F. Somewhere internally, it winds up running F with method definition DEF.
You move DEF to file D, but don't update the imports of file B.
When file C calls G, it calls F with some default implementation instead of DEF, and then you get an error from function G.
Some details here might be wrong about how exactly the imports need to be set up, as I haven't used Julia since being traumatized by it 6 years ago, but that's basically it.
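For what it's worth, here is roughly that shape as a single Julia script, with everything inlined and my own stand-in names -- a sketch of the failure mode under those assumptions, not the original code:

```julia
module A                     # "file A": owns the generic function F, and G which uses it
F(x) = error("no method of F for $(typeof(x))")   # default fallback
G(x) = F(x) + 1                                   # G leans on F internally
end

module B                     # "file B": DEF originally lives here
using ..A
struct MyType end
A.F(::MyType) = 41           # DEF
end

A.G(B.MyType())              # 42 -- works, because loading B loaded DEF

# Move DEF out of B into a new "file D" that nothing includes, and DEF is
# never loaded at all; the same call then hits A's fallback, and the error
# surfaces from inside G -- i.e. from someone else's code.
```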
Well, with generic programming constructs you have more freedom, and with more freedom you have more ways to shoot yourself in the foot. I don't think Julia's module system has illogical or inconsistent design decisions. In fact, it's better than many others because the order in which you import modules doesn't matter.
I had been thinking about things along those lines. I might do that as a separate site. Perhaps with regular resets, and with something to keep 4chan out.