There are no challenges. The code I wrote in 2000 still works in 2020 and will work in 2030. That's why it's boring tech: it just works, without me having to think about it or delete the node_modules directory every day.
> it just works, without me having to think about it or delete the node_modules directory every day
That kind of hyperbole isn't useful. If you're deleting the node_modules directory every day, you're doing something wrong, perhaps due to lack of experience. You're claiming 22 years of experience in PHP, so of course that familiar workflow is going to work better for you personally.
My favorite interview question is: how often do you delete the node_modules directory? I find that the most experienced candidates will answer "every day", which is the correct answer. I can quickly catch people lying if they claim they don't delete node_modules.
That's a horrible heuristic. All you're doing is selecting for engineers who have failed to fix the problem, just as you have. Misery loves company, eh?
I have decades of experience, mostly working in JS, am often in the top 5% on Stack Overflow, and very rarely need to delete node_modules. Neither do most of the people on my team. Some of our junior devs reach for that sledgehammer out of frustration, which is fine. I swung the sledgehammer a lot when I was first learning PHP too.
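For what it's worth, the non-sledgehammer fix is usually just a deterministic install from the lockfile. A minimal sketch, assuming npm with a committed package-lock.json (yarn and pnpm have equivalents):

```sh
# npm ci removes node_modules itself and reinstalls exactly what
# package-lock.json specifies, so there's no need to rm -rf by hand.
npm ci

# If installs misbehave, check the package cache before nuking anything.
npm cache verify
```

Nine times out of ten that replaces the daily delete-and-reinstall ritual.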
> Code security company SonarSource today published details on a severe vulnerability impacting Packagist, which could have been abused to mount supply chain attacks targeting the PHP community.
If I were to rewrite my blog post to fit your second sentence, I would title it: "Please consider onboarding new staff as a need for your software". This is just another consideration when architecting. I absolutely will use cutting-edge tools if they are needed; we just need to think about it a little.
Indeed, the availability of developers who know the tech stack is a key requirement that should be defined and assigned an importance level.
And if you're working in a business building a modern web application, it's extremely hard to imagine the stakeholders being happy with software developed using the absolute lowest-common-denominator tools, techniques, and libraries, as advocated by the "boring software" movement.
Competitive advantage and the expectations of customers lead to the need for modern tools, techniques, and libraries.
And in fact developers deserve these things too, because they tend to make things easier, more powerful, and more reliable.
If the stakeholder who is paying for the project says "can we have an animated user education intro?", and you say "no, because we use boring technologies and our developers might not understand how to use the animation APIs", then I think your job would be at risk pretty quickly.
I agree with you in general but I disagree in the specific case.
I think using frameworks encourages developers to follow some standards, much the same way everyone was very excited about microservices as a concept that could encourage decoupled software. Somehow we end up with a mess in both cases, and it's exactly for the point in your link: "There's a good chance you'll end up creating an 'ad hoc and informally-specified implementation of a framework'". I think this is ultimately true for any element of software design that is not specified and enforced; over time, with enough complexity added in some areas, it creates the mess.
So the real "feature" is a formal specification and enforcement of that spec. In other words: rules and constraints for designing software. An architecture. You get a kind of "off the shelf" architecture with any framework you pick up and use, and this of course segues into the age-old story about outgrowing frameworks: the framework isn't good at some specific problem, the code base slowly gets too complicated for what we do, etc. The "spec" the team is using (wittingly or not) doesn't get updated.
When you have two devs implement the same concept in two different ways, it's a hidden disagreement about the project. It should get discussed, but often it doesn't.
I like this argument, because using a framework or a library implies that it might be established and that there are already a lot of resources, docs, and Stack Overflow questions on it. Not using a framework most often means no docs.
Yeah, I think this is why I liked "novelty budget" as a term. To me it implies a limit, but it also implies something you should spend. Doing something a little bit different can be immensely valuable, as you've highlighted. Also, everything was new at one time.
Hah, this is a good point, but in my eyes lots of things that were new... never really grew up and were just deprecated and died.
For example, if someone based their setup on IronFunctions, they might have run into a bit of a painful situation, seeing as the project has been largely abandoned: https://github.com/iron-io/functions
Same for a database solution like Clusterpoint, support for which just ended, leaving you to migrate to something else: https://github.com/clusterpoint
Ergo, I'd say that it's good to let others suffer the consequences (in a manner of speaking) of being trendsetters and making wild bets on new and risky products, and to just reap the benefits of their efforts later yourself, when things are safer. If a project has survived for a reasonably long time, it's a good indicator that it'll probably keep surviving in the future as well (I believe this is called the Lindy effect).
This is a very good point. There are definitely "risky novel" choices that could make your company a success. But I've also seen many teams drowning in a soup of random tech choices.
I'd love to be able to write some more specific advice on this topic but mostly I just want people to be mindful of the impacts of their choices and actively choose risk rather than having it sneak up on them.
My first time as a team lead I saw value in little side experiments on non-core parts. Now I think that's only OK if there is time and budget to roll them back if they prove to be a bad fit. Otherwise they accrete and become a drag on velocity.
I already have a Docker registry too, so I guess I missed the point of your entire article. You can use the registry as a cache for build artifacts; that's what it's for. Your article reads as if you just figured that out, and I was providing context for the other confused HN users.
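For those readers, this is roughly what it looks like with BuildKit. A hedged sketch, where registry.example.com/app is a placeholder for your own registry and image:

```sh
# Push the build cache to the registry alongside the image and reuse it
# on later builds (requires docker buildx / BuildKit).
docker buildx build \
  --cache-to=type=registry,ref=registry.example.com/app:buildcache,mode=max \
  --cache-from=type=registry,ref=registry.example.com/app:buildcache \
  -t registry.example.com/app:latest \
  --push .
```

mode=max exports cache for intermediate layers too, not just the layers present in the final image.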