I gave up on dependency injection frameworks a while ago. Now there's just some "wiring" code somewhere that wires up the components. It's small (one statement per component), trivial to write, easy to understand, and makes any kind of customisation easy (disabling whole subsystems under config, having alternative implementations for subsystems, etc), because it's just code.
It's also testable! The setup code is factored in such a way that it's harmless to run (e.g. sockets aren't opened during wiring), and it does all the config parsing and resolution and so on. So I have a suite of tests which run it and then do some trivial checks like "all the necessary config is available", "handlers are defined for all the market data we subscribe to", etc. They've caught a bunch of schoolboy errors which would otherwise only have been found in staging.
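A minimal sketch of what this kind of wiring code can look like, assuming Python; all class and config names here are hypothetical. The key property is that constructors only store references, so running the wiring opens no sockets and is safe to exercise in tests.

```python
class Database:
    def __init__(self, url):
        self.url = url  # no connection opened here

class FeedHandler:
    def __init__(self, db, symbols):
        self.db = db
        self.symbols = symbols

def wire(config):
    """One statement per component; returns the whole object graph."""
    db = Database(config["db_url"])
    handler = FeedHandler(db, config["symbols"])
    return {"db": db, "handler": handler}

# A trivial "wiring test": the graph builds and is hooked up correctly,
# and no I/O happened along the way.
app = wire({"db_url": "postgres://example", "symbols": ["EURUSD"]})
assert app["handler"].db is app["db"]
```

Actual start-up (opening connections, subscribing to feeds) would then be a separate `start()` pass over the graph.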
I think anyone arguing for frameworks should spend some time making a serious attempt at frameworkless dependency injection. The frameworks are really doing so little for you, at occasionally horrendous cost.
> I think anyone arguing for frameworks should spend some time making a serious attempt at frameworkless dependency injection. The frameworks are really doing so little for you, at occasionally horrendous cost.
That is what I did, and I decided a DI framework was much better. If you have a single scope, like singletons, it's pretty easy to do the wiring manually. If not, you quickly see that your scope-management code, and the rewiring of the same things at different layers, becomes tedious, error-prone (one wiring drifting out of sync with another), and boilerplate-heavy.
Passing a single "context" struct/class/whatever into everything basically solves DI. I can see using a framework for this, but it doesn't seem necessary.
Furthermore, this enables creating sub-contexts (like run this operation but with this different configuration) which is something almost impossible to do with DI frameworks.
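One way to sketch the "context plus sub-contexts" idea, assuming Python; the field names are hypothetical. A frozen dataclass plus `dataclasses.replace` gives you cheap sub-contexts with one setting overridden:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Context:
    db_url: str
    timeout_s: float = 30.0

base = Context(db_url="postgres://prod")

# A sub-context: same wiring, one setting overridden for this operation.
fast = replace(base, timeout_s=2.0)
assert fast.db_url == base.db_url and fast.timeout_s == 2.0
```

Any function that takes a `Context` can now be run under the override without touching the rest of the wiring.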
Yeah, it's weird how such heavy frameworks often end up lacking basic features like this. That's like the first thing I'd look up how to do after the basic tutorial.
> The setup code is factored in such a way that it's harmless to run (eg sockets aren't opened during wiring
The other side-benefit of doing this is it can become much easier to hot reload components, since you just call the Stop / Start methods of just those components.
> I need to know everything that’s going on in my code. I need simple, straightforward function calls. Nothing else! I want to be able to start at main() and trace through the code. I want to look at callers and find where every parameter came from. Reading code is hard enough already. Magic frameworks make it harder.
But these frameworks aren't magic. They're just code. Sure it means you have a bit more code to read through to work out what's causing a problem but it's still just code. The time cost of potentially more difficult debugging when things go wrong is nothing compared to the time saved not having to wire things together manually.
I also find DI frameworks actually encourage good design by making it easier to write small, single purpose classes. You don't need to spend time working out where to initialise them so they can be passed to all the dependent classes.
Yes I’m aware. My point was it’s not that concealed once you’ve invested the time to read the docs and peek at the code of whatever framework you’re using. The time to do that is nothing compared to the time saved using these frameworks.
I wasn’t sitting there thinking Harry Potter wrote Spring Boot.
Sometimes you can easily peek at this, sometimes you can't. I can see there being a good DI framework, but there's easy potential for them to be terrible. Like one of them used at my job cannot be understood from code at all (it uses special build rules) and has awful documentation. The recommended way to understand it is by copy-pasting what others have done. Once you do that, it's tolerable.
> But these frameworks aren't magic. They're just code
Some of them definitely go beyond "just code", in the sense that they actually change the normal behavior of the code. They intercept method calls, replace classes, etc. Spring is sort of famous for this: https://docs.spring.io/spring-framework/docs/3.0.0.M3/refere...
The magic happens at so many levels. Apparently a third-party DI framework is too much magic, but the third-party compiler is not, nor is the out-of-order, speculating third-party CPU.
A DI framework is just another level of magic, which once you accept/embrace it and play by its rules (like using a compiler), makes developing other code easier.
> You shouldn’t need logging everywhere, for example. Your logic code should be side-effect free, and most of the rest of your code should be throwing detailed exceptions or returning errors rather than writing to the log and returning null.
Being side-effect-free doesn't mean that you don't need logging to understand how the outputs were computed from inputs, when things do go wrong (and they will go wrong).
Code throwing detailed exceptions is great, but if you only log them at the point where they're caught several levels up the stack, you're losing a lot of context. If that particular exception is, well, actually exceptional - which should be the norm - logging it at the point where you throw it makes it much easier to debug later.
But "logging everywhere" isn't necessary to log information about the state of the system. Libraries, for example, should not have any logging, or else it should be minimal and optional. One should prefer to return errors from library code that the application code then logs, with specific relevant context and only at a point where the log message would be useful.
Libraries should still have extensive logging, but it should be such that it can be plugged easily into whatever logging framework the app as a whole is using.
In any case, DI is generally out of scope for libraries in the first place already, so I don't think the article was complaining about that.
> Libraries, for example, should not have any logging, or else it should be minimal and optional.
You know, a year ago when log4j was having problems, a lot of people were like "why should this even exist!?" and this is exactly why.
If every single library just splats everything out to console then yes, you have a problem. But that's why logging frameworks exist - preventing a library (or more specifically a package) from splatting errors everywhere is literally a one-line change. Most of your libraries should probably run at WARN or ERROR - because their business events are not your business events.
But in a big application when something goes wrong it's super useful to turn that logging level up and see what the libraries think is going on. Jackson will usually tell you exactly why it's making the decisions it's making when it's picking deserializers or handling data. Spring or Hibernate or whatever will tell you exactly why it's making the decision it's making wiring up the dependencies and data layer and mappings.
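Using Python's stdlib `logging` as a stand-in for any hierarchical logging framework (the thread is mostly about Java, but the mechanism is the same), the "one-line change" looks like this; `noisylib` is a hypothetical package name:

```python
import logging

logging.basicConfig(level=logging.INFO)

# Quiet a chatty library by default: its business events aren't ours.
logging.getLogger("noisylib").setLevel(logging.WARNING)

# When something goes wrong, turn just that library back up to see
# what it thinks is going on. Child loggers inherit the level.
logging.getLogger("noisylib").setLevel(logging.DEBUG)
assert logging.getLogger("noisylib.submodule").getEffectiveLevel() == logging.DEBUG
```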
> One should prefer to return errors from library code that the application code then logs, with specific relevant context and only at a point where the log message would be useful.
No, the "operation result wrapper class" pattern is incredibly tedious to deal with inside your program. It's one thing when it's the result of an RPC call (or HTTP service call, etc.), but you absolutely do not want your operations returning a little {result: ERROR, output: null} wrapper class as a general course of business in the application.
Like, yes, your statement is generally true that your code should always throw exceptions that are properly scoped; that's literally table stakes here. Don't let an IOException bubble up to the top level; turn it into a RemoteServiceException or a ServiceConfigurationException etc., so that higher-level code can understand what is going on without handling twenty zillion low-level errors. That's junior-coder-level competence.
But don't be afraid to throw exceptions either; they are the way for higher-level processes to bail out of their processing! The Java standard library gets so crazy with IOExceptions and other low-level exceptions (incorrectly so, I think, in many cases) that people get gun-shy about it and get into the mindset that they have to catch everything so they don't constantly have to put "throws" clauses on everything. A lot of code should throw! And instantiate that higher-level exception with the low-level exception passed in, so you can understand why.
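A sketch of that wrap-and-rethrow pattern, assuming Python; all the names (`RemoteServiceException`, `load_profile`, `read_from_socket`) are hypothetical. Python's `raise ... from e` plays the role of passing the low-level exception into the higher-level one:

```python
class RemoteServiceException(Exception):
    """Domain-level failure; callers need not know about sockets."""

def read_from_socket(user_id):
    raise OSError("connection refused")  # stand-in for real I/O

def load_profile(user_id):
    try:
        return read_from_socket(user_id)
    except OSError as e:
        # `from e` keeps the low-level cause attached, so several levels
        # up, where the exception is finally logged, you can still see why.
        raise RemoteServiceException(f"profile fetch failed for user {user_id}") from e

try:
    load_profile(42)
except RemoteServiceException as e:
    assert isinstance(e.__cause__, OSError)
```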
> Every line of code in your system adds to your maintenance burden, and third-party code adds more to your maintenance burden than well-designed and tested2 code your company builds itself.
Every SaaS vendor and framework advocate should have to put this on their product in black letters on a white background. Same typography as "Smoking is addictive…"
In a company, code you write yourself is a dead end. You want as little of it as possible.
Staff turn over. What was a first-party piece of code, well understood within the company, inevitably turns into a poorly documented piece of code written by someone no longer employed there, with no community of users to help out with problems.
Write and own code which is fundamental to the business model's value proposition, the code which delivers product market fit. Eliminate other code where possible. Upstream or open source improvements that aren't part of the competitive edge.
There are exceptions of course, for trivial functionality whose fully loaded cost of integration and upkeep as a third party is higher than home grown, but it's not a lot.
The other alternative is to be such an awesome company that nobody who contributes a lot quits.
Before « move fast and break things », there used to be a thing called « documentation ». It included things like « design documents » and would ensure people were able to quickly understand a piece of code.
Maybe, maybe there was. In my experience when someone talks about how much better "it" used to be and how low we've all sunk these days, "it" never really was as good as they're saying. But maybe.
In any case, I live and work in the present, where companies that prioritize and support the work of creating and maintaining thorough, reliable documentation are rare. So I plan my work for situations I can expect to encounter now, not ones that may once have existed.
Documentation is an ongoing process. You document your code before, during, and after you write it.
I don't understand how one can maintain a piece of code without it for more than a few months, even as a single person. One always forgets things.
I also don't understand how one can design a piece of architecture without diagrams and maps and all kinds of design documents. They are used both for clarity and for discussion between team members. Keeping them somewhere is also part of documenting the code.
It's not that I don't see it this way or don't value those things. It's that companies I work for haven't valued them. Doing this won't positively affect my evaluations, but taking time away from other work to do it well, will negatively affect them.
If the company would prefer to pay the long-term cost of having no or poor documentation than to pay me to do it now, that's up to them I guess. I take my own notes and go on with my life.
Yeah, good idea, but company architects should give similar advice about NIH, though.
A dependency is a dependency, there may be tradeoffs between using third-party software and developing new in-house code, but using "Invented here" code does not vanish any kind of complexity away, it just manages it differently.
The best third-party code has benefitted from industry-wide testing and fixes no single team could match. People we hire might come in already knowing it.
Competence is partly in recognizing that a problem has been solved and is no longer a good use of time, at least until we have a plan for an improvement great enough to justify supporting the redundant maintenance forever.
I remember the first time I used Spring and I had to debug a traceback that included not a single line of code I had written. It was hell. I almost gave up being a programmer.
Even today I work with half baked frameworks that have the same problem and I hate it.
The difference is that when something like, say, a web framework does this it is buying me something valuable in exchange for the frustrating occasions when the magic fucks up requiring deep dive debugging.
DI frameworks that do this buy you nothing of value except the paternalistic approval of people who don't have the imagination to think beyond unit tests.
>DI frameworks that do this buy you nothing of value except the paternalistic approval of people who don't have the imagination to think beyond unit tests.
I know I need to hop off the internet for a while whenever I hit a comment arrogantly asserting such ignorance.
CDI has a specification, _an extensive specification_, defining the _exact_ behavior of the framework. It is not magic; it's consistent, predictable, and deterministic. The implementation we use, OpenWebBeans, and the alternative implementation, Weld, both have extensive self-tests. I don't think I've ever had an issue upgrading over 12 years of using these frameworks.
> Furthermore, dependency injection frameworks encourage you to think in terms of globals. That’s what they inject! A single, globally-configured instance of a class. Think about it. If you one day want two different instances of an injected variable, you’ll need an impact driver to express just how screwed you are. This has all kinds of knock-on effects in terms of reducing encapsulation and separating state from behavior.
What? I legitimately do not understand what this bit of the article is about. Surely the author knows of instance scopes[0] (as they are called in the DI framework I tend to use, Autofac), right? Expressing this kind of instance configuration does not require an "impact driver," whatever that means; it's just a simple matter of replacing .SingleInstance() or whatever in your bootstrapper function with an InstancePerDependency (the default) or Named or Keyed or _whatever_ kind of relationship/instance scope you want. Does this actually represent some horrible crufty sin?
Yes, this guy clearly does not understand the tools involved here.
This is actually very important because injection scope can lead to program correctness errors. It is possible for DI frameworks to GC an injected class that hasn't been used for a long time, so the one you get back may not be the one you expected, or if it contains an object map it may not contain the objects you expect, etc.
Not sure why the author decides to single out DI frameworks as a problem. In most cases DI is just an implementation detail for how the framework serves up its abstraction layer over the underlying mechanisms. For example, Spring is built around injecting beans so it can wrap them in proxies which provide transaction managements, security, etc. Sounds like he's really just criticizing frameworks in general.
On the topic of DI, it's such a simple and common-sense design "pattern" that it shouldn't even have a buzzword label. All it means is that, given service A which uses service B, it's not service A's job to instantiate B and provide B with its required sub-dependencies (DataSource, config params, etc). A should only consume B without concerning itself as to how B was created in the first place. This is usually handled by some "container" service whose job is to build-up every other service and make them accessible to one another so they may be strict consumers without transitive dependencies.
> “This implementation is difficult to unit test.” Horsepucky. You can still have dependency injection without a framework. Just make a constructor that takes the dependency as an optional parameter. Done. Applause. Early lunch.
Ok so it's specifically the frameworks that's disliked.
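A sketch of that "optional constructor parameter" style from the quote, assuming Python; the names (`Clock`, `Scheduler`, `FakeClock`) are hypothetical. Production code gets the real dependency by default, tests inject a fake:

```python
import time

class Clock:
    def now(self):
        return time.time()

class Scheduler:
    def __init__(self, clock=None):
        # DI without a framework: default to the real implementation.
        self.clock = clock if clock is not None else Clock()

class FakeClock:
    def now(self):
        return 0.0

# Production: Scheduler(). Tests: inject the fake.
assert Scheduler(clock=FakeClock()).clock.now() == 0.0
```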
> Furthermore, dependency injection frameworks encourage you to think in terms of globals. That’s what they inject! A single, globally-configured instance of a class. Think about it. If you one day want two different instances of an injected variable, you’ll need an impact driver to express just how screwed you are. This has all kinds of knock-on effects in terms of reducing encapsulation and separating state from behavior.
I would expect any decent DI framework can name things when you want different flavours.
The only real problem I've had was with slow startup using Spring/Boot which I blame on DI auto/scanning.
In Spring, you can have multiple beans (= DI object instances) of the exact same class/interface. You can define one as primary, and you have to give them different names.
You can also automate bean creation per thread, per request, per session or whatever else floats your boat. Instance/bean persistence is easy too, if you really want to go that far (you should not).
For regulatory reasons, I once even had to implement a datasource selector for spring, that would pick the database connection based on userId.
Why do people that have zero idea about what they are writing find so much attention on hacker news?
Try Dagger; it generates DI code during the build. So it's kind of hard-coded, but the framework does it for you. Startup is much faster, and the result is easier for the JIT compiler to reason about (no reflection).
The first dependency injection framework I learned was Nucleus [1]. An unusual feature of Nucleus is that it has no type-based autowiring. You write a little properties file for every component, and to inject a component into another, the recipient uses the path to the other component's properties file. It is shockingly basic, but it works really well. Everything is explicit, but simple enough that it's not laborious to use. Having multiple instances of components is trivial, because they're just separate properties files. Indeed, the driving use case for Nucleus, the ATG commerce framework (since bought by Oracle), had multiple instances of many classes (e.g. the generic ORM repository class, for different siloes of data). I was really surprised when I first used an autowiring dependency injection framework, where this is either impossible or requires jumping through hoops.
And this is how we end up with spaghetti code. Dependency management is a critical design decision. If you're polluting your global namespace with random classes that get injected everywhere, you end up with a massive tree of intertwined dependencies. Not relying on a DI framework forces you to see that god-awful mess of spaghetti and do something about it or live with the consequences. If you're not thinking about how the major systems in your code interact and where the dependencies flow, then you are missing one of the most important design decisions.
Designing dependencies is critical but managing them is not.
I often found that unless you have DI you don't have the flexibility to make more than superficial design changes. People spend days passing instances down convoluted hierarchies because they don't have any other way. Much better to use DI and start designing who needs what as a direct dependency.
A dependency injection framework also helps you encapsulate dependencies into contexts which can be used instead of global namespace. At least it should if your DI framework isn't just doing glorified singletons.
You seem to be missing my point. Every time you inject a dependency, you are adding another node in the graph of object interactions in your codebase. What's more, DI frameworks only specify the lifetime of injected classes in the configuration file. So if there's an object with a scoped lifetime to the class it's being passed into, this isn't apparent without checking the configuration file. If you just look at it and see that it's added via a `new` in the constructor of the class, it's immediately apparent that the scope of that object is tied to the scope of the parent class.
> People spend days passing instances down convoluted hierarchies because they don't have any other way.
And this is a sign that either your architecture is flawed and this dependency is probably doing something more than intended, or it's actually a global dependency and probably doesn't even need to be an object. This can be solved by cleaning up your architecture, or by making the "dependency" a stateless function. One immediate example I can think of is a logging interface. I don't get why programmers think you need a "logger" (probably because of the warped idea that everything in a program must be an object). Instead, you could just make a log function that's available in the global namespace, with the appropriate thread safety.
Some things are global in nature, and that's ok. Adding a convoluted DI framework to hide that fact is not ok. I like to know which interfaces are truly global instead of hunting through a codebase to find out what the mess is actually doing.
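A sketch of that global log function, assuming Python; the locking strategy shown is just one option. A plain module-level function replaces the injected "logger" object, and a lock keeps concurrent writes from interleaving:

```python
import sys
import threading

_log_lock = threading.Lock()

def log(msg):
    # Module-level function, no object graph required; the lock keeps
    # writes from different threads from interleaving mid-line.
    with _log_lock:
        sys.stderr.write(msg + "\n")

log("startup complete")
```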
> A dependency injection framework also helps you encapsulate dependencies into contexts which can be used instead of global namespace. At least it should if your DI framework isn't just doing glorified singletons.
Sure, by hiding the lifetimes of all these objects in some massive configuration file. Now if I'm looking at class `Foo` all I see are a bunch of dependencies injected into the constructor. Any notion of which dependency is tied to the lifetime of `Foo` or global in nature or shared is now lost. Additionally, ditching the DI framework allows you to be more explicit about the lifetimes of all these constraints and formulate your code in a logical manner. One where dependency chains flow strictly one direction instead of the mess that a lot of code bases are left with.
Lastly, you didn't really refute my claim. You even seem to agree with me that managing dependencies is important, otherwise you wouldn't be using a DI framework.
My point is, this is an important design decision and should be treated as such. Using a magic DI framework allows you to hide all the messy chains that you're creating. If you ditch the framework and manually configure stuff, it forces you to really think about whether your architecture makes sense or not.
When you cross service boundaries, those globals get reset. I don't know how to say this exactly, but sometimes you're writing crappy code in a crappy language (Java), and nothing matters more than doing it quickly and integration-testing it. Worst case your code becomes such spaghetti that someone rewrites it, which is fine cause it's self-contained.
Are you evaluating the points made by James from your context and limiting your understanding? If you work for a company where software is not a differentiator, but a cost to doing business, then using frameworks, DI or not, is probably the right thing to do. But if your code is a core part of the business, you probably don't want to give control to some third party that may screw you.
All successful companies that I have worked for where the code is core to the business, rolled most of their own software (NPM for the web aside). Long term you need that control, understanding and speed of change if required.
What major upheaval examples should we fear? DI frameworks seem quite reliable, trustworthy, and consistent. I can't think of any examples of a community being burned by trusting their framework, nor of any cases or blog posts where someone has been left up a creek or has ended up hard-clashing with their framework.
I don't see what justifies this fear, uncertainty, and doubt.
Yes, I evaluate it from the perspective of developing enterprise software, where I need to designate extension points for a fluid number of team members. Only by using DI can I balance the flexibility of offering the interfaces people need with the oversight required.
Also just develop your own DI if you consider it business critical but not yet commodity (you don't do your own logging/crypto/math libs, right?).
Until you want to add an argument to that constructor and find yourself modifying lots of files just to update that call everywhere.
Or need a value from the application config, and have to patch the configuration instance through, several levels of classes deep. After wasting a day or two with those shenanigans, you’ll gladly take the DI framework, which makes both scenarios a single-line, 10 second change.
The Dependency injection pattern is just not that great in general. There are alternative patterns that are better.
It is not a problem with the frameworks. Think about it: if the pattern were good, then a good framework would exist. If no good framework exists, then logically it is very likely that something is wrong with the pattern itself.
Anyway the reason why DI is bad is because it's too complex. In your program, you should have logic, and then have data move through that logic to produce new data or to mutate.
When you have dependency injection, not only do you have data moving through logic, but you have logic moving through logic. You are failing to modularize data and logic and effectively creating a hybrid monster of both data and logic moving through your program like a virus.
The pattern that replaces dependency injection is this: functions. Simple.
Have functions take in data and output data then feed that data into other functions. Compose your functions into pipelines that move data from IO input to IO output. If you want to change the logic you simply replace the relevant function in the pipeline. That's it.
One very typical pattern is to have IO modules injected into other modules so that one can replace them with mock IO during unit testing. With function pipelines, things like IO modules should be IO functions, not modules injected into other modules. When you want to unit test your function pipeline without IO, simply replace the IO functions with mock IO functions. That's it. I will illustrate with pseudocode below.
def compose(a, b):
    return lambda v: a(b(v))

# x, y, z, f are plain functions; io_input/io_output do the IO at the edges.
pipeline = compose(io_output, compose(x, compose(y, compose(z, compose(f, io_input)))))
pipeline(None)  # io_input ignores its argument and reads from the outside world
The above is better than:
class F:
    def __init__(self, io_input): self.io_input = io_input
    def f(self, v): return self.io_input(v)

class Z:
    def __init__(self, f_obj): self.f_obj = f_obj
    def z(self, v): return self.f_obj.f(v)

class Y:
    def __init__(self, z_obj): self.z_obj = z_obj
    def y(self, v): return self.z_obj.z(v)

class X:
    def __init__(self, y_obj): self.y_obj = y_obj
    def x(self, v): return self.y_obj.y(v)

class IOOutput:
    def __init__(self, x_obj): self.x_obj = x_obj
    def print(self): print(self.x_obj.x(None))

pipeline = IOOutput(X(Y(Z(F(io_input)))))
pipeline.print()
You can see the second example is wordier and involves unnecessary use of state when you inject logic into the module: every class exists only to hold a reference to the next stage.
Dependency injection is a step backwards. It decreases modularity by fusing state with logic. It's a pattern that became popular due to the prevalence of excessive use of classes. If you can, I would avoid this pattern altogether.
No. I know what DI is. The debate is similar to imperative vs. OOP. But this is not exactly what I am referring to. Literally look at the second example, it's DI over and over and over again.
I am referring to DI 100%. Function composition works better and is a replacement for DI. You're the one that isn't getting it.
Either way, imperative programming and OOP are orthogonal concepts. There never really was an argument about imperative vs. OOP.
Additionally function composition is an FP concept. I'm not promoting FP over OOP here... far from it, I am simply saying that specifically for DI, you can borrow a pattern from FP and use it in place of DI because function composition is a much better pattern.
Dependency injection frameworks don't have to be "massive kitchen-sink things". They can be minimal and predictable. Ideally, they should just be a more declarative way of defining function dependencies and execution order.
I prefer koin instead. It does no magic. It's simple function calls packaged up as a nice Kotlin DSL. Pure declarative. Easy to debug. It does not even use reflection. I've used it with ktor, and with kotlin-js in a browser (it's a kotlin multi platform library). There's basically no overhead relative to the code you'd otherwise be writing manually. I'm not an Android developer but I hear it's pretty popular there as well.
I've used Spring dependency injection as well. It's actually not that bad if you use constructor injection only. No @Autowired in any code I touch; you don't need it. Constructor injection makes everything easy, and it makes it easy to test as well. My unit tests do not depend on Spring; there's no need. And with the recent declarative way of creating beans, it can be pretty similar to koin.
I've done some diy dependency injection as well on occasion. It's not that hard. Just separate your glue code (construction) from your logic. Your main function would be a good place. Constructors don't get to do work. I've seen some bad frontend code that violate these rules and it's a mess where nothing is testable because trying to run any bit of code you end up with half the code base firing up. Lack of a framework is no excuse for bad design.
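A sketch of that separation, assuming Python and hypothetical names: all the glue lives in `main()`, and constructors only store what they're given, so any class can be instantiated in a test without firing up half the codebase:

```python
class UserStore:
    def __init__(self, conn):
        self.conn = conn  # constructor stores; no queries run here

class SignupService:
    def __init__(self, store):
        self.store = store

def main():
    # Glue code: the only place construction order and wiring live.
    conn = {"dsn": "postgres://example"}  # stand-in for opening a real connection
    return SignupService(UserStore(conn))

service = main()
assert service.store.conn["dsn"].startswith("postgres")
```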
Koin is all about Android. I tried it briefly, but the examples don't work for me; e.g. `by inject()` looks cool, if it worked, but it doesn't, because it requires a Class parameter. Inconveniently, no examples show what to import, either.
We use this clunky C++ dep injection framework at my job that our most veteran SWEs swore never to use, but one day it became mandatory (lmao) for sorta unrelated reasons. The biggest issue is the initial learning curve; you don't gain much out of it for what you put in. It doesn't even catch bugs at compile time. Once you get it, you think of your code as having multiple main.cc's, and everything below that is regular code. So I think it's dumb, but it's not that big a deal.
When I write my own NodeJS backends, I just pass around a big object of all the global-ish deps like DB handles and thick clients. That works predictably and avoids the headaches you get meticulously plumbing everything through. So far I haven't felt any pain from this to push me to a DI framework.
There are far more important issues to consider in your system. If you're so concerned about globals, maybe you're making too big of a monolith service.
FWIW a fellow who has published books on the topic calls Service Locator an anti-pattern[1]. I take no position here as usage of any pattern is contextual but it's worth a read.
I like these 'pendulum swing' kinds of posts. In a couple of years this will be more widespread. Then, in another couple of years, somebody will invent containers for easy testing, but in the meantime we will learn some useful things.
At work, all of my functions have at most 3 parameters: deps (dependencies), params (parameters), and ctx (context), which covers all of my use cases: easy to test, debug, and isolate.
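A sketch of that three-parameter convention, assuming Python; the dict shapes and names are hypothetical. Collaborators go in `deps`, call inputs in `params`, request-scoped data in `ctx`:

```python
def create_invoice(deps, params, ctx):
    # deps: collaborators (db, clock, ...); params: inputs to this call;
    # ctx: request scope (user id, trace id, ...).
    record = {"amount": params["amount"], "user": ctx["user_id"]}
    deps["db"].append(record)  # deps["db"] is a plain list standing in for a store
    return record

fake_db = []
create_invoice(deps={"db": fake_db},
               params={"amount": 100},
               ctx={"user_id": "u1"})
assert fake_db == [{"amount": 100, "user": "u1"}]
```

Testing is just passing different dicts; no framework, no patching.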
This is the equivalent of a relational database schema where there are only a couple of tables, with columns such as (EntityId, RowId, ColumnId, Value).
In very rare cases this type of design is required, but the key word here is "rare". It shouldn't be the norm for ordinary apps such as typical web apps! If you find yourself doing this type of thing regularly, then you've likely made some sort of mistake.
The code is absolutely maintainable: simple by design, simple to test in isolation, simple to debug in isolation, simple to extend with features, simple to replace...
I'm not sure what more you'd want from production-ready code.
> “This implementation is difficult to unit test.” Horsepucky.
No, this implementation is difficult to unit test. The rebuttal, “Just make a constructor […]” changes the implementation. The author’s zeal to decry DI frameworks has made him forget for a moment that constructor injection looks the same whether a framework is involved or not.
I agree with this sentiment. Manual DI works well for small to medium projects.
I didn't want to adopt a framework for a bigger one, so I built a manual DI helper. It's for TypeScript, and it's been really helpful for me.
Zenject Dependency Injection for Unity has been an absolute game changer for me. It's wonderful to be less reliant on the Editor. Picking up old projects is quick because there's an easy-to-follow structure.
I like explicitly listing dependencies (as interfaces, since you shouldn't depend on concrete implementations). Golang's contexts are also a nice pattern for bundling dependencies opaquely based on scope when you just need to pass them through (for logging, tracing, and other ubiquitous purposes).
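Keeping with TypeScript, a sketch of declaring each dependency as the narrowest interface the function needs (the `Logger`/`PriceFeed` names are invented for illustration):

```typescript
// Each dependency is the smallest interface the caller requires.
interface Logger {
  info(msg: string): void;
}
interface PriceFeed {
  latest(symbol: string): number;
}

// The signature names exactly what the function depends on...
function quote(feed: PriceFeed, log: Logger, symbol: string): number {
  const p = feed.latest(symbol);
  log.info(`quoted ${symbol} at ${p}`);
  return p;
}

// ...so a test can satisfy each interface with an object literal.
const seen: string[] = [];
const price = quote(
  { latest: () => 42 },
  { info: (m) => seen.push(m) },
  "ACME",
);
```

The explicit parameter list is the opposite trade-off from the opaque context: dependencies are visible in every signature, at the cost of some plumbing.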
I guess my main problem with a dependency injection framework would be, why do you need a framework besides a class system and import statements to do dependency injection?
This feels like suicidally bad advice. Letting Fear, Uncertainty & Doubt about bringing in dependencies rule your decision making is... not smart. We have all been able to build great things because we have relied on open source.
The author talks about the disadvantage that your developers have to understand larger codebases, including code you might not actively be using. OK, to some degree, sure. But that codebase may have countless books & blog posts about it, and may have existing tests and example apps that show how to work it. If you hire someone, they stand a >0% chance of having worked with that framework before.
The capabilities built into these frameworks are immense. Many have iterated on their initial design a number of times, bringing a battle-won level of coherency that DIY may not reach. These frameworks often bear many modes of articulation, so that you can grow & expand the feature set of the framework you use over time, as need arises, whereas even if you do build just-the-right framework for yourself today, it may, tomorrow, lack whole realms of features that could help you. For example, things like the Spring Framework's "Aware" interfaces provide enormous capabilities to see what's happening, and to perform subtle modifications & tweaks to object instantiation or usage processes.
The protest against magic is another measure of foolhardy conservatism. It's true that, alas, many DI systems are not great at helping folks understand the "magic". Visualizing & seeing what's injected where, and what's loaded how, often requires some expertise, some knowing where to look. But there are well-defined rules and patterns here; it's knowable, and as a dev, if you learn it, that knowledge can stick with you across projects & jobs. Many frameworks have really good introspection capabilities: another example of code you might not need in most cases, but which can be enormously powerful to have when you need it. With Aware classes, there is huge ability to write very small scripts that make the DI runtime tell you what it's doing. Being this capable, this flexible, this prepared with a DI you created on your own seems remarkably unlikely.
This is such an ubuntu case. Not the distro, the meaning of the word: if you want to go fast, go alone; if you want to go far, go together. The risks portrayed here are unbelievably minor and have almost never caused real harm & damage where DI is concerned. People going off and cobbling together their own very partial patchwork solutions have done an incredible disservice to themselves, their teammates, the devs that inherit the project, the org, & the customer. Use good software, adopt it, embrace learning it, and don't let fear rule; don't talk yourself out of it out of worry.
I did not downvote but your original response seems like it’s talking about dependencies (eg. 3rd party libraries) and the article is about dependency injection which is a different thing. So when I first read your comment it didn’t make much sense to me but maybe I missed something.