The scenarios where SL is justified are rather specialized. But as the author points out, they still get pushback for using SL even in these scenarios, where using it is a perfectly reasonable decision.
That something is an anti-pattern in most cases doesn't mean we should reflexively reject it while ignoring the context of a particular task.
I read the anti-SL article, and maybe I build things a little differently, but I tend to build applications whose components interact with services differently depending on which layer they belong to.
1) Application entry: creates the context and the service locator, and provides them to top-tier participants.
2) Top-tier participants: command/request routing and configuration. Receives the service locator; provides concrete services to children.
3) Features: plain old functions (or classes) that receive their dependencies when called or constructed.
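The three layers above can be sketched roughly as follows. This is a minimal illustration, not the commenter's actual code; `Locator`, `FakeDb`, and the function names are all invented for the example.

```python
class FakeDb:
    """Stand-in service; a real app would hold an actual DB client here."""
    def fetch_user(self, user_id):
        return {"id": user_id, "name": "alice"}

class Locator:
    """Built once at application entry (layer 1), handed to layer 2."""
    def __init__(self, db):
        self._db = db

    def db(self):
        return self._db

# Layer 3: a plain function that receives concrete dependencies.
def show_user(db, user_id):
    return db.fetch_user(user_id)["name"]

# Layer 2: routing; receives the locator, passes concrete services down.
def handle_request(locator, user_id):
    return show_user(locator.db(), user_id)

# Layer 1: application entry; the only place that sees the whole locator.
def main():
    locator = Locator(FakeDb())
    return handle_request(locator, 42)
```

Note that layer 3 never sees the locator at all, which is what keeps the feature code trivially testable.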
No frameworks necessary.
Start up is fast.
Things mostly get built/called only when needed.
My service locator generally just looks like a bag of getters, some of which behave like singletons while others return a new instance every time.
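A "bag of getters" in that style might look like this sketch (the `Config`/`Counter` services are hypothetical placeholders):

```python
import itertools

class Config:
    """Placeholder for an expensive-to-build, shareable service."""
    pass

class Counter:
    """Placeholder for a service that must be fresh each time."""
    _ids = itertools.count()
    def __init__(self):
        self.id = next(Counter._ids)

class Locator:
    """A bag of getters: config() is singleton-style, counter() is transient."""
    def __init__(self):
        self._config = None

    def config(self):
        # Lazily created once, then reused on every later call.
        if self._config is None:
            self._config = Config()
        return self._config

    def counter(self):
        # A brand-new instance on every call.
        return Counter()
```

The laziness is what makes startup fast: nothing is built until the first getter call.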
Testing is easy and obvious.
There aren't a ton of places where args are being painfully or magically forwarded.
I've had this pattern work for UIs, servers and embedded projects.
It's super fast, ergonomic, and lightweight, but most importantly, testable.
Maybe all the fuss is about C#/Java magic entity registration and creation frameworks?
If that's the case, then maybe I agree?
IMO those feel like a promising experiment that didn't work out great in practice.
I've built systems (even recently) where too much stuff found its way into the SL, and I can agree that's not good.
Nowadays, I just put in the things one would be tempted to make a Stinkleton out of: ones or, at worst, tens of things, not hundreds.
I think your model works at small scale and/or with tight knowledge/architecture control. I had very bad experiences with SL at scale.
I guess a lot of us would let the DI container create a transient object for your case (2: commands), injecting the required services into the command via its constructor, from which they can then be forwarded.
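That approach can be sketched with a toy container; a real DI framework does far more, and the `EmailService`/`SendWelcomeCommand` names here are invented for illustration:

```python
import inspect

class EmailService:
    def send(self, to, body):
        return f"sent to {to}: {body}"

class SendWelcomeCommand:
    # All dependencies are visible in the single constructor.
    def __init__(self, email: EmailService):
        self._email = email

    def execute(self, user):
        return self._email.send(user, "welcome!")

class Container:
    """Toy stand-in for a DI container with one registered singleton."""
    def __init__(self):
        self._singletons = {EmailService: EmailService()}

    def resolve(self, cls):
        # Build a fresh (transient) command, injecting registered services
        # matched by constructor type annotations.
        deps = [self._singletons[p.annotation]
                for p in inspect.signature(cls.__init__).parameters.values()
                if p.annotation in self._singletons]
        return cls(*deps)
```

Each `resolve` call yields a new command instance, while the injected services stay shared, which is the singleton-service/transient-command split the comment describes.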
FWIW - I've also had bad experiences with specific implementations of SL (both in the small and large). I'm especially frustrated by the magical creation patterns of some popular DI libraries.
I think there are difficult trade-offs to balance, and a huge area of focus for me is making tests easy to write, fast to run, and extremely reliable.
I find that creating and managing global state often leads to less reliable and more frustrating tests; these can cause test discipline to collapse, so I'm willing to deal with some pain to avoid those outcomes.
I can understand how other folks might feel differently and may take deep offense at some boilerplate that I think is a little smelly.
The same folks might be willing to put up with more pain in the test environment, or may not place as much value on tests as I do. That's okay too.
I do use this pattern in some cases, but none of them would be technically justified if I were to do a clean rewrite.
Being able to grab arbitrary instances of things at runtime is a superpower in my view. The ability to tolerate various "anti-patterns" in your codebase - even more so. Maintaining complex codebases over many years and developers sometimes requires these sorts of compromises.
That is exactly why dynamic factories are okay but a service locator is not. A dynamic factory has a purpose and typically a base type; it is not arbitrary, and it is not a superpower.
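The distinction can be made concrete with a sketch: a dynamic factory is constrained to one base type and one purpose, whereas a locator can hand back anything. All names here (`PaymentProvider`, `Stripeish`) are hypothetical:

```python
from abc import ABC, abstractmethod

class PaymentProvider(ABC):
    """The single base type the factory is allowed to produce."""
    @abstractmethod
    def charge(self, amount): ...

class Stripeish(PaymentProvider):
    def charge(self, amount):
        return f"charged {amount}"

class PaymentProviderFactory:
    """Dynamic factory: picks an implementation at runtime, but only
    ever returns PaymentProvider subclasses -- nothing arbitrary."""
    _registry = {"stripeish": Stripeish}

    @classmethod
    def create(cls, name) -> PaymentProvider:
        return cls._registry[name]()
```

A caller of `create()` knows exactly what interface it will get, which is precisely the constraint a general-purpose locator gives up.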
Just last year I had the pleasure of starting a new project and deciding on software architecture for it.
First try had a Dependency Injection framework, with automatic creation of instances at the time the request was routed to a dispatcher class, because it was the default "best practice" of the framework I was using.
I soon realized that I was relying on object creation that was too magical for my taste. It also only happened at one preset point in the request cycle, which was not flexible enough, and would not let a subcomponent have its dependencies managed without involving its callers all the way up to the request dispatcher.
Second try toned down the magic by using a Service locator and passing it all around. Quickly realized that all the code was burdened by passing an extra param for absolutely no benefit. If whole chains of functions are passing the same param without making any changes to it, and it only ever takes a single value in the whole program, you may have reinvented global variables, badly.
Third try decided to bite the bullet and embrace carefully managed global state. The running environment is global. So the SL might as well be a single global. This stage did not make it into code because...
Why would you want to create a single monster class called SL or "Context" or some such, artificially bringing together all sorts of unrelated things like your DB hostname, your AWS credentials and your password hashing secrets?
If you follow object orientation as a dogma, then sure, you have a problem rooting the whole system of creating objects. Just like in religious dogma, if every existent has to be created by a previous existent, your logic inevitably leads you to ask for a first existent with God-like powers. So the Service Locator becomes the demiurge or creator God of object orientation...
In the end I decided OO is not my religion. I write modular code, but whether a module is implemented as a class, or a folder with a few closely interrelated classes, a bag of functions, a whole bunch of closures, or something else, is an implementation detail. What I know for sure, from John Ousterhout's holy teachings, is that I want my modules to be the right size, not just minimal, and certainly not "single function", so they encapsulate as much complexity as needed while presenting a small and clear interface to the rest of the world.
Which means, in the case of my project, that "get me a DB handler" becomes a static method of a DB management class, which knows all about DB management: where the credentials are found, what things should be done differently in production or development, how to reuse DB connections, etc. Over time this class can grow to accommodate and encapsulate more DB-handler-management logic, such as getting connections for writing vs connections to a replica for reading, or returning mock DB connections for integration testing, and whatnot, without overextending its scope.
And whatever code needs a DB connection, just calls DB::get().
DB::get() is a static method that figures out from its environment what kind of DB connection to give you.
DB connections themselves belong to another class (say, DbConn), which is not a singleton, and does not keep global state. But user code never instantiates that directly, and only ever calls DB::get(), which does know about global state, and figures out whether an existing connection should be reused.
If you need to set up things in a unit test so that DB::get() will return some kind of mock object, then you call some kind of DB::setupMock() method first.
Basically, I'm taking the kind of code that would go into the DB section of a DI configuration, and putting it in a separate class of its own, which is fully in charge of that particular bit of functionality: providing DB connections to the application.
And since it's functionally a global, I'm embracing that and making its method(s) and state static. Application startup calls DB::init() once, and then it's ready to go.
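A Python rendition of the pattern described above might look like this. It is a simplified sketch: credential handling and connection pooling are reduced to placeholders, and `setup_mock` is a Python spelling of the `DB::setupMock()` idea.

```python
class DbConn:
    """A plain connection object; holds no global state of its own."""
    def __init__(self, dsn):
        self.dsn = dsn

    def query(self, sql):
        return f"{self.dsn}: {sql}"

class DB:
    """Fully in charge of providing DB connections to the application.
    Deliberately global: all methods and state are at class level."""
    _conn = None
    _factory = None

    @classmethod
    def init(cls, dsn):
        # Called once at application startup; a real version would read
        # credentials and environment (prod vs dev) here.
        cls._factory = lambda: DbConn(dsn)

    @classmethod
    def get(cls):
        # Reuse an existing connection if there is one.
        if cls._conn is None:
            cls._conn = cls._factory()
        return cls._conn

    @classmethod
    def setup_mock(cls, mock):
        # Test hook: subsequent DB.get() calls return the mock.
        cls._conn = mock
```

User code only ever touches `DB.get()`; `DbConn` itself is never instantiated directly outside this module.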
That is all well and good until the day you need it to return two different results depending on which context it's running in.
Relying on fixed global state is asking for trouble down the line if it's something that will be evolved over time. DI makes such changes possible because you're injecting dependencies through constructors, while still allowing you to have a singleton DB-managing class today.
Another major benefit for long-term maintainability is the ability to see _all_ dependencies in a single constructor. It certainly makes writing tests simpler, from unit tests to in-memory integration tests.
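The constructor-injection alternative can be sketched briefly; `ReportService` and its dependencies are hypothetical names:

```python
class ReportService:
    def __init__(self, db, clock, mailer):
        # Every dependency is visible in this one place; nothing global.
        self._db = db
        self._clock = clock
        self._mailer = mailer

    def run(self):
        return f"report at {self._clock()} using {self._db}"

# In a test, substitutes are passed directly; no global setup/teardown
# hooks and no mock-registration step are needed.
svc = ReportService(db="fake-db", clock=lambda: "noon", mailer=None)
```

Swapping `db` for a replica-reading connection, or `clock` for a frozen time, is a change to one call site rather than to global state.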
One problem I've run into is that these things tend to proliferate into a dozen or so global things that people need to remember to deal with (clean up and/or configure) in the test environment.
Since my brain doesn't do memory well, I start getting intermittent test failures from interacting tests that can be painful to debug.
Another problem is that they tend to make the test environment really slow, because the fix is often to add a pre- or post-handler to every single test, globally.
Also, the feature might be designed for environment A, but later need to run in environment B. Global state patterns tend to blow up when that happens.
The DB::setupMock() thing tends to work okay for direct consumers, but it gets bad when the transitive dependencies need to start doing that for 6-10 different services.
FWIW, I commented elsewhere with an alternative structure that's been working well for me with a different set of trade-offs.
Also FWIW, I've been fighting against global state for so many years, it makes me really sad to see it getting promoted again.
Best of intentions here, not trying to be critical, just trying to share some hard-won experience.
What you say makes lots of sense, I think the best design depends on many other factors, like the size of the codebase, the size of the team, and how modularity is handled within the whole system. What made sense for my project might not make sense for another. And yes, I agree that if something needs to be a module that can be used in unrelated projects, it should not depend on global state... unless those projects are explicitly defined to share a common programming environment, in which case they are not pure modules anymore.
My point, maybe going a bit meta, is that no matter how you organize it in terms of which language features you are using, application code effectively does depend on its running environment, and the information from those dependencies needs to get there somehow.