That's where the DI part comes in -- it enables us to load a different dependency per customer. Now, you could provide a different bootstrapper class per customer, or start building a mega class with a lot of ifs-and-buts (and elses), or you can specify this kind of stuff in a configuration file.
We opt for configuration files. We opt for IDEs that can interpret Spring configuration files, which means typos and incorrect dependencies show up before deployment. This allows us to swap out implementations when shit goes down without having to recompile, connect over a VPN to a remote desktop, and hop through a few more hoops to get our class file on the other side. If you're not running your own apps on infrastructure you control (we should be so lucky), you have to take this sort of stuff into account. I'd love to be able to say "This is how stuff's set up. Deal with it."
(And when shit goes down, it is usually not at the place you were expecting it to happen, which means all kinds of configuration options for your bootstrapper would probably still fall short.)
Guice is a nice IoC framework, but look at what they say on their website:
"Think of Guice's @Inject as the new new."
If your code has hard static dependencies on DB calls, you now have an excuse not to write tests against it because "it's hard to prepare the environment to do such things."
Not all projects are as small as a Todo-List written using Rails.
Medium-to-large projects with large DB schemas exist in which it will take hours to run your Rails tests: "testing-my-active-record-models-as-unit-tests"
The problem is that when people learn about new techniques then they end up wanting to use those techniques in their next project. It reminds me of something a doctor said to me once, "Every time a new disease is identified there's an epidemic." Meaning that when doctors have something they can now identify as the new disease they are much more likely to do so since they've just heard about it, when before they would have identified it as "flu".
One of the advantages Spring gives you in your scenario is multiple options for configuring your app. So in your case you can use annotations for the parts of the app that don't change, and then specify the parts that differ per customer using XML (I seem to recall you might even be able to wire up a Spring container using vanilla Java).
Spring actually brings a lot more to the table, properly qualifying it as an IoC container. It offers mechanisms for applying cross-cutting aspects such as transaction and security management.
When you spend a long time learning how to work effectively with static languages, a few patterns emerge: dependency inversion and composition.
Where does this leave us? It often means we have classes that need one or more other objects when they are instantiated. This often results in chains of objects. For example,
new Repository(new SessionFactory(new Configuration()))
Writing that code is pretty painful. Rather than dealing with long chains of dependencies and manually wiring everything up, you define mappings and let an IoC container do it for you. Why not just wire everything up manually in static functions and be done with it? Because it is really hard to change static code, and often impossible to test it, in a language like Java/C#. There is almost nothing you can do to test a piece of code that calls out to User.find(1) in one of its methods.
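The mapping idea can be sketched with a toy container that reads constructor annotations and recursively builds the whole chain. This is a Python illustration with class names mirroring the snippet above, not any real container's API:

```python
import inspect

class Container:
    """Toy IoC container: register abstract -> concrete mappings and let
    resolve() build the whole constructor chain from type annotations."""
    def __init__(self):
        self._bindings = {}

    def register(self, abstract, concrete):
        self._bindings[abstract] = concrete

    def resolve(self, abstract):
        concrete = self._bindings.get(abstract, abstract)
        sig = inspect.signature(concrete.__init__)
        # Recursively resolve every annotated constructor parameter.
        args = [self.resolve(p.annotation)
                for name, p in sig.parameters.items()
                if name != "self" and p.annotation is not inspect.Parameter.empty]
        return concrete(*args)

class Configuration:
    pass

class SessionFactory:
    def __init__(self, config: Configuration):
        self.config = config

class Repository:
    def __init__(self, factory: SessionFactory):
        self.factory = factory

# One call replaces new Repository(new SessionFactory(new Configuration()))
repo = Container().resolve(Repository)
assert isinstance(repo.factory.config, Configuration)
```

Real containers add lifetimes, interface bindings, and error reporting on top, but the core mechanism is this recursive walk over declared dependencies.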
Why are there no IoC practices in Ruby? Primarily because the second point is no longer true (you can test everything easily). I still think there is merit in knowing the dependency chain rather than just calling User.find(1) and hoping that something somewhere conjures up a database connection and configures everything correctly, at the correct time.
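The testability point can be made concrete: code that reaches out to a static finder needs a live database in tests, while code that is handed its finder can be given a fake. A minimal Python sketch, with hypothetical names:

```python
class User:
    def __init__(self, user_id, name):
        self.id, self.name = user_id, name

class Greeter:
    """Instead of calling a static User.find(1) internally, Greeter is
    handed its finder, so a test can pass a fake with no DB involved."""
    def __init__(self, find_user):
        self._find_user = find_user

    def greet(self, user_id):
        return f"Hello, {self._find_user(user_id).name}"

# In a test, no database connection needs to be conjured up:
fake_find = lambda user_id: User(user_id, "alice")
assert Greeter(fake_find).greet(1) == "Hello, alice"
```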
IoC is control and automation. Those are both powerful tools to any programmer.
If you think you are making things less painful by obfuscating the code snippet above I would challenge you to think about what you are doing. The above is incredibly clear. There is no doubt what is happening and if you screw up the compiler will yell at you.
If you obfuscate it through annotations and force readers of your code to try to reason about how the system will behave runtime, you have added exactly zero value. You have only degraded the readability and understandability of the system.
A little extra typing never killed anyone.
I'd argue a few things here:
1. An IoC container that relies on annotations or XML configuration is not an IoC container that you should be using. Unless you're using an IoC container to help with field configuration, component registration needs to be done through a fluent API so that it's strongly typed and right there in the code where everyone can see it. A well-crafted registration routine should be every bit as readable as the hand-coded factory it replaces, if not more so.
2. Code that's architected in such a way that you need to understand explicit details about the order or manner in which objects are instantiated is code that is not ready to be used with an IoC container. The SOLID principles are a non-negotiable prerequisite of any IoC container. If you haven't drunk that Kool-Aid, an IoC container doesn't really have a lot to offer.[1]
3. It's not about making individual snippets of code less painful. It's about making the long-term maintenance of the application - or, for preference, an entire suite of applications - less painful.
1: http://kozmic.net/2012/10/23/ioc-container-solves-a-problem-...
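The fluent registration described in point 1 might look like the following Python sketch (a hypothetical API, loosely modeled on fluent registration in containers like Windsor; not a real library):

```python
class Registration:
    """Second half of a fluent registration: .implemented_by(...)"""
    def __init__(self, container, service):
        self._container, self._service = container, service

    def implemented_by(self, impl):
        self._container._bindings[self._service] = impl
        return self._container   # allow chaining further registrations

class Container:
    def __init__(self):
        self._bindings = {}

    def register(self, service):
        return Registration(self, service)

    def resolve(self, service):
        return self._bindings[service]()

class IMailer: ...
class SmtpMailer(IMailer): ...

container = Container()
container.register(IMailer).implemented_by(SmtpMailer)
mailer = container.resolve(IMailer)
assert isinstance(mailer, SmtpMailer)
```

The registration line reads much like the hand-coded factory it replaces, and a typo in a class name fails immediately rather than at some later resolution time buried in XML.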
...
public Repository(ISessionFactory factory)
...
The repository never knows about IoC or where things come from. Nor should it.

I've run into situations with Windsor along the lines of what he describes, and it really is a distressing situation to be in. You're happily coding along, whipping the project together, hacking decorators, auto-registering everything with a great set of conventions, and generally having a grand old time. And then something goes kachunk and suddenly it's damn near impossible to figure out how to tell the container exactly how everything needs to be wired up without creating a raft of custom dependency providers to deal with special cases that the container just wasn't designed to handle properly. And the configuration becomes increasingly diffuse, devolving into a mishmash of conventions, manual registrations, and registrations that are implied by custom implementations of IFooResolverProviderFactory, until eventually you realize that you're working with a modern-day edition of what Dijkstra was talking about when he wrote GOTO Considered Harmful. And at that point it might be time to tearfully say goodbye to the IoC container and hand code some abstract factories, because not only will they be less opaque but you'll soon discover to your own horror that they actually reduce the SLOC count.
But - butbutbutbutbut - where I depart with the author is in thinking that this means that IoC containers are bad. Quite the contrary. Avoiding an IoC container because your dependency graph might, just might turn out to be more complicated than what the container was designed to handle is a premature optimization. And will probably lead to worse code than what you'll get if you start out with an IoC container and are forced to give it up later, since using an IoC container really does encourage SOLID code.
I guess what I'm saying is, IoC containers are only "mostly amazingly great." It's still a young technology with some kinks to work out. Even venerable old Windsor is still experiencing significant API flux. They don't necessarily have an answer to every use case yet, even if they are getting damn close.
Oh, and writing libraries. If you're a library, you are not the composition root, by definition. And if you're not the composition root, you don't get to use an IoC container. Sorry. Have a facade.
I always keep hearing this argument "but something will change". No, it won't. Is it really worth introducing all this boilerplate nonsense into the application just for the off chance of something changing at some point? And even if that happens, don't you think you're better off just doing whatever modifications are necessary vs. putting all this crap into your application?
The only scenario where I've found IoC to make sense is when an application depends on a module and that module REALLY needs to be swapped out for another one. I once had that with a stock market analysis application that depended on X service for data feeds; X service suddenly stopped working, and I could just inject Y service instead without having to change anything else. But even in that case an IoC container would've been redundant, as plain old DI does the job just fine. That was the only thing in the application that I wrote that way, because I didn't want to hard-code an external dependency (that I have no control over) into my application.
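That swap with plain DI, no container, might look like this Python sketch (the feed classes and values are hypothetical stand-ins for the services described):

```python
class XFeed:
    """Stand-in for the external service that stopped working."""
    def quotes(self, symbol):
        raise ConnectionError("X service stopped responding")

class YFeed:
    """Stand-in for the replacement provider."""
    def quotes(self, symbol):
        return [101.5, 102.0]

class Analyzer:
    def __init__(self, feed):       # plain constructor injection, no container
        self._feed = feed

    def latest(self, symbol):
        return self._feed.quotes(symbol)[-1]

# Swapping providers changes exactly one line at the composition root:
analyzer = Analyzer(YFeed())
assert analyzer.latest("ACME") == 102.0
```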
But what I keep encountering is the cargo cult approach to programming where IoC is done for the sake of IoC and everything is injected for the sake of being injected, with no rational explanation as to why. It's quite terrible really.
Ditto turning everything into a "service" a la WSDL and now SOA. CORBA lives!
This problem will typically surface whenever you add a dependency between existing components. If the dependency is currently instantiated after the changed component, it will have to be moved up so it is instantiated earlier. But then it turns out the dependencies of that component also need to be moved up etc.
To deal with this you then have to calculate a "dependency rank" for each component (i.e. 0 for components without dependencies, and x+1 for components where the max rank among the dependencies is x) just to figure out a new order that works.
This problem doesn't exist if you use a dependency injection container with configuration in code, because the order of component registration is not important.
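The rank calculation described above can be sketched in a few lines of Python (assuming an acyclic dependency graph; the component names are illustrative):

```python
def dependency_rank(deps):
    """deps maps component -> list of components it depends on.
    Rank 0 for components without dependencies; otherwise
    1 + the max rank among the dependencies. Assumes no cycles."""
    ranks = {}
    def rank(c):
        if c not in ranks:
            ranks[c] = 0 if not deps.get(c) else 1 + max(rank(d) for d in deps[c])
        return ranks[c]
    for c in deps:
        rank(c)
    return ranks

deps = {"repo": ["session_factory"], "session_factory": ["config"], "config": []}
# Instantiate in ascending rank order: config, then session_factory, then repo.
assert dependency_rank(deps) == {"config": 0, "session_factory": 1, "repo": 2}
```

A container performs an equivalent topological walk internally, which is why registration order stops mattering.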
You don't need the big machinery for smaller projects, though. What I end up doing in Python is having builder methods in __init__.py which can instantiate objects from the package (possibly instantiating objects from sub-packages in the process), with any dependency not in the package or sub-packages being a parameter. You get some granularity in your dependency injection, you avoid magic and you don't need to use the builder methods if you don't want to. It may not scale to large class hierarchies, but for mid-size projects it's fine.
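The builder-methods approach might look like this (a hypothetical package layout; the classes and the db parameter are illustrative):

```python
# mypackage/__init__.py -- builder functions instantiate objects from the
# package; anything outside the package (here, the db connection) stays
# a parameter, giving you dependency injection without any magic.

class Repository:
    def __init__(self, db):
        self.db = db

class ReportService:
    def __init__(self, repository):
        self.repository = repository

def build_repository(db):
    return Repository(db)

def build_report_service(db):
    # Wires up sub-objects of the package; external deps are passed through.
    return ReportService(build_repository(db))

service = build_report_service(db="fake-connection")
assert service.repository.db == "fake-connection"
```

Callers that want the builders use them; callers that want full control instantiate the classes directly.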
> In the Java community there's been a rush of lightweight
> containers that help to assemble components from
> different projects into a cohesive application.
> Underlying these containers is a common pattern to how
> they perform the wiring, a concept they refer under the
> very generic name of "Inversion of Control". In this
> article I dig into how this pattern works... and
> contrast it with the Service Locator alternative. The
> choice between them is less important than the principle
> of separating configuration from use.
[1] http://martinfowler.com/articles/injection.html (edit: Also, the Spring Framework has had alternatives to XML configuration for years: http://www.ibm.com/developerworks/webservices/library/ws-spr...)
I guess it's super-obvious from context (and the claim of the title) so I guess I'm just not very smart, today. Oops.
Things like callbacks, observers, events, ... are also types of IoC.
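In each of those, the framework owns the control flow and calls back into your code, which is the inversion. A minimal observer sketch in Python:

```python
class Button:
    """The component owns the control flow; your code is called back."""
    def __init__(self):
        self._observers = []

    def on_click(self, callback):
        self._observers.append(callback)

    def click(self):                 # normally driven by an event loop
        for callback in self._observers:
            callback()

clicks = []
button = Button()
button.on_click(lambda: clicks.append("clicked"))
button.click()
assert clicks == ["clicked"]
```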
This component tree is the core of your application architecture. If the classes are well named, a big part of the application structure should become clear to someone reading your hand-coded main(). Also, if this is such a pain to figure out, why are you using components at all?
I find the idea of splitting functionality into classes but having no clue how these classes relate to one another a tad scary, really. I usually have a picture of the entire component tree on the wall near our team. It's great for pointing at when discussing design, and it's trivial to draw.
Writing a main() really shouldn't be more work than copying that picture into code. A tad tedious, but never more than an hour of work. Why add all that IoC complexity to save 60 minutes? Why replace an excellent starting point for new team members to dive into the code with a library that does magic?
Deployment variability doesn't cut it for me. Nearly always, you can foresee which things may need to be variable, and which aren't. Explicitly make those configurable, instead of all classes and their arguments like you'd do in Spring. In your main(), just if or switch over these config settings to instantiate the right class. Again, very readable, and you strongly convey intent to code readers.
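The explicit-main() style being argued for might look like this Python sketch (the store classes and config key are hypothetical):

```python
class PostgresStore:
    kind = "postgres"

class InMemoryStore:
    kind = "memory"

def main(config):
    # Explicit, readable wiring: switch over the few knobs that
    # genuinely vary per deployment, and hard-wire everything else.
    if config["store"] == "postgres":
        store = PostgresStore()
    else:
        store = InMemoryStore()
    return store

assert main({"store": "memory"}).kind == "memory"
assert main({"store": "postgres"}).kind == "postgres"
```

A reader sees every possible configuration at a glance, with no container resolution rules to reason about.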
I know there are many people here that might crucify me for this opinion, but I call bullshit. They just have never had to work on something sufficiently complex, where IoC and frameworks like Spring can be a huge time saver. Obviously with any tool you have to make sure you don't chainsaw off your leg, and it's really stupid to do certain things with any tool. The black magick of AOP and IoC can make reasoning about things a little harder, but the sheer power of the tool makes solving hard problems in complex applications tractable. Try auditing transaction management in an application with 10 thousand tables written by hundreds of people of various experience levels. Or just use Spring transactions and inject the well-tested behavior across the entire application.
If people need the assistance of a framework to wire up their application, it is probably because the API design sucks ass and there are too many moving parts the programmer is forced to pay attention to. (And when I say API design: all code design is about APIs -- every time you create a class or an interface you are creating an API of some sort). Good APIs hide and abstract. And good implementations take care of things you don't need to see or know.
And I say if, because most of the time applications are not so complicated that you need help wiring them up. Most of the projects I have worked on in the past 15 years could be wired up in less than 40-50 lines in a Main.java. I have worked on some projects that needed considerably more, but interestingly, managing that code was never a problem.
Explicit wiring you can read from start to finish beats vague declarative shit that you may be able to figure out if you pay close attention.
I don't really understand what problem using the dynamic language is trying to solve there, and you've just added in a whole new branch of different code that won't work with static analysis, and will need to be maintained in addition to your normal code.
Examples (C#):

* Autofac
* Ninject
I agree with a lot of what he writes. At the same time I believe there are cases where I would appreciate some kind of automation for wiring (f.ex. if I want a fresh instance for each session or request). And there are certainly cases when runtime wiring - as in the mentioned case of plugins - is useful.
* by interface
* by enum
* by parametrized constructor (Func<..>)
* by (arbitrary) string
* per process / per thread / per whatnot
IoC containers in C# have rejected config in XML, in favour of config in code, using either generics and lambdas (e.g. register Component.For<IFoo>().ImplementedBy<Bar>() ) or via scanning (e.g. AllTypes.FromAssembly(x).BasedOn<Foo>().WithService.FirstInterface() ).
Spring.NET was an exception, and in comparison it really sucked.
IoC config in code is much better. But can't this be done in Java too?
I wouldn't go and dismiss one of the most popular enterprise frameworks. As a user of Spring / Hibernate for the last 7+ years, it makes a lot of sense to me in the enterprise context, just like Ruby on Rails and Play Framework make sense to me in the SaaS context. For instance, having Play Framework with a Spring module is a nice interim solution for bringing newer technology to the enterprise world.
It all depends on what you need to be doing. Spring is great for large enterprise apps that have long lifetimes, need to connect to legacy stuff, need common enterprise integration patterns, etc.
Also if you check SpringSource's latest stuff, you'll find out they are investing in lots of cool new things, e.g. the Scripted text editor (https://github.com/scripted-editor/scripted/), and are influenced a lot by the Rails, Django, Node.js and Play communities.
The enterprise world is not that stupid; it's perhaps just a bit behind, but it's still making tons of money from its software, in the billions, and that software is more likely to be written with Spring / Hibernate than with Rails / ActiveRecord.
Just give it some time and patience.
Managerial idiots like overambitious single-program IT projects because it makes it easier to allocate "headcount" when the programmer-to-program relationship is inverted (many programmers to one program, instead of the right way, which is one programmer working on many programs).
The truth is that every programming approach fails you when you do this. For one example, static typing fails on versioning issues and compile-speed problems, while dynamic typing ends up with silent failures and integration bugs.
There is one case I can think of where large single programs work, and that's in the database space. You have a lot of requirements (transactional integrity, concurrency, performance in a heterogeneous world) and they all have to work together. It has also taken decades for some of the brightest technical minds out there to get it right.
Eventually programs written in these languages can grow huge, but it takes longer, and the preference tends to then be to break a program down into smaller libraries (gems, eggs, etc.) and separate applications that communicate with each other.
Or switch to Scala like me.
I think the issue with the enterprise, large-single-program Java world is that there isn't a feedback cycle. Curious, motivated engineers hate that kind of environment because they want to see things actually work. Clojure and Scala have the REPL, but large-program Java development doesn't.
I would argue that it is based on the idea that there's going to be too much code to read it all, and that knowing you need to use something shouldn't automatically entail knowing how to create something.
For a serious response, I've seen tons of Java code of varying quality. Most was worse than horrible, but I have seen well-written Java. The Java code for Clojure is excellent, for one example. Most corporate Java code is indistinguishable from profanity.
There are many programmers working in the Real-World [TM], powering great apps and webapps, using saner technologies who have never heard of the term.
I wrote my own IoC container (a clone of PicoContainer, IIRC) to see what the fuss was about when the term started to become fashionable. No big deal. It can help reproduce various states (for example, in testing environments) to more easily test your app.
Sure people will say: "You need an IoC container to do DI, what you're talking about is DI, not IoC".
But, fundamentally, it is once again related to "state" and the difficulty to recreate a state in {dev,pre-prod,prod,whatever}.
So it's once again a "solution" missing the bigger picture.
Sure, people stuck in the "stateful OO" mindset will love IoC and, honestly, if you're stuck in such a hell, IoC can be useful. With the caveat that IoC makes it much harder to reason about what's going on when some shit hits the fan.
"Every problem can be solved with another layer of abstraction, except the problem of too many layers of abstraction"
But there's light at the end of the tunnel. There are other ways to design great apps.
And to make you feel even better: being "smart" has nothing to do with knowing every technology out there ; )