Passing a function
Decoupling initialization from memory allocation (e.g. making constructors work). This problem is also shared by C# before 3.0, though you can sort of get around it by passing a function that does the initialization, or by using an object initializer in 3.0 and up.
Avoiding the FactoryFactoryFactory pattern, where it's factories all the way down: a pattern designed to work around the constructor anti-pattern, in which constructors are treated as somehow special rather than as plain functions that return a specific data type. So in C# you'll wrap a constructor in a function so it can be passed around (e.g. Func<String> s = () => new string()), and in Java you'll use DI.
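The same idea is trivial in a language where a class is already a first-class callable. A minimal Python sketch (the `Greeter` class and `build_and_use` function are hypothetical names, just for illustration):

```python
# A class is just a callable that returns instances, so a constructor
# can be passed around like any other factory function -- no factory
# classes needed.
class Greeter:
    def __init__(self, name):
        self.name = name

    def greet(self):
        return f"Hello, {self.name}!"

def build_and_use(factory):
    # `factory` may be the class itself, a lambda, or functools.partial;
    # the caller decides how instances get constructed.
    obj = factory()
    return obj.greet()

print(build_and_use(lambda: Greeter("world")))  # Hello, world!
```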
DI is primarily a euphemism for programming in XML, or in another language that sucks less than Java. Primarily it's a euphemism designed to assuage the egos of Java programmers who don't want to admit that you can't solve problems elegantly in Java, so they move their code into other languages that interact with Java, to pretend Java solves more problems than it creates.
I've used DI in C++, Perl and Scala - the only difference being in these languages I didn't need a framework: I used mixins (or if mixins were not available, multiple inheritance in mixin-style) to "wire" the interfaces to implementations.
In Java I ended up either having a big class where I do all the wiring (in my current project: < https://github.com/voldemort/voldemort/blob/master/src/java/... >) or using Guice (Spring is a little too "enterprisey" to be my cup of tea). Yes it's more awkward, but it's certainly possible to write clean and readable (if more verbose) code in Java.
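Mixin-style wiring can be sketched in Python too, since it supports multiple inheritance. A toy example (all class names hypothetical): the concrete store is chosen by which mixin the application class inherits, rather than by a DI container.

```python
# Two interchangeable "implementations" provided as mixins.
class FileStoreMixin:
    def load(self, key):
        return f"file:{key}"

class DbStoreMixin:
    def load(self, key):
        return f"db:{key}"

class AppCore:
    # Depends only on a `load` method that some mixin must supply.
    def fetch(self, key):
        return self.load(key).upper()

# The "wiring" is the inheritance declaration itself.
class FileApp(AppCore, FileStoreMixin):
    pass

class DbApp(AppCore, DbStoreMixin):
    pass

print(FileApp().fetch("x"))  # FILE:X
print(DbApp().fetch("x"))    # DB:X
```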
DI comes about as a reaction to use of static methods and more-harmful-than-useful design patterns such as singleton, which have the issue of making it more difficult to isolate specific layers of your code for unit testing.
I also think it's wrong to conflate DI with XML. One of the first major DI frameworks, Spring, did use XML for configuring and wiring-up components, but it's also quite popular to do this part purely in the code, without any external configuration files.
Re-read fleitz's message. He never stated what he believed DI to be, only what he believes DI to be used for.
> I also think it's wrong to conflate DI with XML.
Which, again, is not what he said at all.
> Passing a function
If you will pardon my English: What a load of bullshit.
I'm sure it can be used for that as well (e.g. in implementing the delegation pattern, given the lack of delegates and first-class functions), but saying that is all it is good for simply makes you look inexperienced.
For any system of reasonable complexity you will find yourself wanting to separate your code into modules. It might be that you want to thoroughly implement SoC (separation of concerns), it might be that you want your code to be more flexible (e.g. able to replace a file-store with a db-store later), or it might simply be that you realize your system is so big that you need to work with components separately to be able to properly test your modules.
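The file-store/db-store swap can be sketched with plain constructor injection in Python (all names here are hypothetical, for illustration only):

```python
# Two backends with the same informal interface: a `get` method.
class FileStore:
    def get(self, key):
        return f"(file) {key}"

class DbStore:
    def get(self, key):
        return f"(db) {key}"

class ReportService:
    def __init__(self, store):
        # The store is injected; ReportService only assumes a `get`
        # method, never a concrete backend.
        self.store = store

    def report(self, key):
        return self.store.get(key)

svc = ReportService(FileStore())
print(svc.report("sales"))  # (file) sales
# Swapping to the db-store later touches only the construction site:
# svc = ReportService(DbStore())
```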
> Decoupling initialization from memory allocation
First you criticise Java, then you bring up a point which (usually) is not a very big concern in garbage-collected languages.
Maybe you mean "controlling initialization" which is crucial for testable code, but given how your first point is completely off base I'm not really sure I would give you the benefit of the doubt.
> DI is primarily a euphemism for programming in XML
You can implement DI without any XML. And no, for reference, I don't do Java.
What problems can be solved with DI that can't be solved by passing a function, or by "controlling initialization"?
Allocation issues are present in GC'd languages because of the overhead of GC when you could just reinitialize an existing piece of memory. (You also get to keep your L1/L2 caches hot by not touching new memory when old will do.) Allocation has non-trivial costs; this is why the methods that allow you to pass in a byte[] buffer are often more efficient than those that allocate their own. And we're not even getting into the fragmentation that can occur when you're rapidly allocating memory and some of that memory is locked by the OS so it can't be collected.
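Python's stdlib exposes the same pass-a-buffer pattern: `readinto` fills an existing buffer instead of allocating a fresh bytes object per read. A minimal sketch using an in-memory stream:

```python
import io

# 32 bytes of sample data behind a file-like interface.
data = io.BytesIO(b"abcdefgh" * 4)

# One reusable 8-byte buffer instead of a new allocation per read.
buf = bytearray(8)

chunks = 0
while True:
    n = data.readinto(buf)  # fills `buf` in place, returns bytes read
    if not n:
        break
    chunks += 1

print(chunks)  # 4
```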
How is it that DI frameworks are necessary for large code bases yet most operating systems don't have DI frameworks?
When most people writing OO code have a problem, they think 'I know I'll use a DI Framework', now they have two problems.
Of course it is, for a number of reasons (instance sharing or reuse, caches, registries, ...), and most GC'd languages (notably, not Java or C#) give separate access to these two operations: allocating an instance and initializing it.
See Ruby (new/initialize), Python (__new__/__init__), Obj-C (alloc, init), etc...
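In Python the two steps really are separate methods and can be driven independently, which is exactly the "reinitialize existing memory" trick mentioned above. A small sketch (the `Point` class is hypothetical):

```python
class Point:
    # Allocation: __new__ returns a bare, uninitialized instance.
    def __new__(cls, *args, **kwargs):
        return super().__new__(cls)

    # Initialization: __init__ fills it in, and can be re-run to
    # reinitialize the same already-allocated object.
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point.__new__(Point)   # allocated, but no x/y yet
Point.__init__(p, 1, 2)    # initialize
Point.__init__(p, 3, 4)    # reinitialize the existing object
print((p.x, p.y))          # (3, 4)
```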
Dynamic languages don't suffer many of the overheads of using DI, and neither do languages with mixins (see Scala's Cake pattern for more on this: http://jonasboner.com/2008/10/06/real-world-scala-dependency...)
This blog post says more about it: http://weblog.jamisbuck.org/2008/11/9/legos-play-doh-and-pro....
http://stackoverflow.com/questions/2407540/what-are-the-down...
> The same basic problem you often get with object oriented programming, style rules and just about everything else. It's possible - very common, in fact - to do too much abstraction, and to add too much indirection, and to generally apply good techniques excessively and in the wrong places...[answer continues]
Am I missing some magic goo that makes Python not require DI, or do I just not understand what Java programmers mean by DI?
Parameter (Dependency) Injection.
> I might make the connection object a parameter to the constructor
Constructor (Dependency) Injection.
> construct it in a helper method that I can override for testing purposes.
Not DI.
http://martinfowler.com/articles/injection.html
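The first two forms from that breakdown can be sketched side by side in Python, using a sqlite3 in-memory database as the dependency (the `UserRepo` and `count_users` names are hypothetical):

```python
import sqlite3

class UserRepo:
    # Constructor injection: the connection arrives via the constructor.
    def __init__(self, conn):
        self.conn = conn

    def count(self):
        return self.conn.execute("SELECT count(*) FROM users").fetchone()[0]

# Parameter injection: the dependency arrives per call instead.
def count_users(conn):
    return conn.execute("SELECT count(*) FROM users").fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER)")
conn.execute("INSERT INTO users VALUES (1)")

print(UserRepo(conn).count())  # 1
print(count_users(conn))       # 1
```

Either way, a test can hand in a different connection (e.g. a fresh in-memory database) without touching the code under test.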
Edit:
For languages such as Java and C#, wiring up these objects is quite a pain.
For duck typed languages such as Python and Ruby, it isn't that big of a pain. So we just do it. Just because you don't have an IoC framework doesn't mean you aren't practicing DI :)
Sometimes it's easier to assume Python developers are blind to architectural problems and to the common solutions for them.
Not saying Python isn't used on large, complex problems, or that all Python code is spaghetti code, but there are a lot of wannabe-cool Python hackers ("ninjas") out there, and part of being "cool" in that sense is disregarding everything which looks even slightly like a design pattern.
In those circles, "design patterns" means enterprisey, verbose code with FactoryFactories, not guidelines for how to structure your code to solve problems which have been solved before you.
Would you mind telling me how you would handle this in an example?
Let's say you were building a web app which had to communicate with some sort of payment gateway API. Naturally since you are communicating with some third-party API you want to prevent the rest of your codebase from knowing too much about the API - in case you ever need to change gateways - and so you wrap it up in an interface/module/etc to abstract away the details of Gateway X.
If it ever came time to switch payment gateway backends, in a Java application using DI you would just need to switch which implementation of your interface that the rest of your code is wired up with (either in XML, or if you are wiring up collaborators explicitly in your code, etc).
How would you generally manage this type of thing in a Python system? By giving the web/controller code a different payment module that implemented all of the same method signatures?
Maybe I'm naive, but I wouldn't usually bother:
1) Maybe I will never change gateways, then there was no point in writing a wrapper.
2) If I do change gateways, I can just write a wrapper that uses the old gateway's API as the interface for the new one.
If the new gateway is sufficiently different from the old one that it can't be mapped to the old API, then whatever wrapper I wrote back in the beginning, before I needed it, wouldn't have been sufficient anyway.
Your strategy sounds like YAGNI to me. I only write separate interfaces when I actually have different components that need to be swappable.
But maybe that's just because I'm using Ruby and I can be lazy like that?
You simply pass in the desired payment processor as a parameter. End of story.
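A minimal sketch of that answer for the payment-gateway scenario above (gateway class names are hypothetical):

```python
class StripeGateway:
    def charge(self, cents):
        return f"stripe charged {cents}"

class BraintreeGateway:
    def charge(self, cents):
        return f"braintree charged {cents}"

def checkout(gateway, cents):
    # The controller code only assumes a `charge` method (duck typing);
    # switching backends means passing a different object here.
    return gateway.charge(cents)

print(checkout(StripeGateway(), 500))     # stripe charged 500
print(checkout(BraintreeGateway(), 500))  # braintree charged 500
```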
This is part of the reason Java is such a terrible language for actually learning about programming. "Using Dependency Injection" is an absurd way of phrasing "pass varying values as parameters to your function instead of using global variables". What should be the normal way of doing business has instead been elevated to Something Special (that Lesser Languages Can't Do, because they don't call it Dependency Injection, because they would never think of giving Passing Parameters to Functions a special name!), because Java makes a production out of something that in other languages is as easy as breathing and in some cases hard to avoid (try programming Haskell or Erlang with too many global variables).
In Java, you use absurd and unnecessary machinery. In every other language, you just do it. It's hard to even explain how you do it because there's hardly any doing in the it. It's a parameter passed in. It hardly seems worth discussion.
blah blah blah dependency injection blah blah blah
(This is a bit off-topic, but it came to mind, so what the hell.) You are correct in your observation that, given most programming languages (even those such as Haskell), it is difficult to see exactly how Curry-Howard is useful.
I recently stumbled across someone mentioning something called "dependency injection". I didn't know what it was, so I googled it and read Martin Fowler's article on it. It is a bit on the long side, and I kept waiting for the punch line; you know, the point at which the author hits you with the insight that justifies the preceding verbosity and the hi-tech-sounding name ("dependency injection": I can't help but think of "fuel injection" and gleaming motor engine showcases), but it seemed indefinitely postponed. In the end, it turned out that "dependency injection" just means "abstraction": specifically by parametrization, by update, and by what I think amounts to type abstraction plus update. (Apparently these are called, I kid you not, type 3 IoC, type 2 IoC and type 1 IoC...!)
To me this all seemed rather obvious, and it got me thinking about why it isn't obvious to the author or his readership. In Haskell, if I am given some type B which I need to produce somehow, and I realize that the B-values I need depend on some other values of type A, the first thing I do is write down "f :: A -> B". Then I write down "f a =", and then I start writing stuff after the equals sign until I have what I need. I do that because I know, once I have the type, that if there is an inhabitant of the type "A -> B" it can be expressed as "\a -> b" for some b, so the "f a =" part is always part of my solution and I will never have to change it unless I want to. So once I've written that down I feel one step closer to my solution.
I know that for three reasons. First, because of my experience as a functional programmer. Second, because it is part of the universal property of exponentials ("factors uniquely"), that is, of function types. And third, because by the Curry-Howard correspondence with natural deduction, I can start any proof of B which depends on A by assuming A, that is, adding it as a hypothesis.
So why is it so obscure in Java? I think part of the reason is that in Java you have update, so there are complications and additional solutions. But part of the reason is also that Java largely lacks structural typing, and that makes it hard to see that a class('s interface) is a product of exponentials. (With nominal typing, you tend to think of a class by its name rather than by its structure.) You could also blame the syntax of method signatures, which obscures the relationship with exponentials and implication. But is the syntax the cause or just a symptom? (You know what I think about syntax...) If CH could be readily applied to Java, perhaps Java's designers would have chosen a more suggestive syntax. But even if they had decided to stick with C-style syntax anyway, the idea of using abstraction to handle dependencies would have been more obvious.
This is the real challenge. I don't think many companies, outside of perhaps small startups, will let candidates browse their code at their own leisure, so that you know they aren't just showing off the good stuff.