They're cool and all—I've characterized them (partially, perhaps) as CPS-encoded flatmaps over lists, and also as stateful left-fold transformers, the latter of which is much more general—but they're more like a rich ecosystem for list transformations than any kind of fundamental abstraction.
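The "stateful left-fold transformer" reading can be made concrete with a minimal Python sketch (names here are illustrative, not Clojure's actual API): a transducer is just a function from one reducing step to another, and they chain by ordinary function application.

```python
from functools import reduce

def mapping(f):
    """Transducer: apply f to each input before handing it to step."""
    def transducer(step):
        def new_step(acc, x):
            return step(acc, f(x))
        return new_step
    return transducer

def filtering(pred):
    """Transducer: pass x along to step only when pred(x) holds."""
    def transducer(step):
        def new_step(acc, x):
            return step(acc, x) if pred(x) else acc
        return new_step
    return transducer

def append(acc, x):
    acc.append(x)
    return acc

# square everything, then keep the even squares
xform = mapping(lambda x: x * x)(filtering(lambda x: x % 2 == 0)(append))
result = reduce(xform, range(6), [])   # → [0, 4, 16]
```

No intermediate list is ever built; the transformations fuse into a single reducing function.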
I would love a comparison in the form of Haskell, converted to Lisp syntax, and wired up to JVM.
Haskell is strongly typed and lazily evaluated, which easily enables functions to take only one argument at a time. Although a function could take a tuple parameter, it can be, and usually is, rewritten to take each component of the tuple as a separate parameter. This makes strong typing and built-in currying simple, makes higher structures like monads possible, and lets the syntax be designed to suit this style.
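A rough Python analogue of what one-argument-at-a-time functions buy you (Python has no built-in currying, so this is just a sketch of the Haskell behaviour):

```python
# In Haskell, add 3 4 is really (add 3) 4: applying add to 3 yields a
# new function waiting for the second argument.  Simulated with closures:

def add(x):
    return lambda y: x + y

add3 = add(3)          # partial application falls out for free
print(add3(4))         # → 7
```

In Haskell this shape is the default for every function, which is what makes combinator-heavy styles like monadic code so lightweight there.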
Lisp functions, on the other hand, must take many arguments at a time to cater to the homoiconicity of the language. It's therefore much more difficult to type parameters or to curry them, and the syntax requires explicit visual nesting. Translating between this style and the Haskell style is therefore difficult.
I'd even suggest the Haskell and Lisp styles are two different peaks on the ladders of programming language abstraction, and the poster child of each, monads and macros, just don't interoperate very well, simply because of the different required foundations of each language.
True story: the transducer announcement has mostly made me read up on the Haskell fold and lens libraries...
Function composition usually operates on non-collection inputs. If you compose f and g, that usually means f takes some input and then g takes as input the output of f.
Transducers are a generalization of composition in the sense that they take a collection of the inputs f would accept.
So, in that sense, function composition could be seen as a specific instance of transducers where the number of inputs is 1.
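That claim can be sketched in Python (illustrative names, not any library's API): run a one-element collection through a chain of mapping transducers and you recover ordinary composition.

```python
from functools import reduce

f = lambda x: x + 1
g = lambda x: x * 2

composed = lambda x: g(f(x))           # ordinary composition

def mapping(fn):
    """Transducer form of 'apply fn to each input'."""
    def transducer(step):
        return lambda acc, x: step(acc, fn(x))
    return transducer

def last(acc, x):
    """Reducing function that just keeps the most recent value."""
    return x

# mapping(f) then mapping(g) over a single-element 'collection'
xform = mapping(f)(mapping(g)(last))
print(reduce(xform, [10], None) == composed(10))   # → True (both are 22)
```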
Transducers are a subset of functions, so they cannot be a generalization of them.
Re: the characterisation, the only thing I'd add is that they have explicit start and end calls. Start resets the state, end can clean up resources if necessary.
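A sketch of those start and end calls, with the stateful step modelled as a Python class (Clojure instead uses extra arities of one reducing function; all names here are illustrative):

```python
from functools import reduce

class TakeTwo:
    """Stateful reducing step: keep only the first two inputs seen."""
    def __init__(self, step):
        self.step = step
        self.seen = 0              # "start": reset the state

    def __call__(self, acc, x):    # the ordinary reducing step
        if self.seen < 2:
            self.seen += 1
            return self.step(acc, x)
        return acc

    def complete(self, acc):       # "end": clean up / flush if necessary
        return acc

def append(acc, x):
    acc.append(x)
    return acc

rf = TakeTwo(append)
out = rf.complete(reduce(rf, [1, 2, 3, 4], []))   # → [1, 2]
```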
I tried thinking about what output structure a generalised transducer would form, and came to the conclusion it was a foldable monoid i.e. pretty list-like.
[1]: http://www.reddit.com/r/haskell/comments/2cv6l4/clojures_tra...
EDIT: s/composed functions/composed logic functions/
I think Rich's innovation here is extremely clever and quite subtle, but it's pretty Clojure-specific, both in terms of the problem it solves and the way it solves it.
Now what happens if this is done to reduce itself? What should a reduced-arity (reduce fun) return? I think it's a no-op: nothing needs to be done to fun to make it reduce under reduce, so (reduce fun) -> fun. That is, to transduce the reducer fun, an arbitrary function, into a function X such that (reduce X list) does the job of (reduce fun list), we simply take X = fun.
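A tiny Python sketch of that identity claim (illustrative, not Clojure's API): the transducer corresponding to reduce itself is the identity transformation on reducing functions.

```python
from functools import reduce

fun = lambda acc, x: acc + x    # an arbitrary reducing function

identity = lambda step: step    # (reduce fun) -> fun

# transforming fun with identity changes nothing
print(reduce(identity(fun), [1, 2, 3], 0) == reduce(fun, [1, 2, 3], 0))  # → True
```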
Clojure's are more fundamental than other languages'?
This already avoids the creation of intermediate structure, since you just keep transforming the reduction function, but you have this sort of useless "reducible" thing attached. Mostly, the trouble is that you were afflicted by the kingdom of nouns: you don't really need a structure to think of first-class objects.
Instead, you can just consider the various ways of transforming reduction functions. They all compose as (reverse) functions (you can see them as a category) and you can take your resultant "transducer" and apply it to a source and sink structure to map out of the source and into the sink.
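The "source and sink" framing can be sketched in Python (names are illustrative): the same transformation of reduction functions maps out of any source and into any sink.

```python
from functools import reduce

def mapping(f):
    """Transducer: transform a reducing step to apply f first."""
    return lambda step: lambda acc, x: step(acc, f(x))

def into(sink_step, sink_init, xform, source):
    """Apply a transducer to a source, pouring results into a sink."""
    return reduce(xform(sink_step), source, sink_init)

double = mapping(lambda x: 2 * x)

# the same transducer, two different sinks
as_list = into(lambda acc, x: acc + [x], [], double, range(4))  # → [0, 2, 4, 6]
as_sum  = into(lambda acc, x: acc + x, 0, double, range(4))     # → 12
```

The sink (list, sum, channel, whatever) is chosen only at the very end, which is what makes the transformation itself collection-agnostic.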