Looking forward to your feedback as you try it out.
You can't please everybody: whether or not a project listens to its users, people will still complain. But if both projects keep pushing each other to improve, the community only stands to benefit from the competition.
Doing image classification, object localization, and homography (given an input image, which of my known template images matches it, and in what orientation).
There's a lot of work being done on this specific part. If you have a standard RNN architecture you want to run, you can probably use the cudnn code in tf.contrib.cudnn to get a super fast implementation.
There is some performance work that needs to be done on properly caching weights between time steps of an RNN if you use a tf.nn.RNNCell. Currently, if you want to implement a custom architecture, a seq2seq decoder, or an RL agent, this is the API you would want to use. Several of the eager benchmarks are based on this API, so its performance will only improve.
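To make the caching point concrete: in a vanilla RNN, the same weight matrices are applied at every time step, which is why re-fetching or re-preparing them per step is wasted work. Here's a minimal NumPy sketch of that weight sharing (this is an illustration of the concept, not TensorFlow's actual RNNCell implementation; all names are made up):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_x, W_h, b):
    """One vanilla-RNN step: note the same W_x, W_h, b are reused each call."""
    return np.tanh(x_t @ W_x + h_prev @ W_h + b)

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 4, 8, 5

# Weights are created once and shared across every time step --
# this sharing is exactly what per-step weight caching would optimize.
W_x = rng.standard_normal((input_dim, hidden_dim)) * 0.1
W_h = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
b = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)
for t in range(seq_len):
    x_t = rng.standard_normal(input_dim)
    h = rnn_step(x_t, h, W_x, W_h, b)

print(h.shape)  # the final hidden state, shape (8,)
```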
I'm hopeful that for the next major release, we'll also have support for eager in tf.contrib.seq2seq.
Eager is actually not as innocent as "open-source projects borrowing the best parts from each other", as some commenters here suggest.
Google is attempting to dominate the machine-learning API and the Python ecosystem for scientific computing.
The company that controls the API influences which apps are built on it and how. Think about how Google bundled Android services on top of Android, and how that posed an existential threat to other companies. That's what's coming for TensorFlow. Many developers are too naive to realize it, or too short-sighted to care.
I wouldn't compare a permissively licensed library to Android services at all.
I didn't compare TensorFlow to Android services. I said that TensorFlow would serve as the basis of a service bundle, much like Android did. Let's come back in a couple of years and I'll tell you I told you so.
It has to be consistent and there has to be one way to do it.
I personally have a 10-message thread with Google Cloud support on exporting a Cloud-trained model to TensorFlow, and nobody could figure it out [Case #13619720].
In fact, if you dig up the case, even official support told me that the SavedModel needs some freezing using Bazel, otherwise it doesn't work.
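For anyone hitting the same wall: the Bazel-based freezing workflow support was referring to looks roughly like this. This is a sketch, not a verified recipe; the paths and output node name below are placeholders, and the exact flags depend on your TensorFlow version:

```shell
# Build the freeze_graph tool from a TensorFlow source checkout
# (assumes you have a checkout and Bazel installed).
bazel build tensorflow/python/tools:freeze_graph

# Fold the SavedModel's variables into a single frozen GraphDef.
# --output_node_names must list your model's actual output op names.
bazel-bin/tensorflow/python/tools/freeze_graph \
  --input_saved_model_dir=/path/to/saved_model \
  --output_node_names=my_output_node \
  --output_graph=/tmp/frozen_model.pb
```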
The GitHub issues and Stack Overflow are full of these. If you can, please take the message to the other side :(
I don't think the cloud guys (where training will happen in distributed mode) talk to the Android guys (where models will be used after quantization). There is a huge serialization problem that all of us are currently struggling with.