Congrats to the TensorFlow team!
To answer your questions:
- We don't (yet) have a tensor contraction op -- just a matter of getting some dev time to call the existing Eigen contraction code in an op. Hopefully in the next release!
- More casting between types is, I think, in this release.
- Dynamic RNNs are not in this release yet, but they're also in our sights.
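For anyone wondering what a tensor contraction op does: it sums over a shared axis, generalizing matrix multiplication to higher-rank tensors. The real op would delegate to Eigen's optimized contraction kernels; this is just a minimal pure-Python sketch of the rank-2 case:

```python
def contract(a, b):
    """Contract a 2-D tensor a[i][k] with b[k][j] over the shared
    axis k -- ordinary matrix multiplication is the rank-2 special
    case of tensor contraction."""
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

a = [[1, 2], [3, 4]]   # shape (2, 2)
b = [[5, 6], [7, 8]]   # shape (2, 2)
print(contract(a, b))  # [[19, 22], [43, 50]]
```

Higher-rank contractions work the same way, just with more free indices on either side of the summed axis.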
And with all of that, we still need to work on better performance and memory efficiency. Still lots to do!
(hopefully this saves others asking the "what is it?" question some trouble...)
I'm not a big fan of vendor "standards", but I have very limited sympathy for OpenCL here.
I think the best hope for portability is at the higher-level programming API layer. For example, TensorFlow is careful to make switching between CPU and GPU painless.
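The idea behind that kind of portability can be sketched as a per-device kernel registry: the graph code stays the same, and only the device string changes. This is an illustrative toy, not TensorFlow's actual internals:

```python
# Toy sketch of device-agnostic dispatch (illustrative only): ops
# are looked up in a registry keyed by (op name, device), so user
# code never changes when the device does.

KERNELS = {
    ('matmul', 'cpu'): lambda a, b: [[sum(x * y for x, y in zip(row, col))
                                      for col in zip(*b)]
                                     for row in a],
    # ('matmul', 'gpu'): a CUDA kernel would be registered here
}

def run(op, device, *args):
    """Dispatch op to the kernel registered for the given device."""
    return KERNELS[(op, device)](*args)

# Switching devices is just a string change; the call site is unchanged.
result = run('matmul', 'cpu', [[1, 2]], [[3], [4]])
print(result)  # [[11]]
```

In TensorFlow itself the analogous user-facing knob is a device placement string like '/cpu:0' vs '/gpu:0'.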
However, the benchmarks for OpenCL look about 5x slower than CUDA.
I'm really looking forward to updated cuDNN and CUDA 7.5 support. My machines are all configured for Theano, and I've been sorta waiting to try TensorFlow until I can install it without downgrading everything; it was tricky to get things working, and I'd rather not reconfigure anything I don't have to.
I'm running a TensorFlow Google Developer Group meetup in Boulder every couple of weeks. If any of the authors/contributors are in town and want to come say hi to the group, we'd love to have you: gdgboulder.github.io
Otherwise, please file an issue at github and we'll do our best to help!
Thanks!
Right now I can only run TensorFlow in a Docker container in a VirtualBox Linux virtual machine running in Windows... so I guess there's that!