I think the beauty of Keras was the perfect balance between simplicity/abstraction and flexibility. I eventually moved to PyTorch, but that balance was the one thing I always missed. And now, to have it leapfrog the current fragmentation and achieve what seems to be true multi-backend support is pretty awesome.
Looking forward to the next steps!
PyTorch adoption: back when Keras went hard into TensorFlow in 2018, TF and PyTorch adoption were about the same, with TF a bit more popular. Now, most papers and models are released PyTorch-first.
PS: sorry, I'm a bit salty from my user experience of Keras.
Will Keras Core support direct deployment to edge devices like RPi or Arduino?
Will the experience of defining and training a model in JAX/PyTorch and then deploying to edge devices be seamless?
Anything related on the roadmap?
This caught my eye:
> “Right now, we use tf.nest (a Python data structure processing utility) extensively across the codebase, which requires the TensorFlow package. In the near future, we intend to turn tf.nest into a standalone package, so that you could use Keras Core without installing TensorFlow.”
I recently migrated a TF project to PyTorch (would have been great to have keras_core at the time) and used torch.nested. Could this not be an option?
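For reference, the core of what tf.nest provides (flattening nested dicts/lists/tuples to a flat list of leaves and packing them back) is small enough to sketch in pure Python. This is just an illustrative toy showing what a standalone package would need to cover, not the actual planned package:

```python
def flatten(structure):
    """Flatten a nested structure of dicts/lists/tuples into a flat list of leaves."""
    if isinstance(structure, dict):
        # Sort keys so flatten/pack round-trips deterministically, like tf.nest does.
        return [leaf for key in sorted(structure) for leaf in flatten(structure[key])]
    if isinstance(structure, (list, tuple)):
        return [leaf for item in structure for leaf in flatten(item)]
    return [structure]  # anything else is a leaf

def pack_sequence_as(structure, flat):
    """Rebuild a nested structure with the same shape as `structure` from flat leaves."""
    leaves = iter(flat)
    def rebuild(s):
        if isinstance(s, dict):
            return {key: rebuild(s[key]) for key in sorted(s)}
        if isinstance(s, (list, tuple)):
            return type(s)(rebuild(item) for item in s)
        return next(leaves)
    return rebuild(structure)

nested = {"weights": [1, 2], "meta": ("a", {"b": 3})}
leaves = flatten(nested)                      # ["a", 3, 1, 2] (keys sorted: meta < weights)
roundtrip = pack_sequence_as(nested, leaves)  # same structure back
```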
A second question, about "customizing what happens in fit()": must this be written in TF/PyTorch/JAX code only, or can it be done with keras_core.ops, similar to the example shown for custom components? The idea is that you could reuse the same training-loop logic across frameworks, just like with custom components.
If you want to make a model with a custom train_step that is cross-backend, you can do something like:
    def train_step(self, *args, **kwargs):
        if keras.config.backend() == "tensorflow":
            return self._tf_train_step(*args, **kwargs)
        elif ...
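Fleshed out, that dispatch pattern looks something like the sketch below. keras.config.backend() is the real Keras call, but here a stub current_backend() and the MiniModel class with its per-backend methods are stand-ins, so the idea runs without any framework installed:

```python
def current_backend():
    """Stub standing in for keras.config.backend(); real code would call that directly."""
    return "jax"

class MiniModel:
    """Illustrative stand-in for a keras.Model with a cross-backend train_step."""

    def train_step(self, *args, **kwargs):
        backend = current_backend()
        if backend == "tensorflow":
            return self._tf_train_step(*args, **kwargs)
        elif backend == "jax":
            return self._jax_train_step(*args, **kwargs)
        elif backend == "torch":
            return self._torch_train_step(*args, **kwargs)
        raise ValueError(f"Unsupported backend: {backend}")

    # In real code each of these would use that framework's native API
    # (tf.GradientTape, jax.grad, torch autograd); here they just report back.
    def _tf_train_step(self, data):
        return {"backend": "tensorflow", "data": data}

    def _jax_train_step(self, data):
        return {"backend": "jax", "data": data}

    def _torch_train_step(self, data):
        return {"backend": "torch", "data": data}

logs = MiniModel().train_step([1, 2, 3])  # dispatches to the JAX branch here
```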
BTW, it looks like the previous account is being rate-limited to less than 1 post / hour (maybe even locked for the day), so I will be very slow to answer questions.

Fast forward to now, and my biggest pain point is that all the new models are released on PyTorch, but the PyTorch serving story is still far behind TF Serving. Can this help convert a PyTorch model into a servable SavedModel?
To use pretrained models, you can take a look at KerasCV and KerasNLP, they have all the classics, like BERT, T5, OPT, Whisper, StableDiffusion, EfficientNet, YOLOv8, etc. They're adding new models regularly.
https://github.com/keras-team/keras-nlp/tree/master/keras_nl... https://github.com/keras-team/keras-cv/tree/master/keras_cv/...
My PyTorch code from years ago still works with no issues; my old Keras code would break all the time, even in minor releases.
"We're excited to share with you a new library called Keras Core, a preview version of the future of Keras. In Fall 2023, this library will become Keras 3.0. Keras Core is a full rewrite of the Keras codebase that rebases it on top of a modular backend architecture. It makes it possible to run Keras workflows on top of arbitrary frameworks — starting with TensorFlow, JAX, and PyTorch."
Excited about this one. Please let us know if you have any questions.
I have actually developed (and am still developing) something very similar: what we call the RETURNN frontend, a new frontend + new backends for our RETURNN framework. The new frontend supports very similar Python code for defining models to what you see in PyTorch or Keras, i.e. a core Tensor class, a base Module class you can derive from, a Parameter class, and then a core functional API to perform all the computations. It supports multiple backends, currently mostly TensorFlow (graph-based) and PyTorch, but JAX was something I also planned. Some details here: https://github.com/rwth-i6/returnn/issues/1120
(Note that we went a bit further ahead and made named dimensions a core principle of the framework.)
(Example beam search implementation: https://github.com/rwth-i6/i6_experiments/blob/14b66c4dc74c0...)
One difficulty I found was how to design the API in a way that works well both for eager-mode frameworks (PyTorch, TF eager mode) and graph-based frameworks (TF graph mode, JAX). That mostly involves everything where there is some state, or code which should not just execute in the inner training loop but e.g. only at initialization, or after each epoch, or whatever. So for example:
- Parameter initialization.
- Anything involving buffers, e.g. batch normalization.
- Other custom training loops? Or e.g. an outer loop and an inner loop (e.g. like GAN training)?
- How to implement something like weight normalization? In PyTorch, module.param is renamed, and then there is a pre-forward hook which calculates module.param on the fly for each call to forward. So, just follow the same logic for both eager mode and graph mode?
- How to deal with control-flow contexts, accessing values outside a loop which came from inside it, etc.? Those things are naturally possible in eager mode, where you would just get the most recent value and there is no real control-flow context.
- Device logic: have the device defined explicitly for each tensor (like PyTorch), or automatically eagerly move tensors to the GPU (like TensorFlow)? Should moving from one device to another (or to CPU) be automatic or explicit?
- How do you allow easy interop, e.g. mixing torch.nn.Module and Keras layers?
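The weight-normalization point above can be illustrated framework-free. The sketch below mimics PyTorch's pre-forward-hook mechanism in plain Python (the Module/Linear classes and hook names are hypothetical stand-ins, not any real framework API), recomputing `weight` from `weight_g` and `weight_v` on every call:

```python
import math

class Module:
    """Tiny stand-in for a framework Module that supports pre-forward hooks."""
    def __init__(self):
        self._pre_forward_hooks = []

    def register_pre_forward_hook(self, hook):
        self._pre_forward_hooks.append(hook)

    def __call__(self, x):
        for hook in self._pre_forward_hooks:
            hook(self)  # runs before every forward, eager-style
        return self.forward(x)

class Linear(Module):
    def __init__(self, weight):
        super().__init__()
        self.weight = weight  # a plain list standing in for a tensor

    def forward(self, x):
        return sum(w * xi for w, xi in zip(self.weight, x))

def apply_weight_norm(module):
    """Reparameterize weight as g * v / ||v||, recomputed before each forward."""
    v = list(module.weight)
    g = math.sqrt(sum(w * w for w in v))  # init g to ||v|| so behavior is unchanged
    module.weight_v, module.weight_g = v, g

    def recompute(m):
        norm = math.sqrt(sum(w * w for w in m.weight_v))
        m.weight = [m.weight_g * w / norm for w in m.weight_v]

    module.register_pre_forward_hook(recompute)

layer = Linear([3.0, 4.0])
apply_weight_norm(layer)
out = layer([1.0, 1.0])  # weight is rebuilt on the fly: g=5, v/||v|| = [0.6, 0.8]
```

In a graph-based backend, the hook body would instead have to be traced into the graph so the recomputation happens symbolically on each step; that is exactly the eager-vs-graph tension described above.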
I see that you have keras_core.callbacks.LambdaCallback which is maybe similar, but can you effectively update the logic of the module in there?
Keras is a higher-level API. It means that you can prototype architectures quickly and you don't have to write a training loop. It's also really easy to extend.
I currently use PyTorch Lightning to avoid having to write tonnes of boilerplate code, but I've been looking for a way to leave it for ages as I'm not a huge fan of the direction of the product. Keras seems like it might be the answer for me.
Also, are there any examples using this for the coral TPU?
Coral TPU could be used with Keras Core, but via the TensorFlow backend only.
The creator of Keras was then employed by Google to work on Keras, and everyone was promised that Keras would remain backend agnostic.
Keras instead became part of TensorFlow as its high-level API and did not remain backend agnostic. There was also a lot of questionable Twitter beef about PyTorch from the Keras creator.
Keras is now once again backend agnostic, as a high-level API for TensorFlow/PyTorch/JAX, likely because they see TensorFlow losing traction.
Additionally, wouldn't it always be at least a step behind each underlying framework, depending on the wrapper's release cycle?