I love that the tooling for ML experimentation is becoming more mature. Keeping track of hyperparameters, train/validation/test set manifests, code state, etc. is absolutely crucial. I can't count how many times I've trained a great model only to lose the exact state and be unable to reproduce it. It's extremely frustrating. When I found Sacred (
https://github.com/IDSIA/sacred) it changed my team's workflow in a very positive way. We already had a similar approach of saving default experiment workbench images, but formalizing it is much nicer.