Hah, that's what I get for skimming the article and assuming I knew what was going on here.
With that said, I guess the quickest thing that comes to mind is wanting to run my Jupyter notebooks on a machine with a much beefier CPU and more memory than my laptop. I was recently working on some lightweight ML stuff that required training 3 SVR models. Each model only took about 30 seconds to train on my laptop (with a small, synthetic training set), but if `cpu` had been part of my workflow, I would have just trained on a beefier machine and saved a minute or two every time I wanted to test a new iteration.
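For concreteness, the kind of loop I mean looks roughly like this: a sketch using scikit-learn's `SVR` on a made-up synthetic regression set (the data shape, kernel choice, and timings here are all stand-ins, not my actual project):

```python
import time

import numpy as np
from sklearn.svm import SVR

# Hypothetical synthetic training set, standing in for the real one.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=2000)

# Train the 3 SVR models, timing each one; this whole loop is the
# part that would benefit from running on a beefier remote machine.
models = []
for i in range(3):
    start = time.perf_counter()
    models.append(SVR(kernel="rbf").fit(X, y))
    print(f"model {i}: trained in {time.perf_counter() - start:.1f}s")
```

Even at ~30s per model, rerunning that loop on every tweak adds up, which is why offloading it is attractive.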