The kinds of optimizations a static compiler might apply can be performed by a JIT as well, with the added benefit of knowing what kind of workload is actually going to run. Most of deep learning consists of applying comparatively small computation graphs to very large arrays of numbers in parallel, so the overhead of compilation is only a small fraction of the overall computation time. A smart JIT that picks the optimal tiling pattern for the array dimensions observed at runtime and rewrites its loops accordingly can easily pay for itself.
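The idea can be sketched in plain Python (a toy model, not any particular framework's implementation): specialize a computation on the array shape observed at runtime, choose a tile size for that shape, and cache the specialized version so the one-time compilation cost is amortized over repeated calls. The helper names (`pick_tile`, `compile_tiled_sum`, `jit_sum`) are hypothetical.

```python
from functools import lru_cache

def pick_tile(n, candidates=(8, 16, 32, 64)):
    # Hypothetical heuristic: choose the largest candidate tile size
    # that evenly divides the observed dimension, falling back to n.
    for t in reversed(candidates):
        if n % t == 0:
            return t
    return n

@lru_cache(maxsize=None)  # cache one specialized "kernel" per observed shape
def compile_tiled_sum(n):
    tile = pick_tile(n)
    def kernel(xs):
        # Loop rewritten around the tile size chosen for this shape;
        # a real JIT would emit machine code here instead of a closure.
        total = 0.0
        for start in range(0, n, tile):
            total += sum(xs[start:start + tile])
        return total
    return kernel

def jit_sum(xs):
    # Dispatch: look up (or build once) the kernel specialized to len(xs).
    return compile_tiled_sum(len(xs))(xs)
```

The first call with a given length pays the "compilation" cost of building the specialized kernel; every later call with the same shape reuses it, which mirrors why JIT overhead is negligible when the same small graph is applied to many large arrays.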