(Never even mind the Static Single Information (SSI) form.)
I wonder if part of the reason SSA isn't implemented from the start in many compilers is precisely that it came too late to be included in seminal works like A Catalogue of Optimizing Transformations; people who rely on those works as a canon of "classes of optimization techniques that work" won't even be aware of it.
-----
More snarky: the earliest dataflow analysis paper I can find is from 1972. :)
More seriously, my recollection (it's been about 20 years since I last took a compilers course) is that it took a while for even researchers to be convinced of the advantages of SSA, as the transformation causes a quadratic blowup in code size in the worst case (though it turns out to be much smaller in real-world cases).
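To make the blowup concrete, here's a toy sketch (my own illustration, not from any particular compiler; the IR shape and names are made up) of where the extra code comes from: every assignment mints a fresh version of its variable, and every control-flow join needs a phi-function per variable that reaches it with different versions.

```python
def to_ssa(block, counter):
    """Rename a straight-line block of (target, operand-names) assignments
    into SSA form, threading a shared version counter through."""
    out = []
    for target, operands in block:
        # Uses refer to the latest version of each operand (0 if undefined yet).
        renamed = [f"{v}{counter.get(v, 0)}" for v in operands]
        # Each definition gets a fresh version number.
        counter[target] = counter.get(target, 0) + 1
        out.append((f"{target}{counter[target]}", renamed))
    return out

# x is assigned in both arms of an if, so the join point needs a phi.
counter = {}
then_blk = to_ssa([("x", ["a"])], counter)   # x1 = a0
x_then = f"x{counter['x']}"
else_blk = to_ssa([("x", ["b"])], counter)   # x2 = b0
x_else = f"x{counter['x']}"
counter["x"] += 1
phi = f"x{counter['x']} = phi({x_then}, {x_else})"
print(phi)  # → x3 = phi(x1, x2)
```

In the worst case (many variables live across many join points) those phi-functions are what drive the quadratic growth; real programs have far fewer live-across-join variables, which is why the blowup is modest in practice.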
Also Sussman & Steele proposed CPS in 1975 which is closely related to SSA.
(Also, plenty of the compilers and/or JITs I'm talking about are far newer. The first attempt to get a JVM to use SSA optimization during JIT — with SafeTSA — only occurred in 2000. The approach was copied by pretty much every JVM implementation by 2005, which suggests that incompatible legacy architecture was never the problem; JVM implementers just didn't think it was a worthwhile technique until they saw it demonstrated on their particular workloads.)
CPS is closely related to SSA in theory (it carries equivalent information across lexical boundaries), but CPS isn't a basis for optimization transforms in the same way that SSA is. You can't hoist loop invariants using CPS... as far as I know, at least.
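For contrast, here's why SSA makes loop-invariant detection almost trivial (a minimal sketch under assumed names and a made-up tuple IR, not any real compiler's pass): since every name has exactly one definition, "invariant" reduces to "all operands are defined outside the loop, or by instructions already marked invariant", computed as a fixpoint.

```python
def find_invariants(loop_body, defined_outside):
    """loop_body: list of (dest, operand-names) in SSA form.
    Returns the set of destinations that are loop-invariant."""
    invariant = set()
    changed = True
    while changed:
        changed = False
        for dest, operands in loop_body:
            if dest in invariant:
                continue
            # One definition per name means this check is all we need.
            if all(o in defined_outside or o in invariant for o in operands):
                invariant.add(dest)
                changed = True
    return invariant

# t1 = a0 * b0 never changes inside the loop and can be hoisted;
# t2 depends on the loop's phi i1, so it stays put.
body = [("t1", ["a0", "b0"]),
        ("t2", ["t1", "i1"]),
        ("i2", ["i1"])]
print(find_invariants(body, defined_outside={"a0", "b0"}))  # → {'t1'}
```

In CPS the same information is present, but it's threaded through continuation parameters rather than sitting in a flat def-use structure, so the transform isn't a simple set computation like this.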
Dependence analysis, automatic vectorization, and automatic parallelization were invented after 1971.
Fast algorithms for computing dominators[2] arrived in the late 1970s.
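(The fast algorithms are the clever part; the underlying problem is a plain dataflow fixpoint. A sketch of the textbook iterative version, on a hypothetical CFG — the fast late-70s algorithms like Lengauer-Tarjan are considerably more involved than this:)

```python
def dominators(cfg, entry):
    """cfg: node -> list of predecessor nodes. Returns node -> set of dominators.
    Iterates dom(n) = {n} ∪ (intersection of dom(p) over predecessors p) to a fixpoint."""
    nodes = set(cfg)
    dom = {n: set(nodes) for n in nodes}  # start from "everything dominates everything"
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes - {entry}:
            new = set(nodes)
            for p in cfg[n]:
                new &= dom[p]
            new |= {n}
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

# Diamond CFG: A -> B, A -> C, B -> D, C -> D.
# Neither B nor C dominates D, since the other path avoids it.
cfg = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
print(sorted(dominators(cfg, "A")["D"]))  # → ['A', 'D']
```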
Graph coloring register allocators were introduced in the early 1980s.
[0] https://en.m.wikipedia.org/wiki/Very_long_instruction_word
[1] https://en.m.wikipedia.org/wiki/Trace_scheduling
[2] https://en.m.wikipedia.org/wiki/Dominator_(graph_theory)