Very inaccurate and misleading. You're confusing learning an API (easy) with doing actual graphics programming. Graphics programming requires a deep understanding of the target hardware: the sophistication of the hardware determines which techniques are even worth pursuing for the best results. It's usually a trade-off between how far you can optimize and how much fidelity you can squeeze out of the output.
For example, availability of floating point render targets -- how do you use them, and for what? How does the hardware handle them? It's different across devices even in the same generation! How does the hardware optimize rendering of opaque vs transparent objects? It's different across devices. Let's get really specific -- how many cycles does a medium precision square root take? Do you use pow or not? How much does a texture lookup cost? Hopefully you can guess the answer by now -- it's different on every device.
It gets exciting when a blend of API and hardware -- including the extensions each device does or doesn't support, which yes, also change with every OS/hardware combination -- requires inventing a novel technique to fully utilize the resources at hand. It's a continual balancing act between visual fidelity and performance, with the end goal of squeezing out every last bit of memory and computational bandwidth.
Enough about hardware, which is really just one important detail of the field. Being a great graphics programmer also means keeping up with the community of blogs and published papers, a constantly updating stream of experimentation and novel techniques. That doesn't even touch on the artistry involved. Staying at the top of the field takes extreme dedication and is absolutely not a "learn once and refresh now and then" activity.
Summary: real-time graphics programming is one of -the- most difficult fields to stay at the forefront of.