No, you are not. There is no right and wrong here; it's just a silly blanket statement. OpenGL is years behind D3D (not DirectX as a whole, that's not comparable!) in many areas, and working with the whole DirectX stack can be much nicer than assembling a GL stack out of a lot of different libraries.
I'm not a video games programmer, but judging by the benchmarks it seems the most efficient graphics stack is based around AMD's Mantle: http://hothardware.com/News/AMD-Mantle-vs-DirectX-Benchmarks...
John Carmack suggests the OpenGL extensions that NVidia have developed give comparable performance to Mantle: http://n4g.com/news/1376571/john-carmack-nvidias-opengl-exte...
Explain please?
In practice you can issue render commands from only one thread. And there is no way to save a bunch of commands any more, since display lists were deprecated. It's also still very much state-machine based, so you have to make a lot of individual calls to set everything up for the actual draw call.
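To illustrate what was lost: display lists let you record a sequence of commands once and replay them cheaply. Here is a minimal sketch of that record-and-replay pattern in plain C++; the commands are stand-in functions, not real OpenGL calls.

```cpp
#include <functional>
#include <vector>

// A toy "command list": record arbitrary work now, replay it later.
// Roughly the pattern GL display lists provided before deprecation.
struct CommandList {
    std::vector<std::function<void()>> cmds;

    void record(std::function<void()> c) { cmds.push_back(std::move(c)); }

    // Replay every recorded command in order, as many times as you like,
    // without paying the per-call setup cost again on the app side.
    void replay() const {
        for (const auto& c : cmds) c();
    }
};
```

With modern GL you instead re-issue every state-setting call each frame (or build your own caching layer), which is exactly the chattiness being complained about.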
I personally love OpenGL and use it in my work. However, this is currently one of OpenGL's biggest drawbacks.
OpenGL has context sharing: several contexts in different threads can share the same objects. You can issue commands from those threads as long as you synchronise access to the objects yourself. In practice that means filling buffers, rendering to an off-screen framebuffer, etc. from other threads.
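The "synchronise it yourself" part looks roughly like this. A sketch only, with plain C++ threads and an illustrative `SharedBuffer` type in place of a real shared GL context and buffer object:

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// A worker "context" fills a buffer while the render thread keeps going;
// the app does the synchronisation itself. Names are illustrative, not GL API.
struct SharedBuffer {
    std::mutex m;
    std::condition_variable cv;
    std::vector<float> data;
    bool ready = false;

    // Stand-in for e.g. glBufferData issued on a shared context in a worker.
    void fill_async(std::size_t count) {
        std::thread([this, count] {
            std::vector<float> verts(count, 1.0f);  // "upload" work
            std::lock_guard<std::mutex> lk(m);
            data = std::move(verts);
            ready = true;
            cv.notify_one();
        }).detach();
    }

    // Render thread blocks until the shared object is usable.
    const std::vector<float>& wait() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this] { return ready; });
        return data;
    }
};
```

The point is that the API gives you shared objects but no scheduling: every cross-thread hand-off like this is on you.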
This (the lack of a standard precompiled shader format) leads developers to pursue various less optimal solutions, all of which mean more startup time for users and less predictable performance and robustness for developers, at least compared to the solution D3D has offered for more than 10 years. So when people say OpenGL is years behind D3D, this is one of the things they mean. D3D isn't perfect here either: there is a fair amount of configuration-specific recompilation going on. But D3D's bytecode formats are more compact than the optimized/minified GLSL source formats people are pursuing on OpenGL, and while the startup time (shader create time) is still too long, it is much better than OpenGL's. Shader robustness is generally more predictable and better on D3D, though it's hard to disentangle shader-pipeline issues from driver quality.
To be fair, multicore is also an issue for OpenGL, but D3D isn't great at that either. The current D3D11 spec includes a multicore rendering feature called "deferred contexts", but performance scaling with that feature has been disappointing, so it isn't a clear win for D3D. Other APIs (e.g. hardware-specific console graphics APIs) expose more of the GPU command buffer, and reducing the abstraction there allows for a real solution to the multicore rendering problem. There should be a vendor-neutral solution here, but so far neither API has delivered one that comes close to the hardware-specific solutions in performance scaling.
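For anyone unfamiliar with the deferred-context idea: each worker thread records commands into its own list in parallel, then a single immediate context executes the lists in a fixed submission order. A rough sketch of that structure in plain C++ (names are illustrative, not the D3D11 API):

```cpp
#include <functional>
#include <thread>
#include <vector>

// A "command" mutates the frame; here the frame is just a vector of ints.
using Cmd = std::function<void(std::vector<int>&)>;

// Each worker records into its own "deferred" list concurrently; the
// "immediate context" (one thread) then executes the lists in order,
// so the final frame is deterministic regardless of thread timing.
std::vector<int> render_multithreaded(int workers) {
    std::vector<std::vector<Cmd>> lists(workers);
    std::vector<std::thread> ts;
    for (int w = 0; w < workers; ++w)
        ts.emplace_back([&lists, w] {  // record in parallel, one list each
            lists[w].push_back([w](std::vector<int>& out) { out.push_back(w); });
        });
    for (auto& t : ts) t.join();

    std::vector<int> frame;
    for (auto& list : lists)           // single-threaded, ordered execution
        for (auto& c : list) c(frame);
    return frame;
}
```

The disappointing part in practice is that recording is cheap, but drivers often still serialize the expensive work at execution time, which is why the scaling has underwhelmed.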