Shouldn't be a problem anymore under Linux, as most distros today install Nouveau drivers by default. https://nouveau.freedesktop.org/wiki/
CUDA only exists because Nvidia is attempting to pretend OpenCL, Vulkan, and DX12 don't exist [1]. Those APIs require hardware scheduling on the GPU to switch shaders, rather than dedicating X amount of chip hardware to Y shader for Z ms.
It should be noted that for GPGPU compute, Nvidia is not the correct choice. The AMD RX 480 delivers 5.8 TFLOPS @ $200 (~$34/TFLOP) vs the Nvidia GTX 1080's 8.9 TFLOPS @ $600 (~$67/TFLOP). In reality you should be doing GPU programming in OpenCL so you stay GPU agnostic: you can switch vendors or platforms seamlessly (in most cases, if you avoid proprietary extensions) and even target AMD64, ARM, and POWER8/9 hardware.
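To make the portability point concrete, here's a minimal sketch of a vendor-agnostic vector add using the OpenCL 1.x C API (error checking omitted for brevity). The same source builds and runs against whatever OpenCL runtime is installed, whether AMD's, Nvidia's, or a CPU-only implementation:

    /* vadd.c -- vendor-agnostic OpenCL vector add (OpenCL 1.x C API).
       Build: cc vadd.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    static const char *src =
        "__kernel void vadd(__global const float *a,"
        "                   __global const float *b,"
        "                   __global float *c) {"
        "    size_t i = get_global_id(0);"
        "    c[i] = a[i] + b[i];"
        "}";

    int main(void) {
        /* Grab the first platform/device the runtime offers -- this is
           the only place the host code touches vendor specifics. */
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof a, a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof b, b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

        /* The kernel is compiled at runtime for whatever device we got. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vadd", NULL);
        clSetKernelArg(k, 0, sizeof da, &da);
        clSetKernelArg(k, 1, sizeof db, &db);
        clSetKernelArg(k, 2, sizeof dc, &dc);

        size_t global = 4;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

        for (int i = 0; i < 4; i++)
            printf("%g ", c[i]);   /* prints: 11 22 33 44 */
        printf("\n");
        return 0;
    }

Nothing in the host code or the kernel references a particular vendor; the runtime picked at clGetPlatformIDs is the only thing that changes between machines.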
That being said, I own a boatload of Nvidia stock because their marketing is excellent. Really, marketing is all 80% of people pay attention to, and CUDA has some great marketing around it. In reality CUDA is slower than OpenCL (even on Nvidia's own platforms) and no easier to work in.
Let's hope it gets picked up by machine learning frameworks etc., because this market badly needs the competition, as your comparison of per-dollar raw performance numbers shows.
But I'm curious how the FLOPS on these cards were measured. For example, one concern I have is that these two cards presumably have slightly different levels of parallelism, so it may be more or less difficult to extract the full performance from a particular card due to parallelism overhead. Then there's driver overhead, ease of programming, etc.
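For what it's worth, the headline TFLOPS figures quoted above appear to be theoretical peaks rather than measurements: shader cores × boost clock × 2 ops per cycle (one fused multiply-add counts as two floating-point ops). A quick sanity check, using core counts and boost clocks from the public spec sheets:

    /* Theoretical peak FLOPS = shader cores * clock * 2 (one fused
       multiply-add = two floating-point ops per core per cycle). */
    #include <stdio.h>

    int main(void) {
        double gtx1080 = 2560 * 1.733e9 * 2; /* 2560 CUDA cores @ 1733 MHz boost */
        double rx480   = 2304 * 1.266e9 * 2; /* 2304 stream processors @ 1266 MHz */
        printf("GTX 1080 peak: %.1f TFLOPS\n", gtx1080 / 1e12); /* 8.9 */
        printf("RX 480   peak: %.1f TFLOPS\n", rx480 / 1e12);   /* 5.8 */
        return 0;
    }

Sustained throughput on a real workload is always lower than these peaks, and by how much depends on exactly the factors you list: occupancy, memory bandwidth, driver overhead, and how well the code maps to each architecture's level of parallelism.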
The same applies to integer math, long double math, and so on.