Does NVidia have a solution here that scales down? Clearly they're focused on scaling up, as that's where the money is today.
Yes, they have invested heavily in this. Every time they talk about their "car/auto segment" (which is all the time if you listen to their investor calls), it's mostly about scaling down.
Cars, especially the sort that will need to do self-driving, will have massive batteries in them. I'm talking about Roomba-sized devices, which can't power a full GPU's worth of gear.
They may have large batteries if they're electric cars, but those batteries aren't there for the GPUs.
The NVidia PX2 platform[1] - which is what Tesla's self-driving features use[2] - is available in a 10W config. I presume this isn't the full self-driving mode, but the Jetson TK1 can do full image tracking and recognition in less than 30W (and that's a 2-year-old platform).
Where did you get the nonsensical idea that they're going to use desktop GPUs in cars? The board they specifically designed for self-driving cars needs only a few watts to deliver 500 GFLOPS. Why would that not be suitable for a "Roomba-sized device"?