https://github.com/NVIDIA/libcudacxx/blob/main/docs/releases...
Yes, this is extraordinarily common. The ABI is an interface: a promise that binaries compiled against an old version of a library will keep working with new versions of that library's machine code. There's new machine code, but whether it honors that promise isn't settled "by definition"; it's a commitment the library's maintainers choose to make.
glibc (and the other common libraries) on basically all the GNU/Linux distros does this: that's why it's called "libc.so.6" after all these years. New functions can be introduced (and possibly new versions of functions, using symbol versioning), but old binaries compiled against a "libc.so.6" from 10 years ago will still run today. (This is how it's possible to distribute precompiled code for GNU/Linux, whether NumPy or Firefox or Steam, and have it run on more than a single version of a single distro.)
Apple does the same thing; code linked against an old libSystem will still run today. Android does the same thing; code written to an older SDK version will still run today, even though the runtime environment is different.
Oracle Java does the same thing: JARs built with an older version of the JDK can load in newer versions.
Microsoft does this at the OS level, but - notably - the Visual C++ runtime does not make this promise, and they follow a similar pattern to what Nvidia is suggesting. You need to include a copy of the "redistributable" runtime of whatever version (e.g. MSVCR71.DLL) along with your program; you can't necessarily use a newer version. However, old DLLs continue to work on new OSes, and they take great pains to ensure compatibility.
The C++ Standards Committee has been prioritizing ABI compatibility at the cost of performance for the last decade or so (mostly in the standard library, as opposed to the language itself, as I understand it). Some people (especially people from Google) have been arguing that this is the wrong priority, and that C++ should be more willing to break ABI. See:
https://cppcast.com/titus-winters-abi/
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2019/p186...
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p213...
Disclosure: I work at Google with several of the people advocating for ABI breaking changes.
It's just your object files that aren't compatible, meaning you can't mix and match libraries built with different CUDA versions in the same binary.
Win32 is a great example of this. It has been extensively overhauled, and best practice for writing a new application today is quite different from 25 years ago, but unmodified Windows 95 applications still usually run correctly.
Wait, NVidia actually gets it? Neat!
Looking at the functions, chrono/barrier etc. require CPU-level abstractions, so the STL versions (which are for the CPU) aren't really going to work.
Our end goal is to enable the full C++ Standard Library. The current feature set is just a pit stop on the way there.
Seems the big addition libcu++ brings over Thrust would be synchronization.
For those of us who can't adopt it right away, note that you can compile your cuda code with `--expt-relaxed-constexpr` and call any constexpr function from device code. That includes all the constexpr functions in the standard library!
This gets you quite a bit, but not e.g. std::atomic, which is one of the big things in here.
It gets worse if you try to spell libcu++ without pluses:
libcuxx, libcupp (I didn't hate that one, but my team disliked it).
We settled on `libcudacxx` as the alphanumeric-only spelling.
It doesn’t appear to be in Ubuntu any more, but it's still in OpenBSD, NetBSD, and macOS!
You can’t win these namespace collisions: I have friends whose names are obscenities in other languages I speak.
Although maybe short names that happen to be slang in a language other than the one the project was written in aren't a big deal.
Wasn't there something related about Microsoft Lumia phones?
2. How compatible is this with libstdc++ and/or libcu++, when used independently?
I'm somewhat suspicious of the presumption of us using NVIDIA's version of the standard library for our host-side work.
Finally, I'm not sure that, for device-side work, libc++ is a better base to start off of than, say, EASTL (which I used for my tuple class: https://github.com/eyalroz/cuda-kat/blob/master/src/kat/tupl... ).
...
partial self-answer to (1.): https://nvidia.github.io/libcudacxx/api.html apparently only a small bit of the library is actually implemented.
Yep. It's an incremental project. But stay tuned.
> I'm somewhat suspicious of the presumption of us using NVIDIA's version of the standard library for our host-side work.
Today, when using libcu++ with NVCC, it's opt-in and doesn't interfere with your host standard library.
I get your concern, but a lot of the restrictions of today's GPU toolchains come from the desire to continue using your host toolchain of choice.
Our other compiler, NVC++, is a unified stack; there is no host compiler. Yes, that takes away some user control, but it lets us build things we couldn't build otherwise. The same logic applies for the standard library.
https://developer.nvidia.com/blog/accelerating-standard-c-wi...
> Finally, I'm not sure that, for device-side work, libc++ is a better base to start off of than, say, EASTL (which I used for my tuple class: https://github.com/eyalroz/cuda-kat/blob/master/src/kat/tupl... ).
We wanted an implementation that intended to conform to the standard and had deployment experience with a major C++ implementation. EASTL doesn't have that, so it never entered our consideration; perhaps we should have looked at it, though.
At the time we started this project, Microsoft's Standard Library wasn't open source. Our choices were libstdc++ or libc++. We immediately ruled libstdc++ out; GPL licensing wouldn't work for us, especially as we knew this project had to exchange code with some of our other existing libraries that are under Apache- or MIT-style licenses (Thrust, CUB, RAPIDS).
So, our options were pretty clear: build it from scratch, or use libc++. I have a strict policy of strategic laziness, so we went with libc++.
There appears to be an LLVM libc++ bundled as part of the repo. What's the purpose of that libc++?
Isn't this exactly what GPU firmware is expected to do? Why do they need to run software in the same memory space as my mail reader?
> Why do they need to run software in the same memory space as my mail reader?
It is a lot more expensive to build functionality and fix bugs in silicon than it is to do those same things in software.
At NVIDIA, we do as much as we possibly can in software. If a problem or bug can be solved in software instead of hardware, we prefer the software solution, because it has much lower cost and shorter lead times.
Solving a problem in hardware takes 2-4 years minimum, massive validation efforts, and has huge physical material costs and limitations. After it's shipped, we can't "patch" the hardware. Solving a problem in software can sometimes be done by one engineer in a single day. If we make a mistake in software, we can easily deploy a fix.
At NVIDIA we have a status for hardware bugs called "Won't Fix, Fix in Next Chip". This means "yes, there's a problem, but the earliest we can fix it is 2-4 years from now, regardless of how serious it is".
Can you imagine if we had to solve all problems that way? Wait 2-4 years?
On its own, our hardware is not a complete product. You would be unable to use it. It has too many bugs, it doesn't have all of the features, etc. The hardware is nothing without the software, and vice versa.
We do not make hardware. We make platforms, which are a combination of hardware and software. We have a tighter coupling between hardware and software than many other processor manufacturers, which is beneficial for us, because it means we can solve problems in software that other vendors would have to solve in hardware.
> I really do not understand why a (very good) hardware provider is willing to create/direct/hint custom software for the users.
Because we sell software. Our hardware wouldn't do anything for you without the software. If we tried to put everything we do in software into hardware, the die would be the size of your laptop and cost a million dollars each.
You wouldn't buy our hardware if we didn't give you the software that was necessary to use it.
> Isn't this exactly what GPU firmware is expected to do?
Firmware is a component of software, but usually has constraints that are much more similar to hardware, e.g. long lead times. In some cases the firmware is "burned in" and can't be changed after release, and then it's very much like hardware.
The source data needs to appear on the GPU somehow. Similarly, the results computed on GPU are often needed for CPU-running code.
GPUs don’t run an OS and are limited. They can’t access the file system on their own, and many useful algorithms (like a PNG codec) are a poor fit for them. Technically I think they can access source data directly from system memory, but doing that is inefficient in practice, because GPUs have a special piece of hardware (called a copy command queue in D3D12, or a transfer queue in Vulkan) to move large blocks of data over PCIe.
That library implements an easier way to integrate CPU and GPU pieces of the program.
How would a firmware help me write heterogeneous bits of c++ code that can run on either cpu or gpu?
Actually, the basis of our modern GPU compute platform is a technology called Unified Memory, which allows the host and device processors to share access to memory spaces. We think this is the way forward.
Of course, there's still the process isolation provided by your operating system.
When will we be able to use a future RISC-V 64 CPU with an NVIDIA GPU? Will we leave that answer to NVIDIA?
Their decision making seems rational; of course, it's not ideal if you're a consumer. We would like the ability to play NVidia off against AMD Radeon.
Convergence to a standard has to be driven by the market, but it's impossible to drive NVidia there because they are the dominant player and it is 100% not in their interests.
It doesn't mean they're a bad company. They are rational actors.
However, this notably doesn't cover binaries, which are GPU vendor specific in that case, so AMD for example would have to provide a C++ compiler implementing stdpar for GPUs targeted to their hardware.
Ah-ha, you've caught us! Our plan is to lock you into our hardware by implementing Standard C++.
Once you are all writing code in Standard C++, then you won't be able to run it elsewhere, because Standard C++ only runs on NVIDIA platforms, right?
... What's that? Standard C++ is supported by essentially every platform?
Darnit! Foiled again.
2. OpenCL 2.x allows for C++(ish) source code. Not sure how good the AMD support is though.
That's what NVIDIA understood, and it's what made them what they are today.