Creating 100 random matrices of size 5000x5000 on CPU...
Adding matrices using CPU...
CPU matrix addition completed in 0.6541 seconds
CPU result matrix shape: (5000, 5000)
Creating 100 random matrices of size 5000x5000 on GPU...
Adding matrices using GPU...
GPU matrix addition completed in 0.1480 seconds
GPU result matrix shape: (5000, 5000)
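A minimal sketch of the kind of benchmark that could produce output like the above. The `add_matrices` helper and the (smaller) sizes are stand-ins, not the article's actual code; it uses CuPy when a CUDA device is present and falls back to NumPy otherwise, so the sketch runs anywhere:

```python
import time

import numpy as np

try:
    import cupy as cp
    cp.cuda.runtime.getDeviceCount()  # raises if no CUDA device is present
    xp = cp
except Exception:
    cp = None
    xp = np  # CPU fallback so the sketch still runs

def add_matrices(mats):
    # Sum a list of matrices element-wise.
    total = mats[0].copy()
    for m in mats[1:]:
        total += m
    return total

# Smaller than the quoted 100 x 5000x5000 run so this finishes quickly.
matrices = [xp.random.rand(500, 500) for _ in range(10)]

start = time.time()
result = add_matrices(matrices)
if cp is not None:
    cp.cuda.get_current_stream().synchronize()  # wait for queued GPU work
elapsed = time.time() - start

print(f"addition completed in {elapsed:.4f} seconds, shape {result.shape}")
```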
Definitely worth digging into more, as the API is really simple to use, at least for basic things like these. CUDA programming seems like a big chore without something higher level like this.

> The article is about the next wave of Python-oriented JIT toolchains
The article is content marketing (for whatever), but the actual product has literally nothing to do with kernels or JITing or anything:
https://github.com/NVIDIA/cuda-python
It's literally just Cython bindings to the CUDA runtime and CUB.
for once CUDA is aping ROCm:
For comparison, doing something similar with torch on CPU and torch on GPU will get you like 100x speed difference.
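A hedged sketch of that torch comparison — the sizes and the helper are made up for illustration, and the GPU half only runs when `torch.cuda.is_available()`:

```python
import time

try:
    import torch
except ImportError:
    torch = None

def bench(device):
    # Time summing 10 large tensors on the given device.
    mats = [torch.rand(1000, 1000, device=device) for _ in range(10)]
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    total = mats[0].clone()
    for m in mats[1:]:
        total += m
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the queued GPU kernels to finish
    return time.time() - start, total

if torch is not None:
    cpu_time, cpu_total = bench("cpu")
    print(f"CPU: {cpu_time:.4f}s")
    if torch.cuda.is_available():
        gpu_time, _ = bench("cuda")
        print(f"GPU: {gpu_time:.4f}s, speedup {cpu_time / gpu_time:.1f}x")
```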
matrices = [np.random(...) for _ in range(...)]

time_start = time.time()
cp_matrices = [cp.array(m) for m in matrices]
add_(cp_matrices)
sync()
time_end = time.time()

PSA: if you ever see code trying to measure timing and it's not using the CUDA event APIs, it's fundamentally wrong and is lying to you. The simplest way to be sure you're not measuring noise is to just ban the usage of any other timing source. Definitely don't add unnecessary syncs just so that you can add a timing tap.
https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART_...
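For reference, a sketch of event-based timing using CuPy's wrappers around `cudaEventRecord`/`cudaEventElapsedTime` (the `time_gpu` helper and the matmul workload are my own illustration, guarded so it only runs when a CUDA device exists):

```python
try:
    import cupy as cp
    cp.cuda.runtime.getDeviceCount()  # raises if no CUDA device is present
    have_gpu = True
except Exception:
    have_gpu = False

if have_gpu:
    def time_gpu(fn, *args):
        # Events are recorded into the stream, so the measured interval
        # covers exactly the GPU work queued between them, with no
        # host-side noise mixed in.
        start, stop = cp.cuda.Event(), cp.cuda.Event()
        start.record()
        out = fn(*args)
        stop.record()
        stop.synchronize()  # wait for the stop event only
        return out, cp.cuda.get_elapsed_time(start, stop)  # milliseconds

    x = cp.random.rand(1000, 1000)
    y, ms = time_gpu(cp.matmul, x, x)
    print(f"matmul took {ms:.3f} ms on the GPU")
```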
print("Adding matrices using GPU...")
start_time = time.time()
gpu_result = add_matrices(gpu_matrices)
cp.cuda.get_current_stream().synchronize() # Not 100% sure what this does
elapsed_time = time.time() - start_time
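What that synchronize does: kernel launches are asynchronous, so without it `time.time()` would only measure how long it took to *queue* the work, not to run it. A small demonstration (my own example, guarded so it only runs with a CUDA device):

```python
import time

try:
    import cupy as cp
    cp.cuda.runtime.getDeviceCount()  # raises if no CUDA device is present
except Exception:
    cp = None

if cp is not None:
    x = cp.random.rand(4000, 4000)

    t0 = time.time()
    y = x @ x  # queues the kernel and returns almost immediately
    queued = time.time() - t0

    t0 = time.time()
    cp.cuda.get_current_stream().synchronize()  # blocks until the GPU finishes
    waited = time.time() - t0

    print(f"launch: {queued * 1e3:.2f} ms, wait for completion: {waited * 1e3:.2f} ms")
```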
I was going to ask: any CUDA professionals who want to give a crash course on what us Python guys will need to know?

It's great that the parts of PyTorch which concern the NVIDIA backend can now be implemented in Python directly. The important part is that it doesn't really matter, or shouldn't matter, for end users / developers.
That being said, maybe this new platform will extend the whole concept of on-GPU computation via Python to even more domains, like maybe games.

Imagine running Rust (the game) performantly, mainly on the GPU, via Python.
I'm totally with you that it's better that this took so long, so we have things like PyTorch abstracting most of this away, but I'm looking forward to (in my non-existent free time :/ ) playing with this.
That said, there were almost no announcements or talks related to CPUs, despite the Grace CPUs being announced quite some time ago. It doesn't feel like we're going to see generalizable abstractions that work seamlessly across Nvidia CPUs and GPUs anytime soon. For someone working on parallel algorithms daily, this is an issue: debugging with NSight and CUDA-GDB still isn't the same as raw GDB, and it's much easier to design algorithms on CPUs first and then port them to GPUs.
Of all the teams in the compiler space, Modular seems to be among the few that aren't entirely consumed by the LLM craze, actively building abstractions and languages spanning multiple platforms. Given the landscape, that's increasingly valuable. I'd love to see more people experimenting with Mojo — perhaps it can finally bridge the CPU-GPU gap that many of us face daily!
As someone working on graphics programming, it always frustrates me to see so much investment in GPU APIs _for AI_, but almost nothing for GPU APIs for rendering.
Block level primitives would be great for graphics! PyTorch-like JIT kernels programmed from the CPU would be great for graphics! ...But there's no money to be made, so no one works on it.
And for some reason, GPU APIs for AI are treated like an entirely separate thing, rather than having one API used for AI and rendering.
I’m one of those people who can’t (won’t) learn C++ to the extent required to effectively write code for GPU execution…. But to have a direct pipeline to the GPU via Python. Wow.
The efficiency implications are huge, not just for Python libraries like PyTorch, but also anything we write that runs on an NVIDIA GPU.
I love seeing anything that improves efficiency because we are constantly hearing about how many nuclear power plants OpenAI and Google are going to need to power all their GPUs.
This means that Nvidia is selling a relatively unique architecture with a fully-developed SDK, industry buy-in and relevant market demand. Getting AMD up to the same spot would force them to reevaluate their priorities and demand a clean-slate architecture to-boot.
Have you ever used a GPU API (CUDA, OpenCL, OpenGL, Vulkan, etc...) with a scripting language?
It's cool that Nvidia made a bit of an ecosystem around it, but it won't replace C++ or Fortran, and you can't simply drop in "normal" Python code and have it run on the GPU. CUDA is still fundamentally its own thing.
There's also been CUDA bindings to scripting languages for at least 15 years... Most people will probably still use Torch or higher level things built on top of it.
Also, here's Nvidia's own advertisement and some instructions for Python on their GPUs:
- https://developer.nvidia.com/cuda-python
- https://developer.nvidia.com/how-to-cuda-python
Reality is kind of boring, and the article posted here is just clickbait.
While it's not exactly normal Python code, there are Python libraries that allow writing GPU kernels in internal DSLs that are normal-ish Python (e.g., Numba for CUDA specifically via the @cuda.jit decorator; or Taichi, which has multiple backends supporting the same application code: Vulkan, Metal, CUDA, OpenGL, OpenGL ES, and CPU).
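A minimal Numba `@cuda.jit` sketch of the kind of kernel that mentions — the kernel and launch configuration are my own illustration, with a plain-NumPy fallback when no CUDA device is available:

```python
import numpy as np

try:
    from numba import cuda
    have_cuda = cuda.is_available()
except ImportError:
    have_cuda = False

if have_cuda:
    @cuda.jit
    def add_kernel(a, b, out):
        i = cuda.grid(1)  # global thread index across the whole grid
        if i < out.size:  # guard against threads past the array end
            out[i] = a[i] + b[i]

    a = np.arange(1024, dtype=np.float32)
    b = np.ones(1024, dtype=np.float32)
    out = np.zeros_like(a)
    add_kernel[4, 256](a, b, out)  # 4 blocks of 256 threads
    result = out
else:
    # CPU equivalent of what the kernel computes
    result = np.arange(1024, dtype=np.float32) + 1.0
```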
Apparently, Nvidia is now doing this first party in CUDA Python, including adding a new paradigm for CUDA code (CuTile) that is going to be in Python before C++; possibly trying to get ahead of things like Taichi (which, because it is cross-platform, commoditizes the underlying GPU).
> Also, here's Nvidia's own advertisement for Python on their GPUs
That (and the documentation linked there) does not address the new upcoming native functionality announced at GTC; existing CUDA Python has kernels written in C++ in inline strings.
The polyglot nature of CUDA is one of the plus points versus the original "we do only C99 dialect around here" from OpenCL, until it was too late.
No need for a RTX for learning and getting into CUDA programming.
JAX lets you write Python code that executes on Nvidia, but also GPUs of other brands (support varies). It similarly has drop-in replacements for NumPy functions.
This only supports Nvidia. But can it do things JAX can't? Is it easier to use? Is it less fixed-size-array-oriented? Is it worth locking yourself into one brand of GPU?
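For context, a sketch of JAX's drop-in NumPy style — `add_all` and the sizes are my own illustration, with a plain-NumPy fallback when JAX isn't installed:

```python
import numpy as np

try:
    import jax
    import jax.numpy as jnp

    @jax.jit  # traced once, then compiled for whatever backend is available
    def add_all(mats):
        # Same element-wise sum you'd write with plain NumPy arrays.
        return sum(mats[1:], mats[0])

    mats = [jnp.ones((100, 100)) * i for i in range(5)]
    result = np.asarray(add_all(mats))
except ImportError:
    # Same arithmetic on plain NumPy: 0 + 1 + 2 + 3 + 4 = 10 everywhere.
    result = np.full((100, 100), 10.0)
```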
[1]: https://numba.readthedocs.io/en/stable/cuda/overview.html
I am using Cudarc. The crate seems to have a lot of momentum, with many new features, releases, and active communities on GH and Discord. I expect it to continue to get better.
The PEP model is a good vehicle for self-improvement and standardization. Packaging and deployment will soon be solved problems thanks to projects such as uv and BeeWare, and I'm confident that we're going to see continued performance improvements year over year.
I really hope you're right. I love Python as a language, but for any sufficiently large project, those items become an absolute nightmare without something like Docker. And even with, there seems to be multiple ways people solve it. I wish they'd put something in at the language level or bless an 'official' one. Go has spoiled me there.
I'm plenty familiar with packaging solutions that are painful to work with, but the state of python was shocking when I hopped back in because of the available ML tooling.
UV seems to be at least somewhat better, but damn - watching pip literally download 20+ 800MB torch wheels over and over trying to resolve deps only to waste 25GB of bandwidth and finally completely fail after taking nearly an hour was absolutely staggering.
I think software engineers with any significant amount of experience recognize you can build an application that does X in just about any language. To me, the largest difference, the greatest factor in which language to choose, is the existing packages. Simple example- there are several packages in Python for extracting text from PDFs (using tesseract or not). C# has maybe one tesseract wrapper? I recall working with PDFs in .NET being a nightmare. I think we had to buy a license to some software because there wasn’t a free offering. Python has several.
This is VERY important because we as software engineers, even if we wanted to reinvent the wheel sometimes, have very limited time. It takes an obscene number of man hours to develop a SalesForce or a Facebook or even something smaller like a Linux distro.
I hope so. Every time I engage in a "Why I began using Go aeons ago" conversation, half of the motivation was this. The reason I stopped engaging in them is because most of the participants apparently cannot see that this is even a problem. Performance was always the second problem (with Python); this was always the first.
Now if only CPython also got a world class JIT, V8 style.
Is BeeWare that transformational? What does BeeWare do, and what is its maturity level?
IMHO if you want to pick it up for a couple toy projects just to get a feel of what coding is like, then by all means try it out. But eventually you'll benefit tremendously from exploring other languages.
Python will teach you a lot of bad habits. You will feel like you know what you're doing, but only because you don't know all of the ways in which it is handwaving a lot of complexity that is inherent to writing code which you should be very much aware of.
Knowing what I know now, I wish Rust existed when I started out so that it could have been my first language. I'm never giving up the borrow checker and the type system that come with it.
But you don't have to do Rust. It's fine to work on a couple of projects in Python, then maybe something small in C (though the tooling can feel arcane and frustrating), then maybe switch it up and go with some more functional programming (FP) flavored like Lisp or F#.
I know Rust has a lot of zealots and a lot of haters, but I'm not pushing an agenda. I just think it strikes that perfect balance between being extremely expressive, clear to read (after maybe a month of writing it daily), strong type system, lots of FP elements, no OOP clutter but super powerful traits, the borrow checker which you'll invariably learn to love, and more...
This will give you a strong foundation upon which you'll be able to continuously build knowledge. And even if you start with Rust, you should definitely explore Python, C, Lisp and F# later (or maybe Haskell instead of F#)
Some people will tell you to start with C or C++ to get a better intuition for what's actually happening under the hood in Python, but that's not really necessary for most use cases unless you're doing something niche. Some of the most popular use cases for Python are webapps, data analysis, or general automation. For the 1% of use cases that Python isn't the right fit for, you can still use it to prototype or glue things together.
There are a lot of great resources out there for learning Python, but they won't necessarily teach you how to make great software. You can't go wrong with the official tutorial. https://learn.scientific-python.org/development/ is pretty terse and incorporates a lot of best practices.
In the end, my final answer is - yes. I say that because I believe it's the easiest programming language to get something working in. And getting something working is what motivates people to keep going.
If you sit them down and say 'well before you learn python you need to learn how a computer really works, here's an ASM x86 book', they're gonna probably read 10 pages, say this is boring, then go do something else. I think that because I went through that as a kid - I started reading a C++ book with no knowledge and gave up. It wasn't until I found qbasic and VB, by all marks a terrible language, that I really got motivated to learn and keep going because progress was so easy.
Python will teach you the basics - control flow, loops, variables, functions, libraries, etc. Those apply to almost every language. Then when you move to a different language, you at least know the basics and can focus on what's different or added that you didn't have or know before.
I think a compiled language is a better choice for people just getting started. Java is good, IMO, because it is verbose. Eventually the beginner may get tired of the verbosity and move on to something else, but at least they'll understand the value of explicit types and compile-time errors.
I would just encourage you to move on from Python fairly quickly. It's like... a balance bike. Easy to learn and teach you how to balance but you don't want to actually use it to get around.
There is no one-size-fits-all programming language.
AMD is held back by the combination of a lot of things. They have a counterpart to almost everything that exists on the other side. The things on the AMD side are just less mature with worse documentation and not as easily testable on consumer hardware.
I wonder why Python took over the world. Of course, it's easy to learn, and it might be easy to read and understand. But it also has a few downsides: low performance, single-threaded execution, and a lack of static typing.
If I were a Ruby developer, I'd be using Rails, and I'd also be describing 90% of Ruby development.
However, I do Python. What I'm describing is a tiny fraction of Python development.
If you want to do something with computer code - data analysis, ML, web development, duct-taping together parts of a *NIX system, even some game development - you can do it reasonably well, if not better, in Python. The paths that you can take are limitless, and that gets people interested.
At work right now we're integrating with scoring models hosted in Amazon SageMaker written by a "modelling team" and as far as I can tell they follow absolutely no basic coding practices. They give us the API and are asking us to send English strings of text for names of things instead of any real keys, and they're just comparing against plain strings and magic numbers everywhere so if they're asked to make any change like renaming something it's a herculean task that breaks a bunch of other things. Something will break when a field is null and then they'll tell us instead of sending null if we have no data to send -9999999. One time something broke and it turned out to be because we sent them "MB" (Manitoba) as someone's province, and whoever wrote it was just plain-text checking against a list of province codes as strings and didn't remember to include Manitoba.
I know this is still mainly a business/management issue that they're allowing people who don't know how to code to write code, but I'm sure this is happening at other companies, and I think Python's level of accessibility at the beginner level has been a real blight to software quality.
Not sure what "most popular programming language in the world" even means, in terms of existing projects? In terms of developers who consider it their main language? In terms of existing actually active projects? According to new projects created on GitHub that are also public?
My guess is that it's the last one, which probably isn't what one would expect when hearing "the most popular language in the world", so worth keeping in mind.
But considering that AI/ML is the hype today, and everyone wants to get their piece of the pie, it makes sense that there are more public Python projects created on GitHub today compared to other languages, as most AI/ML is Python.
All the things that are not great about it make it easier to learn. No static typing, no control of memory, no threads.
When I started there was a language like BASIC or Visual Basic that was easy to learn (and also quick to use) and C or C++ that was performant. If the world now is Python and Rust or Go, I think that it is just a better world for programmers. I say that as someone comfortable with C / C++ / Java. They had their time and will still be with us, but the improvement is real.
My guess: it's the community.
Because data-science/ML/LLM's have taken over the world now and no other language offers best-in-breed libraries and frameworks.
Other languages need to get off their ass and start offering options soon or be relegated to niche domains.
Go is definitely not fun to write. The rest I agree.
https://github.com/CapsAdmin/luajit-llama3/blob/main/compute...
While obviously not complete, it was less than I thought was needed.
It was a bit annoying trying to figure out which version of the function (_v2 suffix) I have to use for which driver I was running.
Also sometimes a bit annoying is the stateful nature of the api. Very similar to opengl. Hard to debug at times as to why something refuse to compile.
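To illustrate what calling the raw driver API looks like from a scripting language, a `ctypes` sketch (the `cuda_device_count` helper is my own; it returns `None` gracefully when libcuda isn't present):

```python
import ctypes
import ctypes.util

def cuda_device_count():
    """Query the device count via the raw CUDA driver API, or None if unavailable."""
    name = ctypes.util.find_library("cuda")  # libcuda ships with the driver
    if name is None:
        return None
    libcuda = ctypes.CDLL(name)
    if libcuda.cuInit(0) != 0:  # CUDA_SUCCESS == 0
        return None
    count = ctypes.c_int(0)
    # Many entry points (cuMemAlloc, cuCtxCreate, ...) grew _v2 variants as
    # the ABI evolved; cuDeviceGetCount is one of the stable ones.
    if libcuda.cuDeviceGetCount(ctypes.byref(count)) != 0:
        return None
    return count.value

print(cuda_device_count())
```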
CUDA was born from C and C++
It would be nice if they actually implemented a C variant of CUDA instead of extending C++ and calling it CUDA C.

Along for the ride, they fostered an ecosystem of compiled-language backends targeting CUDA.
Additionally modern CUDA supports standard C++ as well, with frameworks that hide the original extensions.
Most critics don't really get the CUDA ecosystem.
https://docs.nvidia.com/cuda/cuda-driver-api/index.html
Not to mention that C++ does not support neat features like variable sized arrays on the stack.
But then I end up finding myself juggling mutexes and wishing I had some newer language features.
I've noticed a lot of projects add Python support like this. Does the Python codebase allow for it to compile down to different targets more easily than others?
Greater Processing Unit
Giant Processing Unit
Galloping Processing Unit
Grape Processing Unit
Gorge Processing Unit
Gaggle Processing Unit
Grand Processing Unit
Giraffe Processing Unit
Gaping Processing Unit
It's only the beginning; there is no need to create new programming languages anymore.
There will be new shiny things, but of course, my choice is Python too.
https://nvidia.github.io/cuda-python/cuda-core/latest/ https://developer.nvidia.com/nvmath-python
https://developer.nvidia.com/how-to-cuda-python
And
"Zero to Hero: Programming Nvidia Hopper Tensor Core with MLIR's NVGPU Dialect" from 2024 EuroLLVM.
Reverse-engineered Python-only GPU API, works not only with CUDA but also AMD's ROCm
Other runtimes: https://docs.tinygrad.org/runtime/#runtimes
It's a holistic approach to all levels of the stack, from high-level frameworks to low-level bindings, some of which is highlighting existing libraries, and some of which are completely newly announced.
One of the big things seems to be a brand new Tile IR, at the level of PTX and supported with a driver level JIT compiler, and designed for Python-first semantics via a new cuTile library.
https://x.com/JokerEph/status/1902758983116657112 (without login: https://xcancel.com/JokerEph/status/1902758983116657112 )
Example of proposed syntax: https://pbs.twimg.com/media/GmWqYiXa8AAdrl3?format=jpg&name=...
Really exciting stuff, though with the new IR it further widens the gap that projects like https://github.com/vosen/ZLUDA and AMD's own tooling are trying to bridge. But vendor lock-in isn't something we can complain about when it arises from the vendor continuing to push the boundaries of developer experience.
Even if there is some impedance mismatch, could PTX itself not have been updated?