https://github.com/ROCm/ROCm/issues/1714
With Nvidia cards, I know that if I buy any Nvidia card made in the last 10 years, CUDA code will run on it. Period. (Yes, different language levels require newer hardware, but Nvidia docs are quite clear about which CUDA versions require which silicon.) I have an AMD Zen3 APU with a tiny Vega in it; I ought to be able to mess around with HIP with ~zero fuss.
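For context, "messing around with HIP" means something like the following — a minimal sketch assuming a working ROCm install and an officially supported GPU, which is exactly the problem, since the Vega iGPU in a Zen 3 APU isn't on the support list:

```cpp
#include <hip/hip_runtime.h>
#include <cstdio>

// Minimal HIP kernel: element-wise vector add, same model as CUDA.
__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 256;
    float ha[n], hb[n], hc[n];
    for (int i = 0; i < n; ++i) { ha[i] = float(i); hb[i] = float(2 * i); }

    float *da, *db, *dc;
    hipMalloc(&da, n * sizeof(float));
    hipMalloc(&db, n * sizeof(float));
    hipMalloc(&dc, n * sizeof(float));
    hipMemcpy(da, ha, n * sizeof(float), hipMemcpyHostToDevice);
    hipMemcpy(db, hb, n * sizeof(float), hipMemcpyHostToDevice);

    vec_add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    hipMemcpy(hc, dc, n * sizeof(float), hipMemcpyDeviceToHost);

    printf("hc[10] = %f\n", hc[10]);  // 30.0 if the kernel actually ran

    hipFree(da); hipFree(db); hipFree(dc);
    return 0;
}
```

Compile with `hipcc vec_add.cpp`. On unsupported consumer GPUs this is where the fuss starts: environment-variable workarounds like `HSA_OVERRIDE_GFX_VERSION`, if it works at all.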
The will-they-won't-they and the rapidly dropped hardware support are hurting the otherwise excellent ROCm and HIP projects. There is a huge API surface to implement, and it looks like they're making rapid gains.
This is, obviously, way overdue and it might not be enough to let AMD get back into the race but
Where can you rent time on one? Traditionally, AMD has only helped build supercomputers, like Frontier and El Capitan, out of these cards.
This time around, Azure [0] and other CSPs (cloud service providers) are working to change that. They will have the best of the best of their cards/systems for rent soon.
[0] https://techcommunity.microsoft.com/t5/azure-high-performanc...
I look at their financial performance and it's staggering how they've missed the boat - and this is during a huge boom in gaming, crypto, and AI.
Compare:
VS
More likely, they'll wait to see how the AI hardware startups shake out and then acquire the ones that have anything worth paying for.
You could probably get 80% of the way there by dedicating enough AMD developers to improving AMD support in existing AI frameworks and software, in parallel with improving drivers and whatever CUDA equivalent they're betting on right now. But it would take a massive, concerted effort that few companies seem able to pull off (probably because it's hard to align the company on the right goals).
Salaries at semiconductor companies are not even close to this.
Also, why would you even need people this good? People who earn $1M offer way, way more than just tech skills.
I’m just one guy but my experience carries over to subsequent business decisions made by me, and there are many like me.
https://www.amd.com/en/newsroom/press-releases/2023-10-10-am...
At this point it's almost like it has to be intentional, like some perceived tradeoff ingrained in the culture that generates shit software.
They're underpaying their hardware engineers, and if they wanted to hire good software engineers they'd need to pay more, which would cause their hardware engineers to demand better pay too.
This is the actual source[1]:
> The AMD Instinct M1300A APU was launched in January 2023 and blends a total of 13 chiplets, of which many are 3D stacked, creating a single chip package with 24 Zen 4 CPU cores fused with a CDNA 3 graphics engine and eight stacks of HBM3 memory totaling 128GB.
It's literally a typo (or a renamed SKU?) for the MI300A. So... the street is jumping on AMD because of a typo echoed by a ton of outlets?
https://www.datacenterdynamics.com/en/news/genci-upgrades-ad...
The discussion on the MI300X was on HN like 12 hours ago (after the AMD announcement event yesterday):
Are they just talking about MI300X availability?
2% is a "leap"?
It looks like NVDA is up ~1.5% since yesterday.
Maybe the LLM space is better about this, but the generative media side definitely isn't.
AMD has a market share of 0% here, and nobody publishes models with AMD support.
AMD has its own Thrust GPU implementation (rocThrust), so from a high level they are somewhat interchangeable.
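A sketch of that interchangeability: the same Thrust source builds against NVIDIA's Thrust with nvcc and against AMD's rocThrust with hipcc, assuming the respective toolchain is installed:

```cpp
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <vector>
#include <cstdio>

int main() {
    // Identical source compiles with nvcc (Thrust/CUDA) or hipcc (rocThrust/HIP).
    std::vector<int> host = {5, 3, 1, 4, 2};
    thrust::device_vector<int> v(host.begin(), host.end());

    thrust::sort(v.begin(), v.end());                  // parallel sort on the device
    int sum = thrust::reduce(v.begin(), v.end(), 0);   // parallel reduction

    printf("sum = %d\n", sum);  // 15 on either backend
    return 0;
}
```

The caveat is the "somewhat": interchangeable at the Thrust API level, but the underlying driver stack and supported-hardware story still differ.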
AMD is so far behind on this.
On the other hand, their 3D V-Cache chips are amazing.
Don't make me start submitting AOL links.
Do you have examples of both sides of this claim?
The other side of this claim: sales numbers of GPUs.