> Here's the thing, we want AMD to join the graphics community not hang out inside the company in silos. We need to enable FreeSync on Linux, go ask the community how would be best to do it, don't shove it inside the driver hidden in a special ioctl. Got some new HDMI features that are secret, talk to other ppl in the same position and work out a plan for moving forward. At the moment there is no engaging with the Linux stack because you aren't really using it, as long as you hide behind the abstraction there won't be much engagement, and neither side benefits, so why should we merge the code if nobody benefits?
> The platform problem/Windows mindset is scary and makes a lot of decisions for you, open source doesn't have those restrictions, and I don't accept drivers that try and push those development model problems into our codebase.
They provide a standard implemented by the driver, not the hardware. There is not even a standard for getting performance metrics from GFX cards. Nothing.
I agree with Dave. If you do not want to create the standard, leave others to do it. But having a HAL inside the driver is problematic.
Shall we have a cross-platform standard for writing cross-platform drivers? Write once, run everywhere? Why not, as long as it is open source.
But it still needs someone to govern it, the way the Linux kernel project is governed, and the device companies do not seem interested. Which says a lot about their intentions.
> all sorts of "linux'isms" from code daily and deal with the pain of porting non portable Linux code to their platform.
If you're developing for Linux, using Linux specific technology, then of course there would be porting effort required.
Same as if you want to make your Windows stuff work on Linux, there should be porting required - after all, it's a different platform.
What AMD wants to do is to sidestep as much of the porting as possible, by effectively shipping their Windows code inside the Linux kernel.
Your case is about porting code between different OS kernels.
i'm definitely not mad and bitter about Linux in any way, grasping at straws in an attempt for relevance. i promise you. pinky swear.
DA instead is saying to the developers that they need to play ball and work with the existing Linux DRI world and not silo themselves off.
Oh please.
No, the boss said "merge this" and the code has been developed "corporate style" (with a HAL, etc.), and now the "mean" kernel developers won't approve it.
But the kernel people are right, because the other option would be to introduce code that breaks every now and then and is unmaintainable. See all the ACPI issues, for example, which only stopped when Linus said "no changes can break existing functionality anymore".
The lack of profanity already exceeds the LKML's reputation.
I think the point about rules being applied consistently is very true. If Alice does the work to comply then Bob shouldn't be able to get away without doing it just because he's bigger.
Please prefer the term "Digital Restriction Management". :-)
AMD tried the same in their open source driver and were rejected by the kernel maintainer. Unified drivers have code sharing advantages but don't follow the practices of the Linux kernel.
The result is that there are no people to do the required work.
Edit: Here's the start of the thread https://lists.freedesktop.org/archives/dri-devel/2016-Decemb...
So they wanted to get rid of the ugly kernel part that was a PITA to install and update and are now pushing a lot of their hardware abstraction code from Windows into kernel patches so the AMDGPU driver can just talk to the Windows blob pretty much verbatim (which is now AMDGPU Pro).
The practical effect is a continuation of the status quo. AMDGPU Pro, without a lot of this functionality, is either broken or underperforming across all distros. It is still better than what the last FGLRX was, but nowadays the Gallium free driver they also develop is beating the blob in almost everything except the latest driver-level optimized games.
Most distros have completely dropped all proprietary AMD support. Going forward, it will be up to AMD to ship a proprietary driver and maintain installation facilities pretty much everywhere. AMDGPU with Mesa is going to continue working fine, new GPUs are still getting supported, and a lot of what this HAL does (display / window management) has had usable support that has worked for years in various parts of Gallium / Mesa / DRM.
The optimistic future is that AMD drops AMDGPU Pro, refocuses developer effort on AMDGPU / Gallium, and works with the rest of the Linux graphics community to implement FreeSync / TrueAudio / whatever other tech AMD has buzzwords for into shared kernel code, rather than trying to stick it in a HAL from their Windows driver.
The pessimistic view has AMD just firing or reassigning a lot of its Linux staff, leaving its hardware on the platform to wilt. It would never stop entirely: AMD provides programming manuals for its hardware and most of its ASM on new platforms, enabling almost anyone to program its GPUs (unlike Nvidia, who publishes nothing, requiring devs to reverse engineer their hardware and ASM), so the support would still be better than Nouveau.
From that perspective, saying that "the optimistic future is that AMD drops AMDGPU Pro" is a bit silly, since it's largely the same code base as AMDGPU and the same people working on both.
It also makes no sense at all to say that "a lot of what this HAL does has had usable support [...] in various parts of Gallium / Mesa", since Gallium and Mesa are purely concerned with rendering and video. They don't care about display. In fact, you can actually use radeonsi and the rest of the open-source stack on top of the amdgpu-pro kernel module. (And for that matter, the closed source Vulkan driver is supposed to be compatible with an otherwise open source stack.)
Also, AMDGPU is not necessarily fine going forward, precisely because of this display code issue. Yes, the memory management and rendering/video engine parts are going to be just fine, but that won't do you a lot of good (outside of compute purposes) if you can't light up a display...
I have run desktop Linux across a dozen (maybe slightly more) machines over a decade, and friends will ask me for advice on stepping into that world. On graphics drivers, my safest recommendation has always been:
- If AMD, use the open source version.
- If Nvidia, use proprietary.
- If Intel integrated, thank whatever god you believe in for Mesa.
What is it about Nvidia's GPUs, community relations, or [insert other topic] that hobbles their open source driver so thoroughly compared to AMD's? Or alternatively: why is AMD's (presumably) deeper knowledge of their own graphics hardware unable to produce something more stable than the open source equivalent, when Nvidia's is?
Nvidia support for Linux is hard to get as well. Now AMD support is also hard to get.
Hardware OEM companies make Windows drivers and don't seem to care about Linux. This is the same thing that happened to IBM and OS/2: no good third-party driver support.
There are open source drivers that work, but are not as fast as the proprietary drivers.
Linux needs better display drivers, and OpenCL and Vulkan support to go with them. Windows has .NET and DirectX for games.
Not necessarily. It not being merged into mainline Linux doesn't mean that other parties can't include it with their own distros.
Read an article or something.
Source: My desktop PC has an R9 Nano, and I've used the amdgpu driver since Linux 4.5 (when it got power management support for my card and thus could enable the usual GPU clock speeds).
And definitely much better than my only other inkling for what that meant in relation to computer systems.[2]
Well, at least for the DRM (Direct Rendering Manager) subsystem.
One of these is cloud computing on large clusters of headless machines, using the parallelization that GPUs are known for. If you want to do this right, you definitely need input from a lot of sources, not just hacks in AMD-delivered code.
The No men are all that stand between us and the Yes men.
Praise and salutations to the No men, God bless you.
("Do users do X often?" isn't the question; "do they get annoyed when they can't?" is the question, and hardcore gamers tend to have one computer for gaming and oftentimes other computers for other stuff; if they were even using Linux on those it'd be a paradigm shift)
I think what you mean is "I get the idealism, but you also need to be realistic." It's not pragmatic to stick to your guns and ask a multi-million dollar company to change the code they submit to your open-source project.
And before anyone mentions Android and ChromeOS, Google can replace the kernel and only OEMs writing drivers, most of them closed, would notice.
Nothing to do with Windows and OSX being bundled with the hardware /s
So this rejection is about maintainership, which negatively affects distribution of the amdgpu module as a side effect. It's nothing that can't be solved by Linux distributions, though.
We propose to use the Display Core (DC) driver for display support on AMD's upcoming GPU (referred to by uGPU in the rest of the doc). In order to avoid a flag day the plan is to only support uGPU initially and transition to older ASICs gradually.
The DC component has received extensive testing within AMD for DCE8, 10, and 11 GPUs and is being prepared for uGPU. Support should be better than amdgpu's current display support.
I mean, it is all GPL, so it's perfectly okay. Is it too much for some dev in search of fame to do this?
This is me talking with my community hat on (not my Intel maintainer hat), and with that hat on my overall goal is always to build a strong community so that in the future open source gfx wins everywhere, and everyone can have good drivers with source-code. Anyway:
- "Why not merge through staging?" Staging is a ghetto, separate from the main dri-devel discussions. We've merged a few drivers through staging, it's a pain, and if your goal is to build a strong cross-vendor community and foster good collaboration between different teams to share code and bugfixes and ideas then staging is fail. We've merged about 20 atomic modeset drivers in the past 2 years, non of them went through staging.
- "Typing code twice doesn't make sense, why do you reject this?" Agreed, but there's fundamentally two ways to share code in drivers. One is you add a masive HAL to abstract away the differences between all the places you want your driver to run in. The other is that you build a helper library that programs different parts of your hw, and then you have a (fairly minimal) OS-specific piece of glue that binds it together in a way that's best for each OS. Simplifying things of course here, but the big lesson in Linux device drivers (not just drm) is that HAL is pain, and the little bit of additional unshared code that the helper library code requires gives you massive benefits. Upstream doesn't ask AMD to not share code, it's only the specific code sharing design that DAL/DC implements which isn't good.
- "Why do you expect perfect code before merging?" We don't, I think compard to most other parts in the kernel DRM is rather lenient in accepting good enough code - we know that somewhat bad code today is much more useful than perfect code 2 years down the road, simply because in 2 years no one gives a shit about your outdated gpu any more. But the goal is always to make the community stronger, and like Dave explains in his follow up, merging code that hurts effective collaboration is likely an overall (for the community, not individual vendors) loss and not worth it.
- "Why not fix up post-merge?" Perfectly reasonable plan, and often what we do. See above for why we tend to except not-yet-perfect code rather often. But doing that only makes sense when thing will move forward soon&fast, and for better or worse the DAL team is hidden behind that massive abstraction layer. And I've seen a lot of these, and if there's not massive pressure to fix up th problem it tends to get postponed forever since demidlayering a driver or subsystem is very hard work. We have some midlayer/abstraction layer issues dating back from the first drm drivers 15 years ago in the drm core, and it took over 5 years to clean up that mess. For a grand total of about 10k lines of code. Merging DAL as-is pretty much guarantees it'll never get fixed until the driver is forked once more.
- "Why don't you just talk and reach some sort of agreement?" There's lots of talking going on, it's just that most of it happens in private because things are complicated, and it's never easy to do such big course correction with big projects like AMD's DAL/DC efforts.
- "Why do you open source hippies hate AMD so much?" We don't, everyone wants to get AMD on board with upstream and be able to treat open-source gfx drivers as a first class citizen within AMD (stuff like using it to validate and power-on hardware is what will make the difference between "Linux kinda runs" and "Linux runs as good or better than any other OS"). But doing things the open source way is completely different from how companies tend to do things traditinoally (note: just different, not better or worse!), and if you drag lots of engineers and teams and managers into upstream the learning experience tends to be painful for everyone and take years. We'll all get there eventually, but it's not going to happen in a few days. It's just unfortunate that things are a bit ugly while that's going on, but looking at any other company that tries to do large-scale open-source efforts, especially hw teams, it's the same story, e.g. see what IBM is trying to pull off with open power.
Hope that sheds some more light onto all this and calms everyone down ;-)
It will be time-consuming to change now.
https://lists.freedesktop.org/archives/dri-devel/2016-Februa...
> Cleaning up that is not enough, abstracting kernel API like kmalloc or i2c, or similar, is a no go. If the current drm infrastructure does not suit your need then you need to work on improving it to suit your need. You can not develop a whole drm layer inside your code and expect to upstream it.
> Linux device driver are about sharing infrastructure and trying to expose it through common API to userspace.
> So i strongly suggest that you start thinking on how to change the drm API to suit your need and to start discussions about those changes. If you need them then they will likely be usefull to others down the road.