New package: nvidia-open-dkms-580.119.02 #54593
Conversation
Not being really comfortable with the git CLI, I tried to preserve mike's contribution (and failed miserably, as can be seen above). And the dkms make log: …
Oh, and also this makes …
@classabbyamp can the re-build be approved?
@abenson If I may bother you, since you're the maintainer of the proprietary driver package: what would be the appropriate way to deal with this conflicting-files issue? Especially considering that not all cards supporting 570.124.04 can run the open-source version of it.
Hello. While I do not have anything more to add as of now, I'd like to answer your question about the … No, it would not be enough: nvidia470 and nvidia390 support older NVIDIA cards respectively, but there are newer NVIDIA cards that are still not supported by the open driver, for example the GTX 10 series. As for how that could go down, personally I'd look into having an equivalent proprietary package (if there isn't one already) and have the two conflict with each other, with the open one picked as the default, while adding an appropriate section to the Void handbook for those in-between cards. But that's just me, so take everything with a spoonful of salt. It's definitely not a simple change, so practice patience and keep bumping those versions; eventually attention will be brought to this topic out of necessity.
Hi, thanks for your reply. I'm also unsure how to implement the dependency. As I mentioned above, I am unsure whether it'd be better to add the open dkms module as a subpackage, and what to do with the license being set to the NVIDIA proprietary one in that package. Another thing that bugs me about the licenses is that it theoretically puts this template into the main repo while the nvidia one is in the nonfree one. Making people manually choose the module could be a possibility, I suppose, but it'd be inconvenient as heck. We could probably automate it to some degree by querying which PCI device IDs are present, unless a module is explicitly requested (for now, the README provided by upstream lists the PCI device IDs of supported devices); though that's me fantasizing about the possibilities, and I have little to no idea whether xbps is capable of operating on that level.
The idea is that if you install … I also understood that currently the script replaces nvidia-dkms. What I would want to achieve: installing nvidia installs the open kernel driver. Users that want or need the proprietary driver, because of CUDA support or because their card is not supported by the open one, are instructed by the handbook to install the proprietary dkms package before installing nvidia, or we provide a secondary nvidia template that achieves the same thing but uses the closed kernel driver. As for how I'd achieve that, here is what I found in the manual: … Thus, I would have both the current proprietary dkms package and the open one. Why? The NVIDIA open kernel driver is now the default supported by the respective newer cards (20 series and later); we are going against that default by supplying the proprietary one out of the box.
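A minimal sketch of how that provides/conflicts mechanism could look across the templates, assuming the `provides`/`conflicts` fields documented in the xbps-src manual; package names and version bounds here are illustrative, not the manual text or the actual patch:

```sh
# nvidia-open-dkms template: publish the generic name and refuse to coexist
# with the proprietary module package (names illustrative).
provides="nvidia-dkms-${version}_${revision}"
conflicts="nvidia-proprietary-dkms>=0"

# nvidia-proprietary-dkms template: the mirror image.
provides="nvidia-dkms-${version}_${revision}"
conflicts="nvidia-open-dkms>=0"

# main nvidia template: depend on whichever package provides nvidia-dkms.
depends="nvidia-dkms>=${version} ..."
```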
I must have overlooked that section, thanks for pointing it out to me. I've made this patch (to be applied when both the nvidia-open-dkms and nvidia templates are up to date, i.e. with this PR locally merged): it renames the nvidia-dkms dependency to nvidia-proprietary-dkms, which provides the new virtual dependency. The installation order, and specifically the dkms pkg section in the nvidia template, is not adjusted here; I am wondering whether messing with that order by installing the nvidia-open-dkms contents much later may break anything.
I am testing 570.144 locally since we haven't merged the proprietary version yet.
I've an open PR for 575 which I'll keep updating until the next production release, just saying.
I tried to rename the branch, which closed the PR; then I restored it, but local git refused to push into it, so I tried to recreate the branch and then couldn't re-open the PR because of the force push. Sorry for the spam. Also added patches sourced from Arch; although I haven't found them necessary in my own testing, both seem to be commonly used and not to break anything.
Hi. I am interested in testing this. How do I switch from the proprietary drivers to these ones? I already know how to build the package, so instructions after that would be appreciated. Thanks. Edit: instructions are in the template.
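For reference, a hedged sequence that should roughly correspond to those template instructions, assuming the default hostdir layout and that the template's replaces/conflicts take care of removing the proprietary module package; the template itself remains authoritative:

```sh
# (run ./xbps-src binary-bootstrap once beforehand if the masterdir isn't set up)
./xbps-src pkg nvidia-open-dkms

# Install from the local repository; adjust the subdirectory if the template
# places the binpkg elsewhere (e.g. hostdir/binpkgs/nonfree).
sudo xbps-install --repository=hostdir/binpkgs nvidia-open-dkms

# Reboot so the newly built open kernel modules are the ones that get loaded.
```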
Hi, any idea when this might get merged in?
Unfortunately this isn't ready for merging. At the very least, there is still an open question on how to handle the versions and the choice between the open kernel modules and the proprietary ones (the open modules support the newest cards but not older ones; the proprietary ones support older cards but not the newest).
^ But I'd suspect the answer is that this will get attention soon. NVIDIA themselves now treat the open kernel modules as the default, so it's only natural to expect distros to transition, and that doesn't exclude Void. Use this build if you want it now; otherwise, be patient.
I went ahead and installed it about 5 days ago. Kinda needed it as I upgraded to a 5080. Haven't had any problems so far.
Added the "conflicts" field, but "replaces" is kept for ease of installation during testing.
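What that combination might look like in the template, with illustrative version bounds (a sketch, not a quote of the actual diff):

```sh
# Refuse to be installed alongside the proprietary DKMS package...
conflicts="nvidia-dkms>=0"
# ...but also tell xbps it may swap that package out automatically, which
# keeps installing this build for testing a one-step operation.
replaces="nvidia-dkms>=0"
```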
Dropped the virtual package by changing the provides (quite neat that it can resolve like this) and restored the copied nouveau blacklist logic, as pointed out by abby and ahesford over on IRC.
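A sketch of those two pieces with assumed file names and paths (the real template may differ):

```sh
# Satisfy dependents of the proprietary package name directly via provides,
# instead of going through a separate virtual package.
provides="nvidia-dkms-${version}_${revision}"

do_install() {
	# ... install the open kernel module sources for dkms ...

	# Restore the nouveau blacklist the proprietary template ships, so the
	# open modules aren't fighting nouveau at boot. The file name is an
	# assumption; the .conf essentially just contains "blacklist nouveau".
	vinstall "${FILESDIR}/nvidia-open-dkms.conf" 644 usr/lib/modprobe.d
}
```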
What's the difference between this PR and #56685? I have tried both and I find it easier to update the drivers with the other one. But maybe it becomes easier once we actually merge this?
The other switches entirely to the open drivers, breaking support for pre-Turing GPUs (10xx and earlier). This PR provides the option to have both.
While I originally did agree with this path, the announcement that pre-Turing cards are deprecated after the 580 driver series is what led me to open that PR. Is there any point in having both, knowing that? Pre-Turing cards can just keep using the original template, but I don't see why any of the newer ones should default to the proprietary modules. And in case we do want to keep both for whatever reason (as far as I understand, NVIDIA will eventually drop them completely, given that they don't support the newer series), wouldn't it make more sense for it to be the other way around? I.e. by default the package installs the open modules, and users can install a second package to switch to proprietary if they wish. Am I missing something?
Personally, my only objection to that is that "-open" still seems to occasionally have issues where the proprietary driver doesn't, and some existing setups with older GPUs would require user intervention.
Yeah, but some newer cards simply don't support the non-open drivers at all (I have a 5070 Ti). And it's been a hassle so far to keep this updated.
Meaning people with pre-Turing cards will need to maintain their own package? Or we somehow create a … Or are you advocating we just drop support for pre-Turing cards altogether? We could drop …
I'm advocating for pre-Turing cards to get the same treatment the older deprecated cards got, i.e.: as soon as the next production release drops, e.g. 590, fork the nvidia template at that point in time (at the latest 580.x version) into a package named 'nvidia580', and point users with pre-Turing cards to it in the handbook. As soon as that happens, switch the nvidia package to the open kernel modules, seeing as only Turing and later cards will then be using the nvidia package; those are not only supported by the open kernel drivers, but using them is NVIDIA's actual recommendation. In fact, in the case of the 5xxx series it's not only a recommendation but the only path forward.
My understanding so far is that we do not need to keep the proprietary option around for Turing+ cards, but if there are valid reasons to, such as those stated by @JkktBkkt, then we either delay the whole switch or decide on the default, which I advocate should still be the open kernel drivers, while offering a package similar to this one that switches to the proprietary driver as a backup. Either way this would result in a single-step install for either group, with an optional second step as a backup if deemed necessary. Thoughts?
EDIT: the only potential caveat compared to this PR's route is that the above requires waiting for the next production release before we ship the open kernel drivers, and as we've seen in the past that can take a while (reminder that the PR that started off as a beta testing ground for 555 onwards took almost a full year to hit the next production release). I'd consider that bad only for users with newer cards, since it means that unless they bother checking out GitHub and building the package from one of the open PRs, they won't have support from the distro. This could be dealt with by temporarily merging this solution, or by splitting the package while still in the 580 release and updating the two individually until the next production release, which is the point where they diverge. Sorry for the complexity; it's hard to explain in simple words, so let me know if you want me to rephrase something.
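To make the "fork at 580" idea a bit more concrete, here is a purely illustrative sketch of that kind of legacy template, modeled on how the existing legacy drivers (nvidia470, nvidia390) are packaged; none of these values are real, and an actual template would carry much more:

```sh
# Template file for 'nvidia580' (hypothetical legacy fork for pre-Turing GPUs)
pkgname=nvidia580
version=580.105.08   # placeholder: whatever the final 580.x production release turns out to be
revision=1
repository=nonfree
short_desc="NVIDIA proprietary drivers, legacy 580.x series (pre-Turing GPUs)"
```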
I think I may restructure the … Then, …
If thinking about the simplicity of a solution, I think the easiest would be to integrate this PR's -open as a subpackage into …
Instead of waiting, a full duplicate of … This way the structure of the existing … I personally don't see any need to subpackage the … And at the same time …
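For what it's worth, a hedged sketch of what the subpackage variant could look like inside the main nvidia template; the dkms_modules value and the moved path are assumptions, not taken from this PR:

```sh
# Hypothetical subpackage of the 'nvidia' template shipping the open kernel
# module sources for dkms instead of the proprietary ones.
nvidia-open-dkms_package() {
	short_desc+=" - open kernel modules (DKMS)"
	dkms_modules="nvidia-open ${version}"
	conflicts="nvidia-dkms>=0"
	depends="dkms"
	pkg_install() {
		# move the open kernel module sources into this subpackage
		vmove "usr/src/nvidia-open-${version}"
	}
}
```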
That's the case already whenever there's a new versioned package for people that want to stay on it, so no more issues than usual.
That same argument can be made for why we have nvidia, -libs, -gtklibs, etc. I'm not sure why they were split like this; at least I don't think I'm the one who did it.
Basically, … Really it's just me being impatient, as I've recently been forced into requiring the open drivers myself.
Personally, I'm down with pretty much anything you deem the appropriate path, including you taking over this package as maintainer. P.S. Sorry for the delayed update; I had to finish other stuff on the machine before rebooting and testing the new version.
Hi! I just tried to use this PR, but was hitting an issue with the DKMS module not installing properly. It said it was building, but I noticed that the module never ended up in … Should we add …?
I went on a dive to find where xz comes from; I thought it was dkms itself, or that something changed in the kernel configs, but no, the modules aren't using xz. Should be fixed now. Oh, and lsmod only shows the loaded modules; completions for modinfo / modprobe / etc. should show you the built ones, and the files should be in …
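For anyone else debugging this, a few hedged checks to tell "built and installed" apart from "loaded"; the module name ("nvidia") and the path are assumptions based on the usual dkms layout, not taken from this PR:

```sh
dkms status                          # the module should show as installed for the running kernel
modinfo nvidia | head -n 3           # resolves the on-disk module file whether or not it is loaded
ls /usr/lib/modules/"$(uname -r)"/   # the built modules should land somewhere under this tree
lsmod | grep nvidia                  # only lists the module once it has actually been loaded
```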
6.18 compatibility patch added. Tested, no issues detected.
Co-authored-by: Miguel <migue07mx@protonmail.com>
The last commit, upgrading to .119, does not work on my computer; dmesg output: …
@revington looks like the kernel panicked a bit over 11 minutes after starting; I'd suggest looking at the rest of the logs around the 670 mark (which is seconds from boot) to see the rest of the message and perhaps figure out what's causing it. It's also worth double-checking that the rest of the nvidia packages have been updated to the same 580.119.02 version. There are reports of 580.119.02 (and the 590.4x versions) being broken on various setups as well, so if you'd prefer not to dig into it too much, a rollback to 580.105.08 (…) is also an option.
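If a rollback is the route taken, one hedged way to do it from a local void-packages checkout (the commit reference is a placeholder for whichever commit still carries 580.105.08, and the repository path should match wherever xbps-src put the binpkg):

```sh
git checkout <commit-with-580.105.08>      # placeholder, not a real ref
./xbps-src pkg nvidia-open-dkms
sudo xbps-install --repository=hostdir/binpkgs -f nvidia-open-dkms   # -f allows the downgrade
xbps-pkgdb -m hold nvidia-open-dkms        # optional: keep xbps from upgrading it again
```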
@JkktBkkt thanks for your help. Here is the output of xbps-query for the nvidia package. Is there any other package I must downgrade? I did not have the .xbps packages, so I switched to the previous commit and rebuilt the package from there.
@revington I'm not sure whether xbps-query checks that all dependencies are satisfied, but if the installed packages match what's listed in run_depends, that should be good. EDIT: NVIDIA's changelog for 580.126.09 (released today) shows: …
Maybe that's what's happening on your system?
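The check being described could look roughly like this; which packages actually need to line up ultimately comes from the template's run_depends, not from this list:

```sh
xbps-query -s nvidia     # installed packages matching "nvidia", with their versions
xbps-query -x nvidia     # run-time dependencies recorded for the installed nvidia package
```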
Wanted to comment as well that I'm having some graphical glitches with the standard …
Important: this PR had been set to WIP while the template required additional review, for cleanup and tie-in with the main nvidia package. That is now unset; the current version should be a bit cleaner, no longer requiring manual removal of -dkms before re-installing the updated -open-dkms on every update.
Here's a permalink to the commit that doesn't change the nvidia template, in case you want to install or upgrade before this can land and more changes are requested by maintainers: 78cc73d^. The current version doesn't change the nvidia template either, but see there for previous update notes. See #54593 (comment) and below to track that progress.
Testing the changes
Local build testing
Comments
Built and tested on 6.12.57
No longer WIP since this release is out of beta and is now the recommended driver for supported devices, which are Turing and newer, so 16-series and newer of the GeForce lineup.
Closes #51384
Perhaps the package info should mention which GPUs are supported.
Thanks to mike7d7 for the original template and PR #51538.