I'm using a graphics card on some machine to which I don't have physical access. With lspci I can tell its:
84:00.0 VGA compatible controller: NVIDIA Corporation GM200 [GeForce GTX TITAN X] (rev a1)
but which vendor/manufacturer's card is it (e.g. ASUS, EVGA, etc.)? How can I find that out (either as a root or non-root user)?
|
As root or non-root, run lspci -v -s 84:00.0 and look at the "Subsystem" line; that will usually give you the name of the manufacturer.
That uses the bus identifier you found already; for a more generic form,
lspci -v | grep -A1 VGA
will show the relevant information for any graphics adapter installed in your system.
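For scripting this, lspci also has a machine-readable mode (-vmm) that emits one "Field: value" pair per line. The sketch below parses a sample of that format rather than a live system; the ASUSTeK line is an illustrative assumption, not output from the asker's machine:

```shell
# Parse "Field:<TAB>Value" pairs as emitted by `lspci -vmm -s 84:00.0`.
# The sample text stands in for real output; ASUSTeK is an assumption.
sample=$(printf 'Slot:\t84:00.0\nVendor:\tNVIDIA Corporation\nSVendor:\tASUSTeK Computer Inc.\n')
printf '%s\n' "$sample" | awk -F':\t' '$1 == "SVendor" { print $2 }'
```

On a live system, pipe the real command instead: lspci -vmm -s 84:00.0 | awk -F':\t' '$1 == "SVendor" { print $2 }'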
| How can I tell who manufactured my graphics card (as opposed to the GPU)? |
My system has a GPU and shared video memory. I'm using Fedora 27. Some important lines from the lspci output are as follows.
00:02.0 VGA compatible controller: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev 0b)
0a:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Sun LE [Radeon HD 8550M / R5 M230]
What I've understood from "How do I check if my system supports hardware acceleration? Is it on the CPU or motherboard?" is that it's the application which decides whether to use hardware- or software-based rendering.
This is supported by the fact that Google Chrome has an option to turn off hardware rendering.
But when launching an application from GNOME 3, every application has an option to be launched using dedicated graphics.
So, I have two questions:
Who decides which rendering is used (the GNOME 3 launcher, the application alone, or something else)?
How can I check which rendering a running application is using?
Any explanation regarding this is highly appreciated.
|
There are a number of issues here.
First, applications can support a variety of rendering methods, e.g. OpenGL and software rendering. If an application renders in software, then no hardware acceleration will be used at all (or minimally, e.g. for blitting operations etc.); this is usually what happens when you select software rendering in applications which support it (Chrome, many older games). If an application renders using OpenGL or something like that, acceleration will depend on the available hardware and whether the appropriate drivers and libraries are installed. In most cases you’ll get hardware acceleration, especially if you’re using GNOME 3 since that requires hardware acceleration (so if GNOME 3 starts in non-classic mode, you’re sure that some form of hardware acceleration is available).
Second, on a system such as yours with two GPUs, acceleration can be provided by either the integrated GPU, or the dedicated GPU. This is generally not controlled by applications, but by the kernel, using the VGA switcheroo. Recent versions of GNOME have support for launching applications using either the integrated GPU or the dedicated GPU explicitly; that’s what the “Launch using Dedicated Graphics Card” option determines (see this blog post for details). If you start an OpenGL application “normally”, it will be hardware accelerated, using your integrated (Intel) GPU; if you start it using “Launch using Dedicated Graphics Card”, it will be hardware accelerated, using your dedicated (AMD) GPU.
To determine whether a running application is using hardware rendering, at least when using non-proprietary drivers, you can find the application’s process id then run
lsof -p ${pid} | grep /dev/dri
(replacing ${pid} with the appropriate value). If this outputs a line containing something like /dev/dri/card0, the application is running using hardware rendering (and the card number will tell you which GPU it’s using — match the values in /dev/dri/by-path with the PCI identifiers); otherwise, it’s not.
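The check above can be wrapped in a small helper. The sketch below runs it against an illustrative lsof-style line instead of a live process (the glxgears line is made up for the example):

```shell
# uses_hw_rendering: read lsof-style lines on stdin and print "yes" if any
# open file lives under /dev/dri (i.e. a DRM device node is held open).
uses_hw_rendering() {
    if grep -q '/dev/dri/'; then echo yes; else echo no; fi
}

# Illustrative lsof output line (not captured from a real system):
printf 'glxgears 1234 user mem CHR 226,0 0t0 406 /dev/dri/card0\n' | uses_hw_rendering
```

On a live system you would feed it the real output: lsof -p ${pid} | uses_hw_rendering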
| How to check an application is using hardware or software based display rendering? |
Looking Glass is an open source application that allows the use of a KVM configured with a passthrough GPU without an attached physical monitor, keyboard or mouse.
In Looking Glass terminology, the host software is the term for the piece of Looking Glass that runs in the VM guest (the VM where the GPU is used). The client software is the term for the piece that runs on the Linux host, showing the rendered frames.
The Looking Glass host is currently Windows-only, and covers the main use case: run Windows-only GPU-heavy software in a Windows VM, showing the result on the Linux host.
I have a slightly different use case: I pass my beefier headless GPU through from a Linux host to a Linux VM guest. It works fine there for GPU computations based on OpenCL or CUDA or whatever. I'd also like to be able to run 3D software on that Linux VM guest, and display the result on my Linux host.
Thus: Is there an equivalent technology for a Linux guest on a Linux host? Or, alternatively, are there any Looking Glass hosts for Linux?
|
I am the Author of Looking Glass.
The project already has Linux guest support, as the host application is agnostic and can be built for both. Please note, though, that Linux guest support currently lacks features such as cursor support.
| An equivalent of Looking Glass where VM side runs Linux? |
Is there general purpose GPU support in the Linux kernel?
Let me explain in more detail, since it's too broad a topic otherwise. By the word "capable" I mean native support. That rules out OpenGL and OpenCL, as those are just APIs that help code applications in user mode.
A common misconception is that since supercomputers use GPGPUs and run Linux, Linux must be using them. Well, not quite. The Linux distributions running on supercomputers are often not the same as the ordinary distributions we know; they are modified so heavily that they are arguably not Linux any more but a whole new operating system.
Another common answer might be the poor state of GPU support. Let's not go there, and eliminate all other factors, be it a bottleneck or something else like architecture.
Let's reword the question as follows:
Does the mainline Linux kernel natively utilize stream processing via the general purpose registers of a GPU? And if so, to what extent?
|
Just a note: the idea of the kernel having to virtualize and context-switch hundreds of GPU registers is horrifying, and the kernel itself does nothing that could benefit from using them. There is code in the kernel to manage sharing GPU resources among processes (more of that code is steadily migrating into the kernel), and the processes that do share the GPU for computing do so via OpenCL, CUDA and the like, but any GPU context switching they do won't be tied to any CPU thread, for the reason above. I strongly suspect the GPU runs entirely independently and reports its results over the bus in some fashion, whether a CPU-facing register readout or an interrupt or whatnot.
| Is mainline Linux kernel capable of GPGPU programming? |
I have been experiencing random system crashes on my Linux PC, accompanied by a small white light flashing on my graphics card.
This issue occurs regularly, but not consistently, and tends to happen within the first 20 minutes of use.
I noticed this issue started occurring after I installed a secondary OS (Linux) on a secondary hard drive, which installed a second grub. While I attempted to delete everything related to the secondary OS and grub, the crashes have persisted.
OS: Debian GNU/Linux 11 (bullseye) x86_64
Host: MS-7B89 1.0
Kernel: 5.10.0-21-amd64
Resolution: 1440x2560, 2560x1440
DE: Plasma 5.20.5
WM: KWin
WM Theme: Qogir
Theme: Breeze Dark [Plasma], Peace-GTK [GTK2/3]
Icons: Dexie-Korla [Plasma], Dexie-Korla [GTK2/3]
Terminal: konsole
CPU: AMD Ryzen 5 2600 (12) @ 3.400GHz
GPU: AMD ATI Radeon RX 470/480/570/570X/580/580X/590
Thanks for any suggestions.
|
A "small white light" on a modern high-performance GPU is most likely related to power input, although you'd want to check the GPU vendor's documentation to be sure.
Which power supply are you using, and how old is it? Maybe adding a secondary HDD put your power supply right at the limit of its capability. At times of maximum power consumption, it might fail to keep the voltages stable, resulting in a crash. The flashing white light might be the GPU's way of saying "My power feed was insufficient for a while".
| Troubleshooting Intermittent System Crashes on a Linux PC with Flashing Graphics Card Lights |
When I go to the "About" window of elementaryOS I see this:
Why doesn't it display the actual consumer-facing name of the GPU (Intel® Iris® Plus Graphics 640)?
|
This happens when your system’s PCI id database doesn’t have a description for your graphics device. The current upstream database does know about the Iris Plus Graphics 640, so updating the database should fix things:
sudo update-pciids
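For context, the database that update-pciids refreshes is a plain text file (typically /usr/share/hwdata/pci.ids or /usr/share/misc/pci.ids), so a lookup is just a text search. This sketch greps a two-line excerpt mimicking the pci.ids format; the 5926 device id shown is believed to be the Iris Plus Graphics 640 entry:

```shell
# pci.ids format: vendor id at column 0, device ids indented with a tab.
# The two lines below are an illustrative excerpt, not the full database.
printf '8086  Intel Corporation\n\t5926  Iris Plus Graphics 640\n' | grep -i 'Iris Plus'
```

On a real system you would grep the installed file instead, to confirm the updated database now carries the name.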
| Can I see an actual model of my Intel GPU in my distro's About screen? |
I'm an Ubuntu user and I'd like to install Windows in a VM with direct access to the GPU (NVidia 1060 6 GB) for Unity 3D and gaming. I've read a lot of information about this possibility on the internet, but I didn't find anything useful.
I've tried VirtualBox, where it's impossible to do something like that. I've also installed VMware Pro; during VM creation I shared 3 GB of GPU memory, but it's still virtual and I have some freezes and problems with it.
Maybe I should use another piece of virtualization software, or maybe I just don't have the full picture. I'll be very thankful for any information.
|
You can do this in VirtualBox with the Guest Additions installed. It requires a processor with virtualization instructions, with those instructions enabled in the BIOS.
In the VirtualBox manager, adjust the settings of your VM by going to Settings → Display → Screen and ticking the following checkboxes:
☑ Enable 3D Acceleration
☑ Enable 2D Acceleration
For details and limitations, see these sections of the VirtualBox manual:
4.5.1. Hardware 3D Acceleration (OpenGL and Direct3D 8/9)
4.5.2. Hardware 2D Video Acceleration for Windows Guests
If you still have trouble, you might want to consult NVIDIA's Virtual GPU Software User Guide.
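The same checkboxes can also be set from the command line with VBoxManage. The sketch below only echoes the command so it is safe to run anywhere; drop the echo to apply it for real, and note that "WinVM" is a placeholder VM name:

```shell
# Equivalent of ticking the two Display checkboxes in the GUI.
# Echo guard: remove the leading "echo" on the actual VirtualBox host.
echo VBoxManage modifyvm "WinVM" --accelerate3d on --accelerate2dvideo on
```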
| Is it possible to use gpu directly from virtual machine on ubuntu? |
My /proc/meminfo shows about 500 MB is allocated as Shmem. I want to get more specific figures. I found an explanation here:
https://lists.kernelnewbies.org/pipermail/kernelnewbies/2013-July/008628.html
It includes tmpfs memory, SysV shared memory (from ipc/shm.c),
POSIX shared memory (under /dev/shm [which is a tmpfs]), and shared anonymous mappings
(from mmap of /dev/zero with MAP_SHARED: see call to shmem_zero_setup()
from drivers/char/mem.c): whatever allocates pages through mm/shmem.c.
2-> as per the developer comments NR_SHMEM included tmpfs and GEM
pages. what is GEM pages?
Ah yes, and the Graphics Execution Manager uses shmem for objects shared
with the GPU: see use of shmem_read_mapping_page*() in drivers/gpu/drm/.
I have about
50MB in user-visible tmpfs, found with df -h -t tmpfs.
40MB (10,000 pages of 4096 bytes) in sysvipc shared memory, found with ipcs -mu.
I would like to get some more positive accounting, for what uses the 500MB! Is there a way to show total GEM allocations? (Or any other likely contributor).
I expect I have some GEM allocations, since I am running a graphical desktop on intel graphics hardware. My kernel version is 4.18.16-200.fc28.x86_64 (Fedora Workstation 28).
|
Edit: there is an interface for kernel debugging purposes only. It is only accessible by root and is not stable. It might be rewritten, renamed, and/or misleading if you are not a kernel developer. (It might even be buggy, for all I know). But if you have a problem, it might be useful to know it's there.
My i915 driver gives me the information here:
$ sudo sh -c 'cat /sys/kernel/debug/dri/*/i915_gem_objects'
643 objects, 205852672 bytes
75 unbound objects, 7811072 bytes
568 bound objects, 198041600 bytes
16 purgeable objects, 5750784 bytes
16 mapped objects, 606208 bytes
13 huge-paged objects (2M, 4K) 123764736 bytes
13 display objects (globally pinned), 14954496 bytes
4294967296 [0x0000000010000000] gtt total
Supported page sizes: 2M, 4K
[k]contexts: 16 objects, 548864 bytes (0 active, 548864 inactive, 548864 global, 0 shared, 0 unbound)
systemd-logind: 324 objects, 97374208 bytes (0 active, 115798016 inactive, 23941120 global, 5246976 shared, 3858432 unbound)
Xwayland: 24 objects, 6995968 bytes (0 active, 12169216 inactive, 5283840 global, 5246976 shared, 110592 unbound)
gnome-shell: 246 objects, 89739264 bytes (26517504 active, 120852480 inactive, 63016960 global, 5242880 shared, 3629056 unbound)
Xwayland: 25 objects, 17309696 bytes (0 active, 22503424 inactive, 5304320 global, 5242880 shared, 90112 unbound)
Again, exercise caution. I notice mapped objects only shows 600KB. I guess mapped here means something different than I was expecting. For comparison, running the python script below to show the i915 objects mapped in user processes' address spaces, I see a total of 70MB.
The line for systemd-logind in my output is representing a second gnome-shell instance, running on a different virtual console. If I switch over to a virtual console which has a text login running on it instead, then this file shows two systemd-logind lines and no gnome-shell lines :-).
Otherwise, the best you can do is find some of the shmem files by looking through all open files, in /proc/*/fd/ and /proc/*/map_files/ (or /proc/*/maps).
With the right hacks, it appears possible to reliably identify which files belong to the hidden shmem filesystem(s).
Each shared memory object is a file with a name. And the names can be used to identify which kernel subsystem created the file.
SYSV00000000
i915 (i.e. intel gpu)
memfd:gdk-wayland
dev/zero (for any "anonymous" shared mapping)
...
The problem is this does not show all DRM / GEM allocations. DRM buffers can exist without being mapped, simply as a numeric handle. These are tied to the open DRM file they were created on. When the program crashes or is killed, the DRM file will be closed, and all its DRM handles will be cleaned up automatically. (Unless some other software keeps a copy of the file descriptor open, like this old bug.)
https://www.systutorials.com/docs/linux/man/7-drm-gem/
You can find open DRM files in /proc/*/fd/, but they show as a zero-size file with zero blocks allocated.
For example, the output below shows a system where I cannot account for over 50% / 300MB of the Shmem.
$ grep Shmem: /proc/meminfo
Shmem: 612732 kB
$ df -h -t tmpfs
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.9G 59M 3.8G 2% /dev/shm
tmpfs 3.9G 2.5M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 9.0M 3.9G 1% /tmp
tmpfs 786M 20K 786M 1% /run/user/42
tmpfs 786M 8.0M 778M 2% /run/user/1000
tmpfs 786M 5.7M 781M 1% /run/user/1001
$ sudo ipcs -mu
------ Shared Memory Status --------
segments allocated 20
pages allocated 4226
pages resident 3990
pages swapped 0
Swap performance: 0 attempts 0 successes
All open files on hidden shmem filesystem(s):
$ sudo python3 ~/shm -s
15960 /SYSV*
79140 /i915
7912 /memfd:gdk-wayland
1164 /memfd:pulseaudio
104176
Here is a "before and after", logging out one of my two logged-in GNOME users. It might be explained if gnome-shell had over 100MB of unmapped DRM buffers.
$ grep Shmem: /proc/meminfo
Shmem: 478780 kB
$ df -t tmpfs -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.9G 4.0K 3.9G 1% /dev/shm
tmpfs 3.9G 2.5M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 276K 3.9G 1% /tmp
tmpfs 786M 20K 786M 1% /run/user/42
tmpfs 786M 8.0M 778M 2% /run/user/1000
tmpfs 786M 5.7M 781M 1% /run/user/1001
$ sudo ./shm -s
80 /SYSV*
114716 /i915
1692 /memfd:gdk-wayland
1156 /memfd:pulseaudio
117644
$ grep Shmem: /proc/meminfo
Shmem: 313008 kB
$ df -t tmpfs -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.9G 4.0K 3.9G 1% /dev/shm
tmpfs 3.9G 2.1M 3.9G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 204K 3.9G 1% /tmp
tmpfs 786M 20K 786M 1% /run/user/42
tmpfs 786M 6.8M 780M 1% /run/user/1000
$ sudo ./shm -s
40 /SYSV*
88496 /i915
1692 /memfd:gdk-wayland
624 /memfd:pulseaudio
90852
Python script to generate the above output:
#!/bin/python3
# Reads Linux /proc. No str, all bytes.

import sys
import os
import stat
import glob
import collections
import math

# File.
# 'name' is first name encountered, we don't track hardlinks.
Inode = collections.namedtuple('Inode', ['name', 'bytes', 'pids'])

# inode number -> Inode object
inodes = dict()

# pid -> program name
pids = dict()

# filename -> list() of Inodes
filenames = dict()

def add_file(pid, proclink):
    try:
        vfs = os.statvfs(proclink)

        # The tmpfs which reports 0 blocks is an internal shm mount
        # python doesn't admit f_fsid ...
        if vfs.f_blocks != 0:
            return
        filename = os.readlink(proclink)
        # ... but all the shm files are deleted (hack :)
        if not filename.endswith(b' (deleted)'):
            return
        filename = filename[:-10]

        # I tried a consistency check that all our st_dev are the same
        # but actually there can be more than one internal shm mount!
        # i915 added a dedicated "gemfs" so they could control mount options.

        st = os.stat(proclink)

        # hack the second: ignore deleted character devices from devpts
        if stat.S_ISCHR(st.st_mode):
            return

        # Read process name successfully,
        # before we record file owned by process.
        if pid not in pids:
            pids[pid] = open(b'/proc/' + pid + b'/comm', 'rb').read()[:-1]

        if st.st_ino not in inodes:
            inode_pids = set()
            inode_pids.add(pid)
            inode = Inode(name=filename,
                          bytes=st.st_blocks * 512,
                          pids=inode_pids)
            inodes[st.st_ino] = inode
        else:
            inode = inodes[st.st_ino]
            inode.pids.add(pid)

        # Group SYSV shared memory objects.
        # There could be many, and the rest of the name is just a numeric ID
        if filename.startswith(b'/SYSV'):
            filename = b'/SYSV*'

        filename_inodes = filenames.setdefault(filename, set())
        filename_inodes.add(st.st_ino)

    except FileNotFoundError:
        # File disappeared (race condition).
        # Don't bother to distinguish "file closed" from "process exited".
        pass

summary = False
if sys.argv[1:]:
    if sys.argv[1:] == ['-s']:
        summary = True
    else:
        print("Usage: {0} [-s]".format(sys.argv[0]))
        sys.exit(2)

os.chdir(b'/proc')
for pid in glob.iglob(b'[0-9]*'):
    for f in glob.iglob(pid + b'/fd/*'):
        add_file(pid, f)
    for f in glob.iglob(pid + b'/map_files/*'):
        add_file(pid, f)

def pid_name(pid):
    return pid + b'/' + pids[pid]

def kB(b):
    return str(math.ceil(b / 1024)).encode('US-ASCII')

out = sys.stdout.buffer

total = 0
for (filename, filename_inodes) in sorted(filenames.items(), key=lambda p: p[0]):
    filename_bytes = 0
    for ino in filename_inodes:
        inode = inodes[ino]
        filename_bytes += inode.bytes
        if not summary:
            out.write(kB(inode.bytes))
            out.write(b'\t')
            #out.write(str(ino).encode('US-ASCII'))
            #out.write(b'\t')
            out.write(inode.name)
            out.write(b'\t')
            out.write(b' '.join(map(pid_name, inode.pids)))
            out.write(b'\n')
    total += filename_bytes
    out.write(kB(filename_bytes))
    out.write(b'\t')
    out.write(filename)
    out.write(b'\n')
out.write(kB(total))
out.write(b'\n')
| Can I see the amount of memory which is allocated as GEM buffers? |
I have an Apple MacBook that is running a Linux From Scratch system that I have built. It is a minimal system, just booting into a bash prompt, with no X Window System installed. The graphics chip is an Intel GMA 950, which uses the i915 driver. Previously, I had it booting up into the framebuffer console; however, I tweaked some of the kernel configuration settings the other day and now the framebuffer console doesn't seem to load up any more (although the screen goes black and then resets during boot).
Stupidly, I didn't save the kernel config file for the setup I had working, although I do have a printout of the lsmod command for that setup, which shows which kernel modules were loaded:
Module Size Used by
ccm 20480 6
hid_generic 16384 0
isight_firmware 16384 0
usbhid 32768 0
i915 1343488 1
i2c_algo_bit 16384 1 i915
arc4 16384 2
fbcon 49152 70
bitblit 16384 1 fbcon
fbcon_rotate 16384 1 bitblit
fbcon_ccw 16384 1 fbcon_rotate
fbcon_ud 20480 1 fbcon_rotate
fbcon_cw 16384 1 fbcon_rotate
softcursor 16384 4 fbcon_ud,fbcon_cw,fbcon_ccw,bitblit
drm_kms_helper 114688 1 i915
ath9k 81920 0
cfbfillrect 16384 1 drm_kms_helper
ath9k_common 16384 1 ath9k
syscopyarea 16384 1 drm_kms_helper
cfbimgblt 16384 1 drm_kms_helper
ath9k_hw 389120 2 ath9k,ath9k_common
sysfillrect 16384 1 drm_kms_helper
sysimgblt 16384 1 drm_kms_helper
mac80211 405504 1 ath9k
fb_sys_fops 16384 1 drm_kms_helper
cfbcopyarea 16384 1 drm_kms_helper
drm 282624 3 i915,drm_kms_helper
ath 28672 3 ath9k_hw,ath9k,ath9k_common
pata_acpi 16384 0
intel_agp 16384 0
coretemp 16384 0
video 36864 1 i915
uhci_hcd 40960 0
pcspkr 16384 0
backlight 16384 2 video,i915
ehci_pci 16384 0
ehci_hcd 73728 1 ehci_pci
ata_piix 36864 0
rng_core 16384 0
intel_gtt 20480 2 intel_agp,i915
fb 65536 8 fbcon_ud,fbcon_cw,fbcon_ccw,bitblit,softcursor,i915,fbcon,drm_kms_helper
agpgart 32768 3 intel_agp,intel_gtt,drm
evdev 24576 0
fbdev 16384 2 fb,fbcon
mac_hid 16384 0
So, you can see that fbcon (which is the driver for the framebuffer console) was loaded.
However, the output of lsmod for the newer kernel build (where the console isn't loading) is as follows:
Module Size Used by
hid_generic 12288 0
arc4 12288 2
i915 1314816 0
usbhid 28672 0
prime_numbers 12288 1 i915
i2c_algo_bit 12288 1 i915
drm_kms_helper 98304 1 i915
cfbfillrect 12288 1 drm_kms_helper
syscopyarea 12288 1 drm_kms_helper
cfbimgblt 12288 1 drm_kms_helper
pata_acpi 12288 0
sysfillrect 12288 1 drm_kms_helper
ath9k 73728 0
ath9k_common 12288 1 ath9k
ath9k_hw 368640 2 ath9k,ath9k_common
sysimgblt 12288 1 drm_kms_helper
fb_sys_fops 12288 1 drm_kms_helper
cfbcopyarea 12288 1 drm_kms_helper
mac80211 356352 1 ath9k
coretemp 12288 0
ata_piix 32768 0
ath 24576 3 ath9k_hw,ath9k,ath9k_common
drm 241664 3 i915,drm_kms_helper
uhci_hcd 36864 0
video 32768 1 i915
intel_agp 12288 0
pcspkr 12288 0
intel_gtt 16384 2 intel_agp,i915
fb 57344 2 i915,drm_kms_helper
ehci_pci 12288 0
ehci_hcd 65536 1 ehci_pci
agpgart 28672 3 intel_agp,intel_gtt,drm
rng_core 12288 0
fbdev 12288 1 fb
backlight 12288 2 video,i915
evdev 20480 0
mac_hid 12288 0
fb, fbdev, i915, drm, intel_agp are all there, but fbcon isn't.
Does anyone know of a possible reason why fbcon isn't loading up?
Edit: (to answer a question in the comments)
The output of grep CONFIG_FRAMEBUFFER_CONSOLE .config is:
$ grep CONFIG_FRAMEBUFFER_CONSOLE .config
CONFIG_FRAMEBUFFER_CONSOLE=m
CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
# CONFIG_FRAMEBUFFER_CONSOLE_ROTATION is not set
fbcon is configured as a module (as it seemed to be in the previous setup). I believe the second line means that it should be setting fbcon to the primary display device by default.
Update:
I loaded the module manually, using modprobe fbcon and it worked - all of the text appeared on the screen. I still have to figure out why it didn't load on boot though and how I can make it do that.
Also, I ran cat $(readlink -f /sys/class/graphics/fb0/name) and that printed inteldrmfb. So, it appears it is using a framebuffer that is built in to the i915 Intel driver.
|
To post an answer to my own question:
The reason it wasn't working was because the fbcon module wasn't being loaded during boot, even though it had been built and installed. Running modprobe fbcon to load the module immediately made the console appear on my screen. I have added fbcon to /etc/sysconfig/modules and it's initializing properly on boot again now.
It seems a little strange though, that the module was loading automatically before, without me having to do anything.
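The presence check used above (looking for fbcon in the lsmod output) is easy to script. This sketch runs against an illustrative /proc/modules excerpt rather than the live file:

```shell
# Match the module name in the first column exactly, so "fbcon" does not
# also match other names that merely contain the substring.
modules='i915 1343488 1 - Live 0x0000000000000000
fbcon 49152 70 - Live 0x0000000000000000'
if printf '%s\n' "$modules" | awk '{ print $1 }' | grep -qx fbcon; then
    echo "fbcon loaded"
else
    echo "fbcon missing: modprobe fbcon"
fi
```

On a real system, substitute `cat /proc/modules` for the sample variable.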
| How can I get my framebuffer console working? |
I have "radeon" and "amdgpu" drivers installed. I want to switch to amdgpu from radeon but I don't know how can I do that.
lspci -v | grep driver:
Kernel driver in use: radeon
lspci -v | grep modules:
Kernel modules: radeon, amdgpu
How can I switch to amdgpu? Thanks.
|
The recommended method will depend on what Linux distribution you are running, but one thing that should work is to blacklist the radeon module.
In /etc/modprobe.d, create a new .conf file and give it the following contents:
blacklist radeon
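A sketch of that step; for illustration it writes into a temporary directory, whereas on a real system the target is /etc/modprobe.d/, and many distributions additionally require regenerating the initramfs (e.g. update-initramfs -u on Debian/Ubuntu, dracut -f on Fedora) so the blacklist takes effect early in boot:

```shell
# Write the one-line blacklist file. Temporary directory for illustration;
# the real target would be /etc/modprobe.d/blacklist-radeon.conf (the file
# name is an arbitrary choice, only the .conf suffix matters).
target_dir=$(mktemp -d)
printf 'blacklist radeon\n' > "$target_dir/blacklist-radeon.conf"
cat "$target_dir/blacklist-radeon.conf"
```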
| How to change in use driver linux |
I'm doing some research that has to do with GPGPU resilience on NVIDIA graphics cards, and I've been looking for a way to simulate hardware failure as accurately as possible. I know about cudaDeviceReset() and using intentionally failing assert()s within the kernel; correct me if I'm wrong, but I don't think these accurately portray realistic hardware failure.
Ultimately what I'm trying to achieve is effectively turning off the device during execution, have the host detect this and try to recover from it.
What I'd like to know is if there is some method of "power cycling" the GPU via the Linux kernel.
I'm using CentOS 7 and my device's compute capability is 2.1. Kindly see below for output from uname -a.
Linux heisenbug 3.10.0-327.10.1.el7.x86_64 #1 SMP Tue Feb 16 17:03:50 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
|
You can manipulate some of the PCI bus registers of the device fairly easily with setpci. Note: this is dangerous and may crash your system!
For example, find the pci bus and slot for your graphics board:
$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
$ lspci -s 00:02.0 -v
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
DeviceName: Onboard IGD
Subsystem: Holco Enterprise Co, Ltd/Shuttle Computer Device 4018
Flags: bus master, fast devsel, latency 0, IRQ 29
Memory at f7400000 (64-bit, non-prefetchable) [size=4M]
Memory at e0000000 (64-bit, prefetchable) [size=256M]
I/O ports at f000 [size=64]
Expansion ROM at <unassigned> [disabled]
Capabilities: <access denied>
Kernel driver in use: i915
Kernel modules: i915
You can read and write registers using setpci. You need to be root to read some registers, and to write any of them. The register names are listed with setpci --dumpregs. E.g.:
$ setpci -s 00:02.0 command
0407
The 16-bit PCI config command register is an important one. The bit meanings can be found in the Linux header (include/uapi/linux/pci_regs.h). The low 3 bits are set to 1 to enable the device to respond to I/O and memory cycles from the CPU, and to act as bus master so that it can DMA into the CPU's main memory.
If you disable these bits, the device will no longer respond to your driver.
Beware, this may crash your system. Do not test this lightly:
$ sudo setpci -s 00:02.0 command=0000 # DONT DO THIS!
You can try writing a script to set the register to 0, waiting a few seconds while your graphics tries to draw, then setting the register back to its original setting (command=0407). All numbers are in hex (without any 0x prefix). As mentioned in the comments, you may need to provide 4 digits for the value, despite the fact that the width of named registers (like command) are known by setpci. You can provide an explicit width with a suffix to the register name of .b (8bits), .w (16), or .l (32).
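The bit layout described above can be verified with plain shell arithmetic. This sketch only decodes the example value 0407 read earlier, so it is safe to run anywhere; the bit names follow include/uapi/linux/pci_regs.h:

```shell
# Decode the low bits of the PCI command register value 0x0407.
cmd=0x0407
[ $(( cmd & 0x1 )) -ne 0 ] && echo "I/O space enabled"
[ $(( cmd & 0x2 )) -ne 0 ] && echo "memory space enabled"
[ $(( cmd & 0x4 )) -ne 0 ] && echo "bus mastering enabled"
```

All three lines print for 0x0407, matching the "bus master" flag lspci reported for the device above.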
Resetting the hardware is more difficult as it often requires you either to know of a particular register in the hardware to reset, or in the parent bus hardware.
| How to simulate GPU hardware failure? |
I am working with an embedded board which includes an Intel Atom N2600 processor and a GMA 3600 series GPU based on the PowerVR SGX 545 graphics core (developed by Imagination) [Link1].
As far as I know, Intel only offers Windows 7 support for this GPU, through the driver in Link2. In my case I am working under Linux, so I need to know whether there is any possibility of enabling the GPU's capabilities using a compatible driver loaded when the X server is started.
Is this impossible? Does the driver only work with a 3.2 Linux kernel, as in this answer, Link3?
|
Video cards of this series have always had trouble with Linux support.
I know that
Fedora 17 supports the CedarView chipset (with the gma500 driver).
Ubuntu 13.04 works too, with the community driver; or Ubuntu 12.04.1 with the proprietary driver.
I found some useful information about Arch support.
But if you want 3D/video-decoding support, you can get into trouble.
P.S.: some years ago I had a netbook with this chipset. So I sold it. :)
| Intel GMA 3600 Linux support |
On my system I'm unable to install the recommended graphics driver, so something must be wrong with my installation.
The GPU chipset is ATI ES1000, but the recommended driver is NVIDIA NVS300 downloaded from the server vendor's site.
The maximum graphics resolution of the onboard graphics controller ATI ES1000 with the native driver of Microsoft Windows 2012 is 1280 x 1024. ATI has not planned to support the ATI ES1000 graphics chip with Windows 2012, so there's no OEM driver available which could be installed on PRIMERGY TX100 S3 or TX100 S3p with Microsoft Windows 2012. For higher graphics resolutions on PRIMERGY TX100 S3 or TX100 S3p, the PCIe graphics controller NVIDIA® Quadro® NVS 300 can be used.
Before installation I switched to runlevel 3 (init 3) and blacklisted nouveau driver (echo blacklist nouveau > /etc/modprobe.d/nvidia.conf). None of the conflicting drivers is present:
# lsmod | grep -e nouveau -e rivafb -e nvidiafb
(empty)
These are all the steps that should be needed. What else could be wrong on my Oracle Linux (based on Red Hat Enterprise Linux 6.7, kernel Linux 3.8.13-118.2.1.el6uek.x86_64, GNOME 2.28.2)? I was thinking of an incompatible kernel or some GPU driver conflict.
List of OS supported by the driver:
Red Hat Enterprise Linux 6.6 (x86_64)
Red Hat Enterprise Linux 6.7 (x86_64)
Red Hat Enterprise Linux 7 GA (x86_64)
Red Hat Enterprise Linux 7.1 (x86_64)
SUSE Linux Enterprise Server 11 SP3 (x86_64)
SUSE Linux Enterprise Server 11 SP4 (x86_64)
The main error:
ERROR: Unable to load the kernel module 'nvidia.ko'. This happens most frequently when this kernel module was built against the wrong or
improperly configured kernel sources, with a version of gcc that
differs from the one used to build the target kernel, or if a driver
such as rivafb, nvidiafb, or nouveau is present and prevents the
NVIDIA kernel module from obtaining ownership of the NVIDIA graphics
device(s), or no NVIDIA GPU installed in this system is supported by
this NVIDIA Linux graphics driver release.
Output from /var/log/nvidia-installer.log:
-> Kernel module compilation complete.
-> Unable to determine if Secure Boot is enabled: No such file or directory
ERROR: Unable to load the kernel module 'nvidia.ko'. This happens most frequently when this kernel module was built against the wrong or improperly configured kernel sources, with a version of gcc that differs from the one used to build the target kernel, or if a driver such as rivafb, nvidiafb, or nouveau is present and prevents the NVIDIA kernel module from obtaining ownership of the NVIDIA graphics device(s), or no NVIDIA GPU installed in this system is supported by this NVIDIA Linux graphics driver release.
Please see the log entries 'Kernel module load error' and 'Kernel messages' at the end of the file '/var/log/nvidia-installer.log' for more information.
-> Kernel module load error: insmod: error inserting './kernel/nvidia.ko': -1 No such device
-> Kernel messages:
survey done event(5c) band:0 for wlan0
==>rtw_ps_processor .fw_state(8)
==>ips_enter cnts:5
===> rtw_ips_pwr_down...................
====> rtw_ips_dev_unload...
usb_read_port_cancel
usb_read_port_complete()-1284: RX Warning! bDriverStopped(0) OR bSurpriseRemoved(0) bReadPortCancel(1)
usb_read_port_complete()-1284: RX Warning! bDriverStopped(0) OR bSurpriseRemoved(0) bReadPortCancel(1)
usb_read_port_complete()-1284: RX Warning! bDriverStopped(0) OR bSurpriseRemoved(0) bReadPortCancel(1)
usb_read_port_complete()-1284: RX Warning! bDriverStopped(0) OR bSurpriseRemoved(0) bReadPortCancel(1)
usb_write_port_cancel
==> rtl8192cu_hal_deinit
bkeepfwalive(0)
card disble without HWSM...........
<=== rtw_ips_pwr_down..................... in 29ms
usb 2-1.2: USB disconnect, device number 7
usb 2-1.2: new low-speed USB device number 8 using ehci-pci
usb 2-1.2: New USB device found, idVendor=093a, idProduct=2510
usb 2-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
usb 2-1.2: Product: USB Optical Mouse
usb 2-1.2: Manufacturer: PixArt
input: PixArt USB Optical Mouse as /devices/pci0000:00/0000:00:1d.0/usb2/2-1/2-1.2/2-1.2:1.0/input/input7
hid-generic 0003:093A:2510.0005: input,hidraw1: USB HID v1.11 Mouse [PixArt USB Optical Mouse] on usb-0000:00:1d.0-1.2/input0
NVRM: No NVIDIA graphics adapter found!
NVRM: NVIDIA init module failed!
ERROR: Installation has failed. Please see the file '/var/log/nvidia-installer.log' for details. You may find suggestions on fixing installation problems in the README available on the Linux driver download page at www.nvidia.com.
|
The ES1000 is built into your motherboard; the NVS300 is an optional extra, which is why you are getting an error message saying NVRM: No NVIDIA graphics adapter found!
The text you quoted says that if you want higher resolution than what the ATI ES1000 supports, then you can install an Nvidia NVS300, which is a completely different and separate GPU card.
The NVS300 is also a fairly old card. You could probably install any other recent AMD or Nvidia card that would physically fit into the slot (it would need a pci-e x16 slot) and into the case (you might need a small fanless card).
e.g. an Nvidia GTX-750 (around $110USD) completely wipes the floor with an NVS300, it's so much faster that it's beyond comparison - and the 750 isn't even close to a top-of-the-range modern GPU. Even much cheaper cards like the ~$40USD GT610 are significantly faster than the NVS300.
According to http://www.fujitsu.com/tw/Images/ds-py-tx100-s3-en.pdf
your system has 1 pci-e 3.0 slot that is physically x16 (so it can take a full size x16 GPU card) but only x8 electronically, so the card would run fine but with slightly reduced bandwidth (GPUs don't use anywhere near the full bandwidth of pci-e 3.0 @ x16 anyway).
Finally, if you just want the ES1000 built-in GPU to work, it should Just Work with a reasonably modern linux kernel and X. Don't expect high resolution or fast graphics, though.
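As a sanity check before (re)installing the driver, it helps to confirm which display adapters the kernel actually sees; if no NVIDIA device shows up, the installer's "No NVIDIA graphics adapter found" complaint is expected. A minimal sketch of the filter (the sample lines below are illustrative, not from this server; on real hardware you would pipe `lspci -nn` into it):

```shell
# Keep only display-adapter lines from lspci output.
# Real usage:  lspci -nn | find_gpus
find_gpus() { grep -Ei 'vga|3d|display'; }

# Illustrative sample input (not from this machine):
find_gpus <<'EOF'
00:01.0 VGA compatible controller [0300]: ATI ES1000 [1002:515e]
02:00.0 Ethernet controller [0200]: Intel I350 [8086:1521]
EOF
```

With only the built-in ES1000 present, no NVIDIA line appears, which matches the installer's error.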
| Unable to load the kernel module 'nvidia.ko' |
1,608,140,233,000 |
On a server with Nvidia Tesla cards we decided to restrict user access to the GPUs. Our server has 2 GPUs.
# ls -las /dev/nvidia*
0 crw-rw-rw-. 1 root root 195, 0 Dec 2 22:02 /dev/nvidia0
0 crw-rw-rw-. 1 root root 195, 1 Dec 2 22:02 /dev/nvidia1
I found this solution: Defining User Restrictions for GPUs.
I created a local group gpu_cuda:
sudo groupadd gpu_cuda
then added the users to the gpu_cuda group.
I created a config file at /etc/modprobe.d/nvidia.conf with content:
#!/bin/bash
options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=0 NVreg_DeviceFileMode=0777 NVreg_ModifyDeviceFiles=0
Create script in /etc/init.d/gpu-restriction
#!/bin/bash
### BEGIN INIT INFO
# Provides: gpu-restriction
# Required-Start: $all
# Required-Stop:
# Default-Start: 2 3 4 5
# Default-Stop:
# Short-Description: Start daemon at boot time
# Description: Enable service provided by daemon.
# permissions if needed.
### END INIT INFO
set -e
start() {
/sbin/modprobe --ignore-install nvidia;
/sbin/modprobe nvidia_uvm;
test -c /dev/nvidia-uvm || mknod -m 777 /dev/nvidia-uvm c $(cat /proc/devices | while read major device; do if [ "$device" == "nvidia-uvm" ]; then echo $major; break; fi ; done) 0 && chown :root /dev/nvidia-uvm;
test -c /dev/nvidiactl || mknod -m 777 /dev/nvidiactl c 195 255 && chown :root /dev/nvidiactl;
devid=-1;
for dev in $(ls -d /sys/bus/pci/devices/*);
do vendorid=$(cat $dev/vendor);
if [ "$vendorid" == "0x10de" ];
then class=$(cat $dev/class);
classid=${class%%00};
if [ "$classid" == "0x0300" -o "$classid" == "0x0302" ];
then devid=$((devid+1));
test -c /dev/nvidia${devid} || mknod -m 750 /dev/nvidia${devid} c 195 ${devid} && chown :gpu_cuda /dev/nvidia${devid};
fi;
fi;
done
}
stop() {
:
}
case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
start
;;
status)
# code to check status of app comes here
# example: status program_name
;;
*)
echo "Usage: $0 {start|stop|status|restart}"
esac
exit 0
I rebooted the server and ran
/etc/init.d/gpu-restriction start
Checking the result, the first time it is good:
# ls -las /dev/nvidia*
0 crw-rw-rw-. 1 root gpu_cuda 195, 0 Dec 2 22:02 /dev/nvidia0
0 crw-rw-rw-. 1 root gpu_cuda 195, 1 Dec 2 22:02 /dev/nvidia1
but the second time, the group is back to root:
# ls -las /dev/nvidia*
0 crw-rw-rw-. 1 root root 195, 0 Dec 2 22:02 /dev/nvidia0
0 crw-rw-rw-. 1 root root 195, 1 Dec 2 22:02 /dev/nvidia1
Why does the group revert, and how can I solve this problem?
|
Nvidia provides a way to set the group ID of its special device files without needing to resort to any extra custom script:
Whether a user-space NVIDIA driver component does so itself, or
invokes nvidia-modprobe, it will default to creating the device files
with the following attributes:
UID: 0 - 'root'
GID: 0 - 'root'
Mode: 0666 - 'rw-rw-rw-'
Existing device files are changed if their attributes don't match these defaults.
If you want the NVIDIA driver to create the device files with different attributes, you can
specify them with the "NVreg_DeviceFileUID" (user),
"NVreg_DeviceFileGID" (group) and "NVreg_DeviceFileMode" NVIDIA Linux
kernel module parameters.
The nvidia Linux kernel module parameters can be set in the /etc/modprobe.d/nvidia.conf file; mine reads:
...
options nvidia \
NVreg_DeviceFileGID=27 \
NVreg_DeviceFileMode=432 \
NVreg_DeviceFileUID=0 \
NVreg_ModifyDeviceFiles=1\
...
And I can indeed ls -ails /dev/nvidia0:
3419 0 crw-rw---- 1 root video 195, 0 4 déc. 15:01 /dev/nvidia0
and witness the fact that access to root-owned special files is actually restricted to the members of the video group (GID=27 on my system).
Therefore, all you need to do is to get the group id of your gpu_cuda group and modify (or setup) your nvidia.conf accordingly.
Credits : /usr/share/doc/nvidia-drivers-470.141.03/html/faq.html (you'll probably need to adapt the path to your driver version).
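For example, the options line can be built mechanically. Note that NVreg_DeviceFileMode is given in decimal: 432 is octal 0660 (rw-rw----). A small sketch, assuming the gpu_cuda group already exists:

```shell
# Numeric GID of the gpu_cuda group (empty if the group is missing).
gid=$(getent group gpu_cuda 2>/dev/null | cut -d: -f3)

# The mode parameter is decimal; octal 0660 (rw-rw----) is 432.
mode=$(printf '%d' 0660)

echo "options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=${gid:-0} NVreg_DeviceFileMode=${mode} NVreg_ModifyDeviceFiles=1"
```

Write the resulting line to /etc/modprobe.d/nvidia.conf, then reboot (or reload the nvidia module) for it to take effect.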
| Restricting user access to nvidia GPU? |
1,608,140,233,000 |
I've been trying to get my nvidia GPU (960m) to work on my Arch install, but so far it doesn't.
I use the nvidia drivers. I ran nvidia-xconfig, which modified my xorg.conf as follows:
Section "Monitor"
Identifier "Monitor0"
VendorName "Unknown"
ModelName "Unknown"
Option "DPMS"
EndSection
Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Device0"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Depth 24
EndSubSection
EndSection
I don't really get where the issue is, but I believe it may have to do with the screen or the monitor.
Lspci returns
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06)
Subsystem: Lenovo HD Graphics 530
Kernel driver in use: i915
--
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 960M] (rev a2)
Subsystem: Lenovo GM107M [GeForce GTX 960M]
Kernel driver in use: nvidia
If any of you have any idea of what I did wrong, please point it out; I'll be happy to correct it!
edit :
By looking at the Xorg log file, I found out that it was using the Nouveau drivers despite the fact that I uninstalled those (I guess for the integrated GPU), since it ran into what seems to be an error when loading the nvidia driver:
(WW) Open ACPI failed (/var/run/acpid.socket) (No such file or directory)
any idea what that means ?
I found out that I had to have acpid installed (which it was not), but also that I had to add it to the rc.conf file? No idea what that is, and I don't seem to have one in /etc...
edit: I've already posted a similar question two weeks or so ago, but my internet went down, so I wasn't able to answer anything; my bad.
|
I also have a 960m in my laptop, and when I first installed Arch it was a massive pain to find all the resources I needed to fix it.
A good place to start is https://wiki.archlinux.org/index.php/NVIDIA_Optimus
My personal /etc/X11/xorg.conf looks like this:
Section "Module"
Load "modesetting"
EndSection
Section "Device"
Identifier "nvidia"
Driver "nvidia"
BusID "1:0:0"
Option "AllowEmptyInitialConfiguration"
EndSection
In your post you showed that the bus ID for the nvidia card was 01:00.0; for the X11 config we need to change it to look like 1:0:0.
Once you get your /etc/X11/xorg.conf setup properly make sure to follow the instructions on the wiki page for your display manager of choice.
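The hex lspci slot can be converted to that decimal form mechanically. A tiny sketch (to_busid is a hypothetical helper, not part of any tool):

```shell
# Convert an lspci slot like "01:00.0" (hex bus:device.function)
# to the decimal bus:device:function form used above for BusID.
to_busid() {
  local bus=${1%%:*} rest=${1#*:}
  printf '%d:%d:%d\n' "0x$bus" "0x${rest%%.*}" "0x${rest#*.}"
}

to_busid 01:00.0   # prints 1:0:0
```

This matters once the bus number goes past 9, e.g. a card at 0a:00.0 becomes 10:0:0.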
| Nvidia GPU not used |
1,608,140,233,000 |
I've just installed Mint 19 Cinnamon on my Acer Aspire 5830TG as a second OS alongside Win7, and immediately have very high temperature readings:
$ sensors
coretemp-isa-0000
Adapter: ISA adapter
Package id 0: +87.0°C (high = +86.0°C, crit = +100.0°C)
Core 0: +76.0°C (high = +86.0°C, crit = +100.0°C)
Core 1: +87.0°C (high = +86.0°C, crit = +100.0°C)
maxing out, according to psensor, at CPU 93°C and GPU 72°C.
I only have Firefox running with 8 tabs open, and a console; that's it.
I have already changed from xserver-xorg-video-nouveau to the recommended nvidia-driver-390 (the device is GeForce GT 540M)
It definitely doesn't look like that under the other OS, win7.
I have to keep my palms in the air in order to write this and not to burn myself.
edit:
System: Host: blackstar Kernel: 4.15.0-20-generic x86_64 bits: 64 compiler: gcc v: 7.3.0 Desktop: Cinnamon 4.0.8
Distro: Linux Mint 19.1 Tessa base: Ubuntu 18.04 bionic
Machine: Type: Laptop System: Acer product: Aspire 4830TG v: V1.12 serial: <filter>
Mobo: Acer model: JM40_HR serial: <filter> BIOS: Acer v: 1.12 date: 08/14/2012
Battery: ID-1: BAT0 charge: 25.0 Wh condition: 38.4/66.6 Wh (58%) model: SANYO Li_Ion_4000mA status: Charging
CPU: Topology: Dual Core model: Intel Core i5-2430M bits: 64 type: MT MCP arch: Sandy Bridge rev: 7 L2 cache: 3072 KiB
flags: lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 19155
Speed: 798 MHz min/max: 800/3000 MHz Core speeds (MHz): 1: 798 2: 798 3: 798 4: 798
Graphics: Device-1: Intel 2nd Generation Core Processor Family Integrated Graphics vendor: Acer Incorporated ALI driver: i915
v: kernel bus ID: 00:02.0
Device-2: NVIDIA GF108M [GeForce GT 540M] vendor: Acer Incorporated ALI driver: nvidia v: 390.116 bus ID: 01:00.0
Display: x11 server: X.Org 1.19.6 driver: modesetting,nvidia unloaded: fbdev,nouveau,vesa
resolution: 1024x768~60Hz, 1366x768~60Hz
OpenGL: renderer: GeForce GT 540M/PCIe/SSE2 v: 4.6.0 NVIDIA 390.116 direct render: Yes
Audio: Device-1: Intel 6 Series/C200 Series Family High Definition Audio vendor: Acer Incorporated ALI
driver: snd_hda_intel v: kernel bus ID: 00:1b.0
Sound Server: ALSA v: k4.15.0-20-generic
Network: Device-1: Qualcomm Atheros AR8151 v2.0 Gigabit Ethernet vendor: Acer Incorporated ALI driver: atl1c v: 1.0.1.1-NAPI
port: 2000 bus ID: 02:00.0
IF: enp2s0 state: down mac: <filter>
Device-2: Intel Centrino Advanced-N 6205 [Taylor Peak] driver: iwlwifi v: kernel port: 2000 bus ID: 03:00.0
IF: wlp3s0 state: up mac: <filter>
IF-ID-1: docker0 state: down mac: <filter>
Drives: Local Storage: total: 698.64 GiB used: 8.83 GiB (1.3%)
ID-1: /dev/sda vendor: Western Digital model: WD7500BPVT-22HXZT3 size: 698.64 GiB
Partition: ID-1: / size: 96.82 GiB used: 8.83 GiB (9.1%) fs: ext4 dev: /dev/sda5
Sensors: System Temperatures: cpu: 70.0 C mobo: N/A gpu: nvidia temp: 66 C
Fan Speeds (RPM): N/A
Info: Processes: 208 Uptime: 40m Memory: 7.65 GiB used: 1.29 GiB (16.8%) Init: systemd runlevel: 5 Compilers: gcc: 7.3.0
Shell: bash v: 4.4.19 inxi: 3.0.27
|
Check your CPU heatsink! Mine had disengaged from the mounting brackets, with the same symptoms. Replace it with a good model, selected for your CPU type.
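After reseating or replacing the heatsink, it's worth watching the readings for a while to confirm the fix, e.g. with `watch -n5 sensors`. To log just the package temperature, a small sketch (pkg_temp is a hypothetical helper; here it is run against the readings quoted in the question rather than a live `sensors` call):

```shell
# Pull the package temperature field out of `sensors` output.
# Real usage:  sensors | pkg_temp
pkg_temp() { awk '/^Package id/ {print $4; exit}'; }

pkg_temp <<'EOF'
Package id 0:  +87.0°C  (high = +86.0°C, crit = +100.0°C)
Core 0:        +76.0°C  (high = +86.0°C, crit = +100.0°C)
EOF
```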
| linux mint 19 cinnamon cpu temperature goes crazy!
1,608,140,233,000 |
So I'll describe the set-up, then the exact requirements, then the list of options I have tried, and then I'll ask if there's a better approach or which is the best option among the ones mentioned.
So we are a group of Machine Learning researchers, We have one very powerful workstation machine, and other decently powerful machines one for each of us.
Requirements :
That the GPU is efficiently or equally allocated among all the active users at any given time while all users are working on the workstation simultaneously. (Ram is huge enough to not worry about and also we don't mind having common hard disks) (Some kind of GPU Virtualization?)
We are looking for an approach that's up and running in 2-3 days.
The working OS is Ubuntu 16 on all the machines
The Proposals :
Setting up multiple VMs in the Workstation, one per user and SSH
from our current machines. Running a VM over another OS seems like a
big overhead plus we'd rather like to spend on more hardware than
software licenses. VMWare ESXI bare-metal seems one way to go.
The multiseat approach: it would allow multiple users at the same
time, though it requires one set of keyboard, mouse and video card
per seat. We do have a very powerful GPU dedicated just to the
display, but again it's just one, and multi-seat requires one per
seat. While there are slow workarounds to operate with a single
video card (Xephyr), we'd still need to allocate the computing GPU
among users efficiently.
Multiple users SSH into multiple Virtual Terminals. The multiple
Virtual Terminals in Unix were made in the time where the computers
were expensive and a single computer would be shared among different
users using Terminals. We'd still need a way to virtualize the GPU.
But if all else works out we can still manage, since there are four
users and two computing GPUs, so we could run two programs at once,
assigning each to one GPU manually through the code (Tensorflow). But
if there's an approach to virtualize the two physical GPUs into 4
virtual GPUs, that would be best (except Nvidia vGPU).
rCUDA, have sent them a request form. Waiting.
Some cluster management system such as Apache Mesos. Since single
or multiple computers a CMS won't mind and it's made to virtualize
and allocate it's resources efficiently among it's clients.
LTSP, haven't looked much into it.
Now I know I might sound naive in many of the above suggestions, so please give a suggestion as per your knowledge. In case anything in the question seems vague, please point to it and I'll clear it up.
|
The best and simplest workaround was:
Jupyter Notebook (to run the code on the other machine) + SSH (for access and data transfer) + using TF to assign GPUs.
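The GPU-assignment part can also be done from the shell, pinning each user's jobs to one physical GPU via the CUDA_VISIBLE_DEVICES environment variable, which TensorFlow honors. A minimal sketch (gpu_for is a hypothetical helper that round-robins a user index over the 2 GPUs):

```shell
# Map a user index onto one of the 2 physical GPUs, round-robin.
gpu_for() { echo $(( $1 % 2 )); }

# Usage, e.g. for user index 3:
#   CUDA_VISIBLE_DEVICES=$(gpu_for 3) python train.py
gpu_for 0   # prints 0
gpu_for 3   # prints 1
```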
| Best way to share the resources of a powerful workstation across multiple users? |
1,608,140,233,000 |
My graphics card is not recognized on my laptop with Debian Jessie installed and an Nvidia GeForce GTX 850M.
glewinfo tells me it uses Mesa DRI with Intel (OpenGL 3.0) instead of Nouveau with the actual GPU (OpenGL 4.4+).
nvidia-detect can't find my graphic card.
lspci identifies my graphic card as a 3d controller while the web tells me it should be identified as a VGA controller.
I tried Bumblebee because I'm pretty sure my laptop includes that Optimus stuff but it didn't change anything.
How do I make my laptop recognize my GPU? Is it a matter of /etc config files or something? I would like to stick with the Nouveau driver. However, if there is a "debian" way (e.g. apt-get) to install the official Nvidia driver, I'll take it.
Thank you,
Here's some news. I partially recovered my desktop.
I apt-get install xserver-xorg-video-intel|nouveau|nvidia (yes, everybody!).
I didn't remove xorg.conf generated by nvidia-xconf.
I just change driver "nvidia" to "intel".
I followed punctiliously this guideline from ArchLinux community.
I succeeded in running Bumblebee and was able to run optirun glxgears. But now my desktop is at 640x480 instead of 1280x1024; that's probably a separate problem.
Here's my dpkg -l|grep nvidia
ii bumblebee-nvidia 3.2.1-7 amd64 NVIDIA Optimus support using the proprietary NVIDIA driver
ii glx-alternative-nvidia 0.5.1 amd64 allows the selection of NVIDIA as GLX provider
ii libegl1-nvidia:amd64 340.65-2 amd64 NVIDIA binary EGL libraries
ii libgl1-nvidia-glx:amd64 340.65-2 amd64 NVIDIA binary OpenGL libraries
ii libgl1-nvidia-glx:i386 340.65-2 i386 NVIDIA binary OpenGL libraries
ii libgl1-nvidia-glx-i386 340.65-2 i386 NVIDIA binary OpenGL 32-bit libraries
ii libgles1-nvidia:amd64 340.65-2 amd64 NVIDIA binary OpenGL|ES 1.x libraries
ii libgles2-nvidia:amd64 340.65-2 amd64 NVIDIA binary OpenGL|ES 2.x libraries
ii libnvidia-eglcore:amd64 340.65-2 amd64 NVIDIA binary EGL core libraries
ii libnvidia-ml1:amd64 340.65-2 amd64 NVIDIA Management Library (NVML) runtime library
ii nvidia-alternative 340.65-2 amd64 allows the selection of NVIDIA as GLX provider
ii nvidia-detect 340.65-2 amd64 NVIDIA GPU detection utility
ii nvidia-driver 340.65-2 amd64 NVIDIA metapackage
ii nvidia-driver-bin 340.65-2 amd64 NVIDIA driver support binaries
ii nvidia-installer-cleanup 20141201+1 amd64 cleanup after driver installation with the nvidia-installer
ii nvidia-kernel-common 20141201+1 amd64 NVIDIA binary kernel module support files
ii nvidia-kernel-dkms 340.65-2 amd64 NVIDIA binary kernel module DKMS source
ii nvidia-modprobe 340.46-1 amd64 utility to load NVIDIA kernel modules and create device nodes
ii nvidia-settings 340.46-2 amd64 tool for configuring the NVIDIA graphics driver
ii nvidia-support 20141201+1 amd64 NVIDIA binary graphics driver support files
ii nvidia-vdpau-driver:amd64 340.65-2 amd64 Video Decode and Presentation API for Unix - NVIDIA driver
ii nvidia-xconfig 340.46-1 amd64 X configuration tool for non-free NVIDIA drivers
ii xserver-xorg-video-nvidia 340.65-2 amd64 NVIDIA binary Xorg driver
Link to my xorg.conf
Note: This file is not in /etc/X11/xorg.conf.d but directly in /etc/X11/
|
The poster has a Nvidia Optimus laptop. It turns out, per the Bumblebee page on the Debian Wiki, that you need to do:
apt-get install bumblebee-nvidia primus
and remove any existing xorg.conf and prevent debconf from creating a xorg.conf during the installation of the packages above.
@Spiralwise confirmed that this works for him.
Note courtesy of @Spiralwise: once bumblebee-nvidia and primus are installed, software that needs to run on the GPU must be launched like this: primusrun my_program.
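To verify that primusrun actually uses the discrete GPU, compare the OpenGL renderer string with and without it (glxinfo comes from the mesa-utils package): `glxinfo | grep "OpenGL renderer"` should name the Intel IGP, while `primusrun glxinfo | grep "OpenGL renderer"` should name the GeForce. A sketch of the filter, run here against illustrative output rather than a live glxinfo call:

```shell
# Print just the renderer line from glxinfo output.
renderer() { grep -m1 'OpenGL renderer'; }

# Illustrative `primusrun glxinfo` output on this kind of setup:
renderer <<'EOF'
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 850M/PCIe/SSE2
EOF
```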
| My graphic card is not recognized on laptop/debian |
1,608,140,233,000 |
My GPU is NVIDIA - GeForce RTX 3090 Ti, and the OS is Ubuntu 18.04.
As my code didn’t work, I checked the versions of python, pytorch, cuda, and cudnn.
Python: 3.6
torch.__version__: 1.4.0
torch.version.cuda : 10.1 (nvidia-smi shows CUDA version 11.3)
cudnn: 7.6.3
These are not compatible with the 3090 Ti. I successfully upgraded Python to 3.9 and PyTorch to 1.12.1+cu102.
However, “pip3 install cuda-python” and “pip install nvidia-cudnn” did not work for me. So I followed the steps on the website.
For cuda (tried version 11.8): https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&Distribution=Ubuntu&target_version=18.04&target_type=deb_local
For cudnn (tried version 8.6.0, tar file installation): Installation Guide :: NVIDIA Deep Learning cuDNN Documentation
After the installation steps, nvidia-smi shows “Failed to initialize NVML: Driver/library version mismatch”.
I found that rebooting would work, but the system is stuck at the rebooting step.
dpkg -l |grep nvidia
iU libnvidia-cfg1-520:amd64 520.61.05-0ubuntu1 amd64 NVIDIA binary OpenGL/GLX configuration library
ii libnvidia-common-465 465.19.01-0ubuntu1 all Shared files used by the NVIDIA libraries
iU libnvidia-common-520 520.61.05-0ubuntu1 all Shared files used by the NVIDIA libraries
rc libnvidia-compute-465:amd64 465.19.01-0ubuntu1 amd64 NVIDIA libcompute package
iU libnvidia-compute-520:amd64 520.61.05-0ubuntu1 amd64 NVIDIA libcompute package
iU libnvidia-compute-520:i386 520.61.05-0ubuntu1 i386 NVIDIA libcompute package
ii libnvidia-container-tools 1.11.0-1 amd64 NVIDIA container runtime library (command-line tools)
ii libnvidia-container1:amd64 1.11.0-1 amd64 NVIDIA container runtime library
iU libnvidia-decode-520:amd64 520.61.05-0ubuntu1 amd64 NVIDIA Video Decoding runtime libraries
iU libnvidia-decode-520:i386 520.61.05-0ubuntu1 i386 NVIDIA Video Decoding runtime libraries
iU libnvidia-encode-520:amd64 520.61.05-0ubuntu1 amd64 NVENC Video Encoding runtime library
iU libnvidia-encode-520:i386 520.61.05-0ubuntu1 i386 NVENC Video Encoding runtime library
iU libnvidia-extra-520:amd64 520.61.05-0ubuntu1 amd64 Extra libraries for the NVIDIA driver
iU libnvidia-fbc1-520:amd64 520.61.05-0ubuntu1 amd64 NVIDIA OpenGL-based Framebuffer Capture runtime library
iU libnvidia-fbc1-520:i386 520.61.05-0ubuntu1 i386 NVIDIA OpenGL-based Framebuffer Capture runtime library
iU libnvidia-gl-520:amd64 520.61.05-0ubuntu1 amd64 NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD
iU libnvidia-gl-520:i386 520.61.05-0ubuntu1 i386 NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD
rc nvidia-compute-utils-465 465.19.01-0ubuntu1 amd64 NVIDIA compute utilities
iU nvidia-compute-utils-520 520.61.05-0ubuntu1 amd64 NVIDIA compute utilities
ii nvidia-container-toolkit 1.11.0-1 amd64 NVIDIA Container toolkit
ii nvidia-container-toolkit-base 1.11.0-1 amd64 NVIDIA Container Toolkit Base
rc nvidia-dkms-465 465.19.01-0ubuntu1 amd64 NVIDIA DKMS package
iU nvidia-dkms-520 520.61.05-0ubuntu1 amd64 NVIDIA DKMS package
iU nvidia-driver-520 520.61.05-0ubuntu1 amd64 NVIDIA driver metapackage
rc nvidia-kernel-common-465 465.19.01-0ubuntu1 amd64 Shared files used with the kernel module
iU nvidia-kernel-common-520 520.61.05-0ubuntu1 amd64 Shared files used with the kernel module
iU nvidia-kernel-source-520 520.61.05-0ubuntu1 amd64 NVIDIA kernel source package
iU nvidia-modprobe 520.61.05-0ubuntu1 amd64 Load the NVIDIA kernel driver and create device files
ii nvidia-opencl-dev:amd64 9.1.85-3ubuntu1 amd64 NVIDIA OpenCL development files
ii nvidia-prime 0.8.16~0.18.04.1 all Tools to enable NVIDIA’s Prime
iU nvidia-settings 520.61.05-0ubuntu1 amd64 Tool for configuring the NVIDIA graphics driver
iU nvidia-utils-520 520.61.05-0ubuntu1 amd64 NVIDIA driver support binaries
iU xserver-xorg-video-nvidia-520 520.61.05-0ubuntu1 amd64 NVIDIA binary Xorg driver
ls -l /usr/lib/x86_64-linux-gnu/libcuda*
lrwxrwxrwx 1 root root       28 Sep 29 05:22 /usr/lib/x86_64-linux-gnu/libcudadebugger.so.1 -> libcudadebugger.so.520.61.05
-rw-r--r-- 1 root root 10934360 Sep 29 01:20 /usr/lib/x86_64-linux-gnu/libcudadebugger.so.520.61.05
lrwxrwxrwx 1 root root       12 Sep 29 05:22 /usr/lib/x86_64-linux-gnu/libcuda.so -> libcuda.so.1
lrwxrwxrwx 1 root root       20 Sep 29 05:22 /usr/lib/x86_64-linux-gnu/libcuda.so.1 -> libcuda.so.520.61.05
-rw-r--r-- 1 root root 26284256 Sep 29 01:56 /usr/lib/x86_64-linux-gnu/libcuda.so.520.61.05
dkms status
virtualbox, 5.2.42, 5.4.0-126-generic, x86_64: installed
virtualbox, 5.2.42, 5.4.0-72-generic, x86_64: installed
|
The current driver seems to be causing a black screen and freezing the machine on boot.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 520.61.05 Driver Version: 520.61.05 CUDA Version: 11.8 |
|-------------------------------+----------------------+----------------------+
I have this issue on bare metal Ubuntu 22.04 after upgrading the driver/cuda packages. However, virtual machines that get similar rtx3090 passthrough GPUs work fine with the same driver and OS versions. Perhaps because they use GPUs only for compute and not for display.
Some people say switching from HDMI input to DP might help. I haven't tested. The fix according to Nvidia rep will be out in the next release, so you can either downgrade to previous version or wait for a fix.
https://forums.developer.nvidia.com/t/nvidia-driver-520-61-05-cuda-11-8-rtx-3090-black-display-and-superslow-modesets/230217/5
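A "Failed to initialize NVML: Driver/library version mismatch" means the loaded kernel module and the user-space libraries are different versions. You can confirm that before rebooting by comparing `cat /proc/driver/nvidia/version` (the loaded module) with the version suffix on libcuda.so.* (the user-space side, 520.61.05 in the listing above). A sketch of the comparison (check_match is a hypothetical helper):

```shell
# Compare kernel-module and user-space driver version strings.
check_match() {
  if [ "$1" = "$2" ]; then
    echo "match"
  else
    echo "mismatch: reload the nvidia modules or reboot"
  fi
}

check_match 520.61.05 465.19.01   # the half-upgraded state in this question
```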
| Stuck at booting after upgrading |
1,371,138,478,000 |
Recently I installed Mint Linux 15 (Olivia) 32-bit on my friend's netbook. I am copy-pasting the output of sudo lspci -vk:
00:00.0 Host bridge: Intel Corporation Atom Processor D2xxx/N2xxx DRAM Controller (rev 03)
Subsystem: Acer Incorporated [ALI] Device 061f
Flags: bus master, fast devsel, latency 0
00:02.0 VGA compatible controller: Intel Corporation Atom Processor D2xxx/N2xxx Integrated Graphics Controller (rev 09) (prog-if 00 [VGA controller])
Subsystem: Acer Incorporated [ALI] Device 061f
Flags: bus master, fast devsel, latency 0, IRQ 46
Memory at 86000000 (32-bit, non-prefetchable) [size=1M]
I/O ports at 50d0 [size=8]
Expansion ROM at <unassigned> [disabled]
Capabilities: [d0] Power Management version 2
Capabilities: [b0] Vendor Specific Information: Len=07 <?>
Capabilities: [90] MSI: Enable+ Count=1/1 Maskable- 64bit-
Kernel driver in use: gma500
So the problem is that whenever I boot into the system it pops up a notification (not the exact words):
Running in software rendering mode. No Hardware acceleration.
I have searched the Mint Linux forum and found [this thread](http://forums.linuxmint.com/viewtopic.php?f=49&t=135578&p=727654), but it did not help much. I am also attaching the output of inxi -Fxz:
Kernel: 3.8.0-19-generic i686 (32 bit, gcc: 4.7.3)
Desktop: Gnome
Distro: Linux Mint 15 Olivia
Machine:
System: Acer product: AOD270 version: V1.06
Mobo: Acer model: JE01_CT
Bios: Insyde version: V1.06 date: 03/05/2012
CPU:
Dual core Intel Atom CPU N2600 (-HT-MCP-)
cache: 512 KB flags: (lm nx sse sse2 sse3 ssse3) bmips: 6383.8
Clock Speeds: 1: 1600.00 MHz 2: 1600.00 MHz 3: 1600.00 MHz 4: 1600.00 MHz
Graphics:
Card: Intel Atom Processor D2xxx/N2xxx
Integrated Graphics Controller bus-ID: 00:02.0
X.Org: 1.13.3 drivers: vesa (unloaded: fbdev)
Resolution: [email protected]
GLX Renderer: Gallium 0.4 on llvmpipe (LLVM 3.2, 128 bits)
GLX Version: 2.1 Mesa 9.1.1
Direct Rendering: Yes
The direct effect of disabled hardware video acceleration is that it is impossible to play video files, and since the CPU is busy with software rendering, the system is far too slow.
|
A fellow owner of a Dell 3000 / Intel-865G provided me with the following solution.
Create/edit/save the file /usr/share/X11/xorg.conf.d/00-xorg.conf (with vi, or any editor) so that it contains the following content:
Section "Screen"
Identifier "Default Screen"
DefaultDepth 24
EndSection
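To check whether the change took effect, look at which driver X actually loaded by searching the X log (typically /var/log/Xorg.0.log on a system like this). A sketch of the filter, run here against an illustrative log fragment rather than the real file:

```shell
# List the video driver modules mentioned in an Xorg log.
# Real usage:  loaded_drivers < /var/log/Xorg.0.log
loaded_drivers() { grep -oE '[a-z0-9]+_drv\.so' | sort -u; }

# Illustrative Xorg.0.log fragment:
loaded_drivers <<'EOF'
(II) LoadModule: "vesa"
(II) Loading /usr/lib/xorg/modules/drivers/vesa_drv.so
EOF
```

Seeing vesa_drv.so here matches the inxi output above (drivers: vesa), i.e. the unaccelerated fallback driver.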
| Video acceleration disabled in Mint Linux 15 (Olivia) on an Intel Atom processor |
1,371,138,478,000 |
The time has come for me to upgrade my aging gpu (9800 gt). The AMD 7950 has caught my attention because of the attractive price with pleasing benchmarks. But it is common knowledge that AMD GPUs have poor support on Linux.
What sort of performance can I expect, with say, the latest version of Ubuntu? Will I have issues starting X. Will I have issues with a dual monitor setup? Will basic games such as minecraft or neverball play as expected without problems? Will there be any issues with video playback or flash?
I tried doing some research with how this card performs on Linux, but google lacks any usable resources.
If the 7950 is completely useless when paired with a Linux system, can someone suggest a comparable nvidia gpu for the same price range with similar performance?
|
A generally good experience. I did have KDE problems (some minor crashes, really rarely) with my Radeon 7870, but this never happened on Ubuntu with Unity.
Installing the driver is pretty straightforward. I used the AMD installer to generate the .deb files and installed them by hand. Then I generated the config file with aticonfig --initial, and everything worked.
Games run, wine works, videos play, flash runs, Chrome works.
I can't recommend you a specific card (as it's not allowed), but keep in mind that Nvidia >> ATI when it comes to drivers (especially on Linux). However, an ATI card will give you better performance (on Windows at least) for a lower price. If I were to use Linux all day, I would buy an Nvidia card, no doubt. But I took the leap and bought this Radeon. I'm satisfied on Windows, but it's got some problems under Linux.
| How well will the AMD Radeon HD 7950 gpu perform on Linux? |
1,371,138,478,000 |
I am working in RHEL 7 and I need to install the Nvidia driver for my GPU. I know I have downloaded the right driver from the Nvidia website. I have also installed the linux kernel packages and those are located in /usr as in /usr/include/linux/kernel.h
It's become clear to me that the Nvidia driver is taking a path and then adding its own path to it to look for the kernel file. If I run the driver install with:
NVIDIA-Linux-x86_64-418.126.02.run --kernel-source-path /usr/include
Nvidia says that /usr/include/include/linux/kernel.h is an invalid path (note the extra include, this is the part that Nvidia adds). OK, no problem, so then I run
NVIDIA-Linux-x86_64-418.126.02.run --kernel-source-path /usr/
And that tells me that /usr/ is not a valid entry for that parameter.
I'm stuck as to what to do next. Is it OK to move the files to another directory? Or is this a known issue with Nvidia? Google searches turned up nothing on this specific issue.
|
/usr/include is the path for include files for user-space programs. The place where RHEL kernel-devel RPMs place the headers for compiling kernel modules is actually /usr/src/kernels/$(uname -r).
The Nvidia installer should actually be able to auto-detect this, because there should be a symbolic link at /lib/modules/$(uname -r)/build pointing there.
So, make sure that the kernel-devel RPM matching the exact kernel version you're running is installed, then try this one:
NVIDIA-Linux-x86_64-418.126.02.run --kernel-source-path /usr/src/kernels/$(uname -r)
Or just omit the --kernel-source-path option altogether.
As the name of the option suggests, it's supposed to be pointed at a directory hierarchy whose structure matches the root directory of a standard Linux kernel source tree. It will have its own include sub-directory, exactly as the installer is expecting.
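A quick pre-flight check along those lines (the paths are the RHEL conventions described above; the message text is just illustrative):

```shell
# Check whether kernel headers for the running kernel are present
# before launching the NVIDIA installer.
kdir=/usr/src/kernels/$(uname -r)
build=/lib/modules/$(uname -r)/build
if [ -d "$kdir" ] || [ -d "$build" ]; then
  msg="kernel sources found"
else
  msg="missing: yum install kernel-devel-$(uname -r)"
fi
echo "$msg"
```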
| Help with Nvidia driver install and --kernel-source-path |
1,371,138,478,000 |
My system heats up when running Ubuntu, with no major processing tasks. I also run Windows 10, on the same machine, and it never heats up when on Windows, so we can rule out hardware issues. My best guess is a driver issue, since this problem started after I fiddled with my system in order to emulate a Nvidia GPU on AMD Radeon, using Ocelot, but I undid all my changes, and the problem persists.
Is there any way to check which driver my system is using for the graphics card?
My Resource Monitor Screenshot:
My top result (it shows user root using 48% CPU):
top - 23:17:25 up 3:15, 1 user, load average: 3.10, 3.11, 2.93
Tasks: 346 total, 2 running, 279 sleeping, 0 stopped, 0 zombie
%Cpu(s): 15.5 us, 20.1 sy, 0.1 ni, 63.4 id, 0.2 wa, 0.0 hi, 0.8 si, 0.0 st
KiB Mem : 12209596 total, 3167008 free, 3717188 used, 5325400 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 7155104 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
374 root 20 0 220908 178912 3140 S 48.0 1.5 25:38.91 systemd-ud+
4948 ashutosh 20 0 1162840 137192 103660 S 3.3 1.1 4:00.39 Xorg
26005 ashutosh 20 0 2855536 381020 151472 S 2.6 3.1 4:27.62 firefox
5308 ashutosh 20 0 3851468 264376 75324 S 2.3 2.2 3:00.62 gnome-shell
1 root 20 0 226204 10040 6768 S 2.0 0.1 1:26.18 systemd
4788 root 20 0 29012 840 780 R 1.7 0.0 0:00.05 modprobe
18343 ashutosh 20 0 723704 39712 30020 S 1.7 0.3 0:07.60 gnome-term+
8648 ashutosh 20 0 2112580 233300 140332 S 1.3 1.9 3:30.49 Web Content
26421 ashutosh 20 0 2184448 458740 90536 S 1.3 3.8 3:24.41 WebExtensi+
339 root 19 -1 266948 137392 132132 S 1.0 1.1 0:41.54 systemd-jo+
1328 root 20 0 48000 5524 2348 S 1.0 0.0 0:58.36 systemd-ud+
14747 _apt 20 0 81832 8772 7788 S 1.0 0.1 0:25.84 http
19207 ashutosh 20 0 5137248 660948 53060 S 1.0 5.4 3:30.89 java
1698 root 20 0 1803284 35956 27112 S 0.7 0.3 0:37.50 libvirtd
2604 root 20 0 48000 4772 1600 S 0.7 0.0 0:44.92 systemd-ud+
4701 root 20 0 48000 6616 3416 S 0.7 0.1 0:43.52 systemd-ud+
21966 root 20 0 780024 31784 17092 S 0.7 0.3 0:33.25 snapd
udevadm monitor result:
ashutosh@ashutosh-Lenovo-G50-80:~$ udevadm monitor
monitor will print the received events for:
UDEV - the event which udev sends out after rule processing
KERNEL - the kernel uevent
UDEV [4527.590214] remove /kernel/slab/:0012288 (slab)
UDEV [4527.605280] remove /bus/pci/drivers/nvidia-nvswitch (drivers)
KERNEL[4527.720709] add /module/nvidia (module)
KERNEL[4527.721492] add /kernel/slab/:0012288 (slab)
KERNEL[4527.721563] add /bus/pci/drivers/nvidia-nvswitch (drivers)
KERNEL[4527.721932] remove /kernel/slab/:0012288 (slab)
KERNEL[4527.721984] remove /bus/pci/drivers/nvidia-nvswitch (drivers)
UDEV [4527.736603] add /kernel/slab/:0012288 (slab)
KERNEL[4527.744206] remove /module/nvidia (module)
UDEV [4527.746470] add /bus/pci/drivers/nvidia-nvswitch (drivers)
UDEV [4527.759300] remove /kernel/slab/:0012288 (slab)
UDEV [4527.774551] remove /bus/pci/drivers/nvidia-nvswitch (drivers)
KERNEL[4527.885168] add /module/nvidia (module)
KERNEL[4527.885935] add /kernel/slab/:0012288 (slab)
KERNEL[4527.886011] add /bus/pci/drivers/nvidia-nvswitch (drivers)
KERNEL[4527.886380] remove /kernel/slab/:0012288 (slab)
KERNEL[4527.886433] remove /bus/pci/drivers/nvidia-nvswitch (drivers)
KERNEL[4527.900224] remove /module/nvidia (module)
UDEV [4527.902492] add /kernel/slab/:0012288 (slab)
UDEV [4527.911873] add /bus/pci/drivers/nvidia-nvswitch (drivers)
UDEV [4527.926199] remove /kernel/slab/:0012288 (slab)
UDEV [4527.940766] remove /bus/pci/drivers/nvidia-nvswitch (drivers)
[... the same add/remove cycle for /module/nvidia, /kernel/slab/:0012288, and the nvidia-nvswitch driver repeats continuously, several times per second ...]
My lspci -k and dmesg output (the dmesg output just repeats forever, as shown below):
[ 7645.281540] PKCS#7 signature not signed with a trusted key
[ 7645.295973] nvidia-nvlink: Nvlink Core is being initialized, major device number 240
[ 7645.296392] NVRM: No NVIDIA graphics adapter found!
[ 7645.296614] nvidia-nvlink: Unregistered the Nvlink Core, major device number 240
[ 7645.462797] PKCS#7 signature not signed with a trusted key
[ 7645.478302] nvidia-nvlink: Nvlink Core is being initialized, major device number 240
[ 7645.478703] NVRM: No NVIDIA graphics adapter found!
[ 7645.478886] nvidia-nvlink: Unregistered the Nvlink Core, major device number 240
ashutosh@ashutosh-Lenovo-G50-80:~$ man lspci
ashutosh@ashutosh-Lenovo-G50-80:~$ lspci -k
00:00.0 Host bridge: Intel Corporation Broadwell-U Host Bridge -OPI (rev 09)
Subsystem: Lenovo Broadwell-U Host Bridge -OPI
Kernel driver in use: bdw_uncore
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 5500 (rev 09)
Subsystem: Lenovo HD Graphics 5500
Kernel driver in use: i915
Kernel modules: i915
00:03.0 Audio device: Intel Corporation Broadwell-U Audio Controller (rev 09)
Subsystem: Lenovo Broadwell-U Audio Controller
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
00:14.0 USB controller: Intel Corporation Wildcat Point-LP USB xHCI Controller (rev 03)
Subsystem: Lenovo Wildcat Point-LP USB xHCI Controller
Kernel driver in use: xhci_hcd
00:16.0 Communication controller: Intel Corporation Wildcat Point-LP MEI Controller #1 (rev 03)
Subsystem: Lenovo Wildcat Point-LP MEI Controller
Kernel driver in use: mei_me
Kernel modules: mei_me
00:1b.0 Audio device: Intel Corporation Wildcat Point-LP High Definition Audio Controller (rev 03)
Subsystem: Lenovo Wildcat Point-LP High Definition Audio Controller
Kernel driver in use: snd_hda_intel
Kernel modules: snd_hda_intel
00:1c.0 PCI bridge: Intel Corporation Wildcat Point-LP PCI Express Root Port #1 (rev e3)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:1c.2 PCI bridge: Intel Corporation Wildcat Point-LP PCI Express Root Port #3 (rev e3)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:1c.3 PCI bridge: Intel Corporation Wildcat Point-LP PCI Express Root Port #4 (rev e3)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:1c.4 PCI bridge: Intel Corporation Wildcat Point-LP PCI Express Root Port #5 (rev e3)
Kernel driver in use: pcieport
Kernel modules: shpchp
00:1d.0 USB controller: Intel Corporation Wildcat Point-LP USB EHCI Controller (rev 03)
Subsystem: Lenovo Wildcat Point-LP USB EHCI Controller
Kernel driver in use: ehci-pci
00:1f.0 ISA bridge: Intel Corporation Wildcat Point-LP LPC Controller (rev 03)
Subsystem: Lenovo Wildcat Point-LP LPC Controller
Kernel driver in use: lpc_ich
Kernel modules: lpc_ich
00:1f.2 SATA controller: Intel Corporation Wildcat Point-LP SATA Controller [AHCI Mode] (rev 03)
Subsystem: Lenovo Wildcat Point-LP SATA Controller [AHCI Mode]
Kernel driver in use: ahci
Kernel modules: ahci
00:1f.3 SMBus: Intel Corporation Wildcat Point-LP SMBus Controller (rev 03)
Subsystem: Lenovo Wildcat Point-LP SMBus Controller
Kernel driver in use: i801_smbus
Kernel modules: i2c_i801
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 10)
Subsystem: Lenovo RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller
Kernel driver in use: r8169
Kernel modules: r8169
03:00.0 Network controller: Broadcom Limited BCM43142 802.11b/g/n (rev 01)
Subsystem: Lenovo BCM43142 802.11b/g/n
Kernel driver in use: wl
Kernel modules: wl
04:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Sun LE [Radeon HD 8550M / R5 M230]
Subsystem: Lenovo Sun LE [Radeon HD 8550M / R5 M230]
Kernel driver in use: radeon
Kernel modules: radeon, amdgpu
|
Nvidia drivers cause problems with Ubuntu 18.04+. Uninstall the Nvidia drivers and everything related to them; this solved my problem:
sudo apt-get purge 'nvidia-*'
(The pattern is quoted so the shell passes it to apt-get instead of expanding it against files in the current directory.)
| My system is constantly processing something, heating up my laptop |
1,371,138,478,000 |
I'm trying to create a shell script that installs a series of things for me. One such thing is iMod. I've located self-installing shell script for iMod and have run the following commands on my bash console:
export IMOD_VERSION=4.11.12
export CUDA_VERSION=10.1
wget https://bio3d.colorado.edu/imod/AMD64-RHEL5/imod_${IMOD_VERSION}_RHEL7-64_CUDA${CUDA_VERSION}.sh
sudo sh imod_${IMOD_VERSION}_RHEL7-64_CUDA${CUDA_VERSION}.sh
Note
The issue still persists after restarting the device and disconnecting and reconnecting to it (via SSH, starting a new terminal)
Installation Output
$ export IMOD_VERSION=4.11.12
$ export CUDA_VERSION=10.1
$ wget https://bio3d.colorado.edu/imod/AMD64-RHEL5/imod_${IMOD_VERSION}_RHEL7-64_CUDA${CUDA_VERSION}.sh
--2022-02-02 03:16:12-- https://bio3d.colorado.edu/imod/AMD64-RHEL5/imod_4.11.12_RHEL7-64_CUDA10.1.sh
Resolving bio3d.colorado.edu (bio3d.colorado.edu)... 128.138.72.88
Connecting to bio3d.colorado.edu (bio3d.colorado.edu)|128.138.72.88|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 205325213 (196M) [application/x-sh]
Saving to: ‘imod_4.11.12_RHEL7-64_CUDA10.1.sh.1’
100%[===================================================================================================================>] 205,325,213 5.60MB/s in 38s
2022-02-02 03:16:51 (5.21 MB/s) - ‘imod_4.11.12_RHEL7-64_CUDA10.1.sh.1’ saved [205325213/205325213]
$ sudo sh imod_4.11.12_RHEL7-64_CUDA10.1.sh
This script will install IMOD in /usr/local and rename
any previous version, or remove another copy of this version.
It will copy IMOD-linux.csh and IMOD-linux.sh to /etc/profile.d
You can add the option -h to see a full list of options
Enter Y if you want to proceed: y
Extracting imod_4.11.12_RHEL7-64_CUDA10.1.tar.gz ...
Extracting installIMOD
Checking system and package types
Saving the Plugins directory in the existing installation
Removing link to previous version but leaving previous version
Removing an existing copy of the same version...
Unpacking IMOD in /usr/local ...
Linking imod_4.11.12 to IMOD
Restoring the Plugins directory
Copying startup scripts to /etc/profile.d: IMOD-linux.csh IMOD-linux.sh
SELinux is enabled - Trying to change security context of libraries.
The installation of IMOD 4.11.12 is complete.
You may need to start a new terminal window for changes to take effect
If there are version-specific IMOD startup commands in individual user
startup files (.cshrc, .bashrc, .bash_profile) they should be changed
or removed.
Cleaning up imod_4.11.12_RHEL7-64_CUDA10.1.tar.gz, installIMOD, and IMODtempDir
|
I had some time to try and reproduce your problem.
Stock CentOS 7.9 minimal.
Then:
export IMOD_VERSION=4.11.12
export CUDA_VERSION=10.1
wget https://bio3d.colorado.edu/imod/AMD64-RHEL5/imod_${IMOD_VERSION}_RHEL7-64_CUDA${CUDA_VERSION}.sh
sudo sh imod_${IMOD_VERSION}_RHEL7-64_CUDA${CUDA_VERSION}.sh
Output:
This script will install IMOD in /usr/local and rename
any previous version, or remove another copy of this version.
It will copy IMOD-linux.csh and IMOD-linux.sh to /etc/profile.d
You can add the option -h to see a full list of options
Enter Y if you want to proceed: Y
Extracting imod_4.11.12_RHEL7-64_CUDA10.1.tar.gz ...
Extracting installIMOD
Checking system and package types
Unpacking IMOD in /usr/local ...
Linking imod_4.11.12 to IMOD
Copying startup scripts to /etc/profile.d: IMOD-linux.csh IMOD-linux.sh
SELinux is enabled - Trying to change security context of libraries.
The installation of IMOD 4.11.12 is complete.
You may need to start a new terminal window for changes to take effect
If there are version-specific IMOD startup commands in individual user
startup files (.cshrc, .bashrc, .bash_profile) they should be changed
or removed.
Cleaning up imod_4.11.12_RHEL7-64_CUDA10.1.tar.gz, installIMOD, and IMODtempDir
It appears that the installation script installed software under /usr/local/IMOD:
[test@centos7test ~]$ ll /usr/local/
total 0
<...>
lrwxrwxrwx. 1 root root 12 Feb 3 10:31 IMOD -> imod_4.11.12
drwxr-xr-x. 13 1095 111 286 Nov 19 12:32 imod_4.11.12
<...>
Now, it's very important to log out of and back in to your shell, because it needs to pick up the following piece of code that was installed in /etc/profile.d/IMOD-linux.sh:
<...>
export IMOD_DIR=${IMOD_DIR:=/usr/local/IMOD}
# Put the IMOD programs on the path
#
if ! echo ${PATH} | grep -q "$IMOD_DIR/bin" ; then
export PATH=$IMOD_DIR/bin:$PATH
fi
<...>
This is reflected in your current $PATH env var:
[test@centos7test ~]# echo $PATH
/usr/local/IMOD/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin
I was now successfully able to locate and run both the imod and imodhelp binaries:
[test@centos7test local]# whereis imod imodhelp
imod: /usr/local/imod_4.11.12/bin/imod
imodhelp: /usr/local/imod_4.11.12/bin/imodhelp
If for some reason your machine isn't picking up the file under /etc/profile.d/IMOD-linux.sh you can force run it like so:
[test@centos7test ~]# source /etc/profile.d/IMOD-linux.sh
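If you prefer not to rely on the login shell picking up the profile script, the same PATH guard can be written as a small helper. This is only a sketch: the function name `ensure_imod_path` is mine, and `/usr/local/IMOD` simply matches the installer's default, but the logic mirrors the check the installer drops into /etc/profile.d/IMOD-linux.sh.

```shell
# Add $IMOD_DIR/bin to PATH only if it isn't already there,
# mirroring the guard in /etc/profile.d/IMOD-linux.sh, then print the result.
ensure_imod_path() {
  IMOD_DIR=${IMOD_DIR:-/usr/local/IMOD}
  case ":$PATH:" in
    *":$IMOD_DIR/bin:"*) ;;               # already on PATH, do nothing
    *) PATH="$IMOD_DIR/bin:$PATH" ;;
  esac
  echo "$PATH"
}
```

Calling it twice is safe: the `case` check makes it idempotent, so sourcing it from .bashrc won't stack duplicate PATH entries.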
| Installation of iMod on CentOS 7 |
1,371,138,478,000 |
I accidentally ended up using the Nouveau driver (as opposed to the proprietary NVIDIA driver) for my GPU today and was surprised by how well it worked. I am aware of the reclocking issue (that is, that the clock speeds are stuck low). Regardless, I'm considering switching to primarily using it, but I have one significant issue preventing me from doing so: my GPU's fans. When using Nouveau they constantly spin at almost 2000 RPM despite the card not being particularly warm (according to lm-sensors) and as a result are very loud. I would like to set the fan curve to something more reasonable. How might I do this in Linux when using the Nouveau GPU driver?
Worth noting is that I have a GTX 970 which according to this matrix has support for controlling the fan speed: https://nouveau.freedesktop.org/PowerManagement.html (edit: never mind, the GTX 970 is one generation too new to support this due to firmware issue)
|
https://wiki.archlinux.org/index.php/nouveau#Fan_control
As for the fan curve, man fancontrol:
https://wiki.archlinux.org/index.php/fan_speed_control#Fancontrol_(lm-sensors)
| How do I set my GPU's fan curve when using Nouveau? |
1,371,138,478,000 |
I'm trying to monitor AMD gpus in a system running AMDGPU-PRO 18.10 and linux kernel 4.4.0.
I am reading values from:
/sys/kernel/debug/dri/$X/amdgpu_pm_info
where $X is a card index.
I am also reading the pp_dpm_cclk values from another directory, found under
/sys/class/drm/card$X/
I have 2 questions about this.
Does $X in both these cases refer to the same card? E.g. is /sys/class/drm/card0/device/pp_dpm_mclk returning information about the same card as /sys/kernel/debug/dri/0/amdgpu_pm_info?
Will this be true every boot/if I add or remove cards?
Finally, should I be using /sys/devices/pci0000:00 to access pp_dpm_mclk rather than the symlinks in /sys/class/drm? If so, how can I find out which card in /sys/devices/pci0000:00 corresponds to the cards in /sys/kernel/debug/dri ?
Thanks
|
For the first question, the answer is yes.
/sys/kernel/debug/dri/0 is for card /sys/class/drm/card0 and so on..
Will this be true every boot/if I add or remove cards?
Considering my personal case:
I have 3 PCIe x16 slots on my motherboard. This is their order as they physically appear on the board:
PCIEx16 [================] bus 0000:65:00.0 First slot
PCIEx16 [================] bus 0000:17:00.0 Second slot
PCIEx16 [================] bus 0000:15:00.0 Third slot
If you have one video card plugged into bus 65, bus 65 will be card0.
But if you add a second video card on bus 17, this reorders all the cards in /sys/class/drm/card$X.
card0 will be bus 17, and card1 bus 65.
Same with one more card on bus 15.
card0 bus 15, card1 bus 17, card2 bus 65.
So the card number depends on which PCIe slot each video card is plugged into and on how many video cards are currently installed on your motherboard.
Finally, should I be using /sys/devices/pci0000:00 to access
pp_dpm_mclk rather than the symlinks in /sys/class/drm? If so, how can
I find out which card in /sys/devices/pci0000:00 corresponds to the
cards in /sys/kernel/debug/dri ?
When you cd into /sys/class/drm/card0/device this is a symlink to /sys/devices/pci0000:00/0000:00:$PCI.0/subsystem/devices/0000:$PCI:00.0
Both are the same.
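To double-check the mapping yourself, you can resolve each card's device symlink to its PCI bus address; the index printed is the same one used under /sys/kernel/debug/dri. A sketch — the `SYSFS_DRM` override is only there so the function can be pointed at a test directory, it is not a kernel feature:

```shell
# Print "cardN -> PCI <bus> -> /sys/kernel/debug/dri/N" for each DRM card.
map_cards() {
  sysfs_drm=${SYSFS_DRM:-/sys/class/drm}
  for card in "$sysfs_drm"/card[0-9]*; do
    [ -e "$card/device" ] || continue
    idx=${card##*card}                              # numeric suffix of cardN
    pci=$(basename "$(readlink -f "$card/device")") # e.g. 0000:65:00.0
    echo "card$idx -> PCI $pci -> /sys/kernel/debug/dri/$idx"
  done
}
```

Running this before and after adding a card makes the renumbering described above directly visible.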
| AMDGPU-PRO How to associate GPU stats found in /sys/kernel/debug/dri and /sys/class/drm/? |
1,371,138,478,000 |
Is it possible do have basic (even just console) graphics on Linux, but without using the GPU (which, in this case, is fried and not replaceable)? Or would such a computer be limited to non-graphical uses only?
EDIT: The computer I'm talking about is an iMac with a broken graphics card/GPU, but everything else working (like the screen)
|
Yes it is possible.
The software you need is vnc (server and client).
After the software is installed, you can remotely connect to a virtual desktop.
You can also use Xvfb, a virtual framebuffer X server that renders entirely in software, so no working GPU is needed.
| Use Linux without a GPU [closed] |
1,371,138,478,000 |
I am trying to get the Intel integrated GPU working with my Parabola (Arch variant) desktop PC. According to lspci, the GPU is:
00:02.0 Display controller: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller (rev 09)
I have reconfigured my xorg.conf files to point to it; however, when I run startx, I get the following error in the Xorg log file:
[ 1611.090] (II) Initializing extension GLX
[ 1611.101] (EE) AIGLX error: dlopen of /usr/lib/dri/i965_dri.so failed (/usr/lib/dri/i965_dri.so: cannot open shared object file: No such file or directory)
[ 1611.101] (EE) AIGLX error: unable to load driver i965
So, it seems to not be able to find the i965 driver for the GPU. Looking in /usr/lib/dri verifies that the driver file is not there:
# ls /usr/lib/dri
crocus_dri.so iris_dri.so nouveau_dri.so r600_dri.so swrast_dri.so vmwgfx_dri.so
d3d12_dri.so kms_swrast_dri.so r300_dri.so radeonsi_dri.so virtio_gpu_dri.so zink_dri.so
However, if I check the file list for the mesa package I have installed, it says the file should be installed:
# pacman -Fl mesa | grep dri
mesa usr/include/GL/internal/dri_interface.h
mesa usr/lib/dri/
mesa usr/lib/dri/i915_dri.so
mesa usr/lib/dri/i965_dri.so
mesa usr/lib/dri/iris_dri.so
mesa usr/lib/dri/kms_swrast_dri.so
mesa usr/lib/dri/nouveau_dri.so
mesa usr/lib/dri/nouveau_vieux_dri.so
mesa usr/lib/dri/r200_dri.so
mesa usr/lib/dri/r300_dri.so
mesa usr/lib/dri/r600_dri.so
mesa usr/lib/dri/radeon_dri.so
mesa usr/lib/dri/radeonsi_dri.so
mesa usr/lib/dri/swrast_dri.so
mesa usr/lib/dri/virtio_gpu_dri.so
mesa usr/lib/dri/vmwgfx_dri.so
mesa usr/lib/pkgconfig/dri.pc
mesa usr/share/drirc.d/
mesa usr/share/drirc.d/00-mesa-defaults.conf
However, if I check the mesa package tar archive, that driver file is clearly not present:
# tar -tf mesa-22.2.1-1-x86_64.pkg.tar.zst | grep dri
usr/include/GL/internal/dri_interface.h
usr/lib/dri/
usr/lib/dri/crocus_dri.so
usr/lib/dri/d3d12_dri.so
usr/lib/dri/iris_dri.so
usr/lib/dri/kms_swrast_dri.so
usr/lib/dri/nouveau_dri.so
usr/lib/dri/r300_dri.so
usr/lib/dri/r600_dri.so
usr/lib/dri/radeonsi_dri.so
usr/lib/dri/swrast_dri.so
usr/lib/dri/virtio_gpu_dri.so
usr/lib/dri/vmwgfx_dri.so
usr/lib/dri/zink_dri.so
usr/lib/pkgconfig/dri.pc
usr/share/drirc.d/
usr/share/drirc.d/00-mesa-defaults.conf
So, what's going on here then? Is 'i965_dri.so' supposed to be provided with mesa, or am I supposed to get it from somewhere else? If it is supposed to be there, I should probably file an issue report?
|
Run sudo pacman -Fy to refresh your package file databases. i965_dri.so is in the mesa-amber package:
↪ pacman -F i965_dri.so
multilib/lib32-mesa-amber 21.3.9-2
usr/lib32/dri/i965_dri.so
extra/mesa-amber 21.3.9-2
usr/lib/dri/i965_dri.so
The older Intel DRI drivers were moved out of the main mesa package into mesa-amber, so installing that package (sudo pacman -S mesa-amber) should provide the missing i965_dri.so.
| Problem using Intel integrated graphics GPU (Xorg) |
1,371,138,478,000 |
In the Linux source code, specifically in linux/drivers/video/console/vgacon.c, there is a switch case block for cursor shapes. Each of these shapes are rectangles of the same width and varying heights. Clearly, Linux handles the height of the cursor, but does it handle the width? Does Linux choose the width, or does the GPU decide? Does this vary between the other *.cons, (some of which have switch cases of cursors)?
|
In vgacon, the hardware chooses the width, and it’s always the full width of a character cell — that’s all that VGA supports. mdacon is similar, for the same reason.
Other console implementations with cursor size handling can be found by looking for CUR_UNDERLINE. Some of them, such as fbcon, could theoretically support cursors of varying widths too, but they all match the behaviour of the original Linux console (the VGA one) and use a fixed width.
| What handles virtual console cursor specifics? |
1,371,138,478,000 |
nvidia-smi shows:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Quadro K620 On | 00000000:02:00.0 On | N/A |
| 65% 74C P0 20W / 30W | 758MiB / 1994MiB | 98% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1314 G /usr/lib/xorg/Xorg 31MiB |
| 0 1927 G /usr/lib/xorg/Xorg 187MiB |
| 0 2139 G /usr/bin/gnome-shell 248MiB |
| 0 4055 G ...AAAAAAAAAAAACAAAAAAAAAA= --shared-files 85MiB |
| 0 5824 G /opt/teamviewer/tv_bin/TeamViewer 8MiB |
| 0 14613 C gmx 73MiB |
| 0 21803 G /usr/lib/rstudio/bin/rstudio 59MiB |
+-----------------------------------------------------------------------------+
As you can see GPU util is showing 98%
But RAM is still free - 758MiB / 1994MiB
What does exactly GPU-Util mean? Can I load one more process into the GPU now?
Especially I wish to load a molecular dynamics simulation to GPU. (gmx)
|
GPU-Util is the percentage of time, over the last sample period, during which at least one kernel was running on the GPU.
98% means that your GPU's compute engines have been busy nearly all the time, probably with gmx. Since memory is free, you can load another process, but the two compute tasks will have to share the GPU's compute time, so each will likely run slower.
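If you want to make that go/no-go decision from a script, nvidia-smi can emit machine-readable values, e.g. nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv,noheader. A sketch of parsing one such line — the helper name and the sample line are mine, not output captured from your card:

```shell
# Given one CSV line "util %, used MiB, total MiB", print free memory in MiB.
gpu_mem_free() {
  printf '%s\n' "$1" | awk -F', ' '{ gsub(/ MiB/, ""); print $3 - $2 }'
}
```

A wrapper could then refuse to launch another gmx run unless, say, gpu_mem_free reports enough headroom for the simulation's expected footprint.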
| Can I load more process into GPU if RAM is free but GPU Util is showing almost full? |
1,371,138,478,000 |
From what I've searched, my OpenGL renderer should show my discrete GPU, but strangely it shows my integrated GPU.
Here is my lspci | grep -E "VGA|Display"
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620 (rev 02)
01:00.0 Display controller: Advanced Micro Devices, Inc. [AMD/ATI] Topaz XT [Radeon R7 M260/M265 / M340/M360 / M440/M445] (rev c3)
and my glxinfo | grep OpenGL
OpenGL vendor string: Intel Open Source Technology Center
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics 620 (Kaby Lake GT2)
I'm on Ubuntu 18.04, running on an Inspiron 15 5567 with 16 GB RAM, which has a Radeon R7 M440.
I've also tried switcharoo, to no avail.
|
You need to set the DRI_PRIME environment variable before running the program.
Example:
DRI_PRIME=1 glxinfo | grep OpenGL
This assumes you have already set the proper provider.
Related article: PRIME
| Why does my OpenGL renderer show my CPU? |
1,371,138,478,000 |
I have a bunch of Centos workstations with different Nvidia cards.
In the nvidia-settings interface, I need to enable this option: Force Full Composition Pipeline.
This is then saved to /etc/X11/xorg.conf. It appears like this in the file:
Option "metamodes" "DVI-D-0: nvidia-auto-select +0+0 {ForceCompositionPipeline=On, ForceFullCompositionPipeline=On}, DP-0.8: nvidia-auto-select +1920+0"
The issue I have is that at each boot, my xorg.conf file is reset containing only:
Section "Device"
Identifier "Videocard0"
Driver "nvidia"
EndSection
Is it normal that this file is reset at each boot?
How can I disable this function and be sure that the option is enabled at each boot?
|
You can run this script at startup:
nvidia-settings --assign CurrentMetaMode="$(nvidia-settings -q CurrentMetaMode -t|tr '\n' ' '|sed -e 's/.*:: \(.*\)/\1\n/g' -e 's/}/, ForceCompositionPipeline = On, ForceFullCompositionPipeline=On}/g')" > /dev/null
If you want it to be executed on startup, you can drop those lines in an executable file in /etc/X11/xinit/xinitrc.d/
Eg: /etc/X11/xinit/xinitrc.d/99-force-composition-pipeline
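The only non-obvious part of the one-liner above is the sed substitution: it rewrites every closing } of a metamode's {...} option group so that the two options are appended inside it. A minimal demonstration on a made-up metamode string (the helper name is mine):

```shell
# Append the ForceCompositionPipeline options inside each {...} option group.
add_pipeline_opts() {
  printf '%s\n' "$1" |
    sed -e 's/}/, ForceCompositionPipeline = On, ForceFullCompositionPipeline=On}/g'
}
```

Because the substitution is global (/g), it works unchanged for multi-monitor metamode strings containing several option groups.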
| /etc/X11/xorg.conf reset at each boot |
1,371,138,478,000 |
Suddenly one of three monitors (DELL U2414H) stopped working. In nvidia-settings I can see 4 lanes instead of 2.
It is the only visible difference between working and not working monitor configuration.
Working:
Not working:
Sometimes, I have a similar issue with the wrong number of lanes, but 1 lane instead of 2. It happens after turning the monitor off and on, and is OS- and driver-independent. The only solution I found is re-plugging the DP several times. With some probability it works.
It does not help now, though (when I see 4 lanes instead of 2).
What can I do to solve this? Is it a problem with the signal coming from the video card?
Driver version: Linux x64 375.82
There is also a projector connected via HDMI as the 4th monitor.
Full screen map:
|
Solved by buying a DisplayPort cable (not mini).
Maybe the interface on the monitor was damaged.
| Wrong number of lanes via DisplayPort, probably Nvidia GTX 1080 driver issue [closed] |
1,371,138,478,000 |
I am working with one of my workstations running Scientific Linux 6, so basically a quite old version of Red Hat Enterprise Linux. I need to use two screens, but only have two DisplayPort outputs and one VGA output from my Intel IGP. I am unable to get the DisplayPort ports working, I guess because the driver and kernel used are too old.
Does anyone have an idea (besides using a dedicated GPU)?
lspci
00:00.0 Host bridge: Intel Corporation Sky Lake Host Bridge/DRAM Registers (rev 07)
00:02.0 VGA compatible controller: Intel Corporation Sky Lake Integrated Graphics (rev 06)
00:14.0 USB controller: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller (rev 31)
00:14.2 Signal processing controller: Intel Corporation Sunrise Point-H Thermal subsystem (rev 31)
00:16.0 Communication controller: Intel Corporation Sunrise Point-H CSME HECI #1 (rev 31)
00:16.3 Serial controller: Intel Corporation Sunrise Point-H KT Redirection (rev 31)
00:17.0 SATA controller: Intel Corporation Device a102 (rev 31)
00:1f.0 ISA bridge: Intel Corporation Sunrise Point-H LPC Controller (rev 31)
00:1f.2 Memory controller: Intel Corporation Sunrise Point-H PMC (rev 31)
00:1f.3 Audio device: Intel Corporation Sunrise Point-H HD Audio (rev 31)
00:1f.4 SMBus: Intel Corporation Sunrise Point-H SMBus (rev 31)
00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (2) I219-LM (rev 31)
uname -a
Linux pcbe13615 2.6.32-573.22.1.el6.x86_64 #1 SMP Wed Mar 23 17:13:03 CET 2016 x86_64 x86_64 x86_64 GNU/Linux
|
It is the lack of support for the GPU in the kernel (and likely also in the X.Org video driver) which you need to somehow solve. Proper support for Sky Lake based GPUs in the i915 kernel driver should be available from kernel 4.4 on. Then again, I myself still couldn't get an Intel GPU with device code 1912 working in Debian Jessie under 4.4.5, possibly due to something with the X.Org version in Jessie (I haven't tried any later kernel yet, though). So it'll be either a major upgrade of the system, or a dedicated GPU.
Getting a used common good known brand GPU which your system has support for could be the easiest way out, but I'm not sure if you could find one that has specifically DisplayPort available.
If you don't want to upgrade the system, you could try just taking a recent kernel and compiling that manually with all the required options to support the GPU. The possible problem with this approach is that it might be hard to get the system to boot with the new kernel, as there might be some conflicts between the kernel and the base software of the system, udev being one possible issue. You'd also need to remember to include much of the deprecated stuff to be compatible with the older software which interfaces the kernel.
Intel does even provide sources for their graphics driver, so if you are willing to try every possible thing, you could try also compiling that.
In addition to compiling either the Linux kernel or just the Intel graphics driver, you'd still also need to get recent enough X.Org Intel video driver which also supports Skylake based GPUs, so you'd probably also end up needing to compile that (possibly the whole of X.Org), too. This might prove to be impossible without upgrading large parts of the rest of the system due to conflicting version requirements for many other components. After all, there is a reason why most people rely on prebuilt distributions instead of trying to get things going from the scratch :)
| How to use the proper video driver on Scientific Linux 6 for Display Port screens? |
1,371,138,478,000 |
I'm trying to help debug an issue with Mesa and the llvm r600 shader compiler, and would prefer not to install the test compiles of these packages system wide.
My question therefore is: How can I install these two packages to my home folder and make applications use them from there?
I've tried to compile llvm with --prefix set to a subfolder of home, and then to compile mesa using --with-llvm-prefix to point to that installation folder of llvm. Both packages compile fine.
Nevertheless, when running applications with
LD_LIBRARY_PATH="path-to-mesa-install/lib/:path-to-llvm-install/lib/:$LD_LIBRARY_PATH"
LIBGL_DRIVERS_PATH="path-to-mesa-install/lib/dri"
I'm experiencing graphics issues in some applications (for instance the bloom effect is missing in Euro Truck Simulator), and other applications that are running fine with the same version of Mesa installed system wide refuse to start (for instance the Unigine benchmarks).
Therefore I think I'm missing something, but what?
I'd be grateful if someone could either link to or quickly write a step by step guide on how to use Mesa installed to a non-system wide path.
|
Debian's X Strike Force has a comprehensive guide to building MESA from source and running it without installing it (which effectively allows using it without installing it to a system path).
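As a rough sketch of the launch step (the install prefixes under $HOME are assumptions; substitute whatever you passed to --prefix), a small wrapper keeps the two variables consistent for any program:

```shell
# Assumed install prefixes -- adjust to your actual --prefix locations:
MESA="$HOME/mesa-install"
LLVM="$HOME/llvm-install"

# Launch a program against the home-built Mesa/LLVM instead of the system copies.
run_with_local_mesa() {
  env LD_LIBRARY_PATH="$MESA/lib:$LLVM/lib:${LD_LIBRARY_PATH:-}" \
      LIBGL_DRIVERS_PATH="$MESA/lib/dri" \
      "$@"
}

# Sanity check which stack a program would actually pick up, e.g.:
#   run_with_local_mesa glxinfo | grep -i 'opengl version'
```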
| Install Mesa to home folder and make applications use it from there |
1,371,138,478,000 |
GPU Passthrough Virtualization with KVM or VirtualBox
For a research project I need to passthrough a PCI GPU from a Ubuntu Host to a Windows 8.1 guest. We need to test a certain setup, where the guest performs GPU intensive tasks. I tried to follow this tutorial with KVM and also VirtualBox. Now before we invest in expensive server grade hardware, we wanted to try the setup with some older hardware that we had available in the lab. I am aware that the setup is very hardware dependent, but I want to learn how I can rule out errors.
I tried KVM and VirtualBox so far, but I think my problem is related to this error in the dmesg log:
~$ dmesg | grep -e IOMMU -e DMAR
[ 0.000000] Intel-IOMMU: enabled
[ 0.148515] DMAR: Forcing write-buffer flush capability
[ 0.148516] DMAR: Disabling IOMMU for graphics on this chipset
[ 24.487950] vboxpci: IOMMU not found (not registered)
Where does this come from?
I would like to know which component causes this error. I see many people having this problem but there is no answer available which would apply to several scenarios.
The hardware I use
Motherboard: P5Q-EM, ASUSTeK Computer INC.
BIOS (updated and virtualization enabled)
CPU: Intel(R) Core(TM)2 Quad CPU Q9300 @ 2.50GHz
GPU 1: Intel Corporation 4 Series Onboard
GPU 2: GeForce GT 610 (should be passed through)
OS: Ubuntu Server 14.04.2 LTS (with desktop installed)
Grub parameters: intel_iommu=on.
As the first dmesg message shows Intel-IOMMU: enabled, I assume that this works.
GPU details from lshw:
*-display UNCLAIMED
description: VGA compatible controller
product: GF119 [GeForce GT 610]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:04:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller cap_list
configuration: latency=0
resources: memory:fd000000-fdffffff memory:f0000000-f7ffffff memory:fa000000-fbffffff ioport:ec00(size=128) memory:feb00000-feb7ffff
Now I checked the CPU capabilities with:
ubuntu~$ grep -E "(vmx|svm)" --color=always /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm sse4_1 lahf_lm dtherm tpr_shadow vnmi flexpriority
lspci -nn gives:
00:02.0 VGA compatible controller [0300]: Intel Corporation 4 Series Chipset Integrated Graphics Controller [8086:2e22] (rev 03)
00:02.1 Display controller [0380]: Intel Corporation 4 Series Chipset Integrated Graphics Controller [8086:2e23] (rev 03)
05:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF119 [GeForce GT 610] [10de:104a] (rev a1)
05:00.1 Audio device [0403]: NVIDIA Corporation GF119 HDMI Audio Controller [10de:0e08] (rev a1)
KVM says it's okay:
~$ sudo kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
I also tried pci-stub.ids=10de:104a in grub, where 10de:104a is the GPU ID. How can I find out where the DMAR message comes from and what component causes the disabling?
|
The "DMAR: Disabling IOMMU for graphics on this chipset" message comes from the kernel, specifically http://lxr.free-electrons.com/source/drivers/iommu/intel-iommu.c?v=3.19#L4634. This quirk was introduced in https://lkml.org/lkml/2013/2/1/327 (the bugs linked from there give useful background information); apparently your chipset has bugs which cause crashes when the IOMMU is used with graphics.
I'm not sure from the discussions whether the bugs only really affect the built-in graphics, or if they would prevent using a separate adapter as you're trying to do. If I'm understanding the source code correctly though, the quirk disables all IOMMUs for graphics devices.
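Independent of the chipset quirk, a quick way to see the practical effect is to check whether the kernel exposed any IOMMU groups in sysfs; no groups means no device can be passed through, whatever the hypervisor reports. This is a generic check, not specific to this board:

```shell
# List every device the kernel placed in an IOMMU group. An empty (or
# missing) /sys/kernel/iommu_groups means the IOMMU never came up usably.
list_iommu_groups() {
  if ls /sys/kernel/iommu_groups/*/devices/* >/dev/null 2>&1; then
    for dev in /sys/kernel/iommu_groups/*/devices/*; do
      group=${dev%/devices/*}    # .../iommu_groups/<N>
      group=${group##*/}         # <N>
      printf 'IOMMU group %s: %s\n' "$group" "${dev##*/}"
    done
  else
    echo "no IOMMU groups found (IOMMU disabled, quirked off, or unsupported)"
  fi
}
list_iommu_groups
```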
| Linux GPU Pass-through Virtualization - Verify which component causes trouble
1,371,138,478,000 |
I've installed Bumblebee with Ryan McQuen's crazybee.sh script, described here, and I'm actually able to startx successfully now (because Bumblebee uses the on-board Intel graphics by default), but when I invoke optirun to run a program with the Nvidia card, I get:
[ERROR]Cannot access secondary GPU
Failed to initialize the NVIDIA kernel module
The "Turbo" light on the laptop, which indicates if the Nvidia card is on, turns on when I invoke optirun, which is good.
I've tried the solutions to this "Cannot access secondary GPU" problem on ArchWiki, but to no avail…
Also, my trackpad freezes upon logging into KDE, so I'm thinking this might involve an issue in xorg.conf.
I'm running slackware64-current with the 3.14.27 kernel.
There's a thread about the "failed to initialize kernel module" part of this issue here, but it's old, from 2004.
thanks
|
The issue was that the nvidia-kernel package of Bumblebee did not install, due to my lacking libvdpau.
| [ERROR]Cannot access secondary GPU…Failed to initialize the NVIDIA kernel module [closed] |
1,371,138,478,000 |
So, I did what a tutorial said and set coolbits to 8. As expected, GreenWithEnvy's overclocking menu became available. However, when I tried to make a custom overclock setting, the overclock profile box popped up without the overclocking sliders.
I'm using a NVIDIA Tesla P4 graphics card and my system is Linux Mint 21.1 Vera
|
It turns out, that my specific GPU has no overclocking feature, thus no sliders appeared. For anybody else who had a similar problem with their headless/workstation GPU in Linux, you can see which clocks are supported by your GPU with the
nvidia-smi -i 0 --query-supported-clocks=mem,gr --format=csv
bash command.
| GreenWithEnvy's overclocking menu has no overclocking sliders |
1,371,138,478,000 |
I have a new Lenovo Yoga Slim 7 with an AMD Ryzen 5 6600HS processor. This processor has a Radeon 660M integrated graphics controller, and I don't have a dedicated GPU. I have several issues which I think are all or in part related to driver issues:
High CPU usage when watching a YouTube video.
http://webglsamples.org/aquarium/aquarium.html reaches 20FPS in Firefox (500 fish), which is already 27FPS on my much older (2015) system with an integrated HD graphics 5500 card.
LCD backlight is fixed to the maximum. /sys/class/backlight is empty. With the kernel setting acpi_backlight=vendor there is a /sys/class/backlight/ideapad entry and when I try to change the backlight level the changes are registered in actual_brightness but there is no effect on the screen.
xrandr only recognizes one mode (2880x1800 @ 91Hz), while at least a lower refresh rate should be available. Also xrandr "fails to get size of gamma" (see output below).
lspci incorrectly recognizes the integrated GPU as Radeon 680M, this should be 660M (see output below).
radeontop fails to find DRM devices and measures only zero values.
glxgears has very high framerate (thousands of FPS), not close to screen refresh rate. On my old system it says "Running synchronized to the vertical refresh" and runs at 51FPS. Perhaps this means that the system cannot determine the screen refresh rate correctly (although xrandr can...).
This is a fresh install of the alpha1 release candidate for Debian Bookworm with the GNOME desktop environment. I used an ISO with non-free firmware from https://cdimage.debian.org/cdimage/unofficial/non-free/cd-including-firmware/bookworm_di_alpha1+nonfree/amd64/iso-cd, dated 20 September 2022.
I also installed firmware-amd-graphics_20210818-1_all.deb from that ISO (following https://wiki.debian.org/AtiHowTo), although I'm not sure that's the right thing to do for integrated GPUs. I had to copy the yellow_carp firmware files to /lib/firmware/amdgpu manually, following https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1019847.
I don't normally deal with driver issues. How can I get the graphics to work properly?
Notes:
The X log complains that /dev/dri/card0/ does not exist. This is correct; there is no /dev/dri on my system.
dmesg has no mention of amdgpu so perhaps it is not loaded at all?
I would have expected the non-free ISO to automatically install the amdgpu driver if needed, so the fact that I had to install it manually may also already suggest that the card is not recognized properly.
Relevant outputs (let me know if you need more):
$ lspci -nn | grep VGA
32:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Rembrandt [Radeon 680M] [1002:1681] (rev 03)
$ xrandr
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 2880 x 1800, current 2880 x 1800, maximum 2880 x 1800
default connected primary 2880x1800+0+0 0mm x 0mm
2880x1800 91.00*
$ sudo radeontop -d -
Failed to find DRM devices: error 2 (No such file or directory)
Failed to open DRM node, no VRAM support.
Dumping to -, until termination.
1676540729.241609: bus 32, gpu 0.00%, ee 0.00%, vgt 0.00%, ta 0.00%, sx 0.00%, sh 0.00%, spi 0.00%, sc 0.00%, pa 0.00%, db 0.00%, cb 0.00%
$ glxgears
24119 frames in 5.0 seconds = 4823.791 FPS
24445 frames in 5.0 seconds = 4888.914 FPS
Logs:
Xorg.0.log
Xorg.1.log
dmesg
|
It turns out I still had nomodeset as a kernel parameter. Removing it fixed the issues.
The WebGL aquarium sample now runs at 60FPS with 10,000 fish (was 20FPS with 500).
The LCD backlight works out of the box; no acpi_backlight parameter needed.
xrandr now recognizes different modes and does not complain about gamma.
radeontop correctly reports measurements.
glxgears fixes the frame rate to the vertical refresh rate.
Only lspci still sees the card as 680M rather than 660M, but I guess it's not a problem.
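For anyone with similar symptoms (empty /sys/class/backlight, no /dev/dri, software rendering), these two quick checks help distinguish a leftover nomodeset from a genuine driver problem; the sketch is generic, not Debian-specific:

```shell
# Report whether kernel modesetting is disabled and whether a KMS
# driver (such as amdgpu) actually created a DRM device node.
check_kms() {
  if grep -qw nomodeset /proc/cmdline; then
    echo "nomodeset is set: KMS drivers like amdgpu will not bind"
  else
    echo "nomodeset not set"
  fi
  if [ -e /dev/dri/card0 ]; then
    echo "/dev/dri/card0 exists: a KMS driver is active"
  else
    echo "no /dev/dri/card0: no KMS driver bound to the GPU"
  fi
}
check_kms
```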
| Integrated AMD Radeon 660M does not seem to be used on Debian Bookworm |
1,623,528,470,000 |
I just purchased a new PC with Windows 10 installed on it. The PC has two SSD cards, one for the Windows and one empty which I want to install Manjaro-Linux on. And load it all with the Dual Boot.
To install Manjaro I'm using a USB. I used Rufus to do so.
I guess the error is due to the GPU. I'm using the RTX 3060ti and for CPU I use i7 10700F. I wrote those in case the problem has to do with either of them...
I must mention that the installation of the Manjaro did work on my previous PC. On my previous PC, I divided my HDD into half and installed Manjaro on the empty half.
|
I tried again a couple of weeks later, and it worked. I believe it had to do with the version I installed and the GPU; at the time the installer didn't have graphical support for it. Now it works.
| Manjaro installation stuck on "Reached target graphical interface" |
1,623,528,470,000 |
I have recently installed a nVidia Tesla K80 graphics accelerator into an existing dual-socket workstation running Ubuntu 20.04 with a low-energy consumption nVidia Quadro NVS315. After updating the nVidia drivers (from legacy 390 that were needed for the Quadro to 450 in order to support CUDA on the K80), the now unsupported Quadro is stuck at a resolution of 640x480, leaving me unable to use xrandr to introduce additional custom resolutions. I have already asked the friendly folks at nVidia and they have confirmed that it is indeed a driver issue (as the Quadro is a legacy GPU by now) and that it is not possible to use two different nVidia drivers in parallel.
I have also tried to use Nouveau but I was unable to turn it on only for the Quadro in "Software & Updates/Additional Drivers", all the entries apart from "Continue using a manually installed driver" are greyed out for the Quadro if the nVidia-proprietary 450 driver is activated for the Tesla. If I switch both devices to Nouveau I have the full resolution but I can't run code on the Tesla.
As it is close to impossible to code on 640x480, I would like to ask if there is a way to force the Quadro running on the manually installed drivers to use a higher resolution or if I can force Ubuntu to use Nouveau only for the Quadro, while using the 450 drivers for the Tesla. Any hints are appreciated.
Thanks for taking your time. :)
|
I came across two possible solutions that both worked for me
Increase the GRUB frame buffer resolution: For me the resolution in this case was limited to 1024×768 - with higher resolutions it would start lagging.
Buy a newer generation low-power graphics card (I ended up with a Nvidia Quadro P620 to accompany my Nvidia Tesla K80) that is compatible with the same version of the regular Nvidia graphics driver (e.g. 450.51.05 Data center driver -> supported products -> Nvidia Tesla K80 and 450.51 Display driver -> supported products -> Nvidia Quadro P620)
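For the first option, the relevant knobs live in /etc/default/grub; the values below are only an example of the kind of change involved (run update-grub afterwards):

```
# /etc/default/grub (excerpt; example values)
GRUB_GFXMODE=1024x768          # resolution of the framebuffer GRUB sets up
GRUB_GFXPAYLOAD_LINUX=keep     # keep that mode when handing over to the kernel
```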
| Stuck at 640x480 - Run Nouveau and nVidia graphic drivers side-by-side? |
1,623,528,470,000 |
This is my first question so if I missed anything please let me know. My knowledge of linux is quite weak at this point.
I am running ubuntu 18.04 with 1080ti gpus (x2). This is with a threadripper 2990wx and 128gigs of compatible ram (if that is somehow related and I am wrong).
For the last few weeks now, I have been attempting to use CUDA 10.1 with Nvidia driver 440. I have repetitive crashes: my Firefox/Chrome tabs will crash, followed by my mouse becoming unresponsive (clicks not working, though the cursor still moves), and my terminal will not activate (it opens with a flashing cursor, but there is no user@pc prompt and commands don't work). I had thought to post crash reports, but saw other posts with similar issues and am basically concluding that driver 440 is the culprit (apparently this issue persists from driver 430).
What is the proper way to roll back, without issue, to a driver that is more stable with my configuration? I am ok to roll back my cuda version as well at this point. I feel like I have been beta testing and would prefer a stable build so that I can get back to work.
Thanks a lot for your help.
|
1) Before downgrading, please make sure you have the latest NVIDIA driver, version 440.82, direct from NVIDIA, not from a PPA.
2) If upgrading to the very latest stable driver direct from NVIDIA does not help, uninstall with
sudo sh NVIDIA-Linux-x86_64-440.82.run --uninstall
3) If you installed nvidia-current or nvidia-current-updates uninstall them with
sudo apt -y remove nvidia-current nvidia-current-updates
4) Rename xorg.conf to xorg.conf.old then reboot, which will use nouveau instead.
5) Once rebooted, download and install 390.132 or 418.113 since you've read unhappy things about 430 onward.
6) Test stability with those drivers before making any changes to cuda. Run lots of apps simultaneously, including stress or equivalent with a low-to-moderate workload while manually using other apps. Keep an eye on the results of free -m or an equivalent tool while you do, and increase the stress workload until most of the swap is in use.
| Properly downgrading from cuda 10.1 and driver 440 |
1,623,528,470,000 |
I'd like to try Linux Mint 18.3 after experiencing trouble with Ubuntu 16.04 on a Razer Stealth RZ09 with a GTX 1060 GPU. I made a live install USB, boot it and go through the install screens, and invariably hit a freeze after defining timezones and starting the actual install ('copying files...'). I tried:
disabling UEFI in favor of legacy in bios
with/wo network, with/wo 3rd party installs
OEM install
doing, at CLI from a terminal in live installer:
gksu live-installer
at CLI from comaptibility-mode boot:
gksu live-installer
I get freezes in all cases (except the gksu command which gives no output and doesnt appear to do anything).
If anyone has any hints I'd appreciate it; otherwise the show is effectively stopped and I will go back to Ubuntu.
|
Ok the guide here involving adding the params
nomodeset xforcevesa
to the grub boot script (at the end of the line starting with 'linux') did the trick. The install finally occurs, hallelujah, and I reboot, once again put those params into grub, and Mint then boots from disk and not USB. Then I installed the video drivers as per the guide, and reboots are now fine without grub modification. I'm currently waiting on CUDA and hope to have CUDA and cuDNN running as well; at least nvidia-smi now shows the hardware, which is further than I got before. With the Nvidia drivers up under Mint, some other stuff started working that hadn't been working on Linux, e.g. external monitors and rebooting without freezing.
This guide might also work for an Ubuntu install, but in the meantime Mint seems to have no drawbacks compared to Ubuntu.
| Linux Mint install freeze |
1,623,528,470,000 |
I've recently built a Linux From Scratch system on my Apple Macbook laptop; however, I've been struggling to understand the graphics hardware and what kernel driver options I need to enable.
The LFS system is (currently) a fairly minimal system that boots up into Bash, but doesn't have the X Window system or any DE. The laptop is a Macbook 2,1 which includes an Intel GMA 950 graphics chip. I have enabled what I believe to be the appropriate driver in the Kernel for this GPU, which is the i915 driver; however, unless I also enable some other options relating to 'framebuffer devices' (I have yet to identify the exact config options), nothing prints on the screen during boot (although, the screen changes to a different shade of black a couple of times).
Can someone explain what is going on here? If that i915 driver is the correct one for the GPU, then shouldn't that be enough for the system to print the terminal output to the screen? If not, then what else should I need, other than the i915 driver?
I also have Trisquel installed on the same laptop, which boots up fine into the LXDE environment and, according to lsmod, the i915 driver is the correct one and the kernel doesn't seem to be loading any framebuffer-related drivers.
I'm confused!
|
I've been doing some research into this myself, and the short answer seems to be: yes - I need a framebuffer to enable the console.
According to the Wikipedia article on the Linux Console, the console has two modes: text mode and framebuffer. From the description, it seems that the text mode is quite basic and may not work with all graphics hardware. So, that leaves the framebuffer console, which is obviously going to require a framebuffer to work.
I copied the output of lsmod to a file, for the kernel configuration where I had it working, which shows this when piped to grep fb:
$ less lsmod_LFS | grep fb
fbcon 49152 70
bitblit 16384 1 fbcon
fbcon_rotate 16384 1 bitblit
fbcon_ccw 16384 1 fbcon_rotate
fbcon_ud 20480 1 fbcon_rotate
fbcon_cw 16384 1 fbcon_rotate
softcursor 16384 4 fbcon_ud,fbcon_cw,fbcon_ccw,bitblit
cfbfillrect 16384 1 drm_kms_helper
cfbimgblt 16384 1 drm_kms_helper
fb_sys_fops 16384 1 drm_kms_helper
cfbcopyarea 16384 1 drm_kms_helper
fb 65536 8 fbcon_ud,fbcon_cw,fbcon_ccw,bitblit,softcursor,i915,fbcon,drm_kms_helper
fbdev 16384 2 fb,fbcon
So, it was using the framebuffer console (fbcon).
The next question though is why I can't get the fbcon module to load up any more (which seems to be the reason that nothing is printing to my screen).
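For reference, when building these pieces into the kernel statically instead of as modules, the options corresponding to the modules above are roughly the following. The names are from mainline Kconfig, and exact availability varies by kernel version (CONFIG_DRM_FBDEV_EMULATION only exists on newer kernels), so treat this as a checklist rather than a recipe:

```
# Framebuffer core and the framebuffer console:
CONFIG_FB=y
CONFIG_FRAMEBUFFER_CONSOLE=y

# The KMS graphics driver itself:
CONFIG_DRM=y
CONFIG_DRM_I915=y

# On newer kernels, the fbdev layer of DRM drivers is gated behind this:
CONFIG_DRM_FBDEV_EMULATION=y
```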
| Do I need a framebuffer driver for a minimal CLI system without X? |
1,623,528,470,000 |
I have an ATI Radeon 2400 XT and an Nvidia GTX 580 in my Debian computer. The 580 has 3 ports, but only 2 of them can be used at the same time. I bought the refurbished Radeon so that I could use another screen, but it was being ignored.
I reconfigured my bios so that the Radeon was the primary display, and the ttys now use that display.
After more fiddling, I managed to get my computer to show the cursor on the third screen when I moved my mouse into it, but in Gnome windows do not move with it. I have also added the PPA xorg-edgers.
I stopped gdm, and tried with xinit, xterm and openbox. I had the same problem.
I looked at:
http://web.archive.org/web/20120906222652/http://en.gentoo-wiki.com/wiki/X.Org/Dual_Monitors
https://bbs.archlinux.org/viewtopic.php?id=141041
I could not find anything for debian except for how to setup each individual graphics card.
I am using the free xserver-xorg-video-radeon driver and the proprietary nvidia-driver
xrandr does not detect the Radeon GPU, but lspci and X do.
The gnome cursor passes between them
EDIT:
After looking at https://askubuntu.com/questions/593938, it almost works. Interactions with windows still works, and so does the mouse. However, the graphics do not transfer, and I am left with a glitched screen.
|
I have fixed this by using nouveau, then setting the Radeon to be the output
| Nvidia and ATI gpu system for three monitors |
1,623,528,470,000 |
I have OpenSUSE 42.1 with KDE 5. How do I get the new AMDGPU driver (on the website it's only released for Ubuntu with .deb files)?
|
AMDGPU consists of a kernel device driver, which has been part of Linux since 4.3, and an X.Org driver; you're currently missing both. If you want to use AMDGPU, you'll need to upgrade your kernel (use this repository) and your X.Org using this repository. Afterwards do a vendor change for all packages in the repositories. You can now install xf86-video-amdgpu.
Careful: you will be replacing core system components with new versions that haven't been tested on your distribution version. Don't do this if you have no way to recover a non-working X11.
| How to get new AMDGPU driver in Opensuse? |
1,458,970,997,000 |
I'll be installing the cuda toolkit 7.5 on debian stretch, currently without an nvidia card. I only intend to do remote development (sync projects to create remote builds on targets) on an nvidia server with two titan blacks.
The getting started guide says I need a cuda-capable gpu, but I'm wondering if it's really needed if I never build locally.
Has anyone tried this kind of install scenario?
If so, what do I need to do to get nsight for eclipse working to create, debug, and profile remote builds?
|
I found some newer information here that says a gpu is not required for either cross compilation mode or remote synchronized project mode.
I've installed cuda toolkit 7.5 on linux mint 17.3 using the ubuntu 14.04 local deb file. The installer complained of the missing gpu but went through to completion otherwise.
If you're wondering, I gave up on Debian because of issues with my USB wireless stick (yes, I used the non-free drivers, etc.). Mint found it without a hitch.
| Do I need an NVidia GPU locally if I only sync projects to create remote builds on targets? |
1,458,970,997,000 |
I need a GPU with CUDA support and open-source community support for FFT computation with Matlab R2015b. I think I need to reject nVidia GPUs because of bad open-source support; see here for Torvalds' comments on nVidia's open-source development.
So I am thinking of some AMD GPUs which have a development plan for open-source support of CUDA; see the press release here.
I need to do GPU computation because my system's RAM (16 GB/32 GB) is fully utilized in the computation process.
I plan to move all FFT computation onto the GPU, which the documentation says should be possible, here.
Is AMD GPU FirePro support enough yet for Linux's Matlab and CUDA?
|
Use Otoy's reversed-engineered solution, discussed here.
Some limitations can exist as discussed in some threads about the topic.
| Is AMD GPU FirePro support enough for Linux's Matlab and CUDA? |
1,458,970,997,000 |
I'm using Debian 7.7 with the proprietary NVidia driver 319.82 and a GeForce 560Ti. This drives 2 portrait (rotated) monitors over DVI with no problem and I'm happy with the result.
I'd now like to add a 3rd portrait monitor. It seems like one can't mix NVidia and HD3000 GPUs, so presumably I'll have to buy another NVidia card. So which NVidia will work? Can I just buy any random cheap NVidia GPU (e.g., GeForce 210) and have it work? Or are there only particular combinations that are Linux-friendly?
(I'm not terribly concerned about rendering performance, OpenGL compliance level or SLI ability, so hopefully any card will do, and the only concern is compatibility. My only real requirement is that it can do 1200x1920 rotated over DVI, which I guess any modern-ish one will manage.)
|
I threw caution to the wind and spent £26 on a Geforce GT610. The driver recognises it and it can drive a 3rd monitor. Best results, to my eyes, came after using nvidia-settings to configure each screen (including the two driven by the same GPU) as its own X display, then ticking the Xinerama box.
One general problem I've had after doing this:
Xinerama conflicts with the XRandR extension, so XRandR ends up disabled. This causes a lot of X programs to print a warning on startup, but it doesn't actually appear to cause any problems.
And two GNOME-related problems, which I suppose anybody else following along might also experience:
With Xinerama, GNOME runs in fallback mode, because Xinerama doesn't get on with Composite, which is an extension the fancy new GNOME shell requires. This has disabled a number of my GNOME shell extensions, but in general I think I'll get by.
GNOME got very confused as I was changing things around, and created 7 copies of the clock (and the logout menu, and the workspace switcher, and so on). To fix this, run dconf reset -f /org/gnome/gnome-panel/layout/ from the shell.
| What combinations of NVidia GPUs are well supported with with driver 319.82? |
1,458,970,997,000 |
How do I add a parameter to system-d on Pop!_OS? I want to pass-through a GPU. Please also give a a good guide of how to GPU pass-through on Pop!_OS 21.10. Thanks in advance.
|
Welcome to Unix & Linux StackExchange!
Please edit your question to add a link to the specific tutorial you're following, so that other people will be able to get an idea of the steps you are trying to follow.
My guess is, your tutorial is probably telling you to add some kernel boot parameters, which would typically go onto GRUB_CMDLINE_LINUX="..." line in /etc/default/grub on Linux systems that use GRUB as their bootloader.
But Pop!_OS currently uses systemd-boot as its bootloader, instead of GRUB. Since the boot parameters are passed to the Linux kernel, the syntax of the parameters themselves will remain the same, but the way you tell the bootloader to pass specific parameters to the kernel will be somewhat different in every bootloader that is capable of booting Linux.
So the question you probably need to be asking is "how to add kernel boot parameters when using systemd-boot?"
And the answer to that question is: you add them to the options line in the appropriate $BOOT/loader/entries/*.conf file, where $BOOT might be /boot, /efi or even /boot/efi depending on where your distribution chooses to mount its UEFI ESP partition. You'll find more details about those *.conf files and their format in https://systemd.io/BOOT_LOADER_SPECIFICATION/ .
After a bit of Googling, it seems that Pop_OS specific names for these files would be something like:
/boot/efi/loader/entries/Pop_OS-current.conf
/boot/efi/loader/entries/Pop_OS-old-kern.conf
The first of those would be for the current kernel, the second is for an old kernel version that is kept as a backup in case something goes wrong with the newest kernel. I would recommend that you modify the first file only, and only make changes to the second file after you have tested the boot process with your modified options and are 100% sure it works.
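Purely as an illustration, a modified entry might look like the following; the title, paths, UUID and the vfio-pci.ids values are placeholders (the IDs must come from your own lspci -nn output, and on Intel CPUs you would use intel_iommu=on instead of amd_iommu=on):

```
# /boot/efi/loader/entries/Pop_OS-current.conf (sketch; placeholder values)
title   Pop!_OS
linux   /EFI/Pop_OS-<uuid>/vmlinuz.efi
initrd  /EFI/Pop_OS-<uuid>/initrd.img
options root=UUID=<root-uuid> rw quiet amd_iommu=on iommu=pt vfio-pci.ids=10de:1b80,10de:10f0
```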
| I need help (both with GRUB and GPU pass-through) |
1,458,970,997,000 |
I'm using Xorg with the FBDEV driver, configuration:
Section "Device"
Identifier "Device0"
Driver "fbdev"
Option "fbdev" "/dev/fb0"
Option "ShadowFB" "false"
EndSection
I got a new framebuffer device to my system, it's /dev/fb1. I adjusted the config:
Section "Device"
Identifier "Device0"
Driver "fbdev"
Option "fbdev" "/dev/fb1"
Option "ShadowFB" "false"
EndSection
But it doesn't work, it still uses /dev/fb0, and doesn't even open /dev/fb1.
I'm using Ubuntu-based (jammy-based) OS with xserver-xorg-video-fbdev package installed.
Everything works if I do
mount --bind /dev/fb1 /dev/fb0
But it's not an option because I want to have access to both the framebuffers (so I did umount /dev/fb0 to undo it).
Thanks for any help
|
Xorg's FBDEV driver requires a BusID option to be passed in the config, not just the path to the framebuffer's character device. I don't know why that is, but here is how to configure it:
First, figure out the "bus id" of the framebuffer device. Assuming that the wanted framebuffer device is fb1:
ls /sys/class/graphics/fb1/device/driver
The example output (the output in my case) is:
bind module uevent unbind vfb.0
From this list of entries, ignore bind, module, uevent, unbind and anything ending in _id (if present).
Then you're left with exactly the "bus id" of your framebuffer (in my case, vfb.0).
Here is a different example with my fb0 device, by the way, which is a real FB from nouveaudrmfb:
# ls /sys/class/graphics/fb0/device/driver
0000:03:00.0 bind module new_id remove_id uevent unbind
In this case, you can see that the "bus id" is 0000:03:00.0.
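The manual filtering described above can also be scripted. This is just a sketch — it assumes the ignorable entry names shown in this answer (bind, module, uevent, unbind, *_id) are the only non-BusID entries that appear — and it works on a plain word list, so you can pipe the ls output into it:

```shell
#!/bin/sh
# Print the BusID candidate from a driver-directory listing by dropping the
# known non-BusID entries.
busid_from_entries() {
    tr -s ' ' '\n' | grep -vE '^(bind|module|uevent|unbind)$' | grep -v '_id$'
}

# the two examples from this answer:
echo 'bind module uevent unbind vfb.0' | busid_from_entries
# → vfb.0
echo '0000:03:00.0 bind module new_id remove_id uevent unbind' | busid_from_entries
# → 0000:03:00.0
```

For a real device you would feed it the actual listing, e.g. ls /sys/class/graphics/fb1/device/driver | busid_from_entries.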
Knowing the Bus ID, you can finally configure the FBDEV driver in the Xorg conf:
Section "Device"
Identifier "Device0"
Driver "fbdev"
BusID "vfb.0"
Option "fbdev" "/dev/fb1"
EndSection
This is an example configuration for a fb1 device with vfb.0 BusID.
That's it.
| Xorg FBDEV refuses to use the specified framebuffer |
1,458,970,997,000 |
I got a new Nvidia GPU and I read that I had to purge all of the current drivers to avoid conflicts. I removed everything, replaced the GPU with the new one, and turned the PC on. It worked, I was able to load the OS, but the GPU wasn't recognized by nvidia-smi. When I used lspci | grep ' VGA ' | cut -d" " -f 1 | xargs -i lspci -v -s {} I could see that the system detects some GPU, but without the model name.
I downloaded a new driver from Nvidia
Then, once I got the file, I tried to open it. It took an hour to open, and then it suddenly closed.
Then I restarted my PC, in case it might help. Since then, I couldn't get to access Ubuntu. Every time I am trying to turn the PC on, it looks like the system starts, there is a "TUF gaming into logo" which always comes before Ubuntu turned on. Now, suddenly it changes into this screen
and just becomes like an empty and unresponsive terminal page.
I can access bios and safe start by pressing on F2
I can go into safe mode by pressing Ctrl+Alt+F3 AFTER the 2nd screenshot above. There, I can travel through directories on my computer. The problem is that for some reason my WiFi dongle doesn't work, so every time I try to install something (say with sudo apt-get install nvidia-driver-535) I get an error that basically says Temporary failure resolving us.storage.ubuntu.com
How can I resolve this issue? There is some problem with the graphics card's drivers, but I don't have any access to the Internet to fix things.
|
Ubuntu already has Nvidia drivers properly packaged in its official repository. There's NEVER a need to install them using Nvidia's binaries. As a matter of fact, that's strongly discouraged.
And whenever the new card is supported by the same driver version branch already installed there's no need to uninstall anything. If desired you can later try a newer version, if available, simply by using the Additional Drivers tool that effectively purges the installed version and install the newly selected one in its place. Reboot required.
Now, according to your report, you can't use the WiFi in the TTY (NOT "Safe Mode" — it has nothing to do with that; it's simply a command-line-only console)... Although you could make WiFi work after logging in there, there's really no point in walking through all the steps for something as simple as this, so just use USB tethering (all it takes is a few MBs of data and a couple of minutes of your time).
Do:
sudo apt purge 'nvidia*'
followed by
sudo ubuntu-drivers install
No need to specify any version, the tool will select the recommended one for you. Reboot.
In most cases you won't even need to do the second step in the CLI, because after purging the proprietary drivers the system should fall back to the default nouveau driver, and this should give you a graphical interface unless the new card is so new that it has no support in the community driver yet. Of course, the first step doesn't require internet at all. It goes without saying that you should try that and then, if all goes well (meaning: booting to a graphical desktop, WiFi and everything else), you can and should use the aforementioned Additional Drivers tool, select and apply the required Nvidia proprietary driver version. Reboot. Done!
| Updated Nvidia drivers, Ubuntu OS won't start, can access BIOS and safe-mode |
1,458,970,997,000 |
I am on Debian 11
I have a program that works by overriding tty1. So when starting my computer, the program runs right away. This works fine, but I use Gnome as my GUI to develop and test the program. When running the program while in Gnome I get:
WARNING: lavapipe is not a conformant vulkan implementation, testing use only.
Segmentation fault
Running vulkaninfo --summary over tty outputs
Devices:
========
GPU0:
apiVersion = 420641 (1.2.145)
driverVersion = 83898373
vendorID = 0x10002
deviceID = 0x1508
deviceType = PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU
deviceName = AMD RADV RAVEN (ACO)
driverID = DRIVER_ID_MESA_RADV
driverName = radv
driverInfo = Mesa 20.3.5 (ACO)
conformanceVersion = 1.2.3.0
GPU1:
apiVersion = 4194306 (1.0.2)
driverVersion = 1 (0x0001)
vendorID = 0x10005
deviceID = 0x0000
deviceType = PHYSICAL_DEVICE_TYPE_CPU
deviceName = llvmpipe (LLVM 11.0.1, 256 bits)
driverID = DRIVER_ID_MESA_LLVMPIPE
driverName = llvmpipe
but running vulkaninfo --summary over gnome outputs
Devices:
========
GPU0:
apiVersion = 4194306 (1.0.2)
driverVersion = 1 (0x0001)
vendorID = 0x10005
deviceID = 0x0000
deviceType = PHYSICAL_DEVICE_TYPE_CPU
deviceName = llvmpipe (LLVM 11.0.1, 256 bits)
driverID = DRIVER_ID_MESA_LLVMPIPE
driverName = llvmpipe
driverInfo = Mesa 20.3.5 (LLVM 11.0.1)
conformanceVersion = 1.0.0.0
So I can see that in Gnome it's just using my CPU. Is there any way to configure vulkan to also use my GPU?
|
So to answer my own question:
I ran
lspci -k
to check if Gnome recognized my GPU and I saw my GPU there and its kernel driver.
So I realized the problem might be from Vulkan itself so then I ran
ls /usr/share/vulkan/icd.d/
which gave me the output
intel_icd.x86_64.json lvp_icd.x86_64.json radeon_icd.x86_64.json
radeon_icd.x86_64.json is the installable client driver for my AMD GPU
I then ran
export VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/radeon_icd.x86_64.json
and then ran my program and it worked.
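To make this survive reboots, the export can go into a shell startup file — a sketch, assuming a Bourne-style login shell (note that newer Vulkan loaders deprecate VK_ICD_FILENAMES in favour of VK_DRIVER_FILES, so check which variable your loader honours):

```shell
# ~/.profile — pin the Vulkan loader to the radeon ICD (path from this answer)
export VK_ICD_FILENAMES=/usr/share/vulkan/icd.d/radeon_icd.x86_64.json
```

After logging out and back in, vulkaninfo --summary should list the AMD GPU in GNOME as well.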
| Vulkan detecting GPU on TTY but not on gnome |
1,458,970,997,000 |
I'm using Fedora 36, KDE on a Framework laptop with core i7-1185G7.
I use Google meet on Chrome, but it uses too much CPU (about 35%, when CPU is in full speed), warms up the CPU, and could trigger thermal throttling where clock falls to 400Mhz or even 200Mhz for a minute. I've improved that with a cooling pad for the laptop.
The same Google meet works great on my android phone, ipad or M1 Mac. No excessive heat, etc.
From my understanding, my CPU is supposed to have an iGPU that's adequate for things like video compression/decompression.
How can I monitor which processes are using the GPU? How can I set my OS and applications to use the GPU?
|
These are good questions and there are no great answers. For Intel GPUs there's an intel_gpu_top utility (found in the intel-gpu-tools package) which shows GPU load (without breaking it down by application), so you can at least see whether your system is currently using the CPU or the GPU, but that's it. For NVIDIA GPUs there's nvidia-smi, which shows GPU utilization and the apps using it, but not how much of the GPU each app uses. For AMD there's radeontop, which again doesn't break GPU usage down by app.
There's no way to "configure apps to use your GPU" - they either do it or not. In case of web browsers (Firefox/Chrome), there are certain internal flags which allow to e.g. enable HW video acceleration for video decoding but they are experimental. Check this article for more info: https://wiki.archlinux.org/title/Hardware_video_acceleration
Speaking of "Google meet on Chrome" - most likely it only uses your CPU for video encoding/decoding, whereas e.g. Chrome under Windows can use your GPU for the same task while consuming ten times less power, because it's hardware accelerated. You could try enabling at least HW video decode acceleration using the linked article. As for HW video encode acceleration, I have no clue - I've not seen any Linux applications offering or using it.
| How to setup and confirm hardware acceleration in Linux |
1,458,970,997,000 |
I have installed Debian 11 (bullseye) on a new Lenovo LEGION 5i Pro with Nvidia RTX 3050.
After installing the Nvidia drivers:
sudo apt-get install nvidia-driver firmware-misc-nonfree
I connected an external monitor using the HDMI port, but it was not recognized, it does not show up in the Displays settings.
I tried searching about the issue and I found somewhere someone fixing a similar problem with xrandr.
~$ xrandr --listproviders
Providers: number : 2
Provider 0: id: 0x4a cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 4 outputs: 7 associated providers: 0 name:modesetting
Provider 1: id: 0x2af cap: 0x2, Sink Output crtcs: 4 outputs: 6 associated providers: 0 name:NVIDIA-G0
This command fixed the problem, but honestly I don't know what it does:
xrandr --setprovideroutputsource 1 0
But the problem is that the changes did not persist after reboot and I had a lot of lagging and Xorg was using about 30-40% CPU as shown using top. So I have uninstalled the drivers and started all over again.
Next I tried creating an /etc/X11/xorg.conf file using nvidia-xconfig, which created a file with these contents:
# nvidia-xconfig: X configuration file generated by nvidia-xconfig
# nvidia-xconfig: version 460.32.03
Section "ServerLayout"
Identifier "Layout0"
Screen 0 "Screen0"
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "Mouse0" "CorePointer"
EndSection
Section "Files"
EndSection
Section "InputDevice"
# generated from default
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/psaux"
Option "Emulate3Buttons" "no"
Option "ZAxisMapping" "4 5"
EndSection
Section "InputDevice"
# generated from default
Identifier "Keyboard0"
Driver "kbd"
EndSection
Section "Monitor"
Identifier "Monitor0"
VendorName "Unknown"
ModelName "Unknown"
Option "DPMS"
EndSection
Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Device0"
Monitor "Monitor0"
DefaultDepth 24
SubSection "Display"
Depth 24
EndSubSection
EndSection
The good thing is that the external monitor was recognized and I started using it and it was showing in the Displays settings, but I couldn't use the built-in display, and if I try to use the laptop without the external display I get a blank screen and I had to delete the /etc/X11/xorg.conf file and reboot to be able to use the built-in display.
How can I configure my system to be able to use both the built-in and the external display?
Update:
$ nvidia-xconfig --query-gpu-info
Number of GPUs: 1
GPU #0:
Name : GeForce RTX 3050 Laptop GPU
UUID : GPU-5f21a5b3-2add-7b3d-aa6b-1cfe5dd7085e
PCI BusID : PCI:1:0:0
Number of Display Devices: 1
Display Device 0 (TV-4):
EDID Name : LG Electronics 24MP56
Minimum HorizSync : 30.000 kHz
Maximum HorizSync : 83.000 kHz
Minimum VertRefresh : 56 Hz
Maximum VertRefresh : 61 Hz
Maximum PixelClock : 150.000 MHz
Maximum Width : 1920 pixels
Maximum Height : 1080 pixels
Preferred Width : 1920 pixels
Preferred Height : 1080 pixels
Preferred VertRefresh : 60 Hz
Physical Width : 510 mm
Physical Height : 290 mm
Listing the monitors using xrandr:
$ xrandr --listmonitors
Monitors: 1
0: +*eDP-1 1920/345x1200/215+0+0 eDP-1
After using this command xrandr --setprovideroutputsource 1 0 I get this output:
$ xrandr --listmonitors
Monitors: 2
0: +*eDP-1 2560/345x1600/215+0+0 eDP-1
1: +HDMI-1-0 1920/510x1080/290+2560+0 HDMI-1-0
But the problem is high CPU usage by the Xorg process (30-40%).
|
Notebooks with separate dedicated and integrated graphics cards will try to balance which one is used in order to improve battery life. Check nvidia-settings and your BIOS settings to see if there is an option to specify which you would like to use.
| How to configure multiple displays on Lenovo LEGION 5 Pro (Nvidia RTX 3050) |
1,458,970,997,000 |
Since I started using Microsoft Teams on my Debian buster machine, I sometimes get a GUI freeze: the mouse pointer can still be moved on the screen, but there is no visible feedback on clicks or key presses. There is also no switching to a console with Ctrl+Alt+F1.
I could not help myself other than SSHing into the machine to restart Xorg.
The dmesg output shows fingerprints of Teams, but I guess the deeper problem must be in the nouveau GPU driver?
[ 4918.083079] show_signal_msg: 7 callbacks suppressed
[ 4918.083082] GpuWatchdog[2056]: segfault at 0 ip 000055dcd609b006 sp 00007f5a8f043490 error 6 in teams[55dcd2705000+5fbe000]
[ 4918.083087] Code: 89 de e8 4d 0e 71 ff 80 7d cf 00 79 09 48 8b 7d b8 e8 1e 45 ce fe 41 8b 84 24 e0 00 00 00 89 45 b8 48 8d 7d b8 e8 ea f0 66 fc <c7> 04 25 00 00 00 00 37 13 00 00 48 83 c4 38 5b 41 5c 41 5d 41 5e
[ 5006.078739] traps: Watchdog[2423] trap invalid opcode ip:555c23f287de sp:7f77fb7fd6f0 error:0 in teams[555c23dba000+5fbe000]
For now I deactivated GPU acceleration in MS teams, but would you recommend switching to NVIDIA driver instead of nouveau in this case?
|
nouveau is known for being quite unstable and crash-prone, so I'd recommend installing NVIDIA proprietary drivers instead.
The error you're getting indicates exactly that.
Alternatively try installing a fresh kernel, 4.19 is quite dated and may not contain all the fixes the nouveau driver has seen.
| MS teams makes the whole GUI stall: GpuWatchdog segfault |
1,458,970,997,000 |
I installed Windows on a virtual machine which runs through KVM/QEMU. The virtual machine is hosted on a Debian server which has just a basic GPU on the motherboard.
Applications which rely on GPU, such as Adobe Premiere Pro, don't run fast enough on this virtual machine. The Wiki on the subject explains the techniques which allow to obtain bare-metal experience, but I'm not ready yet to try them, so I'm stuck with QXL/SPICE for now.
Suppose I put a dedicated GPU in the server. Would it automatically (with no configuration changes) make the applications which rely on the GPU run faster? Or will it have absolutely no effect?
|
Automatically - no, not at all.
Manually via PCI(e) passthrough in case your system supports IOMMU and AMD-Vi/Intel VT-d:
https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Plain_QEMU_without_libvirt
https://wiki.gentoo.org/wiki/GPU_passthrough_with_libvirt_qemu_kvm
| Would a GPU make the applications in the virtual machine faster? |
1,458,970,997,000 |
Running Pop OS on an Intel Hades Canyon. Just installed Steam and it looks like this:
Restarting the computer or reinstalling Steam from different sources did not solve the issue. Weirdly enough, if I go full screen with Steam it looks okay, but if I resize the window it looks messed up again. The drivers seem fine; what else could it be?
This is what the output of lspci -v -s 01:00.0 looks like:
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Polaris 22 XT [Radeon RX Vega M GH] (rev c0) (prog-if 00 [VGA controller])
Subsystem: Intel Corporation Polaris 22 XT [Radeon RX Vega M GH]
Flags: bus master, fast devsel, latency 0, IRQ 148
Memory at 2000000000 (64-bit, prefetchable) [size=4G]
Memory at 2100000000 (64-bit, prefetchable) [size=2M]
I/O ports at e000 [size=256]
Memory at db500000 (32-bit, non-prefetchable) [size=256K]
Expansion ROM at 000c0000 [disabled] [size=128K]
Capabilities: <access denied>
Kernel driver in use: amdgpu
Kernel modules: amdgpu
If I run Steam from terminal, the same problem happens and no useful information is returned to stdout.
|
Your issue sounds similar to https://github.com/ValveSoftware/steam-for-linux/issues/6593 - steam view becomes corrupted when resizing in a tiling window manager
The title is a bit misleading: further down the bug report they say it happens if Steam hardware acceleration is enabled, even if you're not using a tiling window manager.
It is supposed to be fixed in the latest Steam beta.
| Steam looks messed up |
1,458,970,997,000 |
I'm not sure how to troubleshoot these, but at least once a week, the amdgpu driver crashes and I either have to hard power down or try to ssh in from my laptop and reboot.
0c:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Vega 10 XL/XT [Radeon RX Vega 56/64] (rev c1) (prog-if 00 [VGA controller])
Subsystem: Sapphire Technology Limited Device e37f
Flags: bus master, fast devsel, latency 0, IRQ 103
Memory at e0000000 (64-bit, prefetchable) [size=256M]
Memory at f0000000 (64-bit, prefetchable) [size=2M]
I/O ports at e000 [size=256]
Memory at fcc00000 (32-bit, non-prefetchable) [size=512K]
Expansion ROM at 000c0000 [disabled] [size=128K]
Capabilities: <access denied>
Kernel driver in use: amdgpu
Kernel modules: amdgpu
Here's the last kernel log
https://pastebin.com/d6qyJ8ha
I'm on Fedora 31 running a vanilla kernel (I was hoping this slightly newer kernel would fair better). Details are at the top of the kernel log pastebin
Maybe I need to submit a bug report, but I'm not even sure what information I need and whether there's any troubleshooting I can do.
|
There's little you can do to solve this issue on your own, so do please file a bug report here: https://bugzilla.kernel.org/enter_bug.cgi?product=Drivers (choose Video(Other) as a component).
What info you could provide (attach it as files):
Full sudo dmesg output
Full sudo lspci -vvv output
Full sudo lshw output
And of course describe under which circumstances you get this issue.
| amdgpu driver crashes |
1,458,970,997,000 |
I am using Arch Linux on a Chromebook C201 (ARM). Since I recently upgraded the system, the desktop environment seems to be crashing shortly after I login (before the upgrade it was working fine). I have both LXDE and MATE installed and I am seeing similar crashes on both. The two DEs are using different Window Managers (Openbox and marco), so I suspect there may be an issue with X server.
When the system crashes, the screen freezes and the desktop completely locks up. There is no response to mouse or keyboard input and I am unable to use Ctrl+Alt+F2 etc. to switch to a console tty. After a few minutes it dumps me back at the lightdm login screen.
I have found the following errors (which seem to be relevant) in ~/.cache/lxsession/LXDE/run.log:
** (lxpanel:524): WARNING **: 21:18:33.907: The directory '~/Templates' doesn't exist, ignoring it
** (pcmanfm:525): WARNING **: 21:18:33.907: The directory '~/Templates' doesn't exist, ignoring it
Openbox-Message: Unable to find a valid menu file "/usr/share/lxde/openbox/menu.xml"
(lxpanel:524): GLib-GObject-CRITICAL **: 21:18:34.467: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
(lxpanel:524): GLib-GObject-CRITICAL **: 21:18:34.467: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
(lxpanel:524): GLib-GObject-CRITICAL **: 21:18:34.476: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
** (lxpanel:524): WARNING **: 21:18:34.544: Battery entry BAT0 not found, using sbs-20-000b
(lxpanel:524): GLib-GObject-CRITICAL **: 21:18:34.547: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
(lxpanel:524): GLib-GObject-CRITICAL **: 21:18:34.547: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
** (lxpanel:524): WARNING **: 21:18:34.736: launchbar: desktop entry does not exist
(nm-applet:541): libnotify-WARNING **: 21:18:38.692: Failed to connect to proxy
(nm-applet:541): nm-applet-WARNING **: 21:18:38.698: Failed to show notification: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Notifications was not provided by any .service files
lxterminal
(lxpanel:524): Wnck-WARNING **: 21:18:40.000: Unhandled action type _OB_WM_ACTION_UNDECORATE
(lxpanel:524): Wnck-WARNING **: 21:19:12.727: Unhandled action type _OB_WM_ACTION_UNDECORATE
(lxpanel:524): Wnck-WARNING **: 21:19:12.861: Unhandled action type _OB_WM_ACTION_UNDECORATE
/usr/lib/firefox/firefox
(lxpanel:524): Wnck-WARNING **: 21:19:36.058: Unhandled action type _OB_WM_ACTION_UNDECORATE
(END)
The equivalent errors for MATE (from ~/.xsession-errors) are:
mate-session[1216]: WARNING: Unable to find provider '' of required component 'dock'
Window manager warning: Log level 128: unsetenv() is not thread-safe and should not be used after threads are created
(caja:1299): Gtk-WARNING **: 21:22:12.818: Failed to register client: GDBus.Error:org.gnome.SessionManager.AlreadyRegistered: Unable to register client
(mate-power-manager:1337): Gdk-CRITICAL **: 21:22:14.720: gdk_window_thaw_toplevel_updates: assertion 'window->update_and_descendants_freeze_count > 0' failed
Gdk-Message: 21:25:19.408: mate-power-manager: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Gdk-Message: 21:25:19.408: evolution-alarm-notify: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Gdk-Message: 21:25:19.409: mate-session: Fatal IO error 104 (Connection reset by peer) on X server :0.
Gdk-Message: 21:25:19.409: marco: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Gdk-Message: 21:25:19.415: caja: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Gdk-Message: 21:25:19.413: polkit-mate-authentication-agent-1: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Gdk-Message: 21:25:19.409: mate-maximus: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Gdk-Message: 21:25:19.408: mate-volume-control-status-icon: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Gdk-Message: 21:25:19.408: nm-applet: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Gdk-Message: 21:25:19.409: mate-screensaver: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Gdk-Message: 21:25:19.415: mate-settings-daemon: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Gdk-Message: 21:25:19.423: mate-panel: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.
Contents of Xorg.0.log:
[ 10.124]
X.Org X Server 1.20.7
X Protocol Version 11, Revision 0
[ 10.124] Build Operating System: Linux Arch Linux
[ 10.124] Current Operating System: Linux leeLibrebook 5.5.6-1-ARCH #1 SMP PREEMPT Wed Feb 26 00:56:53 UTC 2020 armv7l
[ 10.124] Kernel command line: cros_secure console=tty0 init=/sbin/init root=PARTUUID=1b19e700-f9cb-f247-bc7f-207dece4cdb7/PARTNROFF=1 rootwait rw noinitrd
[ 10.124] Build Date: 16 January 2020 05:49:11PM
[ 10.124]
[ 10.124] Current version of pixman: 0.38.4
[ 10.124] Before reporting problems, check http://wiki.x.org
to make sure that you have the latest version.
[ 10.124] Markers: (--) probed, (**) from config file, (==) default setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
[ 10.124] (==) Log file: "/var/log/Xorg.0.log", Time: Tue Mar 17 21:57:41 2020
[ 10.125] (==) Using system config directory "/usr/share/X11/xorg.conf.d"
[ 10.125] (==) No Layout section. Using the first Screen section.
[ 10.125] (==) No screen section available. Using defaults.
[ 10.125] (**) |-->Screen "Default Screen Section" (0)
[ 10.125] (**) | |-->Monitor "<default monitor>"
[ 10.125] (==) No monitor specified for screen "Default Screen Section".
Using a default monitor configuration.
[ 10.125] (==) Automatically adding devices
[ 10.125] (==) Automatically enabling devices
[ 10.125] (==) Automatically adding GPU devices
[ 10.125] (==) Automatically binding GPU devices
[ 10.125] (==) Max clients allowed: 256, resource mask: 0x1fffff
[ 10.126] (WW) The directory "/usr/share/fonts/misc" does not exist.
[ 10.126] Entry deleted from font path.
[ 10.126] (WW) The directory "/usr/share/fonts/OTF" does not exist.
[ 10.126] Entry deleted from font path.
[ 10.126] (WW) The directory "/usr/share/fonts/Type1" does not exist.
[ 10.126] Entry deleted from font path.
[ 10.126] (WW) The directory "/usr/share/fonts/100dpi" does not exist.
[ 10.126] Entry deleted from font path.
[ 10.126] (WW) The directory "/usr/share/fonts/75dpi" does not exist.
[ 10.126] Entry deleted from font path.
[ 10.126] (==) FontPath set to:
/usr/share/fonts/TTF
[ 10.126] (==) ModulePath set to "/usr/lib/xorg/modules"
[ 10.126] (II) The server relies on udev to provide the list of input devices.
If no devices become available, reconfigure udev or disable AutoAddDevices.
[ 10.126] (II) Module ABI versions:
[ 10.126] X.Org ANSI C Emulation: 0.4
[ 10.126] X.Org Video Driver: 24.1
[ 10.126] X.Org XInput driver : 24.1
[ 10.126] X.Org Server Extension : 10.0
[ 10.127] (++) using VT number 7
[ 10.127] (II) systemd-logind: logind integration requires -keeptty and -keeptty was not provided, disabling logind integration
[ 10.129] (II) xfree86: Adding drm device (/dev/dri/card0)
[ 10.143] (II) xfree86: Adding drm device (/dev/dri/card1)
[ 10.144] (II) no primary bus or device found
[ 10.144] falling back to /sys/devices/platform/display-subsystem/drm/card0
[ 10.144] (II) LoadModule: "glx"
[ 10.144] (II) Loading /usr/lib/xorg/modules/extensions/libglx.so
[ 10.148] (II) Module glx: vendor="X.Org Foundation"
[ 10.148] compiled for 1.20.7, module version = 1.0.0
[ 10.148] ABI class: X.Org Server Extension, version 10.0
[ 10.148] (==) Matched modesetting as autoconfigured driver 0
[ 10.148] (==) Matched fbdev as autoconfigured driver 1
[ 10.148] (==) Assigned the driver to the xf86ConfigLayout
[ 10.148] (II) LoadModule: "modesetting"
[ 10.148] (II) Loading /usr/lib/xorg/modules/drivers/modesetting_drv.so
[ 10.149] (II) Module modesetting: vendor="X.Org Foundation"
[ 10.149] compiled for 1.20.7, module version = 1.20.7
[ 10.149] Module class: X.Org Video Driver
[ 10.149] ABI class: X.Org Video Driver, version 24.1
[ 10.149] (II) LoadModule: "fbdev"
[ 10.150] (WW) Warning, couldn't open module fbdev
[ 10.150] (EE) Failed to load module "fbdev" (module does not exist, 0)
[ 10.150] (II) modesetting: Driver for Modesetting Kernel Drivers: kms
[ 10.160] (II) modeset(0): using drv /dev/dri/card0
[ 10.160] (II) modeset(0): Creating default Display subsection in Screen section
"Default Screen Section" for depth/fbbpp 24/32
[ 10.160] (==) modeset(0): Depth 24, (==) framebuffer bpp 32
[ 10.160] (==) modeset(0): RGB weight 888
[ 10.160] (==) modeset(0): Default visual is TrueColor
[ 10.160] (II) Loading sub module "glamoregl"
[ 10.160] (II) LoadModule: "glamoregl"
[ 10.161] (II) Loading /usr/lib/xorg/modules/libglamoregl.so
[ 10.173] (II) Module glamoregl: vendor="X.Org Foundation"
[ 10.173] compiled for 1.20.7, module version = 1.0.1
[ 10.173] ABI class: X.Org ANSI C Emulation, version 0.4
[ 10.193] (EE)
[ 10.193] (EE) Backtrace:
[ 10.193] (EE)
[ 10.193] (EE) Segmentation fault at address 0xdda8
[ 10.193] (EE)
Fatal server error:
[ 10.193] (EE) Caught signal 11 (Segmentation fault). Server aborting
[ 10.193] (EE)
[ 10.193] (EE)
Please consult the The X.Org Foundation support
at http://wiki.x.org
for help.
[ 10.193] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
[ 10.193] (EE)
[ 10.200] (EE) Server terminated with error (1). Closing log file.
Does anyone have any idea what might be causing this? I have run memtester (as suggested in the comments) and it didn't identify any issues with the RAM. The system seems perfectly stable in a console terminal.
It's possible the issue is just a bad package from Arch ARM, which has broken the system during the last update. Does anyone have any thoughts on what package is likely to be broken? (if so, I'll try rolling back)
|
As posted in comments above:
The Xorg log file indicates that glamoregl is crashing, pointing to an issue with hardware acceleration.
Temporary workaround: start X while disabling GLX, based on this post, i.e.:
startx -- :2 vt2 -extension GLX
One thing I was wondering is whether you have a specific proprietary/open source driver for your GPU (I understand it should be Mali graphics T764 for your model). This post suggests xf86-video-armsoc-rockchip and veyron-libgl. Possibly, I would also have a look at developer.arm.com
| X11 crashing on login (Arch ARM) |
1,458,970,997,000 |
I recently ran into some difficulty: I want to write a script that automatically logs in to nested servers and collects some info on each of them (specifically, using nvidia-smi to collect GPU usage info on each machine).
the nested server structure is like:
user@boss(user@machine1, user@machine2, user@machine3, ...)
Normally we have to ssh into user@boss, then ssh to a specific machine to do our work, but this is not convenient for monitoring GPU usage on all machines. I tried to write a script like:
sshpass -p "xxxx" ssh -o StrictHostKeyChecking=no [email protected]
for v in machine1 machine2
do
sshpass -p "xxxx" ssh -o StrictHostKeyChecking=no v
echo $v
nvidia-smi
done
But it only logs into user@boss. I'm not familiar with server stuff - is user@boss the root node, and machine1, machine2, ... child nodes? Can someone help? (Note: I don't have root privileges.)
UPDATE: the servers, including user@boss and user@machine1, user@machine2, ..., all don't have sshpass installed; only ssh is supported.
|
Don't use password authentication. Use public-key authentication only, and have good, strong passphrases for your ssh keys.
See Why is using an SSH key more secure than using passwords? and the Linked and Related posts for interesting discussions about keys vs passwords.
You can configure ssh to always connect to a remote host using a proxy host.
e.g. in your ~/.ssh/config:
Host machine1 machine2 machine3
ProxyJump user@boss
then ssh machine1 will always connect via boss.
From man ssh_config:
ProxyJump
Specifies one or more jump proxies as [user@]host[:port].
Multiple proxies may be separated by comma characters and will be
visited sequentially.
Setting this option will cause ssh(1) to connect to the target host by
first making a ssh(1) connection to the specified ProxyJump host and
then establishing a TCP forwarding to the ultimate target from there.
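With the ProxyJump configuration in place, the monitoring loop from the question collapses to plain ssh calls. A sketch (the host names are assumptions; SSH_CMD defaults to "echo ssh" so that running the script as-is only prints the commands it would run — set SSH_CMD=ssh for a real run with your keys loaded):

```shell
#!/bin/sh
# Dry-run sketch: print (or, with SSH_CMD=ssh, actually run) nvidia-smi on
# each GPU host; each connection goes through the jump host via ProxyJump.
SSH_CMD="${SSH_CMD:-echo ssh}"
for h in machine1 machine2 machine3; do
    printf '== %s ==\n' "$h"
    $SSH_CMD "$h" nvidia-smi
done
```

With key-based authentication set up on boss and the inner machines, this collects the GPU info from every host in one pass with no passwords typed.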
| ssh to nested server and collect some information |
1,458,970,997,000 |
Many of my programs are not running, with this error:
get chip id failed: -1 [13]
param: 4, val: 0
[intel_init_bufmgr:1189] Error initializing buffer manager.
Segmentation fault
When I try running glxinfo this is what I get:
Xlib: extension "GLX" missing on display ":0".
Error: couldn't find RGB GLX visual or fbconfig
I have two GPU's one Integrated intel, and another AMD Radeon 6490hd with open source radeon drivers on Debian testing.
I can't even log on into KDE and Gnome, but I can log on into i3, lxde and dwm.
Update:
Here is my Xorg.0.log:
http://pastebin.com/gJkFLAh7
Update 2:
It seems I was unable to update any of the xserver-xorg-video-* packages because they wanted the xorg-abi-20, even though I already had xorg-abi-23
NOw I updated those manually with gdebi.
sudo update-glx --config-glx gives me:
There are 2 choices for the alternative glx (providing /usr/lib/glx).
Selection Path Priority Status
------------------------------------------------------------
* 0 /usr/lib/nvidia 100 auto mode
1 /usr/lib/mesa-diverted 5 manual mode
2 /usr/lib/nvidia 100 manual mode
Note: I have an integrated Intel and a Radeon HD6490. I have two monitors, but I get the same issues with only one connected.
|
As the log shows, you have installed the GLX module for NVIDIA cards,
(II) LoadModule: "glx"
(II) Loading /usr/lib/xorg/modules/linux/libglx.so
(II) Module glx: vendor="NVIDIA Corporation"
compiled for 4.0.2, module version = 1.0.0
Module class: X.Org Server Extension
(II) NVIDIA GLX Module 375.26 Thu Dec 8 17:59:51 PST 2016
which only works for NVIDIA cards and nothing else. OTOH, both the modesetting driver for the Intel card and the radeon driver get initialized. One Monitor is connected to the HDMI output of the Intel card, the Radeon driver only has a VGA output, but doesn't get EDID information for it, so I'm not sure if anything is connected to that.
Install the correct GLX packages (AFAIK, libgl1-mesa-* for all Intel cards, at least that's what I use for my Intel card, and for the Radeon card as well), and verify in the log that they work.
If your second monitor is actually connected to the Intel card and not the Radeon, you might consider disabling the Radeon card.
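A quick way to verify which GLX module the server actually loaded is to grep the vendor line out of the Xorg log. The sketch below runs against an embedded sample of the log lines quoted above; on a real system, point grep at /var/log/Xorg.0.log instead (or ~/.local/share/xorg/Xorg.0.log when X runs rootless):

```shell
# Sample of the relevant Xorg.0.log lines from this question; on a live
# system replace /tmp/xorg-sample.log with the real log path.
cat > /tmp/xorg-sample.log <<'EOF'
(II) LoadModule: "glx"
(II) Loading /usr/lib/xorg/modules/linux/libglx.so
(II) Module glx: vendor="NVIDIA Corporation"
EOF
# The vendor string tells you whose GLX is in use; seeing "NVIDIA
# Corporation" on Intel/Radeon hardware is exactly the mismatch
# diagnosed in this answer.
grep -A2 'LoadModule: "glx"' /tmp/xorg-sample.log | grep 'vendor='
```

After installing the Mesa GLX packages and restarting X, the same grep should report a non-NVIDIA vendor.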
| GLX problem, many programs not running |
1,458,970,997,000 |
I was having some problems with my laptop (asked a related question yesterday: http://elementaryos.org/answers/after-locking-the-computer-screens-look-all-messed-up) regarding some dual-screen issues. This morning my laptop fan was working at full speed. Looking for some help I found a Q&A that made some sense (I thought): http://elementaryos.org/answers/luna-running-hot-on-my-laptop-1 , so I followed the instructions:
Did this:
sudo apt-get install linux-generic-lts-raring
Then this:
wget http://launchpadlibrarian.net/121675171/fixplymouth
chmod +x fixplymouth
./fixplymouth
After rebooting, my dual-screen (which used to work fine) stopped working, having this error message when I tried to change second monitor settings:
"required virtual size does not fit available size: requested=(...), minimum=(...), maximum=(...)"
I tried installing AMD Catalyst drivers for my GPU (which apparently went fine, but at the end it said that something went wrong) and when I tried to reboot, I could not get to the desktop anymore: I just get a black screen with the keyboard cursor blinking on the top left corner of the screen.
Any ideas on how to solve this (if there is something I can do :( )?
|
Finally I managed to fix one of my problems: I can now see my desktop the same way I had it before installing the ATI drivers. I did it by running:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get remove --purge fglrx
Found this solution here. Apparently, having elementary os drivers and ATI drivers gave me some conflict so I had to uninstall one of them.
Thanks to @JohnWHSmith for leading me to the solution ;)
| elementary os will not start after installing GPU drivers |
1,718,580,146,000 |
I have an Asus gaming laptop and I want to wipe it and use it with Linux (I'm a new user), but there are some problems.
The main one is that Asus doesn't provide Armoury Crate for Linux, and without some kind of controller (for fan speed, disabling the Nvidia GPU, keyboard lighting, etc.) it's not a pleasant experience. The only thing I found is asusctl, but it can't control everything and only supports Fedora; and G-Helper (better than Armoury Crate itself) won't support Linux.
Is there any suggestion?
PS: the best experience I had among the distros I tested was Pop!_OS, because of the Nvidia driver.
|
I'm sorry to disappoint, but I highly doubt that there is an all-in-one app to control everything. You will most likely have to cobble together a bunch of different programs for different pieces of hardware.
For the keyboard RGB, OpenRGB is available on Linux and seems to have decent support for asus hardware.
For fan speed and GPU switching, try taking a look at this archwiki article. Keep in mind that this article is written with Arch linux in mind, but most of what it says should be applicable to any distro.
What I can tell you is that setting up this kind of thing definitely won't be a plug-and-play experience, especially for a novice user. We often joke that getting nvidia drivers to work requires bringing a fragment of your sanity as sacrifice to Jensen; this is doubly true for laptops.
Finally, if there is something that you absolutely cannot find a software controller for, you may have to dual-boot Linux and Windows, and just switch into windows and use Armoury Crate. If the device is actually using an internal USB connection, it might also be possible to run Windows in a virtual machine (I recommend qemu) and temporarily route the device into the VM to change settings.
EDIT: If you do decide to dual boot, please note that Linux (and Windows) has a feature called hibernate, which lets you turn off your computer without losing your work: it saves your RAM to disk before powering off. It's very useful if you end up switching operating systems often, or going into the BIOS to change settings on the fly. For hibernate to work, the swap file or swap partition must be at least as large as your RAM, so size it accordingly when installing Linux (twice the size of your RAM is the commonly recommended amount).
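The swap-size requirement is easy to check on an already-installed system. Here is a minimal sketch that reads /proc/meminfo (Linux-only; both values are reported in kB):

```shell
# Hibernate needs SwapTotal >= MemTotal (roughly: the whole RAM image
# must fit in swap). Both figures come from /proc/meminfo in kB.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
if [ "$swap_kb" -ge "$mem_kb" ]; then
  echo "swap is large enough for hibernate"
else
  echo "swap is too small for hibernate ($swap_kb kB < $mem_kb kB)"
fi
```

Note this is only the size check; whether hibernate actually works also depends on the kernel, bootloader resume= configuration, and Secure Boot state.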
| Hardware controller for Laptop in Linux |
1,718,580,146,000 |
This week I returned to my job after a 2-week break and discovered an important problem: none of my 3 monitors is responding. All of them keep displaying only "No signal".
I have tried multiple solutions, e.g. keeping only one monitor connected (I tried both DP and HDMI), which did not help. After that I tried connecting all the monitors to another PC, where they worked without any issues (so the problem is not the monitors themselves or the cables).
After that, I've tried restarting gdm3 service with command below:
$ sudo service gdm3 restart
Which also did not solve anything, so the last option was restarting the whole PC:
$ sudo reboot
Some important PC specs:
inxi -Fx
CPU: Intel Core i7-3930K
GPU: Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (I'm not quite sure, which one is inside)
MB: Gigabyte X79-UD3
OS: Linux 6.0.0-0.deb11.6-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.0.12-1~bpo11+1 (2022-12-19) x86_64 GNU/Linux
Interestingly, some time ago I had similar issues on my home PC (Ubuntu 22.04, Core i5, RTX 2060: the PC could not drive the HDMI display, although the second DP display was working normally). There I'm using the GPU driver directly from Nvidia, and after I updated the drivers the issue disappeared.
So, my question is: could the Radeon GPU be dead (and in that case, would the PC even boot up?), or might it be a driver issue that updating would resolve?
I'd be more than happy for any advice.
Best wishes,
Ondrej.
|
Yesterday I solved it by first stopping the gdm3 service with:
$ sudo systemctl stop gdm3
After that, I started the gdm3 service again with:
$ sudo systemctl restart gdm3.service
All three monitors are now working.
| DP/HDMI monitors show "no signal" - Debian 11 |
1,718,580,146,000 |
I recently bought a new Lenovo Ideapad Slim 3 laptop and I am having problems getting the amdgpu driver working properly in Arch Linux. The GPU seems to work fine straight away with a Mint live USB (glxgears plays, etc.); however, in the Arch system I am trying to install to the SSD, I get this error with glxinfo -B:
$ glxinfo -B
name of display: :0
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 151 (GLX)
Minor opcode of failed request: 24 (X_GLXCreateNewContext)
Value in failed request: 0x0
Serial number of failed request: 37
Current serial number in output stream: 38
I can't see any errors in the dmesg log relating to [drm] and amdgpu - the messages in Arch seem very similar to those in Mint. However, in Arch I see the following error in my Xorg.0.log file:
[ 42.568] (II) Loading sub module "glamoregl"
[ 42.568] (II) LoadModule: "glamoregl"
[ 42.568] (II) Loading /usr/lib/xorg/modules/libglamoregl.so
[ 42.572] (II) Module glamoregl: vendor="X.Org Foundation"
[ 42.572] compiled for 1.21.1.8, module version = 1.0.1
[ 42.572] ABI class: X.Org ANSI C Emulation, version 0.4
[ 42.577] (EE) AMDGPU(0): eglGetDisplay() failed
[ 42.577] (EE) AMDGPU(0): glamor detected, failed to initialize EGL.
[ 42.577] (WW) AMDGPU(0): amdgpu_glamor_pre_init returned FALSE, using ShadowFB
The Xorg.0.log file for the Mint live USB doesn't show this error:
[ 17.992] (II) Loading sub module "glamoregl"
[ 17.992] (II) LoadModule: "glamoregl"
[ 17.992] (II) Loading /usr/lib/xorg/modules/libglamoregl.so
[ 17.995] (II) Module glamoregl: vendor="X.Org Foundation"
[ 17.995] compiled for 1.21.1.3, module version = 1.0.1
[ 17.995] ABI class: X.Org ANSI C Emulation, version 0.4
[ 18.028] (II) AMDGPU(0): glamor X acceleration enabled on AMD RENOIR (LLVM 13.0.1, DRM 3.42, 5.15.0-56-generic)
[ 18.028] (II) AMDGPU(0): glamor detected, initialising EGL layer.
It seems likely this error is related to what is causing the problem. Does anyone know what might be causing this issue between amdgpu and glamor in Arch?
It's a brand new laptop, with an AMD Ryzen 5 7530U CPU, with integrated Radeon graphics.
|
I have managed to resolve the issue myself. The Arch system was copied over from a previous machine that had an NVidia GPU, and there were some graphics packages still installed relating to NVidia. I removed those and now the 3D acceleration seems to be working properly (both glxinfo and glxgears work). The old NVidia packages were:
nvidia-340xx-dkms
nvidia-340xx-utils
opencl-nvidia-340xx
xf86-video-nouveau
ffnvcodec-headers
Hopefully this info might help, if someone else has a similar problem in future.
| Problem with AMD integrated gpu on new Lenovo laptop |
1,718,580,146,000 |
I've been searching a lot about this, but it seems that no one knows how the Hive OS guys do this trick. Many places and articles say that the Nvidia Linux driver doesn't support core voltage controls, which seems true; however, Hive OS can do it, and it's based on Ubuntu 16.04 LTS.
Any ideas how to undervolt Nvidia GPUs in Ubuntu?
|
Voltage adjustment is not available for discrete NVIDIA GPUs under Linux.
It is available only for AMD GPUs.
https://hiveos.farm/getting_started-start_oc/#overclocking-nvidia-gpus
| Ubuntu vs HiveOS NVIDIA GPU Undervolting, how? |
1,718,580,146,000 |
I am using Unity3D on Arch Linux: https://wiki.archlinux.org/title/Unity3D for game development.
I have an Nvidia GTX 1650. All my nvidia packages are up to date (tensorflow-gpu, for example, works fine). But when I run a game within Unity3D, it does not use the GPU at all.
How can I instruct unity3D to use the GPU when developing games?
Details of my GPU below:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 465.27 Driver Version: 465.27 CUDA Version: 11.3 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| N/A 53C P8 2W / N/A | 4MiB / 3914MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 615 G /usr/lib/Xorg 4MiB |
+-----------------------------------------------------------------------------+
|
I found that the issue was due to my laptop's hybrid Intel/Nvidia setup defaulting to the integrated Intel graphics instead of the dedicated card.
This can be fixed with Nvidia Optimus PRIME render offloading, which instructs the machine to render particular applications on the dedicated GPU:
sudo pacman -S nvidia nvidia-prime
prime-run unityhub
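Note that prime-run is a thin wrapper: as far as I can tell, it just exports the PRIME render-offload environment variables before launching the program, so you can get the same effect manually. The variable names below are assumed from the nvidia-prime wrapper script (it also sets a Vulkan layer variable for Vulkan applications):

```shell
# Roughly what `prime-run <cmd>` does before exec'ing the command:
# export the GLX offload variables. Echoed here just to show the values.
export __NV_PRIME_RENDER_OFFLOAD=1
export __GLX_VENDOR_LIBRARY_NAME=nvidia
echo "$__NV_PRIME_RENDER_OFFLOAD $__GLX_VENDOR_LIBRARY_NAME"  # prints "1 nvidia"
```

You can verify the offload is working with prime-run glxinfo | grep "OpenGL renderer", which should report the GeForce card instead of the Intel IGP.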
| Use Nvidia GPU with Unity3D for game development |
1,718,580,146,000 |
Until version 87.0.4280.141, smooth scrolling worked without issues on my Arch Linux system.
After updating to newer versions, smooth scrolling does not work at all — but when I resize the window to half of the screen, smooth scrolling seems fine.
I am using an Nvidia GPU and my monitor resolution is 3840x2160.
How can it be fixed?
glxinfo:
name of display: :0
display: :0 screen: 0
direct rendering: Yes
server glx vendor string: NVIDIA Corporation
server glx version string: 1.4
server glx extensions:
GLX_ARB_context_flush_control, GLX_ARB_create_context,
GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile,
GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float,
GLX_ARB_multisample, GLX_EXT_buffer_age,
GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile,
GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_libglvnd,
GLX_EXT_stereo_tree, GLX_EXT_swap_control, GLX_EXT_swap_control_tear,
GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating,
GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer,
GLX_NV_multigpu_context, GLX_NV_robustness_video_memory_purge,
GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control,
GLX_SGI_video_sync
client glx vendor string: NVIDIA Corporation
client glx version string: 1.4
client glx extensions:
GLX_ARB_context_flush_control, GLX_ARB_create_context,
GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile,
GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float,
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_buffer_age,
GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile,
GLX_EXT_fbconfig_packed_float, GLX_EXT_framebuffer_sRGB,
GLX_EXT_import_context, GLX_EXT_stereo_tree, GLX_EXT_swap_control,
GLX_EXT_swap_control_tear, GLX_EXT_texture_from_pixmap,
GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_NV_copy_buffer,
GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer,
GLX_NV_multigpu_context, GLX_NV_multisample_coverage,
GLX_NV_robustness_video_memory_purge, GLX_NV_swap_group,
GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control,
GLX_SGI_video_sync
GLX version: 1.4
GLX extensions:
GLX_ARB_context_flush_control, GLX_ARB_create_context,
GLX_ARB_create_context_no_error, GLX_ARB_create_context_profile,
GLX_ARB_create_context_robustness, GLX_ARB_fbconfig_float,
GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_buffer_age,
GLX_EXT_create_context_es2_profile, GLX_EXT_create_context_es_profile,
GLX_EXT_framebuffer_sRGB, GLX_EXT_import_context, GLX_EXT_stereo_tree,
GLX_EXT_swap_control, GLX_EXT_swap_control_tear,
GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating,
GLX_NV_copy_image, GLX_NV_delay_before_swap, GLX_NV_float_buffer,
GLX_NV_multigpu_context, GLX_NV_robustness_video_memory_purge,
GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGI_swap_control,
GLX_SGI_video_sync
Memory info (GL_NVX_gpu_memory_info):
Dedicated video memory: 4096 MB
Total available memory: 4096 MB
Currently available dedicated video memory: 3129 MB
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce MX130/PCIe/SSE2
OpenGL core profile version string: 4.6.0 NVIDIA 460.39
OpenGL core profile shading language version string: 4.60 NVIDIA
OpenGL core profile context flags: (none)
OpenGL core profile profile mask: core profile
OpenGL core profile extensions:
GL_AMD_multi_draw_indirect, GL_AMD_seamless_cubemap_per_texture,
GL_ARB_ES2_compatibility, GL_ARB_ES3_1_compatibility,
GL_ARB_ES3_2_compatibility, GL_ARB_ES3_compatibility,
GL_ARB_arrays_of_arrays, GL_ARB_base_instance, GL_ARB_bindless_texture,
GL_ARB_blend_func_extended, GL_ARB_buffer_storage,
GL_ARB_clear_buffer_object, GL_ARB_clear_texture, GL_ARB_clip_control,
GL_ARB_color_buffer_float, GL_ARB_compressed_texture_pixel_storage,
GL_ARB_compute_shader, GL_ARB_compute_variable_group_size,
GL_ARB_conditional_render_inverted, GL_ARB_conservative_depth,
GL_ARB_copy_buffer, GL_ARB_copy_image, GL_ARB_cull_distance,
GL_ARB_debug_output, GL_ARB_depth_buffer_float, GL_ARB_depth_clamp,
GL_ARB_depth_texture, GL_ARB_derivative_control,
GL_ARB_direct_state_access, GL_ARB_draw_buffers,
GL_ARB_draw_buffers_blend, GL_ARB_draw_elements_base_vertex,
GL_ARB_draw_indirect, GL_ARB_draw_instanced, GL_ARB_enhanced_layouts,
GL_ARB_explicit_attrib_location, GL_ARB_explicit_uniform_location,
GL_ARB_fragment_coord_conventions, GL_ARB_fragment_layer_viewport,
GL_ARB_fragment_program, GL_ARB_fragment_program_shadow,
GL_ARB_fragment_shader, GL_ARB_framebuffer_no_attachments,
GL_ARB_framebuffer_object, GL_ARB_framebuffer_sRGB,
GL_ARB_geometry_shader4, GL_ARB_get_program_binary,
GL_ARB_get_texture_sub_image, GL_ARB_gl_spirv, GL_ARB_gpu_shader5,
GL_ARB_gpu_shader_fp64, GL_ARB_gpu_shader_int64, GL_ARB_half_float_pixel,
GL_ARB_half_float_vertex, GL_ARB_imaging, GL_ARB_indirect_parameters,
GL_ARB_instanced_arrays, GL_ARB_internalformat_query,
GL_ARB_internalformat_query2, GL_ARB_invalidate_subdata,
GL_ARB_map_buffer_alignment, GL_ARB_map_buffer_range, GL_ARB_multi_bind,
GL_ARB_multi_draw_indirect, GL_ARB_multisample, GL_ARB_multitexture,
GL_ARB_occlusion_query, GL_ARB_occlusion_query2,
GL_ARB_parallel_shader_compile, GL_ARB_pipeline_statistics_query,
GL_ARB_pixel_buffer_object, GL_ARB_point_parameters, GL_ARB_point_sprite,
GL_ARB_polygon_offset_clamp, GL_ARB_program_interface_query,
GL_ARB_provoking_vertex, GL_ARB_query_buffer_object,
GL_ARB_robust_buffer_access_behavior, GL_ARB_robustness,
GL_ARB_sample_shading, GL_ARB_sampler_objects, GL_ARB_seamless_cube_map,
GL_ARB_seamless_cubemap_per_texture, GL_ARB_separate_shader_objects,
GL_ARB_shader_atomic_counter_ops, GL_ARB_shader_atomic_counters,
GL_ARB_shader_ballot, GL_ARB_shader_bit_encoding, GL_ARB_shader_clock,
GL_ARB_shader_draw_parameters, GL_ARB_shader_group_vote,
GL_ARB_shader_image_load_store, GL_ARB_shader_image_size,
GL_ARB_shader_objects, GL_ARB_shader_precision,
GL_ARB_shader_storage_buffer_object, GL_ARB_shader_subroutine,
GL_ARB_shader_texture_image_samples, GL_ARB_shader_texture_lod,
GL_ARB_shading_language_100, GL_ARB_shading_language_420pack,
GL_ARB_shading_language_include, GL_ARB_shading_language_packing,
GL_ARB_shadow, GL_ARB_sparse_buffer, GL_ARB_sparse_texture,
GL_ARB_spirv_extensions, GL_ARB_stencil_texturing, GL_ARB_sync,
GL_ARB_tessellation_shader, GL_ARB_texture_barrier,
GL_ARB_texture_border_clamp, GL_ARB_texture_buffer_object,
GL_ARB_texture_buffer_object_rgb32, GL_ARB_texture_buffer_range,
GL_ARB_texture_compression, GL_ARB_texture_compression_bptc,
GL_ARB_texture_compression_rgtc, GL_ARB_texture_cube_map,
GL_ARB_texture_cube_map_array, GL_ARB_texture_env_add,
GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar,
GL_ARB_texture_env_dot3, GL_ARB_texture_filter_anisotropic,
GL_ARB_texture_float, GL_ARB_texture_gather,
GL_ARB_texture_mirror_clamp_to_edge, GL_ARB_texture_mirrored_repeat,
GL_ARB_texture_multisample, GL_ARB_texture_non_power_of_two,
GL_ARB_texture_query_levels, GL_ARB_texture_query_lod,
GL_ARB_texture_rectangle, GL_ARB_texture_rg, GL_ARB_texture_rgb10_a2ui,
GL_ARB_texture_stencil8, GL_ARB_texture_storage,
GL_ARB_texture_storage_multisample, GL_ARB_texture_swizzle,
GL_ARB_texture_view, GL_ARB_timer_query, GL_ARB_transform_feedback2,
GL_ARB_transform_feedback3, GL_ARB_transform_feedback_instanced,
GL_ARB_transform_feedback_overflow_query, GL_ARB_transpose_matrix,
GL_ARB_uniform_buffer_object, GL_ARB_vertex_array_bgra,
GL_ARB_vertex_array_object, GL_ARB_vertex_attrib_64bit,
GL_ARB_vertex_attrib_binding, GL_ARB_vertex_buffer_object,
GL_ARB_vertex_program, GL_ARB_vertex_shader,
GL_ARB_vertex_type_10f_11f_11f_rev, GL_ARB_vertex_type_2_10_10_10_rev,
GL_ARB_viewport_array, GL_ARB_window_pos, GL_ATI_draw_buffers,
GL_ATI_texture_float, GL_ATI_texture_mirror_once,
GL_EXTX_framebuffer_mixed_formats, GL_EXT_Cg_shader, GL_EXT_abgr,
GL_EXT_bgra, GL_EXT_bindable_uniform, GL_EXT_blend_color,
GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate,
GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_compiled_vertex_array,
GL_EXT_depth_bounds_test, GL_EXT_direct_state_access,
GL_EXT_draw_buffers2, GL_EXT_draw_instanced, GL_EXT_draw_range_elements,
GL_EXT_fog_coord, GL_EXT_framebuffer_blit, GL_EXT_framebuffer_multisample,
GL_EXT_framebuffer_multisample_blit_scaled, GL_EXT_framebuffer_object,
GL_EXT_framebuffer_sRGB, GL_EXT_geometry_shader4,
GL_EXT_gpu_program_parameters, GL_EXT_gpu_shader4,
GL_EXT_import_sync_object, GL_EXT_memory_object, GL_EXT_memory_object_fd,
GL_EXT_multi_draw_arrays, GL_EXT_multiview_texture_multisample,
GL_EXT_multiview_timer_query, GL_EXT_packed_depth_stencil,
GL_EXT_packed_float, GL_EXT_packed_pixels, GL_EXT_pixel_buffer_object,
GL_EXT_point_parameters, GL_EXT_polygon_offset_clamp,
GL_EXT_provoking_vertex, GL_EXT_rescale_normal, GL_EXT_secondary_color,
GL_EXT_semaphore, GL_EXT_semaphore_fd, GL_EXT_separate_shader_objects,
GL_EXT_separate_specular_color, GL_EXT_shader_image_load_formatted,
GL_EXT_shader_image_load_store, GL_EXT_shader_integer_mix,
GL_EXT_shadow_funcs, GL_EXT_stencil_two_side, GL_EXT_stencil_wrap,
GL_EXT_texture3D, GL_EXT_texture_array, GL_EXT_texture_buffer_object,
GL_EXT_texture_compression_dxt1, GL_EXT_texture_compression_latc,
GL_EXT_texture_compression_rgtc, GL_EXT_texture_compression_s3tc,
GL_EXT_texture_cube_map, GL_EXT_texture_edge_clamp,
GL_EXT_texture_env_add, GL_EXT_texture_env_combine,
GL_EXT_texture_env_dot3, GL_EXT_texture_filter_anisotropic,
GL_EXT_texture_integer, GL_EXT_texture_lod, GL_EXT_texture_lod_bias,
GL_EXT_texture_mirror_clamp, GL_EXT_texture_object, GL_EXT_texture_sRGB,
GL_EXT_texture_sRGB_R8, GL_EXT_texture_sRGB_decode,
GL_EXT_texture_shadow_lod, GL_EXT_texture_shared_exponent,
GL_EXT_texture_storage, GL_EXT_texture_swizzle, GL_EXT_timer_query,
GL_EXT_transform_feedback2, GL_EXT_vertex_array, GL_EXT_vertex_array_bgra,
GL_EXT_vertex_attrib_64bit, GL_EXT_window_rectangles,
GL_EXT_x11_sync_object, GL_IBM_rasterpos_clip,
GL_IBM_texture_mirrored_repeat, GL_KHR_blend_equation_advanced,
GL_KHR_blend_equation_advanced_coherent, GL_KHR_context_flush_control,
GL_KHR_debug, GL_KHR_no_error, GL_KHR_parallel_shader_compile,
GL_KHR_robust_buffer_access_behavior, GL_KHR_robustness,
GL_KHR_shader_subgroup, GL_KTX_buffer_region, GL_NVX_conditional_render,
GL_NVX_gpu_memory_info, GL_NVX_nvenc_interop, GL_NVX_progress_fence,
GL_NV_ES1_1_compatibility, GL_NV_ES3_1_compatibility,
GL_NV_alpha_to_coverage_dither_control, GL_NV_bindless_multi_draw_indirect,
GL_NV_bindless_multi_draw_indirect_count, GL_NV_bindless_texture,
GL_NV_blend_equation_advanced, GL_NV_blend_equation_advanced_coherent,
GL_NV_blend_minmax_factor, GL_NV_blend_square, GL_NV_command_list,
GL_NV_compute_program5, GL_NV_conditional_render,
GL_NV_copy_depth_to_color, GL_NV_copy_image, GL_NV_depth_buffer_float,
GL_NV_depth_clamp, GL_NV_draw_texture, GL_NV_draw_vulkan_image,
GL_NV_explicit_multisample, GL_NV_feature_query, GL_NV_fence,
GL_NV_float_buffer, GL_NV_fog_distance, GL_NV_fragment_program,
GL_NV_fragment_program2, GL_NV_fragment_program_option,
GL_NV_framebuffer_multisample_coverage, GL_NV_geometry_shader4,
GL_NV_gpu_multicast, GL_NV_gpu_program4, GL_NV_gpu_program4_1,
GL_NV_gpu_program5, GL_NV_gpu_program5_mem_extended,
GL_NV_gpu_program_fp64, GL_NV_gpu_shader5, GL_NV_half_float,
GL_NV_internalformat_sample_query, GL_NV_light_max_exponent,
GL_NV_multisample_coverage, GL_NV_multisample_filter_hint,
GL_NV_occlusion_query, GL_NV_packed_depth_stencil,
GL_NV_parameter_buffer_object, GL_NV_parameter_buffer_object2,
GL_NV_path_rendering, GL_NV_pixel_data_range, GL_NV_point_sprite,
GL_NV_primitive_restart, GL_NV_query_resource, GL_NV_query_resource_tag,
GL_NV_register_combiners, GL_NV_register_combiners2,
GL_NV_robustness_video_memory_purge, GL_NV_shader_atomic_counters,
GL_NV_shader_atomic_float, GL_NV_shader_atomic_int64,
GL_NV_shader_buffer_load, GL_NV_shader_storage_buffer_object,
GL_NV_shader_subgroup_partitioned, GL_NV_shader_thread_group,
GL_NV_shader_thread_shuffle, GL_NV_texgen_reflection,
GL_NV_texture_barrier, GL_NV_texture_compression_vtc,
GL_NV_texture_env_combine4, GL_NV_texture_multisample,
GL_NV_texture_rectangle, GL_NV_texture_rectangle_compressed,
GL_NV_texture_shader, GL_NV_texture_shader2, GL_NV_texture_shader3,
GL_NV_timeline_semaphore, GL_NV_transform_feedback,
GL_NV_transform_feedback2, GL_NV_uniform_buffer_unified_memory,
GL_NV_vdpau_interop, GL_NV_vdpau_interop2, GL_NV_vertex_array_range,
GL_NV_vertex_array_range2, GL_NV_vertex_attrib_integer_64bit,
GL_NV_vertex_buffer_unified_memory, GL_NV_vertex_program,
GL_NV_vertex_program1_1, GL_NV_vertex_program2,
GL_NV_vertex_program2_option, GL_NV_vertex_program3, GL_OVR_multiview,
GL_OVR_multiview2, GL_S3_s3tc, GL_SGIS_generate_mipmap,
GL_SGIS_texture_lod, GL_SGIX_depth_texture, GL_SGIX_shadow,
GL_SUN_slice_accum
OpenGL version string: 4.6.0 NVIDIA 460.39
OpenGL shading language version string: 4.60 NVIDIA
OpenGL context flags: (none)
OpenGL profile mask: (none)
OpenGL extensions:
GL_AMD_multi_draw_indirect, GL_AMD_seamless_cubemap_per_texture,
GL_ARB_ES2_compatibility, GL_ARB_ES3_1_compatibility,
GL_ARB_ES3_2_compatibility, GL_ARB_ES3_compatibility,
GL_ARB_arrays_of_arrays, GL_ARB_base_instance, GL_ARB_bindless_texture,
GL_ARB_blend_func_extended, GL_ARB_buffer_storage,
GL_ARB_clear_buffer_object, GL_ARB_clear_texture, GL_ARB_clip_control,
GL_ARB_color_buffer_float, GL_ARB_compatibility,
GL_ARB_compressed_texture_pixel_storage, GL_ARB_compute_shader,
GL_ARB_compute_variable_group_size, GL_ARB_conditional_render_inverted,
GL_ARB_conservative_depth, GL_ARB_copy_buffer, GL_ARB_copy_image,
GL_ARB_cull_distance, GL_ARB_debug_output, GL_ARB_depth_buffer_float,
GL_ARB_depth_clamp, GL_ARB_depth_texture, GL_ARB_derivative_control,
GL_ARB_direct_state_access, GL_ARB_draw_buffers,
GL_ARB_draw_buffers_blend, GL_ARB_draw_elements_base_vertex,
GL_ARB_draw_indirect, GL_ARB_draw_instanced, GL_ARB_enhanced_layouts,
GL_ARB_explicit_attrib_location, GL_ARB_explicit_uniform_location,
GL_ARB_fragment_coord_conventions, GL_ARB_fragment_layer_viewport,
GL_ARB_fragment_program, GL_ARB_fragment_program_shadow,
GL_ARB_fragment_shader, GL_ARB_framebuffer_no_attachments,
GL_ARB_framebuffer_object, GL_ARB_framebuffer_sRGB,
GL_ARB_geometry_shader4, GL_ARB_get_program_binary,
GL_ARB_get_texture_sub_image, GL_ARB_gl_spirv, GL_ARB_gpu_shader5,
GL_ARB_gpu_shader_fp64, GL_ARB_gpu_shader_int64, GL_ARB_half_float_pixel,
GL_ARB_half_float_vertex, GL_ARB_imaging, GL_ARB_indirect_parameters,
GL_ARB_instanced_arrays, GL_ARB_internalformat_query,
GL_ARB_internalformat_query2, GL_ARB_invalidate_subdata,
GL_ARB_map_buffer_alignment, GL_ARB_map_buffer_range, GL_ARB_multi_bind,
GL_ARB_multi_draw_indirect, GL_ARB_multisample, GL_ARB_multitexture,
GL_ARB_occlusion_query, GL_ARB_occlusion_query2,
GL_ARB_parallel_shader_compile, GL_ARB_pipeline_statistics_query,
GL_ARB_pixel_buffer_object, GL_ARB_point_parameters, GL_ARB_point_sprite,
GL_ARB_polygon_offset_clamp, GL_ARB_program_interface_query,
GL_ARB_provoking_vertex, GL_ARB_query_buffer_object,
GL_ARB_robust_buffer_access_behavior, GL_ARB_robustness,
GL_ARB_sample_shading, GL_ARB_sampler_objects, GL_ARB_seamless_cube_map,
GL_ARB_seamless_cubemap_per_texture, GL_ARB_separate_shader_objects,
GL_ARB_shader_atomic_counter_ops, GL_ARB_shader_atomic_counters,
GL_ARB_shader_ballot, GL_ARB_shader_bit_encoding, GL_ARB_shader_clock,
GL_ARB_shader_draw_parameters, GL_ARB_shader_group_vote,
GL_ARB_shader_image_load_store, GL_ARB_shader_image_size,
GL_ARB_shader_objects, GL_ARB_shader_precision,
GL_ARB_shader_storage_buffer_object, GL_ARB_shader_subroutine,
GL_ARB_shader_texture_image_samples, GL_ARB_shader_texture_lod,
GL_ARB_shading_language_100, GL_ARB_shading_language_420pack,
GL_ARB_shading_language_include, GL_ARB_shading_language_packing,
GL_ARB_shadow, GL_ARB_sparse_buffer, GL_ARB_sparse_texture,
GL_ARB_spirv_extensions, GL_ARB_stencil_texturing, GL_ARB_sync,
GL_ARB_tessellation_shader, GL_ARB_texture_barrier,
GL_ARB_texture_border_clamp, GL_ARB_texture_buffer_object,
GL_ARB_texture_buffer_object_rgb32, GL_ARB_texture_buffer_range,
GL_ARB_texture_compression, GL_ARB_texture_compression_bptc,
GL_ARB_texture_compression_rgtc, GL_ARB_texture_cube_map,
GL_ARB_texture_cube_map_array, GL_ARB_texture_env_add,
GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar,
GL_ARB_texture_env_dot3, GL_ARB_texture_filter_anisotropic,
GL_ARB_texture_float, GL_ARB_texture_gather,
GL_ARB_texture_mirror_clamp_to_edge, GL_ARB_texture_mirrored_repeat,
GL_ARB_texture_multisample, GL_ARB_texture_non_power_of_two,
GL_ARB_texture_query_levels, GL_ARB_texture_query_lod,
GL_ARB_texture_rectangle, GL_ARB_texture_rg, GL_ARB_texture_rgb10_a2ui,
GL_ARB_texture_stencil8, GL_ARB_texture_storage,
GL_ARB_texture_storage_multisample, GL_ARB_texture_swizzle,
GL_ARB_texture_view, GL_ARB_timer_query, GL_ARB_transform_feedback2,
GL_ARB_transform_feedback3, GL_ARB_transform_feedback_instanced,
GL_ARB_transform_feedback_overflow_query, GL_ARB_transpose_matrix,
GL_ARB_uniform_buffer_object, GL_ARB_vertex_array_bgra,
GL_ARB_vertex_array_object, GL_ARB_vertex_attrib_64bit,
GL_ARB_vertex_attrib_binding, GL_ARB_vertex_buffer_object,
GL_ARB_vertex_program, GL_ARB_vertex_shader,
GL_ARB_vertex_type_10f_11f_11f_rev, GL_ARB_vertex_type_2_10_10_10_rev,
GL_ARB_viewport_array, GL_ARB_window_pos, GL_ATI_draw_buffers,
GL_ATI_texture_float, GL_ATI_texture_mirror_once,
GL_EXTX_framebuffer_mixed_formats, GL_EXT_Cg_shader, GL_EXT_abgr,
GL_EXT_bgra, GL_EXT_bindable_uniform, GL_EXT_blend_color,
GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate,
GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_compiled_vertex_array,
[… long list of supported GL_ARB/GL_EXT/GL_NV extensions from glxinfo trimmed …]
OpenGL ES profile version string: OpenGL ES 3.2 NVIDIA 460.39
OpenGL ES profile shading language version string: OpenGL ES GLSL ES 3.20
OpenGL ES profile extensions:
[… long list of supported OpenGL ES extensions from glxinfo trimmed …]
|
This fixed it for me:
$ pkill picom
but I'm still not sure why killing the picom compositing manager solved the problem.
| Choppy smooth scrolling after google chrome update |
1,718,580,146,000 |
I am trying to revive an old Asus eeepc 1215N (as donation for a student in the time of online classes), at first trying with Ubuntu 20.04. The computer "features" nvidia optimus (dual GPU) and is normally functional when booting (I can run BIOS setup and the boot manager).
The moment the Linux kernel takes over (kernel and initrd are loaded), the screen turns to garbage, as shown here:
The garbage stays constant (i.e. it does not look like damaged but progressing boot messages) and does not disappear even after an hour (by which time I'd expect a live USB to have booted into X11 already). I tried several Linux distros (Ubuntu 20.04, 18.04, 16.04, CentOS, Arch, Debian stable 32-bit netinstall, ...), and the same thing happens with all of them. I tried booting with nosplash, nomodeset, nouveau.blacklist=1 and i915.modeset=0, which I picked up at various forums, always with the same result.
The only case which did not result in this damage was FreeDOS (which reported "invalid opcode" at some point and froze, though...).
Wiki pages about installation of various distros on these machines (such as Arch) don't mention anything similar; they mostly deal with power-management of the dual GPU or how to fix broken X11 drivers.
Ideas?
|
For posterity: the issue was the bootloader setting the video mode for the Linux kernel; that is the moment when the freeze with the garbage screen happened. The trick to boot the machine is to:
have the bootloader use plain text mode (GRUB_TERMINAL="console" in /etc/default/grub for grub2); and
not change the video mode during the handover to the kernel (passing gfxpayload=text, or setting GRUB_GFXPAYLOAD_LINUX="text" in /etc/default/grub; for the older boot protocol (linux16), the equivalent is passing vga=normal on the kernel command-line).
Kernel modesetting, later during the boot itself, is not an issue at all.
Installer/live USB sticks with a graphical grub (such as Ubuntu 20.04) will not work, so the installation itself had to be done on a different machine, then chrooting into the installation, adjusting /etc/default/grub and running update-grub. After putting the disk into the 1215N, it boots flawlessly.
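Putting the two settings together, the relevant /etc/default/grub fragment looks roughly like this (a sketch; variable names are the standard grub2 ones, so adjust for your distribution and regenerate the config afterwards):

```shell
# /etc/default/grub -- keep grub in plain text mode and keep that mode
# across the handover to the kernel
GRUB_TERMINAL="console"
GRUB_GFXPAYLOAD_LINUX="text"
# then regenerate the config, e.g.:
#   update-grub            (Debian/Ubuntu)
#   grub-mkconfig -o /boot/grub/grub.cfg
```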
| Asus eeepc 1215N: console garbage after kernel boot |
1,380,615,765,000 |
I would like to compress a text file using gzip command line tool while keeping the original file. By default running the following command
gzip file.txt
results in modifying this file and renaming it file.txt.gz. instead of this behavior I would like to have this new compressed file in addition to the existing one file.txt. For now I am using the following command to do that
gzip -c file.txt > file.txt.gz
It works, but I am wondering why there is no easier solution for such a common task? Maybe I missed an option for that?
|
For GNU gzip 1.6 or above, FreeBSD and derivatives or recent versions of NetBSD, see don_cristi's answer.
With any version, you can use shell redirections as in:
gzip < file.txt > file.txt.gz
When not given any argument, gzip reads its standard input, compresses it and writes the compressed version to its standard output. As a bonus, when using shell redirections, you don't have to worry about files called "--help" or "-" (that latter one still being a problem for gzip -c --).
Another benefit over gzip -c file.txt > file.txt.gz is that if file.txt can't be opened, the command will fail without creating an empty file.txt.gz (or overwriting an existing file.txt.gz) and without running gzip at all.
A significant difference compared to gzip -k though is that there will be no attempt at copying the file.txt's metadata (ownership, permissions, modification time, name of uncompressed file) to file.txt.gz.
Also, if file.txt.gz already exists, it will be silently overwritten unless you have turned the noclobber option on in your shell (with set -o noclobber for instance in POSIX shells).
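A small illustration of that last point (a sketch; works in POSIX shells such as bash):

```shell
printf 'hello\n' > file.txt
gzip < file.txt > file.txt.gz        # creates the archive

set -o noclobber                     # now the shell refuses to truncate existing files
if ! gzip < file.txt > file.txt.gz 2>/dev/null; then
    echo 'refused to clobber existing file.txt.gz'
fi
```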
| How to tell gzip to keep original file? |
1,380,615,765,000 |
I have a file file.gz, when I try to unzip this file by using gunzip file.gz, it unzipped the file but only contains extracted and removes the file.gz file.
How can I unzip by keeping both unzipped file and zipped file?
|
Here are several alternatives:
Give gunzip the --keep option (version 1.6 or later)
-k --keep
Keep (don't delete) input files during compression or decompression.
gunzip -k file.gz
Pass the file to gunzip as stdin
gunzip < file.gz > file
Use zcat (or, on older systems, gzcat)
zcat file.gz > file
| Unzipping a .gz file without removing the gzipped file [duplicate] |
1,380,615,765,000 |
More and more tar archives use the xz format based on LZMA2 for compression instead of the traditional bzip2(bz2) compression. In fact kernel.org made a late "Good-bye bzip2" announcement, 27th Dec. 2013, indicating kernel sources would from this point on be released in both tar.gz and tar.xz format - and on the main page of the website what's directly offered is in tar.xz.
Are there any specific reasons explaining why this is happening and what is the relevance of gzip in this context?
|
For distributing archives over the Internet, the following things are generally a priority:
Compression ratio (i.e., how small the compressor makes the data);
Decompression time (CPU requirements);
Decompression memory requirements; and
Compatibility (how wide-spread the decompression program is)
Compression memory & CPU requirements aren't very important, because you can use a large fast machine for that, and you only have to do it once.
Compared to bzip2, xz has a better compression ratio and lower (better) decompression time. It, however—at the compression settings typically used—requires more memory to decompress[1] and is somewhat less widespread. Gzip uses less memory than either.
So, both gzip and xz format archives are posted, allowing you to pick:
Need to decompress on a machine with very limited memory (<32 MB): gzip. Given, not very likely when talking about kernel sources.
Need to decompress minimal tools available: gzip
Want to save download time and/or bandwidth: xz
There isn't really a realistic combination of factors that'd get you to pick bzip2. So it's being phased out.
I looked at compression comparisons in a blog post. I didn't attempt to replicate the results, and I suspect some of it has changed (mostly, I expect xz has improved, as it's the newest).
(There are some specific scenarios where a good bzip2 implementation may be preferable to xz: bzip2 can compress a file with lots of zeros or genome DNA sequences better than xz. Newer versions of xz now have an (optional) block mode which allows data recovery after the point of corruption, and parallel compression and [in theory] decompression. Previously, only bzip2 offered these.[2] However, none of these are relevant for kernel distribution.)
1: In archive size, xz -3 is around bzip2 -9. At that level, xz uses less memory to decompress. But xz -9 (as, e.g., used for Linux kernel tarballs) uses much more than bzip2 -9. (And even xz -0 needs more than gzip -9.)
2: F21 System Wide Change: lbzip2 as default bzip2 implementation
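As a rough, illustrative sketch of the size comparison (not a benchmark; the exact numbers depend entirely on the input, and bzip2/xz must be installed for their lines to appear):

```shell
# Generate some highly compressible sample data (~4.5 MB)
yes 'the quick brown fox jumps over the lazy dog' | head -n 100000 > sample.txt

for tool in gzip bzip2 xz; do
    if command -v "$tool" >/dev/null 2>&1; then
        "$tool" -9 -c sample.txt > "sample.$tool"
        printf '%-6s %10s bytes\n' "$tool" "$(wc -c < "sample.$tool")"
    fi
done
```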
| Why are tar archive formats switching to xz compression to replace bzip2 and what about gzip? |
1,380,615,765,000 |
root@server # tar fcz bkup.tar.gz /home/foo/
tar: Removing leading `/' from member names
How can I solve this problem and keep the / on file names ?
|
Use the --absolute-names or -P option to disable this feature.
tar fczP bkup.tar.gz /home/foo
tar fcz bkup.tar.gz --absolute-names /home/foo
| tar: Removing leading `/' from member names |
1,380,615,765,000 |
I am using this command on a 5GB archive
tar -zxvf archive.tar.gz /folder/in/archive
is this the correct way to do this? It seems to be taking forever with no command line output...
|
tar stores relative paths by default. GNU tar even says so if you try to store an absolute path:
tar -cf foo.tar /home/foo
tar: Removing leading `/' from member names
If you need to extract a particular folder, have a look at what's in the tar file:
tar -tvf foo.tar
And note the exact filename. In the case of my foo.tar file, I could extract /home/foo/bar by saying:
tar -xvf foo.tar home/foo/bar # Note: no leading slash
So no, the way you posted isn't (necessarily) the correct way to do it. You have to leave out the leading slash. If you want to simulate absolute paths, do cd / first and make sure you're the superuser. Also, this does the same:
tar -C / -xvf foo.tar home/foo/bar # -C is the ‘change directory’ option
There are very obvious, good reasons why tar converts paths to relative ones. One is the ability to restore an archive in places other than its original source. The other is security. You could extract an archive, expect its files to appear in your current working directory, and instead overwrite system files (or your own work) elsewhere by mistake.
Note: if you use the -P option, tar will archive absolute paths. So it always pays to check the contents of big archives before extracting.
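The whole round trip can be sketched as follows (hypothetical file names):

```shell
mkdir -p home/foo/bar
echo 'hello' > home/foo/bar/baz.txt
tar -cf foo.tar home/foo                 # members stored with relative paths

tar -tf foo.tar                          # inspect: names start with home/, no leading slash

mkdir -p restore
tar -C restore -xf foo.tar home/foo/bar  # extract just that folder, elsewhere
```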
| How do you extract a single folder from a large tar.gz archive? |
1,380,615,765,000 |
I have created zlib-compressed data in Python, like this:
import zlib
s = '...'
z = zlib.compress(s)
with open('/tmp/data', 'w') as f:
f.write(z)
(or one-liner in shell: echo -n '...' | python2 -c 'import sys,zlib; sys.stdout.write(zlib.compress(sys.stdin.read()))' > /tmp/data)
Now, I want to uncompress the data in shell. Neither zcat nor uncompress work:
$ cat /tmp/data | gzip -d -
gzip: stdin: not in gzip format
$ zcat /tmp/data
gzip: /tmp/data.gz: not in gzip format
$ cat /tmp/data | uncompress -
gzip: stdin: not in gzip format
It seems that I have created gzip-like file, but without any headers. Unfortunately I don't see any option to uncompress such raw data in gzip man page, and the zlib package does not contain any executable utility.
Is there a utility to uncompress raw zlib data?
|
It is also possible to decompress it using standard shell-script + gzip, if you don't have (or don't want to use) openssl or other tools. The trick is to prepend the gzip magic number and compression method to the actual data from zlib.compress:
printf "\x1f\x8b\x08\x00\x00\x00\x00\x00" |cat - /tmp/data |gzip -dc >/tmp/out
Edits:
@d0sboots commented: For RAW Deflate data, you need to add 2 more null bytes: → "\x1f\x8b\x08\x00\x00\x00\x00\x00\x00\x00"
This Q on SO gives more information about this approach. An answer there suggests that there is also an 8 byte footer.
Users @Vitali-Kushner and @mark-bessey reported success even with truncated files, so a gzip footer does not seem strictly required.
@tobias-kienzler suggested this function for the bashrc:
zlibd() (printf "\x1f\x8b\x08\x00\x00\x00\x00\x00" | cat - "$@" | gzip -dc)
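Putting it together, an end-to-end sketch (gzip will still complain about the trailer, since zlib uses an Adler-32 checksum rather than gzip's CRC-32, but by then the data has already been written out; the header bytes are written in octal so the printf is portable):

```shell
# Create raw zlib data, as in the question
python3 -c 'import sys, zlib; sys.stdout.buffer.write(zlib.compress(b"hello zlib\n"))' > data.zlib

# Prepend a fake gzip header (\x1f\x8b\x08 + 5 zero bytes, in octal)
# and decompress; ignore the checksum complaint at the end
printf '\037\213\010\0\0\0\0\0' | cat - data.zlib | gzip -dc 2>/dev/null > out.txt || true
cat out.txt
```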
| How to uncompress zlib data in UNIX? |
1,380,615,765,000 |
Is there a way to add/update a file in a tar.gz archive? Basically, I have an archive which contains a file at /data/data/com.myapp.backup/./files/settings.txt and I'd like to pull that file from the archive (already done) and push it back into the archive once the edit has been done. How can I accomplish this? Is it problematic because of the . in the path?
|
The tar file format is just a series of files concatenated together with a few headers. It's not a very complicated job to rip it apart, put your contents in, and put it back together. That being said, Jander described how tar as a program does not have the utility functions to do this, and there is the additional complication of compression, which has to be undone before and redone after making a change.
There are, however, tools for the job! There are at least two systems out there which will allow you to do a loopback mount of a compressed tar archive onto a folder, then make your changes in the file system. When you are done, unmount the folder and your compressed archive is ready to roll.
The first option would be the archivemount project for FUSE. Here is a tutorial on that. Your system probably already has FUSE, and if it doesn't, your distribution should have an option for it.
The other option is tarfs. It's simpler to use, but I've heard it has some trouble with corrupting bzip2 archives, so you might want to test that pretty thoroughly first.
| How to add/update a file to an existing tar.gz archive? |
1,380,615,765,000 |
When handling log files, some end up as gzipped files thanks to logrotate and others not. So when you try something like this:
$ zcat *
you end up with a command line like zcat xyz.log xyz.log.1 xyz.log.2.gz xyz.log.3.gz and then with:
gzip: xyz.log: not in gzip format
Is there a tool that will take the magic bytes, similar to how file works, and use zcat or cat depending on the outcome so that I can pipe the output to grep for example?
NB: I know I can script it, but I am asking whether there is a tool out there already.
|
Try it with -f or --force:
zcat -f -- *
Since zcat is just a simple script that runs
exec gzip -cd "$@"
with long options that would translate to
exec gzip --stdout --decompress "$@"
and, as per the man gzip (emphasize mine):
-f --force
Force compression or decompression even if the file has multiple links
or the corresponding file already exists, or if the compressed data is
read from or written to a terminal. If the input data is not in a format
recognized by gzip, and if the option --stdout is also given, copy the
input data without change to the standard output: let zcat behave as cat.
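In other words, with -f an uncompressed input is simply copied through unchanged, which is exactly what's wanted for a mixed log directory. A quick sketch (hypothetical file names):

```shell
printf 'plain text line\n' > xyz.log
printf 'compressed line\n' | gzip > xyz.log.1.gz

zcat -f xyz.log xyz.log.1.gz   # prints both lines, decompressing only the .gz
```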
Also:
so that I can pipe the output to grep for example
You could use zgrep for that:
zgrep -- PATTERN *
though see Stéphane's comment below.
| Is there a tool that combines zcat and cat transparently? |
1,380,615,765,000 |
I often download tarballs with wget from sourceforge.net.
The downloaded files then are named, e.g SQliteManager-1.2.4.tar.gz?r=http:%2F%2Fsourceforge.net%2Fprojects%2Fsqlitemanager%2Ffiles%2F&ts=1305711521&use_mirror=switch
When I try to
tar xzf SQliteManager-1.2.4.tar.gz\?r\=http\:%2F%2Fsourceforge.net%2Fprojects%2Fsqlitemanager%2Ffiles%2F\&ts\=1305711521\&use_mirror\=switch
I receive the following error message:
tar (child): Cannot connect to SQliteManager-1.2.4.tar.gz?r=http: resolve failed
gzip: stdin: unexpected end of file
tar: Child returned status 128
tar: Error is not recoverable: exiting now
After renaming the file to foo.tar.gz, the extraction works perfectly.
Is there a way to avoid having to rename the target file every time before extracting?
|
The reason for the error you are seeing can be found in the GNU tar documentation:
If the archive file name includes a
colon (‘:’), then it is assumed to be
a file on another machine[...]
That is, it is interpreting SQliteManager-1.2.4.tar.gz?r=http as a host name and trying to resolve it to an IP address, hence the "resolve failed" error.
That same documentation goes on to say:
If you need to use a file whose name
includes a colon, then the remote tape
drive behavior can be inhibited by
using the ‘--force-local’ option.
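A minimal demonstration with a hypothetical file name (GNU tar only treats the name as remote when the colon comes before the first slash, so prefixing ./ is another way to defeat the check):

```shell
printf 'hi\n' > file.txt

# Without --force-local, tar would treat 'name' before the ':' as a host name
tar -cf 'name:with-colon.tar' --force-local file.txt
tar -tf 'name:with-colon.tar' --force-local

# A '/' before the ':' also disables the remote-host interpretation:
tar -tf ./'name:with-colon.tar'
```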
| tar extraction depends on filename? |
1,380,615,765,000 |
I have a directory with plenty of .txt.gz files (where the names do not follow a specific pattern.)
What is the simplest way to gunzip them? I want to preserve their original names, so that they go from whatevz.txt.gz to whatevz.txt
|
How about just this?
$ gunzip *.txt.gz
gunzip will create a gunzipped file without the .gz suffix and remove the original file by default (see below for details). *.txt.gz will be expanded by your shell to all the files matching.
This last bit can get you into trouble if it expands to a very long list of files. In that case, try using find and -exec to do the job for you.
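The find-based variant might look like this (a sketch with hypothetical paths):

```shell
mkdir -p logs
printf 'hello\n' | gzip > logs/a.txt.gz
printf 'world\n' | gzip > logs/b.txt.gz

# Decompress every .txt.gz below logs/ without relying on shell globbing,
# so an arbitrarily long file list never overflows the command line
find logs -name '*.txt.gz' -exec gunzip {} +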
From the man page gzip(1):
gunzip takes a list of files on its command line and replaces each file
whose name ends with .gz, -gz, .z, -z, or _z (ignoring case) and which
begins with the correct magic number with an uncompressed file without the
original extension.
Note about 'original name'
gzip can store and restore the filename used at compression time. Even if you rename the compressed file, you may be surprised to find that it decompresses under the original name again.
From the gzip manpage:
By default, gzip keeps the original file name and timestamp in the compressed
file. These are used when decompressing the file with the -N option. This is
useful when the compressed file name was truncated or when the time stamp was
not preserved after a file transfer.
And these file names stored in metadata can also be viewed with file:
$ echo "foo" > myfile_orig
$ gzip myfile_orig
$ mv myfile_orig.gz myfile_new.gz
$ file myfile_new.gz
myfile_new.gz: gzip compressed data, was "myfile_orig", last modified: Mon Aug 5 08:46:39 2019, from Unix
$ gunzip myfile_new.gz # gunzip without -N
$ ls myfile_*
myfile_new
$ rm myfile_*
$ echo "foo" > myfile_orig
$ gzip myfile_orig
$ mv myfile_orig.gz myfile_new.gz
# gunzip with -N
$ gunzip -N myfile_new.gz # gunzip with -N
$ ls myfile_*
myfile_orig
| gunzip all .gz files in directory |
1,380,615,765,000 |
I need to create a tarball of a given directory. However, I need to make sure hidden files are included too (such as those beginning with .).
Will the following command automatically take the hidden files into account?
tar -cvzf packed.tar.gz mydir
If not, how can I make sure I include hidden files?
|
Yes, it will.
Files starting with . are not "hidden" in all contexts. They aren't expanded by *, and ls doesn't list them by default, but tar doesn't care about the leading .. (find doesn't care either.)
(Of course, this is one of those things that's easy to find out by experiment.)
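The experiment is quick to run (a sketch):

```shell
mkdir -p mydir
touch mydir/.hidden mydir/visible
tar -czf packed.tar.gz mydir
tar -tzf packed.tar.gz          # lists mydir/.hidden as well as mydir/visible
```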
| Will tar -cvzf packed.tar.gz mydir take hidden files into account? |
1,380,615,765,000 |
I have a few thousand files that are individually GZip compressed (passing of course the -n flag so the output is deterministic). They then go into a Git repository. I just discovered that for 3 of these files, Gzip doesn't produce the same output on macOS vs Linux. Here's an example:
macOS
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | shasum -a 256
0ac378465b576991e1c7323008efcade253ce1ab08145899139f11733187e455 -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip --fast -n | shasum -a 256
6e145c6239e64b7e28f61cbab49caacbe0dae846ce33d539bf5c7f2761053712 -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip -n | shasum -a 256
3562fd9f1d18d52e500619b4a5d5dfa709f5da8601b9dd64088fb5da8de7b281 -
$ gzip --version
Apple gzip 272.250.1
Linux
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | shasum -a 256
0ac378465b576991e1c7323008efcade253ce1ab08145899139f11733187e455 -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip --fast -n | shasum -a 256
10ac8b80af8d734ad3688aa6c7d9b582ab62cf7eda6bc1a0f08d6159cad96ddc -
$ cat Engine/Extras/ThirdPartyNotUE/NoRedist/EnsureIT/9.7.0/bin/finalizer | gzip -n | shasum -a 256
cbf249e3a35f62a4f3b13e2c91fe0161af5d96a58727d17cf7a62e0ac3806393 -
$ gzip --version
gzip 1.6
Copyright (C) 2007, 2010, 2011 Free Software Foundation, Inc.
Copyright (C) 1993 Jean-loup Gailly.
This is free software. You may redistribute copies of it under the terms of
the GNU General Public License <http://www.gnu.org/licenses/gpl.html>.
There is NO WARRANTY, to the extent permitted by law.
Written by Jean-loup Gailly.
How is this possible? I thought the GZip implementation was completely standard?
UPDATE: Just to confirm that macOS and Linux versions do produce the same output most of the time, both OSes output the same hash for:
$ echo "Vive la France" | gzip --fast -n | shasum -a 256
af842c0cb2dbf94ae19f31c55e05fa0e403b249c8faead413ac2fa5e9b854768 -
|
Note that the compression algorithm (Deflate) used in GZip is not bijective. To elaborate: for some data, there is more than one possible compressed output, depending on the algorithmic implementation and the parameters used. So there is no guarantee at all that Apple GZip and gzip 1.6 will return the same compressed output. These outputs are all valid GZip streams; the standard just guarantees that each of these possible outputs will decompress to the same original data.
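You can see the same effect on a single machine by varying the compression level: the compressed bytes differ, yet both streams decompress to identical data (a sketch):

```shell
yes 'a fairly repetitive line of text' | head -n 10000 > in.txt

gzip -1 -n -c in.txt > fast.gz   # -n keeps name/timestamp out of the header
gzip -9 -n -c in.txt > best.gz

cmp -s fast.gz best.gz || echo 'different compressed bytes'
gzip -dc fast.gz | cmp -s - in.txt && gzip -dc best.gz | cmp -s - in.txt && echo 'same original data'
```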
| GZip doesn't produce the same compressed result on macOS vs Linux |
1,380,615,765,000 |
I have four files that I created using an svndump
test.svn
test2.svn
test.svn.gz
test2.svn.gz
now when I run this
md5sum test2.svn test.svn test.svn.gz test2.svn.gz
Here is the output
89fc1d097345b0255825286d9b4d64c3 test2.svn
89fc1d097345b0255825286d9b4d64c3 test.svn
8284ebb8b4f860fbb3e03e63168b9c9e test.svn.gz
ab9411efcb74a466ea8e6faea5c0af9d test2.svn.gz
So I can't understand why gzip is compressing the files differently. Is it putting a timestamp somewhere before compressing? I had a similar issue with mysqldump, as it was putting the date field at the top.
|
gzip stores some of the original file's metadata in the file header, including the file modification time and filename, if available. See the GZIP file format specification.
So it's expected that your two gzip files aren't identical. You can work around this by passing gzip the -n flag, which stops it from including the original filename and timestamp in the header.
| Why does the gzip version of files produce a different md5 checksum |
1,380,615,765,000 |
I have a huge log file compressed in .gz format and I want to just read the first line of it without uncompressing it to just check the date of the oldest log in the file.
The logs are of the form:
YYYY-MM-DD Log content asnsenfvwen eaifnesinrng
YYYY-MM-DD Log content asnsenfvwen eaifnesinrng
YYYY-MM-DD Log content asnsenfvwen eaifnesinrng
I just want to read the date in the first line which I would do like this for an uncompressed file:
read logdate otherstuff < logfile.gz
echo $logdate
Using zcat is taking too long.
|
Piping zcat’s output to head -n 1 will decompress a small amount of data, guaranteed to be enough to show the first line, but typically no more than a few buffer-fulls (96 KiB in my experiments):
zcat logfile.gz | head -n 1
Once head has finished reading one line, it closes its input, which closes the pipe, and zcat stops after receiving a SIGPIPE (which happens when it next tries to write into the closed pipe). You can see this by running
(zcat logfile.gz; echo $? >&2) | head -n 1
This will show that zcat exits with code 141, which indicates it stopped because of a SIGPIPE (13 + 128).
You can add more post-processing, e.g. with AWK, to only extract the date:
zcat logfile.gz | awk '{ print $1; exit }'
(On macOS you might need to use gzcat rather than zcat to handle gzipped files.)
| read first line from .gz compressed file without decompressing entire file |
1,380,615,765,000 |
I have 200 GB free disk space, 16 GB of RAM (of which ~1 GB is occupied by the desktop and kernel) and 6 GB of swap.
I have a 240 GB external SSD, with 70 GB used1 and the rest free, which I need to back up to my disk.
Normally, I would dd if=/dev/sdb of=Desktop/disk.img the disk first, and then compress it, but making the image first is not an option since doing so would require far more disk space than I have, even though the compression step will result in the free space being squashed so the final archive can easily fit on my disk.
dd writes to STDOUT by default, and gzip can read from STDIN, so in theory I can write dd if=/dev/sdb | gzip -9 -, but gzip takes significantly longer to read bytes than dd can produce them.
From man pipe:
Data written to the write end of the pipe is buffered by the kernel until it is read from the read end of the pipe.
I visualise a | as being like a real pipe -- one application shoving data in and the other taking data out of the pipe's queue as quickly as possible.
What when the program on the left side writes more data more quickly than the other side of the pipe can hope to process it? Will it cause extreme memory or swap usage, or will the kernel try to create a FIFO on disk, thereby filling up the disk? Or will it just fail with SIGPIPE Broken pipe if the buffer is too large?
Basically, this boils down to two questions:
What are the implications and outcomes of shoving more data into a pipe than is read at a time?
What's the reliable way to compress a datastream to disk without putting the entire uncompressed datastream on the disk?
Note 1: I cannot just copy exactly the first 70 used GB and expect to get a working system or filesystem, because of fragmentation and other things which will require the full contents to be intact.
|
Technically you don't even need dd:
gzip < /dev/drive > drive.img.gz
If you do use dd, you should always go with larger than default blocksize like dd bs=1M or suffer the syscall hell (dd's default blocksize is 512 bytes, since it read()s and write()s that's 4096 syscalls per MiB, too much overhead).
gzip -9 uses a LOT more CPU with very little to show for it. If gzip is slowing you down, lower the compression level, or use a different (faster) compression method.
If you're doing file-based backups instead of dd images, you could have some logic that decides whether to compress at all or not (there's no point in doing so for various file types). dar (a tar alternative) is one example that has options to do so.
If your free space is ZERO (because it's an SSD that reliably returns zero after TRIM and you ran fstrim and dropped caches) you can also use dd with conv=sparse flag to create an uncompressed, loop-mountable, sparse image that uses zero disk space for the zero areas. Requires the image file to be backed by a filesystem that supports sparse files.
Alternatively for some filesystems there exist programs able to only image the used areas.
| On-the-fly stream compression that doesn't spill over into hardware resources? |
1,380,615,765,000 |
Is it possible to speed up the gzip process?
I'm using
mysqldump "$database_name" | gzip > $BACKUP_DIR/$database_name.sql.gz
to backup a database into a directory, $BACKUP_DIR.
the manpage says:
-# --fast --best
Regulate the speed of compression using the
specified digit #, where -1 or --fast indi‐
cates the fastest compression method (less
compression) and -9 or --best indicates the
slowest compression method (best compression).
The default compression level is -6 (that is,
biased towards high compression at expense of
speed).
How effective would it be to use --fast?
Is this effectively lowering the CPU usage on a modern computer?
My test results
I didn't notice any acceleration:
7 min, 47 seconds (with default ratio -6)
8 min, 36 seconds (with ratio --fast ( = 1 ))
So it seems it takes even longer to use the fast compression?
Only higher compression really slows it down:
11 min, 57 seconds (with ratio --best ( = 9 ))
After getting the idea with lzop I tested that too and it really is faster:
6 min, 14 seconds with lzop -1 -f -o $BACKUP_DIR/$database_name.sql.lzo
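For anyone wanting to reproduce a level comparison without a real database dump, a self-contained sketch on generated, compressible sample data might look like this (file names are placeholders):

```shell
# Compare gzip -1 and -9 on repetitive sample data.
sample=$(mktemp)
i=0
while [ $i -lt 2000 ]; do
    echo "line $i: some repetitive log text"
    i=$((i+1))
done > "$sample"
gzip -1 -c "$sample" > "$sample.fast.gz"
gzip -9 -c "$sample" > "$sample.best.gz"
wc -c "$sample" "$sample.fast.gz" "$sample.best.gz"   # original vs. both levels
```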
|
If you have a multi-core machine using pigz is much faster than traditional gzip.
pigz, which stands for parallel implementation of gzip, is a fully functional replacement for gzip that exploits multiple processors and multiple cores to the hilt when compressing data. pigz was written by Mark Adler, and uses the zlib and pthread libraries.
Pigz can be used as a drop-in replacement for gzip. Note that only the compression can be parallelised, not the decompression.
Using pigz the command line becomes
mysqldump "$database_name" | pigz > $BACKUP_DIR/$database_name.sql.gz
| speed up gzip compression |
1,380,615,765,000 |
I usually create a tgz file for my_files with the command tar -czvf my_files.tgz my_files, and extract them with tar -zxvf my_files.tgz. Now I have a tar file created with the command tar -cvf my_files.tar my_files. I'm wondering how I can turn the my_files.tar into my_files.tgz so that later I can extract it with the command tar -zxvf my_files.tgz? Thanks.
|
A plain .tar archive created with cf (with or without v) is uncompressed; to get a .tar.gz or .tgz archive, compress it:
gzip < my_files.tar > my_files.tgz
You might want to add -9 for better compression:
gzip -9 < my_files.tar > my_files.tgz
Both variants will leave both archives around; you can use
gzip -9 my_files.tar
instead, which will produce my_files.tar.gz and delete my_files.tar (if everything goes well). You can then rename my_files.tar.gz to my_files.tgz if you wish.
With many tar implementations you can extract archives without specifying the z option, and tar will figure out what to do — so you can use the same command with compressed and uncompressed archives.
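A quick sketch demonstrating that last point (GNU tar; the archive lives in a scratch directory):

```shell
workdir=$(mktemp -d)
mkdir "$workdir/my_files"
echo hello > "$workdir/my_files/a.txt"
tar -czf "$workdir/my_files.tgz" -C "$workdir" my_files
mkdir "$workdir/out"
# no z flag needed on extraction -- tar detects the gzip compression:
tar -xf "$workdir/my_files.tgz" -C "$workdir/out"
cat "$workdir/out/my_files/a.txt"
```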
| How to turn a tar file to a tgz file? |
1,380,615,765,000 |
Why is this not possible?
pv ${dest_file} | gzip -1
pv is a progress bar
error
gzip: compressed data not written to a terminal. Use -f to force compression.
For help, type: gzip -h
0 B 0:00:00 [ 0 B/s] [> ] 0%
This works
pv ${file_in} | tar -Jxf - -C /outdir
|
What you are trying to achieve is to see a progress bar for the compression process, but that is not possible using pv. It shows only transfer progress, which you can get with something like this (it is also the first hit on Google):
pv input_file | gzip > compressed_file
The progress bar will run fast, and then it will wait for compression, which is not observable anymore using pv.
But you can do that the other way round and watch the output stream, but here you will not be able to see the actual progress, because pv does not know the actual size of the compressed file:
gzip <input_file | pv > compressed_file
The best I found so far is the one from commandlinefu even with rate limiting and compression of directories:
D=directory
tar pcf - $D | pv -s $(du -sb $D | awk '{print $1}') --rate-limit 500k | gzip > target.tar.gz
| pv (progress bar) and gzip |
1,380,615,765,000 |
I have a directory of logs that I would like to set up a job to compress using gzip. The issue is I don't want to recompress the logs I've already compressed.
I tried using ls | grep -v gz | gzip, but that doesn't seem to work.
Is there a way to do this? Basically I want to gzip every file in the directory that does not end in .gz.
|
You can just do:
gzip *
gzip will tell you it skips the files that already have a .gz ending.
If that message gets in the way you can use:
gzip -q *
What you tried did not work, because gzip doesn't read the filenames of the files to compress from stdin, for that to work you would have to use:
ls | grep -v gz | xargs gzip
You will exclude files with the pattern gz anywhere in the file name, not just at the end.¹ You also have to take note that parsing the output of ls is dangerous when file names with spaces, newlines, etc. are involved.
A cleaner solution, which does not rely on gzip skipping files with a .gz ending and which also handles non-compressed files in subdirectories, is:
find . -type f ! -name "*.gz" -exec gzip {} \;
¹ As izkata commented: using .gz alone to improve this, would not work. You would need to use grep -vF .gz or grep -v '\.gz$'. That still leaves the danger of processing ls' output
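If GNU find and xargs are available, a null-delimited variant avoids both the per-file -exec overhead and any trouble with odd file names:

```shell
workdir=$(mktemp -d)
echo data > "$workdir/plain file.log"          # name with a space on purpose
echo data | gzip > "$workdir/already.gz"
# -print0 / -0 pass names separated by NUL bytes, safe for any file name:
find "$workdir" -type f ! -name '*.gz' -print0 | xargs -0 -r gzip
ls "$workdir"
```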
| Use gzip to compress the files in a directory except for already existing .gz files |
1,380,615,765,000 |
The problem is I have some database dumps which are either compressed or in plain text. There is no difference in file extension etc. Using zcat on uncompressed files produces an error instead of the output.
Is there maybe another cat sort of tool that is smart enough to detect what type of input it gets?
|
Just add the -f option.
$ echo foo | tee file | gzip > file.gz
$ zcat file file.gz
gzip: file: not in gzip format
foo
$ zcat -f file file.gz
foo
foo
(use gzip -dcf instead of zcat -f if your zcat is not the GNU (or GNU-emulated like in modern BSDs) one and only knows about .Z files).
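An alternative sketch that detects gzip data by its two magic bytes (0x1f 0x8b) before deciding how to print it; is_gzip is a hypothetical helper name, not a standard tool:

```shell
# Return success if the file starts with the gzip magic bytes.
is_gzip() {
    [ "$(od -An -tx1 -N2 "$1" | tr -d ' ')" = "1f8b" ]
}
tmp=$(mktemp -d)
echo plain > "$tmp/a"
echo compressed | gzip > "$tmp/b"
for f in "$tmp/a" "$tmp/b"; do
    if is_gzip "$f"; then gzip -dc "$f"; else cat "$f"; fi
done
```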
| Is it possible to make zcat output text even if it's uncompressed? [duplicate] |
1,380,615,765,000 |
Is it possible to compress a very large file (~30 GB) using gzip? If so, what commands, switches, and options should I use?
Or is there another program (preferably one commonly available on Ubuntu distributions) that I can use to compress/zip very large files? Do you have any experience with this?
|
AFAIK there is no size limit for gzip - at least not 30GB. Of course, you need the space for the zipped file on your disc; both versions will be there simultaneously while compressing.
bzip2 compresses files (not only big ones :-) better, but it is (sometimes a lot) slower.
| Is it possible to compress a very large file (~30 GB) using gzip? |
1,380,615,765,000 |
I have to create an archive with the command gzip (not tar - it's necessary) and the archive should contain files from another directory - for example, /etc.
I tried to use command
gzip myetc.gz /etc
But it didn't work.
|
Gzip works only with a single file, or a stream - data piped to gzip. So you first need to generate one file, like with tar, and then you could gzip that. The other option is to gzip all individual files, and then tar that into one file.
Both these solutions are stupid and should not be used. You should use tar with the built in compression option and do it all in one command.
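The one-command form looks like this; a scratch directory stands in for /etc here so the sketch runs unprivileged:

```shell
workdir=$(mktemp -d)
mkdir "$workdir/etc"
echo 'cfg' > "$workdir/etc/app.conf"
# tar archives and gzips in a single step (-z):
tar -czf "$workdir/myetc.tgz" -C "$workdir" etc
tar -tzf "$workdir/myetc.tgz"    # list the archive contents
```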
| Create an archive with command "gzip" |
1,380,615,765,000 |
Assume i have an gzip compressed tar-ball compressedArchive.tgz (+100 files, totaling +5gb).
What would be the fastest way to remove all entries matching a given filename pattern for example prefix*.jpg and then store the remains in a gzip:ed tar-ball again?
Replacing the old archive or creating a new one is not important, whichever is fastest.
|
With GNU tar, you can do:
pigz -d < file.tgz |
tar --delete --wildcards -f - '*/prefix*.jpg' |
pigz > newfile.tgz
With bsdtar:
pigz -d < file.tgz |
bsdtar -cf - --exclude='*/prefix*.jpg' @- |
pigz > newfile.tgz
(pigz being the multi-threaded version of gzip).
You could overwrite the file over itself like:
{ pigz -d < file.tgz |
tar --delete --wildcards -f - '*/prefix*.jpg' |
pigz &&
perl -e 'truncate STDOUT, tell STDOUT'
} 1<> file.tgz
But that's quite risky, especially if the result ends up being less compressed than the original file (in which case, the second pigz may end up overwriting areas of the file which the first one has not read yet).
| Efficiently remove file(s) from large .tgz |
1,380,615,765,000 |
Is it possible to use gzip to decompress a gzipped file, without the gz extension, and without moving the file?
|
You can pass the -S option to use a suffix other than .gz.
gunzip -S .compressed file.compressed
If you want the uncompressed file to have some other name, run
gzip -dc <compressed-file >uncompressed-file
gunzip <compressed-file >uncompressed-file
(these commands are equivalent).
Normally unzipping restores the name and date of the original file (when it was compressed); this doesn't happen with -c.
If you want the compressed file and the uncompressed file to have the same name, you can't do it directly, you need to either rename the compressed file or rename the uncompressed file. In particular, gzip removes and recreates its target file, so if you need to modify the file in place because you don't have write permission in the directory, you need to use -c or redirection.
cp somefile /tmp
gunzip </tmp/somefile >|somefile
Note that gunzip <somefile >somefile will not work: the output redirection truncates somefile to 0 bytes before gunzip starts reading it. Even if the truncation were somehow delayed, gunzip would end up feeding on its own output; either way, this one can't be done in place.
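When you do have write permission in the directory, a safe way to get the same visible effect is to decompress to a temporary file next to the original and rename it into place:

```shell
workdir=$(mktemp -d)
echo 'original data' | gzip > "$workdir/somefile"
# decompress to a temp file in the same directory, then rename:
tmpout=$(mktemp "$workdir/somefile.XXXXXX")
gzip -dc "$workdir/somefile" > "$tmpout" && mv "$tmpout" "$workdir/somefile"
cat "$workdir/somefile"
```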
| Gzip decompress on file with other extension? |
1,380,615,765,000 |
I am trying to save space while doing a "dumb" backup by simply dumping data into a text file. My backup script is executed daily and looks like this:
Create a directory named after the backup date.
Dump some data into a text file "$name".
If the file is valid, gzip it: gzip "$name". Otherwise, rm "$name".
Now I want to add an additional step to remove a file if the same data was also available in the day before (and create symlink or hardlink).
At first I thought of using md5sum "$name", but this does not work because I also store the filename and creation date.
Does gzip have an option to compare two gzipped files and tell me whether they are equal or not? If gzip does not have such an option, is there another way to achieve my goal?
|
@derobert's answer is great, though I want to share some other information that I have found.
gzip -l -v
gzip-compressed files contain already a hash (not secure though, see this SO post):
$ echo something > foo
$ gzip foo
$ gzip -v -l foo.gz
method crc date time compressed uncompressed ratio uncompressed_name
defla 18b1f736 Feb 8 22:34 34 10 -20.0% foo
One can combine the CRC and uncompressed size to get a quick fingerprint:
gzip -v -l foo.gz | awk '{print $2, $7}'
cmp
For checking whether two files are byte-for-byte equal, use cmp file1 file2. Now, a gzipped file consists of a header, the compressed data, and a footer (CRC plus original size). The description of the gzip format shows that the header contains the time when the file was compressed and that the file name is a nul-terminated string that is appended after the 10-byte header.
So, assuming that the file name is constant and the same command (gzip "$name") is used, one can check whether two files are different by using cmp and skipping the first bytes including the time:
cmp -i 8 file1 file2
Note: the assumption that the same compression options are used is important; otherwise the command will always report the files as different. This happens because the compression options are stored in the header and may affect the compressed data. cmp just looks at raw bytes and does not interpret them as gzip.
If you have filenames of the same length, then you could try to calculate the bytes to be skipped after reading the filename. When the filenames are of different size, you could run cmp after skipping bytes, like cmp <(cut -b9- file1) <(cut -b10- file2).
zcmp
This is definitely the best way to go, it first compresses data and starts comparing the bytes with cmp (really, this is what is done in the zcmp (zdiff) shellscript).
One note, do not be afraid of the following note in the manual page:
When both files must be uncompressed before comparison, the second is uncompressed to /tmp. In all other cases, zdiff and zcmp use only a pipe.
When you have a sufficiently new Bash, compression will not use a temporary file, just a pipe. Or, as the zdiff source says:
# Reject Solaris 8's buggy /bin/bash 2.03.
| How can I check if two gzipped files are equal? |
1,380,615,765,000 |
I run commands:
tar -cf myArchive.tar myDirectory/
gzip myArchive.tar
then I copy the file over a lot of unreliable mediums, and later I unpack it using:
tar -xzf myArchive.tar.gz
The fact that I compressed the tar-ball, will that in any way guarantee the integrity, or at least a CRC of the unpacked content?
|
tar itself does not write down a checksum for later comparison. If you gzip the tar archive you can have that functionality.
tar can invoke compress: if you use the -Z flag while creating the archive, tar will use the compress program when reading or writing the archive. From the gzip manpage:
The standard compress format was not designed to allow consistency
checks.
But, you can use the -z parameter. Then tar reads and writes the archive through gzip. And gzip writes a crc checksum. To display that checksum use that command:
$ gzip -lv archive.tar.gz
method crc date time compressed uncompressed ratio uncompressed_name
defla 3f641c33 Sep 25 14:01 24270 122880 80.3% archive.tar
From the gzip manpage:
When using the first two formats (gzip or zip is meant), gunzip checks
a 32 bit CRC.
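You can also verify that CRC without extracting anything: gzip -t tests the archive's integrity and exits non-zero on corruption. A small sketch:

```shell
tmp=$(mktemp -d)
mkdir "$tmp/myDirectory"
echo payload > "$tmp/myDirectory/f"
tar -cf "$tmp/archive.tar" -C "$tmp" myDirectory
gzip "$tmp/archive.tar"                      # produces archive.tar.gz
gzip -t "$tmp/archive.tar.gz" && echo "archive OK"
# a truncated copy fails the integrity check:
head -c 50 "$tmp/archive.tar.gz" > "$tmp/bad.gz"
gzip -t "$tmp/bad.gz" 2>/dev/null || echo "corruption detected"
```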
| Does gzip add integrity/crc check to a .tar? |
1,380,615,765,000 |
A problem with .tar.gz archives is that, when I try to just list an archive's content, the computer actually decompresses it, which would take a very long time if the file is large.
Other file formats like .7z, .rar,.zip don't have this problem. Listing their contents takes just an instant.
In my naive opinion, this is a huge drawback of the .tar.gz archive format.
So I actually have 2 questions:
why do people use .tar.gz so much, despite this drawback?
what choices (I mean other software or tools) do I have if I want the "instant content listing" capability?
|
It's important to understand there's a trade-off here.
tar means tape archiver. On a tape, you do mostly sequential reading and writing. Tapes are rarely used nowadays, but tar is still used for its ability to read and write its data as a stream.
You can do:
tar cf - files | gzip | ssh host 'cd dest && gunzip | tar xf -'
You can't do that with zip or the like.
You can't even list the content of a zip archive without storing it locally in a seekable file first. Things like:
curl -s https://github.com/dwp-forge/columns/archive/v.2016-02-27.zip | unzip -l /dev/stdin
won't work.
To achieve that quick reading of the content, zip or the like need to build an index. That index can be stored at the beginning of the file (in which case it can only be written to regular files, not streams), or at the end, which means the archiver needs to remember all the archive members before printing it in the end and means a truncated archive may not be recoverable.
That also means archive members need to be compressed individually which means a much lower compression ratio especially if there's a lot of small files.
Another drawback with formats like zip is that the archiving is linked to the compressing, you can't choose the compression algorithm. See how tar archives used to be compressed with compress (tar.Z), then with gzip, then bzip2, then xz as new more performant compression algorithms were devised. Same goes for encryption. Who would trust zip's encryption nowadays?
Now, the problem with tar.gz archives is not that much that you need to uncompress them. Uncompressing is often faster than reading off a disk (you'll probably find that listing the content of a large tgz archive is quicker than listing the same one uncompressed when not cached in memory), but that you need to read the whole archive.
Not being able to read the index quickly is not really a problem. If you do foresee needing to read the table content of an archive often, you can just store that list in a separate file. For instance, at creation time, you can do:
tar cvvf - dir 2> file.tar.xz.list | xz > file.tar.xz
A bigger problem IMO is the fact that because of the sequential aspect of the archive, you can't extract individual files without reading the whole beginning section of the archive that leads to it. IOW, you can't do random reads within the archive.
Now, for seekable files, it doesn't have to be that way.
If you compress your tar archive with gzip, that compresses it as a whole, the compression algorithm uses data seen at the beginning to compress, so you have to start from the beginning to uncompress.
But the xz format can be configured to compress data in separate individual chunks (large enough so as the compression to be efficient), that means that as long as you keep an index at the end of those compressed chunks, for seekable files, you access the uncompressed data randomly (in chunks at least).
pixz (parallel xz) uses that capability when compressing tar archives to also add an index of the start of each member of the archive at the end of the xz file.
So, for seekable files, not only can you get a list of the content of the tar archive instantly (without metadata though) if they have been compressed with pixz:
pixz -l file.tar.xz
But you can also extract individual elements without having to read the whole archive:
pixz -x archive/member.txt < file.tar.xz | tar xpf -
Now, as to why things like 7z or zip are rarely used on Unix is mostly because they can't archive Unix files. They've been designed for other operating systems. You can't do a faithful backup of data using those. They can't store metadata like owner (id and name), permission, they can't store symlinks, devices, fifos..., they can't store information about hard links, and other metadata information like extended attributes or ACLs.
Some of them can't even store members with arbitrary names (some will choke on backslash or newline or colon, or non-ascii filenames) (some tar formats also have limitations though).
Never uncompress a tgz/tar.xz file to disk!
In case it is not obvious, one doesn't use a tgz or tar.bz2, tar.xz... archive as:
unxz file.tar.xz
tar tvf file.tar
xz file.tar
If you've got an uncompressed .tar file lying about on your file system, it's that you've done something wrong.
The whole point of those xz/bzip2/gzip being stream compressors is that they can be used on the fly, in pipelines as in
unxz < file.tar.xz | tar tvf -
Though modern tar implementations know how to invoke unxz/gunzip/bzip2 by themselves, so:
tar tvf file.tar.xz
would generally also work (and again uncompress the data on the fly and not store the uncompressed version of the archive on disk).
Example
Here's a Linux kernel source tree compressed with various formats.
$ ls --block-size=1 -sS1
666210304 linux-4.6.tar
173592576 linux-4.6.zip
97038336 linux-4.6.7z
89468928 linux-4.6.tar.xz
First, as noted above, the 7z and zip ones are slightly different because they can't store the few symlinks in there and are missing most of the metadata.
Now a few timings to list the content after having flushed the system caches:
$ echo 3 | sudo tee /proc/sys/vm/drop_caches
3
$ time tar tvf linux-4.6.tar > /dev/null
tar tvf linux-4.6.tar > /dev/null 0.56s user 0.47s system 13% cpu 7.428 total
$ time tar tvf linux-4.6.tar.xz > /dev/null
tar tvf linux-4.6.tar.xz > /dev/null 8.10s user 0.52s system 118% cpu 7.297 total
$ time unzip -v linux-4.6.zip > /dev/null
unzip -v linux-4.6.zip > /dev/null 0.16s user 0.08s system 86% cpu 0.282 total
$ time 7z l linux-4.6.7z > /dev/null
7z l linux-4.6.7z > /dev/null 0.51s user 0.15s system 89% cpu 0.739 total
You'll notice listing the tar.xz file is quicker than the .tar one even on this 7 years old PC as reading those extra megabytes from the disk takes longer than reading and decompressing the smaller file.
Then OK, listing the archives with 7z or zip is quicker but that's a non-problem as as I said, it's easily worked around by storing the file list alongside the archive:
$ tar tvf linux-4.6.tar.xz | xz > linux-4.6.tar.xz.list.xz
$ ls --block-size=1 -sS1 linux-4.6.tar.xz.list.xz
434176 linux-4.6.tar.xz.list.xz
$ time xzcat linux-4.6.tar.xz.list.xz > /dev/null
xzcat linux-4.6.tar.xz.list.xz > /dev/null 0.05s user 0.00s system 99% cpu 0.051 total
Even faster than 7z or zip even after dropping caches. You'll also notice that the cumulative size of the archive and its index is still smaller than the zip or 7z archives.
Or use the pixz indexed format:
$ xzcat linux-4.6.tar.xz | pixz -9 > linux-4.6.tar.pixz
$ ls --block-size=1 -sS1 linux-4.6.tar.pixz
89841664 linux-4.6.tar.pixz
$ echo 3 | sudo tee /proc/sys/vm/drop_caches
3
$ time pixz -l linux-4.6.tar.pixz > /dev/null
pixz -l linux-4.6.tar.pixz > /dev/null 0.04s user 0.01s system 57% cpu 0.087 total
Now, to extract individual elements of the archive, the worst case scenario for a tar archive is when accessing the last element:
$ xzcat linux-4.6.tar.xz.list.xz|tail -1
-rw-rw-r-- root/root 5976 2016-05-15 23:43 linux-4.6/virt/lib/irqbypass.c
$ time tar xOf linux-4.6.tar.xz linux-4.6/virt/lib/irqbypass.c | wc
257 638 5976
tar xOf linux-4.6.tar.xz linux-4.6/virt/lib/irqbypass.c 7.27s user 1.13s system 115% cpu 7.279 total
wc 0.00s user 0.00s system 0% cpu 7.279 total
That's pretty bad as it needs to read (and uncompress) the whole archive. Compare with:
$ time unzip -p linux-4.6.zip linux-4.6/virt/lib/irqbypass.c | wc
257 638 5976
unzip -p linux-4.6.zip linux-4.6/virt/lib/irqbypass.c 0.02s user 0.01s system 19% cpu 0.119 total
wc 0.00s user 0.00s system 1% cpu 0.119 total
My version of 7z seems not to be able to do random access, so it seems to be even worse than tar.xz:
$ time 7z e -so linux-4.6.7z linux-4.6/virt/lib/irqbypass.c 2> /dev/null | wc
257 638 5976
7z e -so linux-4.6.7z linux-4.6/virt/lib/irqbypass.c 2> /dev/null 7.28s user 0.12s system 89% cpu 8.300 total
wc 0.00s user 0.00s system 0% cpu 8.299 total
Now since we have our pixz generated one from earlier:
$ time pixz < linux-4.6.tar.pixz -x linux-4.6/virt/lib/irqbypass.c | tar xOf - | wc
257 638 5976
pixz -x linux-4.6/virt/lib/irqbypass.c < linux-4.6.tar.pixz 1.37s user 0.06s system 84% cpu 1.687 total
tar xOf - 0.00s user 0.01s system 0% cpu 1.693 total
wc 0.00s user 0.00s system 0% cpu 1.688 total
It's faster but still relatively slow because the archive contains a few large blocks:
$ pixz -tl linux-4.6.tar.pixz
17648865 / 134217728
15407945 / 134217728
18275381 / 134217728
19674475 / 134217728
18493914 / 129333248
336945 / 2958887
So pixz still needs to read and uncompress a (up to a) ~19MB large chunk of data.
We can make random access faster by making archives with smaller blocks (and sacrifice a bit of disk space):
$ pixz -f0.25 -9 < linux-4.6.tar > linux-4.6.tar.pixz2
$ ls --block-size=1 -sS1 linux-4.6.tar.pixz2
93745152 linux-4.6.tar.pixz2
$ time pixz < linux-4.6.tar.pixz2 -x linux-4.6/virt/lib/irqbypass.c | tar xOf - | wc
257 638 5976
pixz -x linux-4.6/virt/lib/irqbypass.c < linux-4.6.tar.pixz2 0.17s user 0.02s system 98% cpu 0.189 total
tar xOf - 0.00s user 0.00s system 1% cpu 0.188 total
wc 0.00s user 0.00s system 0% cpu 0.187 total
| Print archive file list instantly (without decompressing entire archive) |
1,380,615,765,000 |
I have a job on a batch system that runs extremely long and produces tons of output. So much actually that I have to pipe the standard output through gzip to keep the batch node from filling its work area and subsequently crashing.
longscript | gzip -9 > log.gz
Now, I would like to investigate the output of the job while it is still running.
So I do this:
gunzip log.gz
This runs very long, as it is a huge file (several GB). I can see the output file being created while it is running and can look at it while it is being built.
tail log
> some-line-of-the-log-file
tail log
> some-other-line-of-the-log-file
However, ultimately, gzip encounters the end of the gzipped file. Since the job is still running and gzip is still writing the file, there is no proper footer yet, so this happens:
gzip: log.gz: unexpected end of file
After this, the extracted log file is deleted, as gzip thinks that the corrupted extracted data is of no use to me. I, however, disagree - even if the last couple of lines are scrambled, the output is still highly interesting to me.
How can I convince gzip to let me keep the "corrupted" file?
|
Apart from the very end of the file, you will be able to see the uncompressed data with zcat (or gzip -dc, or gunzip -c):
zcat log.gz | tail
or
zcat log.gz | less
or
zless log.gz
gzip will do buffering for obvious reasons (it needs to compress the data in chunks), so even though the program may have outputted some data, that data may not yet be in the log.gz file.
You may also store the uncompressed log with
zcat log.gz > log
... but that would be silly since there's obviously a reason why you compress the output in the first place.
| gzip: unexpected end of file with - how to read file anyway |
1,380,615,765,000 |
To backup a snapshot of my work, I run a command like tar -czf work.tgz work to create a gzipped tar file, which I can then drop in cloud storage. However, I have just noticed that gzip has a 4 GB size limit, and my work.tgz file is more than 4 GB.
Despite that, if I create a gzip tar file on my current computer (running Mac OS X 10.15.4, gzip version is called Apple gzip 287.100.2) I can successfully retrieve it. So gunzip works on a >4GB file in my particular case. But I want to be able to create and read these large gzip files on either Mac OS X or Linux, and possibly other systems in the future.
My question is: will I be able to untar/gunzip large files anywhere? In other words, how portable is a gzip file which is more than 4 GB in size? Does it matter if I create it on Mac OS, Linux, or something else?
A bit of online reading suggests gzip will successfully gzip/gunzip a larger file, but will not correctly record the uncompressed size, because the size is stored as a 32 bit integer. Is that all the limit is?
|
I have just noticed that gzip has a 4 GB size limit
More accurately, the gzip format can’t correctly store uncompressed file sizes over 4GiB; it stores the lower 32 bits of the uncompressed size, and gzip -l misleadingly presents that as the size of the original data. The result is that, up to gzip 1.11 included, gzip -l won’t show the right size for any compressed file whose original size is over 4GiB.
Apart from that, there is no limit due to gzip itself, and gzipped files over 4GiB are portable. The format is specified by RFC 1952 and support for it is widely available.
The confusion over the information presented by gzip -l has been fixed in gzip 1.12; gzip -l now decompresses the data to determine the real size of the original data, instead of showing the stored size.
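If your gzip predates that fix, one way to get the true uncompressed size regardless of the 32-bit field is to decompress into a byte counter instead of trusting gzip -l (demonstrated here on a small file for brevity):

```shell
tmp=$(mktemp)
head -c 100000 /dev/zero | gzip > "$tmp"
# count the actual decompressed bytes:
gzip -dc "$tmp" | wc -c        # prints 100000
```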
Will I be able to untar/gunzip large files anywhere?
Anywhere that can handle large files, and where spec-compliant implementations of tar and gunzip are available.
In other words, how portable is a gzip file which is more than 4 GB in size?
The gzip format itself is portable, and gzip files are also portable, regardless of the size of the data they contain.
Does it matter if I create it on Mac OS, Linux, or something else?
No, a gzip file created on any platform can be uncompressed on any other platform with the required capabilities (in particular, the ability to store large files, in the context of this question).
See also Compression Utility Max Files Size Limit | Unix/Linux.
| How portable is a gzip file over 4 GB in size? |
1,380,615,765,000 |
When I run gunzip -dc /path_to_tar.gz_file/zip1.tar.gz | tar xf - in the directory where the tar.gz file is located, it extracts just fine.
How do I tell it to place the contents from the tar.gz file into a specific directory?
I tried gunzip -dc /path_to_tar.gz_file/zip1.tar.gz | tar xf /target_directory but got a tar error.
I should also note here that I am attempting to do this in a bash script and that I'm running Solaris 10.
|
You can do a single tar command to extract the contents where you want:
tar -zxvf path_to_file -C output_directory
As explained in the tar manpages:
-C directory, --cd directory, --directory directory
In c and r mode, this changes the directory before adding the
following files. In x mode, change directories after opening the
archive but before extracting entries from the archive.
As you added that you are using Solaris, I think you could try:
gunzip -dc path_to_file | tar xf - -C path_to_extract
| How do I use gunzip and tar to extract my tar.gz file to the specific directory I want? |