My output of xrandr is:

    Screen 0: minimum 8 x 8, current 1920 x 1080, maximum 32767 x 32767
    eDP1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 340mm x 190mm
       1366x768      60.02*+
       1280x720      59.86    60.00    59.74
       1024x768      60.00
       1024x576      60.00    59.90    59.82
       960x540       60.00    59.63    59.82
       800x600       60.32    56.25
       864x486       60.00    59.92    59.57
       640x480       59.94
       720x405       59.51    60.00    58.99
       680x384       60.00
       640x360       59.84    59.32    60.00
    DP1 disconnected (normal left inverted right x axis y axis)
    HDMI1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 480mm x 270mm
       1920x1080i    60.00 +  50.00    59.94
       1920x1200     59.95
       1920x1080     60.00*   50.00    59.94
       1680x1050     59.88
       1280x1024     60.02
       1440x900      59.90
       1280x720      60.00    50.00    59.94
       1024x768      60.00
       800x600       60.32
       720x576       50.00
       720x480       60.00    59.94
       640x480       60.00    59.94
       720x400       70.08
    VGA1 disconnected (normal left inverted right x axis y axis)
    VIRTUAL1 disconnected (normal left inverted right x axis y axis)

I want to mirror my laptop screen to my TV monitor, but the way I currently have it set up, the laptop screen is too zoomed in. That is, I can't see the task bar (panel) or the Whisker menu because they are off the screen. I have found many threads of people changing settings with xrandr, but I can't get it to work for me. Most people seem to have issues with panning or overscanning. My issue seems much simpler, but I can't fix it: I simply want both displays to fit the screen. I'm using EndeavourOS, if that's important to know. Could anyone please suggest how I can fix this?

EDIT: I've just found the scaling options in the display settings GUI. I now have it fitting the screen, but the UI is smaller and text is hard to read. I assume there's a way to do this properly.
Mirroring isn't going to work this way. The OS keeps track of the positions of windows and your mouse cursor using coordinates: the top-left corner is 0:0, and the screen resolution gives the bottom-right coordinates. That's what keeps your mouse from disappearing from view when it hits the edge of the screen; that's the end of the coordinate matrix. If you put your mouse in the top-left corner, both screens will translate that to 0:0. Now move your mouse 100 pixels right and down on the laptop screen. How should that be translated to the larger resolution? We could translate it directly to 100:100, no problem. But now move your mouse to the bottom-right corner of the laptop screen. The coordinates are now 1366:768. On your larger screen that's a bit right and down from the center, and the mouse can't move further because it has hit the coordinate maximum. Or we could use the larger resolution as the base for translation: move the mouse to the bottom-right corner of the larger screen and it will be at coordinates 1920:1080, which your laptop monitor can't show. Consequently, mirroring a screen requires the exact same resolution on both displays. If you want both screens to run at their maximum resolution, the only option is to extend the desktop so that both screens are used individually.
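For what it's worth, the scaling option the asker found in the display-settings GUI corresponds to xrandr's framebuffer scaling, which approximates a mirror across unequal resolutions at the cost of the soft text noted in the EDIT. A sketch using the output names from the question's xrandr output; the `--scale-from` option is driver-dependent and may not be available everywhere:

```shell
# Sketch (assumes the eDP1/HDMI1 names from the question).
# Render a full 1920x1080 desktop on HDMI1 and show a scaled-down copy
# of it on the 1366x768 laptop panel. Text on the panel will be soft.
xrandr --output HDMI1 --mode 1920x1080 --primary \
       --output eDP1  --mode 1366x768 --same-as HDMI1 --scale-from 1920x1080

# To undo the scaling later:
# xrandr --output eDP1 --scale 1x1
```

These commands need a running X session with those outputs, so treat them as a configuration step to adapt, not a drop-in script.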
How can I have two mirrored displays (laptop screen + TV monitor) with different resolutions?
Symptoms

Every few minutes/hours the displays connected to the USB3 DisplayLink dock turn off for a few seconds and back on again.

Hardware

- Dell docking station model D6000
- USB3 connection to HP laptop (happens on at least three different models)
- DisplayPort connections to two HP monitors

OS

- Arch Linux
- Ubuntu 20.04

Software

- displaylink 5.3.1.34-4 and other versions

Logs

I was lucky to catch only a few log entries in the journal while one of these instances happened. I've included all the raw logs around this time, including before the disconnect and after the reconnect:

    Dec 04 09:54:25 host gnome-shell[1676]: libinput error: event5 - Das Keyboard: client bug: event processing lagging behind by 14ms, your system is too slow
    Dec 04 09:55:43 host kernel: usb 4-1.1: Disable of device-initiated U1 failed.
    Dec 04 09:55:43 host kernel: usb 4-1.1: Disable of device-initiated U2 failed.
    Dec 04 09:55:43 host kernel: cdc_ncm 4-1.1:1.5 ens4u1u1i5: unregister 'cdc_ncm' usb-0000:37:00.0-1.1, CDC NCM
    Dec 04 09:55:43 host NetworkManager[1027]: <info> [1607028943.8418] device (ens4u1u1i5): state change: unavailable -> unmanaged (reason 'removed', sys-iface-state: 'removed')
    Dec 04 09:55:43 host dhcpcd[950]: ens4u1u1i5: removing interface
    Dec 04 09:55:43 host kernel: usb 4-1.1: Set SEL for device-initiated U1 failed.
    Dec 04 09:55:43 host kernel: usb 4-1.1: Set SEL for device-initiated U2 failed.
    Dec 04 09:55:44 host kernel: usb 4-1.1: reset SuperSpeed Gen 1 USB device number 3 using xhci_hcd
    Dec 04 09:55:44 host kernel: usb 4-1.1: Warning! Unlikely big volume range (=767), cval->res is probably wrong.
    Dec 04 09:55:44 host kernel: usb 4-1.1: [4] FU [Mic Capture Volume] ch = 2, val = -4592/7680/16
    Dec 04 09:55:44 host kernel: usb 4-1.1: Warning! Unlikely big volume range (=672), cval->res is probably wrong.
    Dec 04 09:55:44 host kernel: usb 4-1.1: [7] FU [Dell USB Audio Playback Volume] ch = 6, val = -10752/0/16
    Dec 04 09:55:44 host upowerd[1197]: treating change event as add on /sys/devices/pci0000:00/0000:00:1c.0/0000:01:00.0/0000:02:02.0/0000:37:00.0/usb4/4-1/4-1.1
    Dec 04 09:55:44 host kernel: cdc_ncm 4-1.1:1.5: MAC-Address: 9c:eb:e8:f2:8e:31
    Dec 04 09:55:44 host kernel: cdc_ncm 4-1.1:1.5: setting rx_max = 16384
    Dec 04 09:55:44 host kernel: cdc_ncm 4-1.1:1.5: setting tx_max = 16384
    Dec 04 09:55:44 host kernel: cdc_ncm 4-1.1:1.5 usb0: register 'cdc_ncm' at usb-0000:37:00.0-1.1, CDC NCM, 9c:eb:e8:f2:8e:31
    Dec 04 09:55:44 host kernel: usb 4-1.1: usbfs: process 173165 (ActiveCommandQu) did not claim interface 0 before use
    Dec 04 09:55:44 host NetworkManager[1027]: <info> [1607028944.2517] manager: (usb0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/9)
    Dec 04 09:55:44 host systemd-udevd[181574]: Using default interface naming scheme 'v245'.
    Dec 04 09:55:44 host boltd[941]: probing: started [1000]
    Dec 04 09:55:44 host systemd-udevd[181574]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
    Dec 04 09:55:44 host kernel: cdc_ncm 4-1.1:1.5 ens4u1u1i5: renamed from usb0
    Dec 04 09:55:44 host upowerd[1197]: treating change event as add on /sys/devices/pci0000:00/0000:00:1c.0/0000:01:00.0/0000:02:02.0/0000:37:00.0/usb4/4-1/4-1.1
    Dec 04 09:55:44 host NetworkManager[1027]: <info> [1607028944.2787] device (usb0): interface index 8 renamed iface from 'usb0' to 'ens4u1u1i5'
    Dec 04 09:55:44 host NetworkManager[1027]: <info> [1607028944.2835] device (ens4u1u1i5): state change: unmanaged -> unavailable (reason 'managed', sys-iface-state: 'external')
    Dec 04 09:55:44 host NetworkManager[1027]: <info> [1607028944.2859] settings: (ens4u1u1i5): created default wired connection 'Wired connection 1'
    Dec 04 09:55:44 host systemd-udevd[181575]: Using default interface naming scheme 'v245'.
    Dec 04 09:55:44 host systemd-udevd[181575]: ethtool: autonegotiation is unset or enabled, the speed and duplex are not writable.
    Dec 04 09:55:44 host dhcpcd[950]: ens4u1u1i5: waiting for carrier
    Dec 04 09:55:44 host dhcpcd[950]: ens4u1u1i5: waiting for carrier
    Dec 04 09:55:44 host kernel: cdc_ncm 4-1.1:1.5 ens4u1u1i5: network connection: disconnected
    Dec 04 09:55:47 host boltd[941]: probing: timeout, done: [2973976] (2000000)

Other

This has been observed by a bunch of people on similar software and hardware, not just the ones above. It also happens on Windows 10, for example.
Old question, but I'll still provide an answer since it might help new DisplayLink users. I've used DisplayLink under both Linux and Windows, with two different devices. Here are the common points that were causing issues (I'll address Windows too, since some users might be dual-booting):

- Check the USB suspend/powersave settings and try to disable them, especially if you have the DisplayLink device connected via a USB hub or a dock. PowerTOP is a helpful tool for that under Linux. Under Windows, try the power-saving settings.
- Some issues can be laptop specific; check for any additional BIOS power-saving configuration for graphics, USB or general performance.
- Check that you don't have a Wi-Fi adapter or a Bluetooth wireless device too close to the DisplayLink adapter. I had this issue with my older laptop; using the other USB port or connecting via a LAN cable solved it.
- Check what versions of DisplayLink and evdi you're using. Your distro might have older versions that don't have some of the newer bugs fixed. Also, some Linux kernel versions have bugs or features that are not yet integrated with evdi. Try both the latest stable and LTS kernels if available.
- Try switching to the default Intel modesetting driver on Linux, and install the latest DisplayLink and graphics drivers on Windows.
- If it still won't work, check the evdi GitHub page, especially the Issues section. You might find a hint or advice on which version works best.
- The Arch Wiki DisplayLink page offers some good installation and troubleshooting tips (suspend issues, display recognition problems, redraw issues, etc.).
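To make the first point concrete, here is one hedged sketch of disabling USB runtime power management for the dock on Linux. The device path `4-1.1` is taken from the logs in the question and will differ per machine (find yours with `lsusb -t`); the vendor ID in the udev rule is an assumption to be read from your own `lsusb` output:

```shell
# Keep the dock's USB device awake by turning off runtime PM for it.
# "4-1.1" is the bus path from the question's logs; adjust for your box.
echo on | sudo tee /sys/bus/usb/devices/4-1.1/power/control

# Sketch of a persistent udev rule (the idVendor value is an example;
# take the real vendor ID from `lsusb` for your dock).
cat <<'EOF' | sudo tee /etc/udev/rules.d/50-dock-nosuspend.rules
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="17e9", ATTR{power/control}="on"
EOF
sudo udevadm control --reload
```

This only addresses the suspend/powersave angle; the other bullet points (driver versions, interference, BIOS settings) need separate checks.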
How to avoid DisplayLink dock monitors disconnecting intermittently?
I've tried this: https://wiki.archlinux.org/index.php/xrandr#Permanently_adding_undetected_resolutions

Here I was advised on the Debian IRC channel to change the path of the config file to /etc/X11/xorg.conf.d instead of /etc/X11/xorg.conf.d/10-monitor.conf as per the ArchWiki. However, the person who was helping me out vanished.

And also this: How to set custom resolution using xrandr when the resolution is not available in 'Display Settings'

None of the above worked. Using arandr instead, I see that the maximum available resolution is 1024x768, which is weird considering that the "manual" says it can support up to 1080p at 60Hz, which is what I want. Mind helping me out?
In Debian, the default directory that contains xorg.conf files is indeed /usr/share/X11/xorg.conf.d/. This would be the correct location to place custom X settings. Here is the configuration file you need to force a specific resolution (and have no other to choose from!). Save it as /usr/share/X11/xorg.conf.d/10-monitor.conf and reboot. Note, I generated the modeline with the command cvt 1280 1080.

    Section "Monitor"
        Identifier "VGA1"
        Modeline "1280x1080_60.00"  115.00  1280 1368 1496 1712  1080 1083 1093 1120 -hsync +vsync
    EndSection

    Section "Screen"
        Identifier "Screen0"
        Device "Device0"
        Monitor "VGA1"
        SubSection "Display"
            Modes "1280x1080_60.00"
        EndSubSection
    EndSection

    Section "Device"
        Identifier "Device0"
        Driver "intel"
    EndSection

If this still doesn't work, then your VGA adapter probably doesn't support this mode, despite what the manual says.
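Not from the answer, but a common way to try a mode at runtime before committing it to an xorg.conf snippet is to feed the same cvt modeline to xrandr. A sketch, reusing the numbers and the VGA1 output name from the answer (check your actual output name with `xrandr -q`):

```shell
# Try the custom mode without rebooting. If the adapter rejects it,
# xrandr will fail here instead of leaving you with a blank X session.
xrandr --newmode "1280x1080_60.00" 115.00 1280 1368 1496 1712 1080 1083 1093 1120 -hsync +vsync
xrandr --addmode VGA1 "1280x1080_60.00"
xrandr --output VGA1 --mode "1280x1080_60.00"
```

If this works, the permanent config file above should work too; if it fails with a BadMatch/BadValue error, the adapter likely cannot do the mode at all.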
Can't set the resolution to 1080 on a VGA to HDMI adapter on Debian 10 (Thinkpad X220)
On my Odroid running Ubuntu 16.04.3 LTS I have a Python 2 program that interacts with a display device (projector). When I run this program from the command line:

    python ~/imgProc/torcam.py

everything works fine. I want this program to run at startup, so I created a service file, /lib/systemd/system/torcam.service. It contains the following:

    [Unit]
    Description=Torcam Service
    After=rc-local.service network-online.target

    [Service]
    User=root
    ExecStart=/home/odroid/imgProc/starttor.sh

    [Install]
    WantedBy=multi-user.target

The startup script, starttor.sh, contains:

    #!/bin/bash
    export DISPLAY=:0
    cd /home/odroid/imgProc
    python ./torcam.py

If I run this script from the command line everything works fine, but when I run it at boot or using systemctl I get an error saying "cannot open display: :0". Here's how I installed the service:

    odroid@odroid:~/imgProc$ sudo systemctl enable torcam
    odroid@odroid:~/imgProc$ sudo systemctl start torcam
    odroid@odroid:~/imgProc$ sudo systemctl status torcam

Here's the output:

    ● torcam.service - Torcam Service
       Loaded: loaded (/lib/systemd/system/torcam.service; enabled; vendor preset: enabled)
       Active: failed (Result: exit-code) since Fri 2016-02-12 08:39:21 EST; 6min ago
      Process: 1807 ExecStart=/home/odroid/imgProc/starttor.sh (code=exited, status=1/FAILURE)
     Main PID: 1807 (code=exited, status=1/FAILURE)

    Feb 12 08:39:19 odroid systemd[1]: Started Torcam Service.
    Feb 12 08:39:19 odroid systemd[1807]: torcam.service: Executing: /home/odroid/imgProc/starttor.sh
    Feb 12 08:39:21 odroid starttor.sh[1807]: No protocol specified
    Feb 12 08:39:21 odroid starttor.sh[1807]: (test:1808): Gtk-WARNING **: cannot open display: :0
    Feb 12 08:39:21 odroid systemd[1]: torcam.service: Main process exited, code=exited, status=1/FAILURE
    Feb 12 08:39:21 odroid systemd[1]: torcam.service: Unit entered failed state.
    Feb 12 08:39:21 odroid systemd[1]: torcam.service: Failed with result 'exit-code'.
I understand that the problem is related to starting a service that interacts with the display, but I'm stuck on how to enable this so it works on boot. Any suggestions?
Your torcam needs an X server, and systemd scripts are not meant to use the X server (it may not even have started when the scripts are run). If you need this program to access the display device/projector independent of the user logging in via keyboard/monitor, consider using two X servers (one for the monitor, one for the projector), and starting torcam with correct authentication (man xauth etc.) when the X server for the projector has started. You'll need to read up on xorg.conf, and how to restrict the X server to just some outputs (assuming it's a single graphics card with multiple outputs). You'll also need to read up on how an X server is started (display manager like xdm etc., what you need for your monitor, vs. directly).
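If the goal is simply to reach an X server that an auto-logged-in user already runs on :0 (rather than the dedicated second server described above), one common sketch is to hand the service that session's display and authority file. Everything below is an assumption to adapt: the user name, the ~/.Xauthority location (display managers may store the cookie elsewhere), and the ordering target:

```ini
# Hypothetical torcam.service variant: borrow the desktop session's X
# credentials. Assumes user "odroid" is logged in on display :0 and the
# session cookie lives in /home/odroid/.Xauthority.
[Unit]
Description=Torcam Service
After=graphical.target

[Service]
User=root
Environment=DISPLAY=:0
Environment=XAUTHORITY=/home/odroid/.Xauthority
ExecStart=/home/odroid/imgProc/starttor.sh
Restart=on-failure

[Install]
WantedBy=graphical.target
```

The "No protocol specified" line in the journal is the telling symptom: the client reached the X server but presented no valid cookie, which is exactly what XAUTHORITY supplies.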
Linux Service Cannot Open Display
I am experiencing some flickering problems with Google Chrome (and Chromium, but not on Firefox) on certain websites. The thing is that the flickering manifests only on Cinnamon desktop environment, and not on Xfce. Turning off hardware acceleration does not help: it turns the flickering into tearing. Notes: Cinnamon was installed after Xfce, the Chrome-flickering issue is the only problem I am facing with Cinnamon. Also, I am using Bumblebee to run programs on nVidia graphics card on-demand, but since everything works well on Xfce, I am guessing that it's a (Cinnamon/Chrome)-related problem. Here is the inxi -Fxz output on Xfce. I have observed that Cinnamon uses GTK3, whereas Xfce uses GTK2. Any suggestions on how I can approach this issue, to at least find the culprit?
It appears that this solves it:

    sudo apt-get purge xserver-xorg-video-intel
Google Chrome flickers on Cinnamon, but not on Xfce
I want a Python script to send a notification. The script runs successfully and shows what I want when launched by hand, but crontab does not start it in the right way. The following is the code that refers to libnotify:

    def SendMessage(title, message):
        pynotify.init("ChinaBank")
        notice = pynotify.Notification(title, message)
        notice.show()
        return

What I did in crontab is:

    * * * * * display=`/home/li/script/FetchDisplay.sh` && export DISPLAY=$display && /home/li/projects/fetch-data/EuroForex.py 2>/home/li/error

Here, FetchDisplay.sh gets the display as follows:

    #!/bin/bash
    if [ "$DISPLAY" != "" ]; then
        echo $DISPLAY
        exit
    fi
    if [ "$USER" = "" ]; then
        USER=`whoami`
    fi
    pinky -fw | awk -v user=$USER 'NF == 6 {if ($1 == user) {print $6}}' | awk 'NR==1 {print $0}'

The error output is:

    Traceback (most recent call last):
      File "/home/li/projects/fetch-data/EuroForex.py", line 43, in <module>
        SendMessage("Please be ready to sell", str(SellData))
      File "/home/li/projects/fetch-data/EuroForex.py", line 15, in SendMessage
        notice.show()
    glib.GError: Error spawning command line 'dbus-launch --autolaunch=970be6bbf9ff49009918057c308cf56e --binary-syntax --close-stderr': Child process exited with code 1

I know the DISPLAY is :0, through the command echo $DISPLAY, so I tested gnome-screensaver-command with it:

    * * * * * export DISPLAY=:0 && /usr/bin/gnome-screensaver-command --lock 2>/home/li/screenerror

Unfortunately, it did not work, and the output is:

    ** Message: Failed to get session bus: Error spawning command line 'dbus-launch --autolaunch=970be6bbf9ff49009918057c308cf56e --binary-syntax --close-stderr': Child process exited with code 1

I thought it was a problem with DISPLAY, but that variable is correct. Could you tell me what is going on?
I found the answer here: Update Pidgin IM status on Ubuntu using cron. cron runs in its own environment, so we have to find and set the variables DBUS_SESSION_BUS_ADDRESS, XAUTHORITY, and DISPLAY. I followed the instructions and succeeded in setting the variables. My script works now!
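A sketch of how those variables can be recovered from a running desktop process. The session process name (gnome-session) is an assumption; any long-lived process owned by the logged-in user works, and /proc is Linux-specific:

```shell
# Print the value of variable $2 from the NUL-separated environment of
# process $1 (read from /proc/<pid>/environ).
env_of_pid() {
    tr '\0' '\n' < "/proc/$1/environ" | sed -n "s/^$2=//p"
}

# Hypothetical use at the top of the script that cron launches:
#   pid=$(pgrep -u "$LOGNAME" gnome-session | head -n 1)
#   export DISPLAY=$(env_of_pid "$pid" DISPLAY)
#   export DBUS_SESSION_BUS_ADDRESS=$(env_of_pid "$pid" DBUS_SESSION_BUS_ADDRESS)
#   export XAUTHORITY=$(env_of_pid "$pid" XAUTHORITY)
```

With those three variables exported, both notify-send-style clients and gnome-screensaver-command can find the session bus instead of trying (and failing) to autolaunch one.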
dbus-launch fails because the child process exited
For presentations I want to use an external USB-connected video adapter connected to the projector (especially as the standard VGA output on my laptop got damaged). My idea was to start a dedicated X server on this additional video adapter and make it available via VNC. Then I can connect to it with xvncviewer to manage the presentation from my main X session (and have additional applications open which are not visible to the audience). However, the problem is that if I start the X server on this additional video adapter, it uses a certain virtual console. As soon as I switch virtual consoles to return to my main X session (to access the presentation X server via VNC), the presentation X server stops refreshing (as it sees that its virtual console is not active). Is there any way to start the additional X server in such a way that it keeps refreshing while I'm working in my main X session?
Sorry for answering my own question, but as no other responses were published, I've decided to do it. It seems that I have found a working solution:

1. I start the X server with the -sharevts flag:

       X -config displaylink.conf -sharevts :2

2. I start my applications:

       DISPLAY=:2 x-window-manager
       DISPLAY=:2 xterm

3. I start the VNC server:

       x11vnc -localhost -display :2

The displaylink.conf has the following contents:

    Section "Device"
        Identifier "dl1"
        Driver "fbdev"
        Option "fbdev" "/dev/fb1"
    EndSection

    Section "InputDevice"
        Identifier "Generic Keyboard"
        Driver "void"
        Option "CoreKeyboard"
    EndSection

    Section "InputDevice"
        Identifier "Configured Mouse"
        Driver "void"
        Option "CorePointer"
    EndSection

    Section "Monitor"
        Identifier "monitor0"
    EndSection

    Section "Screen"
        Identifier "screen0"
        Device "dl1"
        Monitor "monitor0"
        DefaultDepth 16
    EndSection

    Section "ServerLayout"
        Identifier "external"
        Screen "screen0"
        InputDevice "Generic Keyboard" "CoreKeyboard"
        InputDevice "Configured Mouse" "CorePointer"
        Option "AutoAddDevices" "Off"
    EndSection

Special case: the additional display is bigger than the main one

Last time I faced a situation where the projector used for the presentation had a higher resolution than my laptop. This made it necessary to scroll the presentation screen to access the menu or toolbar, which was very inconvenient during a lecture. After some unsuccessful attempts to force the projector to use one of the lower resolutions acceptable for the LCD screen of my laptop, I found that I can use the -scale option of x11vnc. So step 3 above becomes:

    x11vnc -localhost -scale 1024x768 -display :2

The solution is not perfect, as the desktop image is a little blurred (on the laptop, not on the screen), but I can easily navigate through my presentation and demo applications.
How to have X server on another graphics card which keeps refreshing when I'm working on my main display?
Background: I installed Linux (dual-boot) on an iMac 2010 a couple of years ago, and it works perfectly. Recently I spent hours figuring out why the iMac was not running "Target Display" mode under macOS / OS X. Got it: it needs High Sierra or earlier (I had incorrectly assumed High Sierra or newer). More and more older iMacs are getting (somewhat) obsolete. That is a shame, since we can really re-use them well: the screens are great (2560x1440), and with an SSD they are very fast. I just hate to throw away good working hardware.

Question: Can I boot the iMac into Linux and make it work in a kind of 'monitor mode' using the Mini DisplayPort port? I really want to connect via a cable; I'd prefer not to mirror the screen via Wi-Fi etc., like the links supplied, for example, here.
I did not find an answer for this, but I did find a workaround, though it is not Linux related. What I did:

1. I decided to use this iMac only as an external monitor.
2. I unplugged the Mac from the internet (disconnected the eth0 cable and shut down Wi-Fi too).
3. I created a new user 'screen' with a simple password that I can type fast.
4. I configured that user to log in automatically (you need to turn off FileVault for that).
5. I connected an old keyboard to it.

Now I just turn on the Mac; it starts OS X High Sierra and logs in the user. Then I connect my laptop and I have a 'free' 27" screen, hands-free! And a great screen too! When shutting down, just press the start/shut-down button on the back of the Mac and use the cheap keyboard to select 'shutdown'. It also wakes up the iMac as an external screen nicely.

Personally, for me it really is an environmental choice. I mean, a 27" monitor is 180 euros or dollars or so; our company can afford that. But why buy a new monitor that is shipped all the way from China and uses costly materials to produce, when the old screen is still good? Reduce, reuse, recycle! Even though the iMac consumes around 150 watts, it is better for the environment as long as you don't use it too much. Hope this helps!
Run Linux on iMac 2010 and use as external monitor ("Target Display" mode)?
Using a fresh install of Fedora 34 Workstation edition (GNOME) on a Dell Latitude 7400 laptop, I'm struggling to make my external display work, an Asus ROG PG278Q 27". Connecting through the USB-C port, the display shows up in the system settings as an "unknown display" and the resolution is capped at 1024x768.

Things I already did:

- updated all packages with dnf update
- tried setting the resolution with xrandr, following this solution:

      xrandr --newmode "2560x1440-144ghz" 808.75 2560 2792 3072 3584 1440 1443 1448 1568 -hsync +vsync
      xrandr --addmode XWAYLAND1 "2560x1440-144ghz"
      xrandr --output XWAYLAND1 --mode "2560x1440-144ghz"

  The last command gives me "X Error of failed request: BadValue (integer parameter out of range for operation)".

System info:

- Graphics: Mesa Intel® UHD Graphics 620 (WHL GT2)
- Windowing system: Wayland
- GNOME version: 40.4
- lspci | grep VGA output: 00:02.0 VGA compatible controller: Intel Corporation WhiskeyLake-U GT2 [UHD Graphics 620] (rev 02)

This is my first time in many years switching to a Linux desktop. This display used to work on my previous Windows 10 installation, and it also worked in my (brief) Rocky Linux 8 installation.
I was finally able to get this fixed by changing my windowing system from Wayland (the default on GNOME) to X11, then following the xrandr steps I posted in the original question. You can check your windowing system in Settings > About. You can change it when you log out: there is an option at the bottom right-hand side of the screen where you input your password.
External monitor stuck on "Unknown display" 1024x768 - Fedora 34
    user@domain:~ $ echo $DISPLAY
    :0
    user@domain:~ $ DISPLAY=:0
    DISPLAY=:0: Command not found.

I'm just trying to run the command from here: https://stackoverflow.com/questions/46810043/notify-send-doesnt-work-over-ssh. How come it's not recognized as a command? I'm using CentOS v7.7.
tcsh has a different syntax from Bash. To set your variable in that shell and make it available to the programs you'll call, you have to use the following commands:

    setenv DISPLAY :0
    ssh ....

Or if you want that variable set only to launch ssh (as mentioned in the answer you linked), you can use env before the command:

    env DISPLAY=:0 ssh ......
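A quick way to see the env-variant's scoping in action (this demo runs in any POSIX shell; the :0 value is just the one from the question):

```shell
# env sets DISPLAY only for the single child process it launches;
# the surrounding shell's environment is untouched.
unset DISPLAY
env DISPLAY=:0 sh -c 'echo "child sees: $DISPLAY"'   # child sees: :0
echo "parent sees: ${DISPLAY:-<unset>}"              # parent sees: <unset>
```

That per-command scoping is exactly what `VAR=value command` does in Bash, and what `env VAR=value command` gives you portably in tcsh.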
DISPLAY=:0: Command not found
Every time I open a new console, the error info is shown.

    xxd ~/.Xauthority
    00000000: 0100 0006 6465 6269 616e 0002 3130 0012  ....debian..10..
    00000010: 4d49 542d 4d41 4749 432d 434f 4f4b 4945  MIT-MAGIC-COOKIE
    00000020: 2d31 0010 1fba cba8 1f6a f8b6 e00d 8c1a  -1.......j......
    00000030: c7cb 7d86 0100 0006 6465 6269 616e 0001  ..}.....debian..
    00000040: 3000 124d 4954 2d4d 4147 4943 2d43 4f4f  0..MIT-MAGIC-COO
    00000050: 4b49 452d 3100 1050 f7f6 b85b 77e1 49e4  KIE-1..P...[w.I.
    00000060: a0c6 470d 7b11 a9                        ..G.{..

How to fix it?
Invalid MIT-MAGIC-COOKIE-1 keyxhost: unable to open display ":0"

This is actually two error messages that have been printed out on the same line:

    Invalid MIT-MAGIC-COOKIE-1 key
    xhost: unable to open display ":0"

When you log in using an X11 GUI, that session is automatically given a DISPLAY environment variable and a session-specific access key (stored either in ~/.Xauthority or in a file specified by the XAUTHORITY environment variable). Console logins are separate from the GUI login, and so a console login session will not automatically get any of that. And you cannot use xhost to configure the access control of the GUI session unless you have access to the GUI session in the first place.

When a GUI session ends and an X11 server is restarted, a new session key is generated on the X11 server side, which automatically invalidates the previous key. But the old session key may be left in the user's .Xauthority file. It will automatically be replaced during the next GUI login. So the existence of a MIT-MAGIC-COOKIE-1 key in the .Xauthority file does not mean it's necessarily the current key.

If you run pgrep -a Xorg, you might see the command-line parameters of the X server process as something like:

    Xorg -nolisten tcp -auth <some path> <other options...>

The path specified by the -auth option is the current server-side session key file: if you have root access, you can view it with e.g. xauth -f <some path> list and compare it to the contents of your own .Xauthority file, which is best viewed with xauth list. The output will be one or more lines like this:

    debian/unix:0  MIT-MAGIC-COOKIE-1  <actual key in hexadecimal>

The server-side key file should always have exactly one line, but if you have used SSH connections with X11 forwarding, you might have other lines in your own .Xauthority file, starting with e.g. debian/unix:10 or higher display numbers.

If the xauth list output from your .Xauthority file includes a line that exactly matches the single line displayed by xauth -f <some path> list, you will be able to access the X server; if there is no matching line, the X server will reject your requests with an Invalid MIT-MAGIC-COOKIE-1 key error.

I guess you may have an xhost command in your ~/.profile, ~/.bashrc or a similar login script. You should wrap it in a test for the presence of the $DISPLAY variable before running xhost, so instead of e.g.:

    xhost +local:

you would have e.g.:

    if [ "$DISPLAY" != "" ]
    then
        xhost +local:
    fi

But if the default location for the ~/.Xauthority file is used, and you are doing this only to allow using GUI administration tools when using sudo to get root access, there might be a more secure way. Instead of adding xhost +local: (which allows everyone who e.g. SSHs into the same host access to your GUI session), you could add something like this to your ~/.bashrc:

    if [ "$SUDO_USER" != "" ] && [ "$DISPLAY" != "" ]
    then
        export XAUTHORITY=$(grep "^${SUDO_USER}:" /etc/passwd | cut -d : -f 6)/.Xauthority
    fi

Instead of relaxing the security on your GUI session, this uses the fact that root can read everything (so it won't work if your home directory is an NFS mount that is exported with the root_squash option). When you are using sudo, this sets the XAUTHORITY variable to point directly to the .Xauthority file in the home directory of your personal user account.

(Also, this trick does not work if you use sudo su -. Use sudo -i instead, and add a similar snippet to /root/.bashrc or /root/.profile. But be careful when editing those files: an unfortunate mistake could make it very difficult to get a working root shell again.)
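One small refinement to the home-directory lookup in that snippet, not from the answer itself: parsing /etc/passwd directly misses accounts served from LDAP/NIS, whereas getent queries all configured NSS sources. A sketch of the same lookup as a helper:

```shell
# Resolve a user's home directory via NSS instead of raw /etc/passwd.
user_home() {
    getent passwd "$1" | cut -d : -f 6
}

# Hypothetical use in ~/.bashrc, mirroring the answer's snippet:
#   export XAUTHORITY="$(user_home "$SUDO_USER")/.Xauthority"
```

On a typical Linux system `user_home root` prints /root, the same value the grep pipeline would produce, but the helper also works for network-backed accounts.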
Invalid MIT-MAGIC-COOKIE-1 keyxhost: unable to open display ":0"
How can I display the number of registered users on the system who have their home directory in /home and, at the same time, have the Bash shell as their command interpreter?
You could just grep the /etc/passwd file for lines that have :/home (so a field that starts with /home), then more non-: characters, and only one more : before the end, which should be followed by /bin/bash:

    $ grep ':/home/[^:]*:/bin/bash' /etc/passwd
    terdon:x:1000:1000::/home/terdon:/bin/bash
    bib:x:1001:1001::/home/bib:/bin/bash

So, to display the number only:

    $ grep -c ':/home/[^:]*:/bin/bash' /etc/passwd
    2
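An equivalent check with awk, testing the passwd fields themselves rather than matching the whole line with a regex. The heredoc is inline sample data so the sketch runs anywhere; point the function at /etc/passwd for real use:

```shell
# Count users whose home directory is under /home and whose login shell
# is /bin/bash. $6 is the home-directory field, $7 the shell field.
count_home_bash() {
    awk -F: '$6 ~ "^/home/" && $7 == "/bin/bash"' "$1" | wc -l
}

# Sample data standing in for /etc/passwd.
cat > sample_passwd <<'EOF'
root:x:0:0:root:/root:/bin/bash
terdon:x:1000:1000::/home/terdon:/bin/bash
bib:x:1001:1001::/home/bib:/bin/bash
svc:x:999:999::/home/svc:/usr/sbin/nologin
EOF

count_home_bash sample_passwd    # prints 2
```

Field-based matching avoids false positives if a stray `:/home/...:/bin/bash` substring ever appears in a GECOS field.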
Display the number of registered users
My computer has only one display. Does it correspond to $DISPLAY :0? Can an application run on an arbitrary display number, even if I don't see it?

    $ DISPLAY=:40 firefox

Can an X server run on an arbitrary display number? Will the kernel implicitly create a virtual display?

    $ xpra start :7

Thanks.
You can specify an arbitrary display, but you won’t get far if there’s no corresponding X server. The display number is specified when the X server is started, by whatever starts the X server — typically your display manager, or yourself in your Xpra example. It’s :0 by default (see the Xserver manpage). It can be chosen arbitrarily, but the X server won’t start if the corresponding resources aren’t available (port 6000 + the display number if it’s configured to listen on TCP, /tmp/.X11-unix/X followed by the display number if it’s configured to listen on a Unix domain socket, etc.). The kernel isn’t involved.
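The resources mentioned above follow a simple convention: display :N claims TCP port 6000+N (when TCP listening is enabled) and the Unix socket /tmp/.X11-unix/XN. A small sketch that computes both for a given display number:

```shell
# Print the resources an X server on display :N would claim,
# per the conventional X11 numbering (TCP 6000+N, /tmp/.X11-unix/XN).
display_resources() {
    n=$1
    echo "tcp port:    $((6000 + n))"
    echo "unix socket: /tmp/.X11-unix/X$n"
}

display_resources 7
# tcp port:    6007
# unix socket: /tmp/.X11-unix/X7
```

Listing /tmp/.X11-unix/ on a running system is therefore a quick way to see which display numbers are already taken before starting something like `xpra start :7`.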
Can I specify an arbitrary `$DISPLAY`?
I am trying to run Arch Linux ARM on my Raspberry Pi 2B and running into some issues with displaying anything with Xorg. I have the xf86-video-fbdev driver installed, which is what I have seen suggested in many other posts. I did try configuring a .xinitrc to run just a window manager, as well as installing and enabling sddm. Both produced the same result of just showing a black screen on startup. I've also seen in other posts that using the xf86-video-fbturbo-git driver is recommended, but this no longer seems to be an option in the alarm repo, so I'm not sure how I would go about installing that. What might be the issue here?
For those that are having a similar issue, I ended up finding a solution that works for me. My solution was to create a file at /etc/X11/xorg.conf.d/10-driver.conf with the following contents:

    Section "Device"
        Identifier "card0"
        Driver "fbdev"
        VendorName "All"
        BoardName "All"
    EndSection

This will force Xorg to choose the fbdev driver for graphics card 0 (which seems to be the default). After creating this file, a reboot displayed graphics!
Arch Linux ARM on RPi 2B, Xorg not Displaying Anything
I want to set up read-only screens around a room that display different messages at different times. A single laptop would run a Bash script that checks the times and decides which messages to send to which display, and when. This could either be text only (maybe a console, but with a huge font) or pictures; whichever is possible will be fine. A very basic script might look something like this:

    #!/bin/bash
    echo "Good morning!" > /dev/screen1
    echo "Please stack the blocks as high as you can!" > /dev/screen2

My workplace has many unused Lenovo Yogas, so I could fold back the keyboard and mount seven or eight of them around the room with just the screens showing. I don't think the workplace would allow me to remove Windows 10 from these devices. Text or pictures, either is fine. The Yogas do have HDMI ports. People in the room won't interact with the screens; they display information only. The question is: how do I get text (or optionally pictures) from a Bash script on a laptop running Linux to display these different messages or pictures on the Windows screens scattered around the room?
Set up a simple http server on your Linux machine with a number of static html pages. Write your messages from bash to those pages directly. Open those pages in a browser on your Windows machine. You can use some javascript magic to autoreload the contents when new data comes in. Example:

On the Linux machine, set up a static http server and let it serve from /var/www/room/:

mkdir /var/www/room/
cd /var/www/room/
python3 -m http.server

Create a page /var/www/room/index.html:

<head>
  <meta charset="UTF-8">
</head>
<body>
  <div id="data">
    <!-- here will be an autoreloaded data -->
  </div>
  <script>
    const AUTORELOAD_TIMEOUT = 1000; // milliseconds
    setInterval(async () => {
      /* Load data from an address after the hash-sign (#) and put it into div#data.
         E.g. if the browser location is:
           http://somesite/some/path#some/file/name
         then the function will load data from the page:
           http://somesite/some/file/name */
      const hash = document.location.hash
      if (hash.length <= 1) {
        return
      }
      const file = hash.slice(1)
      const response = await fetch(file)
      if (response.status === 200) {
        document.getElementById("data").innerHTML = await response.text()
      }
    }, AUTORELOAD_TIMEOUT)
  </script>
</body>

On the Windows machine, open a browser at http://your-linux-machine-ip:your-linux-machine-port/index.html#screen1

On the Linux machine, write to a file screen1:

echo "Hello, world!" > /var/www/room/screen1

Check out the Windows machine: the page should show the text Hello, world!
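The time-based selection from the question then plugs straight into this setup: the bash side only ever writes plain files into the served directory. A sketch (the function name and messages are placeholders, not part of any tool):

```shell
# Pick messages by hour and write them where http.server can see them.
update_screens() {
    # $1 = directory served by `python3 -m http.server`
    hour=$(date +%H)
    if [ "$hour" -lt 12 ]; then
        echo "Good morning!" > "$1/screen1"
    else
        echo "Good afternoon!" > "$1/screen1"
    fi
    echo "Please stack the blocks as high as you can!" > "$1/screen2"
}

# e.g. run from cron once a minute: update_screens /var/www/room
```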
How to control screens on several Windows computers placed around a room?
1,539,605,791,000
I want to preface this by saying that I understand RPM-Fusion exists and that it has a way to install Nvidia drivers. But I am using Nvidia's cuda repo instead, because I am interested in development rather than playing games (which is what my Windows computer on the remote-controlled display switch is for). Unfortunately, Windows is a very difficult platform for development compared to Unix-based OSes: https://developer.download.nvidia.com/compute/cuda/repos/fedora33/x86_64 From there, I have installed Nvidia's drivers (and Cuda drivers) via yum repo and dnf, but it appears as if Nvidia is not yet enabled. I trust Nvidia to deliver working Nvidia code, but it seems they missed something when it comes to working Linux packages -- that is fine. But there is some bit that needs to be set in some boot configuration somewhere to activate the installed Nvidia drivers. This is getting back into the weeds and revisiting the manual installation process with grubby, gdm and Fedora (which is now obfuscated by the RPM-Fusion installation content). The question is thus: what are the manual boot configuration settings that need to be adjusted to enable an installed Nvidia driver? (I have already disabled, blacklisted and removed Nouveau.)
Edit /etc/sysconfig/grub and append rd.driver.blacklist=nouveau to the end of GRUB_CMDLINE_LINUX="...".

## Example row with Fedora 33 BTRFS ##
GRUB_CMDLINE_LINUX="rhgb quiet rd.driver.blacklist=nouveau"

## OR with LVM ##
GRUB_CMDLINE_LINUX="rd.lvm.lv=fedora/swap rd.lvm.lv=fedora/root rhgb quiet rd.driver.blacklist=nouveau"

Then update the grub2 conf:

## BIOS ##
grub2-mkconfig -o /boot/grub2/grub.cfg

## UEFI ##
grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
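If you prefer not to edit the file by hand, the append can be scripted. This is only a sketch (the helper is mine, not part of any Fedora tooling), and it assumes the standard one-line GRUB_CMDLINE_LINUX="..." format:

```shell
# Append a kernel parameter inside GRUB_CMDLINE_LINUX="..." if not already there.
append_grub_param() {
    # $1 = grub config file, $2 = parameter to append
    grep -q "$2" "$1" && return 0
    sed -i "s/^\(GRUB_CMDLINE_LINUX=\"[^\"]*\)\"/\1 $2\"/" "$1"
}

# As root: append_grub_param /etc/sysconfig/grub rd.driver.blacklist=nouveau
```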
Fedora 33: Installing Nvidia Drivers from Nvidia
1,539,605,791,000
I connect from a Windows PC to a Linux PC with ssh using MobaXTerm. Within the ssh session, I have a tmux session with a few windows and panes. The ssh session will typically disconnect after a few hours of inactivity (I've tried toying with what looked like relevant keep-alive settings in MobaXTerm, but it's never worked). First-world problem: after starting a new ssh session and reattaching to my existing tmux sessions, the $DISPLAY variable will sometimes be set "incorrectly" - by which I mean, when I launch a GUI that uses X-Windows (e.g. Firefox), I'll get "cannot open display" error messages. E.g.:

$ firefox &
[1] 23077
$ Unable to init server: Broadway display type not supported: localhost:11.0
Error: cannot open display: localhost:11.0
[1]+  Exit 1                  firefox
$ echo $DISPLAY
localhost:11.0

Usually when I open a new terminal, I'll get the updated/correct value of $DISPLAY - from this terminal, I'll be able to successfully launch X-Window-using GUIs. Question: is there any way I can dynamically "update" the value of $DISPLAY in an existing terminal (i.e. a terminal that's been alive since before the ssh disconnection)? I.e. I'd like to avoid having to launch a new terminal just for the purpose of getting/discovering the new value of $DISPLAY. I don't really have a solid understanding of what $DISPLAY represents - so I'd be grateful if someone could explain what it represents and does in the context described.
DISPLAY is used by X clients (application programs) to find the corresponding X server to connect to. It's of the form hostname:displaynr.screennr, but usually you only see something like :0, which means the first display of the X server running on localhost. Using a hostname in there is not secure, because the X protocol is not encrypted. So ssh with X forwarding piggybacks on this scheme by finding a free display number, usually 10 or larger, and pretending to be an X server at this display number. But in reality it just forwards the X protocol over the ssh connection to the X server running on the host where the ssh client was called. So that's why you get different DISPLAY contents when you reconnect with ssh - each connection with ssh sets this variable, possibly to a different value. This new, correct value is visible before you attach to tmux. It's also visible when you open a new terminal, because a new terminal will copy the freshly set DISPLAY variable, while an old terminal will keep the DISPLAY variable it already has. So if, instead of attaching to tmux, you run a self-written script which reads out DISPLAY, attaches to tmux, and then sets DISPLAY in the existing sessions, you can automate what you need to do. However, doing that with tmux doesn't seem to be straightforward. Here is another question with a few suggestions for how to do that that may work for you (or not).
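One way to automate the wrapper the answer describes is a small attach script. This is a sketch: the path and name tmx are my inventions, but tmux set-environment and tmux show-environment are real tmux commands:

```shell
# Install a tiny attach wrapper that pushes the fresh DISPLAY into tmux first.
wrapper="${TMX_WRAPPER:-$HOME/bin/tmx}"
mkdir -p "$(dirname "$wrapper")"
cat > "$wrapper" <<'EOF'
#!/bin/sh
# record this ssh session's DISPLAY in the tmux server's global environment
tmux set-environment -g DISPLAY "$DISPLAY"
exec tmux attach
EOF
chmod +x "$wrapper"

# Inside an *old* shell (already running before the reconnect), pull the
# refreshed value with:
#   eval "$(tmux show-environment -s DISPLAY)"
```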
How do I correctly (re)set $DISPLAY?
1,539,605,791,000
I'm getting a different screen resolution (in a Python script) depending on whether I type commands into Cygwin manually or run them using a shell script, and I can't understand why. I'm running Windows 8.1 / Cygwin 2.8.2 (and XWin Server at startup) / Python 2.7.13 (and am new to Cygwin and Python; please let me know if I can provide more details). When I start Windows, open a Cygwin terminal, enter export DISPLAY=:0.0 and then enter python c:/users/<my directories>/<python-file>.py, my GUI Python script runs normally. (If w, h = root.winfo_screenwidth(), root.winfo_screenheight() appears in the Python script, for example, then the outputted values, corresponding to the screen resolution, are (1920, 1200).) However, if I try to automate this process by double-clicking on the following Windows batch file: c:\cygwin\bin\bash c:/users/<my directories>/<shell-file>.sh where the shell file contains #!/bin/sh export DISPLAY=:0.0 python c:/users/<my directories>/<python-file>.py then my GUI Python script runs but is distorted (i.e., it's too large for the screen), and the resolution comes back as (1280, 800). Why do these two methods give different results? How might I get a Python script to run with a resolution of (1920, 1200) in an automated way? Thank you.
I don't know exactly what the issue is, but here are a couple of things to check:

Environment

My first guess would be a difference in the environment between the Cygwin terminal and the bash started by the batch file. When running from the Cygwin terminal, execute the command env to see what the environment is (it's been a while since I've used Windows, but I suspect env is part of Cygwin or built into bash). Then add the same command to the start of your shell script:

#!/bin/sh
export DISPLAY=:0.0
env
python c:/users/<my directories>/<python-file>.py

Compare the two environments and see if there are any differences. Pro tip: you can write the output of the env command to a file with env > c:/users/<your directories>/env1.txt (then use env2.txt from your shell script) and then compare them with the command-line tool diff, i.e. diff env1.txt env2.txt.

Interpreter

The next thing is related: which python interpreter is being executed in each case? Do you only have one python installed on your system, or might you have several? For example, did you install through both Cygwin and Anaconda? If you have multiple, check which one is being executed with which python in each case.

GUI Toolkit

What GUI toolkit are you using? I'm not familiar with the API winfo_screenwidth(); presumably this is provided by some library/package you are using. Which package is that? I'm not sure about the Windows/Cygwin world, but in the Linux world it is possible for a program to determine whether or not stdin is a terminal, and the toolkit you are using might behave differently in either case. If so, the toolkit documentation would be where to go for that info. In addition, IIRC there are two ways of starting a program in Windows: the standard way, i.e. int main(), and the GUI way (I don't recall the name of the main function in this case). I recall that for some interpreted languages there are different interpreter binaries depending on which entry point you wanted. The GUI one might start or end with a w; for instance, I think javaw.exe is the GUI version and java.exe is the console version (for Java). Is there an equivalent for python on Windows?
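To make the environment comparison concrete, the dump-and-diff step might look like this (a sketch; the file names are arbitrary):

```shell
# From the Cygwin terminal:
env | sort > "$HOME/env_terminal.txt"

# ...and the same line, placed inside the .sh script, writing env_script.txt:
env | sort > "$HOME/env_script.txt"

# Then compare; identical environments print nothing:
diff "$HOME/env_terminal.txt" "$HOME/env_script.txt" && echo "no differences"
```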
Windows/Cygwin/Python: Resolution depends on manual entry or shell script?
1,539,605,791,000
local machine:

[mukesh@centos ~]$ xhost 192.168.4.200
192.168.4.200 being added to access control list

remote VM machine:

[mukesh@centos ~]$ ssh [email protected]
[email protected]'s password:
Last login: Fri Jul 7 02:38:07 2017
[user@labipa ~]$ DISPLAY=192.168.1.3:0.0; export DISPLAY
[user@labipa ~]$ firefox
Error: cannot open display: 192.168.1.3:0.0
[user@labipa ~]$ su -
Password:
Last login: Fri Jul 7 02:47:53 EDT 2017 on pts/1
[root@labipa ~]# cat /etc/ssh/sshd_config | grep X11F
X11Forwarding yes
# X11Forwarding no

Also, as per http://www.softpanorama.org/Xwindows/Troubleshooting/can_not_open_display.shtml on the remote machine:

[root@labipa ~]# netstat -tulpen | grep "\(177\|6000\)"
tcp   0  0 0.0.0.0:6000  0.0.0.0:*  LISTEN  0  50364  1512/Xorg
tcp6  0  0 :::6000       :::*       LISTEN  0  50363  1512/Xorg
udp   0  0 0.0.0.0:177   0.0.0.0:*          0  48805  1476/gdm

contents of /etc/gdm/custom.conf:

[security]
DisallowTCP=false
[xdmcp]
Enable=true
If X11 forwarding is enabled in both the client and the server, ssh will automatically set up the DISPLAY variable (pointing to a local proxy). You don't need to set it, and especially not directly to the machine's IP address; that will completely bypass the ssh mechanism. Use echo $DISPLAY to verify the display is set by ssh. If you only enabled X11 forwarding on the server (as shown), and don't want to enable it generally, use ssh -X to also enable it in the client on a per-use basis.
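A quick sanity check can be scripted. The helper and its heuristic below are my own: an ssh-forwarded DISPLAY normally points at localhost with a display number of 10 or more (sshd's default X11DisplayOffset), whereas a hand-exported one points straight at an IP address:

```shell
# Heuristic: does a DISPLAY value look like sshd's local X11 proxy?
is_ssh_forwarded_display() {
    case "$1" in
        localhost:[1-9][0-9]*) return 0 ;;  # e.g. localhost:10.0, localhost:11.0
        *)                     return 1 ;;
    esac
}

# is_ssh_forwarded_display "$DISPLAY" || echo "DISPLAY was not set by ssh"
```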
Why can i export the display from linux to linux?
1,539,605,791,000
My vim always gets distorted after I switch between tabs in the terminal.

The correct display
The distorted display

This always happens when I switch back to the vim tab, and it goes back to normal after I press a movement key like j or k.
Do you, by any chance, have a different font size in your other tabs? When switching between tabs of different font sizes in a maximized/fullscreen gnome-terminal (or mate-terminal), weird sizing issues occur. The terminal emulator wants to resize itself (to keep the same number of character cells, as you can see with unmaximized windows; keeping the same number of pixels couldn't work together with grid-aligned resizes), but on the other hand, the window manager pushes back and reverts the terminal emulator's resize attempt. This generates two consecutive back-and-forth resize events towards the client application. See e.g. https://bugzilla.gnome.org/show_bug.cgi?id=731137. In the meantime, we've seen multiple such bug reports in terminal emulators where it eventually turned out that vim fails to correctly handle resize events that arrive in short succession. This should be brought to the vim developers' attention and fixed by them.
terminal vim window gets distorted after switching tab
1,539,605,791,000
My primary display isn't working and I'm using a monitor. Unfortunately, Fedora 22 doesn't seem to pick up on this and acts as though I had two displays working simultaneously. As all system settings / the control panel, as well as several programs, launch on the primary display, I cannot get to some of the files I need (eg, Libre Calc), let alone adjust settings. How can I do this via command line, or through some shortcut?
You could try to set your primary display with xrandr. First query your displays by just executing:

xrandr

Then set the primary with e.g.:

xrandr --output HDMI1 --primary

Or disable the other display with:

xrandr --output eDP1 --off

Or you could write an /etc/X11/xorg.conf file.
Disabling primary display in Fedora 22 (without primary display visible)
1,539,605,791,000
The X window system in a desktop Linux (where just one physical monitor is used) usually uses display 0, screen 0. The output of who in Ubuntu 14.04 is:

user1  :0      2016-06-15 14:25 (:0)

where :0 is the abbreviation for :0.0 (:display.screen). Here I logged in only from the GUI. Then I opened a terminal emulator; I ran screen and I created two different windows (each of them simply contained bash). The resulting output of who was:

user1  :0      2016-06-15 14:25 (:0)
user1  pts/1   2016-06-15 14:26 (:0)
user1  pts/11  2016-06-15 16:31 (:0:S.0)
user1  pts/11  2016-06-15 16:31 (:0:S.1)

Why is this syntax used? It seems to be :display:display.screen. Does screen emulate another display inside the physical display?
You're referring to the text at the end of the line. That is written by screen to indicate which pseudo-terminal connection it is using, as well as which window number screen has assigned to it. Comments in the code indicate what it does:

/*
 * Construct a utmp entry for window wi.
 * the hostname field reflects what we know about the user (display)
 * location. If d_loginhost is not set, then he is local and we write
 * down the name of his terminal line; else he is remote and we keep
 * the hostname here. The letter S and the window id will be appended.
 * A saved utmp entry in wi->w_savut serves as a template, usually.
 */

and later

/*
 * we want to set our ut_host field to something like
 * ":ttyhf:s.0" or
 * "faui45:s.0" or
 * "132.199.81.4:s.0" (even this may hurt..), but not
 * "faui45.informati"......:s.0
 * HPUX uses host:0.0, so chop at "." and ":" (Eric Backus)
 */

concluding with the actual code, which you might recognize:

sprintf(host + strlen(host), ":S.%d", win->w_number);
strncpy(u.ut_host, host, sizeof(u.ut_host));

which stores the string in the ut_host member of the utmp/utmpx structure. Further reading:

utmp, wtmp - login records
who - show who is logged on
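That suffix can be picked apart with plain shell parameter expansion; a small sketch (the helper name is mine):

```shell
# Split screen's ut_host suffix, e.g. ":0:S.1", into the display part and
# the window number that screen appended.
parse_screen_host() {
    printf 'display=%s window=%s\n' "${1%:S.*}" "${1##*.}"
}

parse_screen_host ":0:S.1"    # -> display=:0 window=1
```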
Display used by screen in utmp
1,539,605,791,000
In VirtualBox 5.0.18 on Windows 10 I'm running the guest: Linux Mint 17.3 Cinnamon 64-bit Cinnamon 2.8.8 Linux kernel 3.19.0-32-generic with 3D acceleration enabled. Everything was working fine until I updated Linux this morning and started Atom. Atom displayed partially, then showed random glitchy images on the screen, then finally loaded. Then I couldn't close it at all. When pushing ctrl-alt-esc the window would close then restart continuously. I couldn't click anything on the screen. Now after rebooting the VM I can't click any 'system' areas - e.g. the taskbar, or notifications. I can click only the desktop to open files and things. But I can open the start menu using the Windows key. I can start & close apps successfully with the keyboard, but not Atom, that still hangs and forces me to reboot. And I need it for work. I've tried with and without 3D acceleration enabled. I have all latest Windows updates and graphics drivers. What else to try? Any idea what is going wrong? UPDATE It's not just Atom. Even though other applications (Firefox, LibreOffice) work fine, Visual Studio Code too stays frozen on the screen blocking all other apps when I minimise it: So now I can't work. Maybe the latest Cinnamon is broken? UPDATE It's not just Mint & Cinnamon. Trying to install Visual Studio Code on Ubuntu 16.04 also has the same problems with display, and then later crashing my whole virtual machine. There are too many variables here to pinpoint the problem - laptop graphics card, Virtualbox, Linux operating system, and the two applications themselves - Atom & VS Code. Although at home on my old desktop pc Atom runs fine in a Linux Mint VM on Windows 10.
The problem is caused by VBoxGuestAdditions_5.0.18; I believe the graphics driver in it is broken. Try installing VBoxGuestAdditions_5.0.16, which works just fine.
How do I fix my display after Atom broke it completely?
1,539,605,791,000
I recently started getting a fuzzy screen on start up when booting into Fedora. The best way to describe it is to say the screen looks like it has a H-Scroll problem - fuzzy horizontal lines. The mouse pointer is stable however. My current work around is to log out and back in, then I get a nice stable login screen (difficult to do since you are clicking on menu items you cannot see properly). I´m looking for a permanent fix. Fuzzy login screen - Fedora Clear mouse on fuzzy screen - Fedora
I had this exact issue on my machine where I am running both Fedora 20 x64 and Windows 8.1. Sometime in the summer I changed the GPU (Gigabyte GeForce GTX 750 Ti) to support two monitors, both using the HDMI interface. Today I needed to switch onto Fedora and was surprised by the same fuzzy (corrupted) screen on both monitors (LG 22M45). I did the following (below I list two links where I found the procedure):

Press Ctrl + Alt + F4 at the fuzzy screen in order to bring up a terminal, which was displayed correctly. Then I logged on and performed:

sudo yum update

Afterwards I did a reboot.

Installed the nvidia drivers (the author of the solution recommends akmod instead of kmod):

sudo yum install kmod-nvidia xorg-x11-drv-nvidia-libs

Finally, I removed Nouveau from the initramfs by performing:

mv /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r)-nouveau.img

and also:

dracut /boot/initramfs-$(uname -r).img $(uname -r)

Then I did a reboot, and the next time the Fedora logon screen appeared correctly. Here is the original procedure. The same instructions, but more concise, are also here.
Fuzzy screen on startup
1,539,605,791,000
I am working on a microcomputer with Yocto operating system (based on openembedded) where I don't have any graphical environment. The system is quite heavily loaded, and I would like to avoid adding a GUI to it. Recently there has been a need to project video from cameras, connected to the microcomputer (RTSP), to a screen, via HDMI. Assuming I can install any application on the device, do you know of any solution that would allow me to display the image on the screen? Currently, after connecting the HDMI cable, all I see is the system terminal.
Thanks to @dirkt I finally did it using ffmpeg with command like ffmpeg -fflags nobuffer -flags low_delay -rtsp_transport tcp -stimeout 1000000 -i <RTSP_stream_addr> -pix_fmt bgra -loglevel quiet -f fbdev /dev/fb0
Displaying video on a monitor without a graphic interface
1,539,605,791,000
I am using slim login manager on Debian 10. slim-1.3.6 part of the slim package is slimlock, a screenlocker. When I lock my screen (Ctrl+Alt+Del), two things happen: typical lock screen appears, where I have to provide password to unlock after few seconds my display goes to sleep When I then come back and either move mouse, or type on keyboard, then the display wakes up and the lock screen appears again, where I can type my password to unlock. I need to change the following: When screen is locked and display is asleep, do not wake up on mouse movements. Only when I type on keyboard should the display wake up, and the lock screen password dialog appear. How can I do this? Does this need to be patched in the slim/slimlock package, or is this xserver ? I would be happy to recompile the slim package, if somebody could kindly point me what to modify and where.
Mouse movements cause the DPMS extension in the X11 server to wake up the monitor. One way to stop this is to disable the mouse device before starting slimlock and re-enable it afterwards. First find the name of the mouse using xinput --list. For me it was Logitech USB Optical Mouse. You can then disable it with:

dev='Logitech USB Optical Mouse'
xinput --disable "$dev"

On returning from slimlock, use the same command with --enable instead. If you need the mouse to become active again once you start using the keyboard, for example to click some icon, you will need to poll the state of the monitor and notice when it turns on, indicating use of the keyboard. You can do this with, say:

while xset q | grep -q 'Monitor is Off'
do
    sleep 15
done
xinput --enable "$dev"

To avoid this immediately detecting that the monitor is on at the start, you might want to add a suitable sleep before this loop.
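For completeness, the whole disable/lock/poll/enable sequence can be folded into one function. This is only a sketch: the four commands are passed as parameters purely so the control flow can be exercised without X; in real use you would pass small wrapper functions around the xinput, slimlock and xset calls shown above:

```shell
# The disable/lock/poll/enable sequence as one function (sketch).
lock_with_mouse_disabled() {
    # $1 = lock command, $2 = disable mouse, $3 = enable mouse, $4 = status command
    $2
    $1                                      # blocks until the user unlocks
    while $4 | grep -q 'Monitor is Off'; do
        sleep 15                            # keyboard use has not woken it yet
    done
    $3                                      # monitor is back on: re-enable mouse
}
```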
Slim login manager: patch slimlock to not wake up display on mouse movement
1,539,605,791,000
I have a modern 1920x1200 LED display, capable of up to 96 kHz HorizSync, up to 76 Hz VertRefresh, and up to 205 MHz pixel clock. Apart from its native resolution, the display can also run in 1600x1200 (4:3) resolution at 75 Hz: 1600x1200 (0xa1) 202.500MHz +HSync +VSync h: width 1600 start 1664 end 1856 total 2160 skew 0 clock 93.75KHz v: height 1200 start 1201 end 1204 total 1250 clock 75.00Hz Now, I'm trying to switch it to a 800x600 doublescan mode equivalent to the above 1600x1200 mode (at 75 Hz, too; all modelines taken from here): xrandr --newmode "800x600d" 101.25 800 832 928 1080 600 600 602 625 DoubleScan +HSync +VSync xrandr --addmode DP1 800x600d The resulting video mode gets added successfully and is clearly visible in the output of xrandr: 800x600d (0x1f7) 101.250MHz +HSync +VSync DoubleScan h: width 800 start 832 end 928 total 1080 skew 0 clock 93.75KHz v: height 600 start 600 end 602 total 625 clock 75.00Hz But once I attempt to switch to the new mode, I immediately receive an error: # Mode by name $ xrandr --output DP1 --mode 800x600d xrandr: cannot find mode 800x600d # Mode by id $ xrandr --output DP1 --mode 0x1f7 xrandr: Configure crtc 0 failed Are modern graphics cards no longer able to run in a doublescan mode? Or should I blame my display?
Partial answer, and a quick one: graphics cards have evolved from the legacy model based on a CRTC (CRT controller) that started out as a 6845, with an index and data register to change internal registers, to something completely different in major graphics hardware lines (Intel, Nvidia, ATI/AMD, and others as well). In legacy VGA it was a sort-of agreed bit in one of the registers. Because legacy VGA is still supported by basically all graphics cards, that's still there. But in particular the circuitry to drive digital outputs has moved, so you really, really need the exact model of the graphics card, and you need to dig up a datasheet to be sure whether it is supported or not. Sometimes a datasheet of a similar model will help, but you need to experiment. E.g. have a look at the Intel G35 datasheet (which is already ancient), and you'll see two "display pipelines" in section 2.7, but the legacy double scan in register CR09. If you want to revive old hardware with VGA out, chances are it will work right out of the box, if you can find a suitable X version with a suitable driver for the card (but even the VESA driver might do; not sure if it does double scan, though). Adding a ModeLine via xrandr with double scan basically just sets a flag in the modeline data structure. The driver will need to interpret this flag and use it to set up the registers correctly, and that (as e.g. the datasheet above shows) depends very much on the hardware.
An attempt to switch to a doublescan X11 modeline fails with "Configure crtc 0 failed"
1,539,605,791,000
How can I find out the brand/model of the display used in a laptop? I have been digging around with dmidecode but I haven't found that information.
If your GPU and monitor can talk via i2c, you'll see your monitor model in an X.org log. The commands to achieve that manually:

sudo ddcutil detect

or

ddccontrol -p

Lastly, you could download and install a trial version of Windows 10 Enterprise and install SoftMCCS.
How to check the laptop ID for the LCD panel manufacturer?
1,539,605,791,000
From https://unix.stackexchange.com/a/17278/674 Local displays correspond to a socket in /tmp/.X11-unix. (cd /tmp/.X11-unix && for x in X*; do echo ":${x#X}"; done) On a computer: $ (cd /tmp/.X11-unix && for x in X*; do echo ":${x#X}"; done) :0 :1 :10 :11 :2 :3 :4 :5 :6 :7 :8 :9 I was wondering how to find out the X servers (command, or pid) for the given display numbers? Thanks.
lsof /tmp/.X11-unix/X* as root will list the commands and pids corresponding to each socket. You can then match the socket name to the display number, as you’ve done in your example.
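The socket enumeration and the ownership lookup combine naturally into a small script. A sketch (the helper function is my own naming; the lsof step still needs root):

```shell
# List display numbers from the socket directory (normally /tmp/.X11-unix).
list_x_displays() {
    for sock in "$1"/X*; do
        [ -e "$sock" ] || continue
        echo ":${sock##*/X}"
    done
}

# Then, as root, map each socket to its server process and pid:
#   lsof /tmp/.X11-unix/X*
```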
How can I find out the X servers (command, or pid) for display numbers?
1,545,320,948,000
I have connected my server through a DisplayPort-to-VGA cable to my laptop, but Ubuntu 14 on the laptop recognizes it as a view destination, not a view source. How can I reverse this and get the view from the server displayed on the laptop monitor?
So you used a Male-Male VGA cable to connect one video output port to another video output port? Working as expected, I'm afraid. This is basically a much less dangerous version of expecting everything to be fine when connecting a (deliberately-very-hard-to-find) Male-Male power cord between two outlets. You need to connect an output to an input.
Laptop monitor as a server's display
1,545,320,948,000
I've read the answers to this question but I don't have ACPI, the /sys/class/drm/card0-socket/status method does not work and the xrandr method chokes my CPU. udevadm monitor shows nothing when (un)plugging the monitor. I've got a circa 2013 Lenovo ThinkPad w530 with nVidia quadro something. I'm running Lubuntu 18.04 with the nouveau driver. The monitor is a 27" Philips 271S. I'm using a VGA cable. How do I do detect monitor (un)plugging?
I resorted to polling for the external screen EDID. I installed the read-edid package, added a line in visudo %sudo ALL=(ALL:ALL) NOPASSWD:/usr/bin/get-edid to allow passwordless get-edid and used the following loop: #!/bin/bash # edid_based_automatic_display_loop.sh export NEW_CONNECTION=1 export NEW_DISCONNECTION=1 while : do sleep 1 sudo get-edid 2>/dev/null|parse-edid 2>/dev/null|grep "PHL 271S7Q">/dev/null _DISCONNECTED=`echo $?` # echo "DISCONNECTED $_DISCONNECTED" if [ $_DISCONNECTED = "0" -a $NEW_CONNECTION = "1" ] ; then export NEW_DISCONNECTION=1; export NEW_CONNECTION=0; bash /home/bruno/.screenlayout/only_external.sh elif [ $_DISCONNECTED = "1" -a $NEW_DISCONNECTION = "1" ] ; then export NEW_DISCONNECTION=0; export NEW_CONNECTION=1; bash /home/bruno/.screenlayout/only_laptop.sh fi done
How do I detect when a monitor is plugged in or unplugged without acpi, xrandr, /sys and udev?
1,545,320,948,000
This might be a strange question. I have a very lightweight Ubuntu Server 14.04 box running underneath my TV, and I want to connect my TV to it to monitor some things. However, since it is a very lightweight server, I cannot have anything impacting its performance. So my question is: will using my TV as a monitor for the server impact the server's performance?
Just hooking up a viewing device like a monitor/TV to a display port on your server won't affect performance. Technically the OS is already prepared to send a display signal and is waiting for you to hook up a display to receive it. The act of sending the signal would not even register as intensive on any machine made since 1980 (just some humor). If you have to install special drivers, software, etc. to get the display out, it could possibly affect performance. The biggest thing not to do is install a GUI or anything else eating up cycles - like specialized TV-out software, for instance.
Will using a display impact performance?
1,545,320,948,000
I have just upgraded my PC. It was an old intel Q8400 with an nvidia graphics card. I changed the CPU (now an i3-6300) + Motherboard, and ditched the graphics card (want to use intel's graphics card that comes with the CPU). Now, however, I get the message that Cinnamon is running without video hardware acceleration, and everything on screen is "laggy". I also have a dual monitor setup and when I go to system settings -> display I only see one display named as "laptop". Display detection does not work, and they are mirrored. Logically it has to do with the drivers. Based on this thread I should look for a related xorg.conf file, but there seems to be none related (below are my xorg.conf file search results) /usr/share/X11/xorg.conf.d /usr/share/X11/xorg.conf.d/10-evdev.conf /usr/share/X11/xorg.conf.d/10-quirks.conf /usr/share/X11/xorg.conf.d/11-evdev-quirks.conf /usr/share/X11/xorg.conf.d/11-evdev-trackpoint.conf /usr/share/X11/xorg.conf.d/50-synaptics.conf /usr/share/X11/xorg.conf.d/50-vmmouse.conf /usr/share/X11/xorg.conf.d/50-wacom.conf /usr/share/X11/xorg.conf.d/51-synaptics-quirks.conf /usr/share/man/man5/xorg.conf.5.gz /usr/share/man/man5/xorg.conf.d.5.gz System info based on inxi (seems like Intel drivers are actually being used): $ inxi -b System: Host: HomePC Kernel: 3.16.0-38-generic x86_64 (64 bit) Desktop: Cinnamon 2.6.13 Distro: Linux Mint 17.2 Rafaela Machine: Mobo: ASUSTeK model: B150M-C version: Rev X.0x serial: 151055956202383 Bios: American Megatrends version: 0402 date: 09/25/2015 CPU: Dual core Intel Core i3-6300 CPU (-HT-MCP-) clocked at 760.00 MHz Graphics: Card: Intel Device 1912 X.org: 1.15.1 drivers: fbdev,intel (unloaded: vesa) tty size: 80x24 Advanced Data: N/A for root Network: Card: Realtek RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller driver: r8169 Drives: HDD Total Size: 256.1GB (6.7% used) Info: Processes: 215 Uptime: 1 min Memory: 1472.4/15928.1MB Client: Shell (bash) inxi: 1.9.17
This question is very old and no longer relevant (for me at least), so I am going to close it. For the record, I ditched cinnamon for gnome at the time. Unfortunately I cannot tell if any of the answers will work. Thank you for commenting though!
Cinnamon running without video hardware acceleration after moving HD to a new PC [closed]
1,545,320,948,000
My use case: a user operates on the Server1 station. He can use ssh or whatever to connect to the Remote Station, start some application on it, and display it on his station (Server1). Now the user has to change his place and operate on Server2. I know that he can start a new instance of the application and display it on Server2. But I want to display the same running instance of the application on Server2. Is this possible? How can I do that?
I don't think you can reattach if the application has already been started in X. But maybe the following questions help? xpra Detach/reattach an application run on X over SSH? Reattach to lost X11 session Also, VNC could be a solution for you. But all these applications require that you restart the gui application on Server1
Is possible to change X server without kill client? [duplicate]
1,545,320,948,000
We have a small linux cluster ~12 computers and a similar number of users. Is it possible for a superuser to launch a graphical application - say firefox or even a python script on another machine that is being used by another user?
Actually what you want is a basic concept of the X window system. However, recent Linuxes don't allow remote X clients by default; you have to enable it first. The tool xhost can do that: running xhost + simply disables access control, and any client can interact with that server. X server instances are addressed by <host>:<display>. To have an xterm you run on your machine render to a remote X server, you would write DISPLAY=remotehost.my.doma.in:0 xterm, or if you prefer using IP addresses, you could write DISPLAY=192.168.0.1:0 xterm. :0 identifies the X server uniquely within a host. Usually, display managers start X server numbering at 0, so it's a relatively safe bet to assume the remote X server is display 0. This example command will start an xterm on your machine as you (uid) but render to and get events from the remote X server at 192.168.0.1, which means there is an xterm running as you on your machine, but it is used by somebody else on another machine. Honoring the DISPLAY environment variable is part of Xlib and therefore supported by each and every X application.
Is it possible to display a graphical application on another host/user [duplicate]
1,545,320,948,000
I am trying to set up the teacher's computer in a university room, having two identical displays on the table. One of these displays should be duplicated on the e-Blackboard connected via HDMI. Obviously, the computer has two graphics cards (the internal card does not support more than two outputs at once, despite having four ports: VGA, DVI, DP and HDMI). The other card has DVI and VGA. We managed to successfully connect two desktop monitors via the two DVI ports. We also can connect the e-Blackboard via the HDMI port and make an extended display, not using anything more than the standard MATE Display configuration GUI tool. In xrandr output, this looks like: profesors@ZK10-431-P:~$ xrandr --listmonitors Monitors: 3 0: +*HDMI-1-1 1920/521x1080/293+0+0 HDMI-1-1 1: +DVI-0 1920/521x1080/293+1920+0 DVI-0 2: +HDMI-1-2 3840/1660x2160/934+3840+0 HDMI-1-2 However, if I try to make the monitor 2 duplicated on the e-Blackboard 3 AND scale the image twice to stretch it over the whole blackboard, I am having issues. xrandr --output HDMI-1-1 --mode 1920x1080 --primary --pos 0x0 --output DVI-0 --mode 1920x1080 --pos 1920x0 --output HDMI-1-2 --scale-from 1920x1080 --same-as DVI-0 causes the e-Blackboard to only show the 2nd display in the upper left quarter, with the same resolution as on the desktop monitor.
The same happens with: xrandr --output HDMI-1-1 --mode 1920x1080 --primary --pos 0x0 --output DVI-0 --mode 1920x1080 --pos 1920x0 --output HDMI-1-2 --mode 3840x2160 --scale 0.5x0.5 --scale-from 1920x1080 --same-as DVI-0 and also xrandr --output HDMI-1-1 --mode 1920x1080 --primary --pos 0x0 --output DVI-0 --mode 1920x1080 --pos 1920x0 --output HDMI-1-2 --same-as DVI-0 --mode 3840x2160 --scale 0.5x0.5 EDIT: Interestingly, whenever I call xrandr again with any combination of options, even the same options in a different order, it causes a crash after which it becomes impossible to apply any xrandr command (the same error message will be printed after this and any subsequent xrandr command): X Error of failed request: BadValue (integer parameter out of range for operation) Major opcode of failed request: 140 (RANDR) Minor opcode of failed request: 7 (RRSetScreenSize) Value in failed request: 0x0 Serial number of failed request: 50 Current serial number in output stream: 51 Also, if I put the e-Blackboard (HDMI-1-2) in front of DVI-0 and added --same-as HDMI-1-2 to DVI-0, MATE (or should I say lightdm?) crashed altogether, throwing me out to the login screen, often with black desktop monitors and a picture only on the e-Blackboard (which was super convenient), and the only way to resolve it was to restart the PC. Finally: xrandr --output HDMI-1-1 --mode 1920x1080 --primary --pos 0x0 --rotate normal --output DVI-0 --mode 3840x2160 --scale 2x2 --pos 1920x0 --rotate normal --output HDMI-1-2 --same-as DVI-0 causes: xrandr: cannot find mode 3840x2160 for obvious reasons... Can anyone suggest anything please?
My colleague came in and suggested the obvious: I just changed the e-Blackboard's resolution to 1920x1080 instead of 3840x2160. This solution was so plainly obvious that it completely escaped me before... <:-D
Duplicate only one of dual monitors on a Smart e-Blackboard
1,545,320,948,000
I have a laptop with Artix Linux on it that I'm using as a web server. I want to keep it minimalist, w/o a graphical environment and with only the absolutely necessary software. My problem is I still don't know how to turn the display off/on (to save energy) when I'm not interacting with it (which I do very rarely). I am aware of these posts: Turn off monitor using command line How to turn off the monitor under TTY But they either talk about solutions that work for a graphical environment or they use some additional software (vbetool) that I'm not even able to install. It would also be very cool if I could turn the display off/on through ssh.
Nevermind, I found a page in ArchWiki that explains everything. No additional software is needed. All I have to do is change the value in /sys/class/backlight/intel_backlight/brightness to 0 to turn the display off. To turn it back on, I can use any value greater than 0. The maximum value can be found in /sys/class/backlight/intel_backlight/max_brightness. Note that the intel_backlight part is hardware dependent. It might be something else, like acpi_video0 on a different machine.
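A minimal sketch of a guard around those sysfs writes, assuming the intel_backlight path from above (clamp_brightness is a hypothetical helper; the sysfs writes themselves need root and are left commented out):

```shell
# Hypothetical helper: clamp a requested brightness to the valid range
# 0..max, where max comes from max_brightness as described above.
clamp_brightness() {
    # usage: clamp_brightness REQUESTED MAX
    req=$1
    max=$2
    if [ "$req" -gt "$max" ]; then req=$max; fi
    if [ "$req" -lt 0 ]; then req=0; fi
    echo "$req"
}

# Typical use (needs root, so commented out in this sketch):
# bl=/sys/class/backlight/intel_backlight
# max=$(cat "$bl/max_brightness")
# clamp_brightness "$max" "$max" > "$bl/brightness"   # display on, full brightness
# echo 0 > "$bl/brightness"                           # display off
```

Over ssh the same writes work, as they are plain file operations.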
Turn off/on laptop display from TTY without additional software
1,545,320,948,000
I am running Arch Linux on a Dell XPS-13 9380 with the DWM window manager (which uses X11). I recently started using an external monitor with my laptop. My issue is that when and only when the second display is active, my terminal emulator glitches when I type into it. Please note that this only affects the terminal emulator on my laptop display, NOT the external display. It is especially annoying when editing files in emacs or vim. Here is the script that I run to activate the second display: xrandr --output DP-1 --auto --left-of eDP-1 The external display is DP-1, the integrated display is eDP-1. By "glitches", I mean when a key is pressed that changes the display of the terminal, the text will flicker between the new change and the old change. If I am moving the cursor around with the arrow keys, the cursor will flicker around and sometimes settle in the wrong location. Note that the cursor is actually where it is supposed to be, it is just rendering improperly. This is purely a graphical issue. I noticed the issue on the Alacritty and Kitty terminal emulators. These are both GPU accelerated emulators, so I tried URXVT and did not notice the same issues. Note: I do not believe that the Dell XPS-13 9380 has onboard graphics. I am not sure if this is a firmware issue or an X11 issue. Any ideas to get this to stop? I do not want to have to switch to another emulator. UPDATE: It also glitches for ST, meaning that the problem likely has nothing to do with the fact that alacritty and kitty are GPU accelerated. I am not sure why urxvt works fine...
It was an X11 problem. After installing the xf86-video-intel drivers (for the Intel UHD Graphics 620 chip. Consult the arch wiki to be sure). I added this to /etc/X11/xorg.conf.d: 20-intel.conf: Section "Device" Identifier "Intel Graphics" Driver "intel" Option "TripleBuffer" "true" Option "TearFree" "true" EndSection
Terminal emulators glitch when using two displays (DWM)
1,545,320,948,000
There are many articles about how to add undetected resolutions. My problem is the opposite: gnome-control-center shows many resolutions I won't ever use. Here is my list with 33 entries on a 3840x2160 monitor: How to clean up this list (per monitor) and only keep these 2 entries? 3840x2160 (native resolution) 1920x1080
Automatically detected modes cannot be deleted. Ping developers: https://gitlab.freedesktop.org/xorg/xserver/-/issues/353 You can try creating and using a custom EDID file for your monitor: https://wiki.archlinux.org/title/Kernel_mode_setting#Forcing_modes_and_EDID
Remove detected resolutions
1,545,320,948,000
I have a setup with 2 monitors. I'm on Manjaro, I installed v4l2loopback from AUR (here is the github link: https://github.com/umlaeute/v4l2loopback) and it works great, no problem there. But my question is how to specify which monitor I want it to use? What I did was: $ sudo modprobe v4l2loopback exclusive_caps=1 $ ffmpeg -f x11grab -r 15 -s 1920x1080 -i :0.0+0,0 -vcodec rawvideo -pix_fmt yuv420p -threads 0 -f v4l2 /dev/video0 And sure, it works great, but I want to stream a different monitor. How can I do that? Also (it's a PC and I never had any cameras, so /dev/video0 is the fake webcam): $ v4l2-ctl --list-devices Dummy video device (0x0000) (platform:v4l2loopback-000): /dev/video0 The one with a DP (DisplayPort) is the one I want to stream $ xrandr --listmonitors Monitors: 2 0: +*DP-4 1920/480x1080/270+1920+0 DP-4 1: +HDMI-0 1920/531x1080/299+0+0 HDMI-0
The ffmpeg-all man page says x11grab takes an option, [<hostname>]:<display_number>.<screen_number>[+<x_offset>,<y_offset>] which in your case is :0.0+0,0 and determines what to grab. Depending on your configuration, you can try :0.1+0,0 for a second screen or :0.0+1920,0 for an offset in a single virtual screen, or even :1.0+0,0 for a second display.
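To pick the DP-4 monitor from the xrandr --listmonitors output in the question, you only need its pixel size and offset. As a sketch, here is a hypothetical helper (grab_args, a made-up name) that turns an xrandr geometry field like 1920/480x1080/270+1920+0 into the -s/-i arguments for x11grab, assuming display :0.0:

```shell
# Hypothetical helper: parse WIDTH/mm x HEIGHT/mm + XOFF + YOFF (the geometry
# column of `xrandr --listmonitors`) into ffmpeg x11grab arguments.
grab_args() {
    geom=$1
    w=${geom%%/*}           # width: everything before the first /
    rest=${geom#*x}         # drop "WIDTH/mm x"
    h=${rest%%/*}           # height: before the next /
    off=${geom#*+}          # "XOFF+YOFF"
    x=${off%%+*}
    y=${off#*+}
    echo "-s ${w}x${h} -i :0.0+${x},${y}"
}

# For the DP-4 monitor from the question:
grab_args "1920/480x1080/270+1920+0"   # -> -s 1920x1080 -i :0.0+1920,0
```

The printed arguments then go into the ffmpeg command in place of the hard-coded -s 1920x1080 -i :0.0+0,0.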
How to specify display to stream using v4l2loopback?
1,545,320,948,000
If I want to use the name of my primary screen (HDMI-0, found out via xrandr) within another command, the device is never found. Instead I have to use the name HEAD-0. From what I've already read I assume this is probably a nVidia-thing, but I don't understand how it works, why it's done and most importantly: How can I find out, which of my screens has which HEAD-name?
Not sure if it's the correct answer, but you can query connected displays via nvidia-settings --query dpys. If I understand it correctly, HEAD-x is mapped to a display of the output of nvidia-settings in the order they appear. For example: HEAD-0 is the first connected display, HEAD-3 the fourth, etc.
How to find out, which screen is HEAD-0, HEAD-1, etc.?
1,545,320,948,000
System: Linux Mint 19.1 Cinnamon. Hardware: Laptop screen: 3840x2160 + HDMI screen 1920x1080; GPU: NVIDIA, GeForce GTX 1060, Max-Q Design, 6 GB VRAM I have enabled Double DPI in General settings. It works fine for the laptop display, but now I connected an external FullHD display, the things on it are twice as large. I currently use the following command to correct the scaling: xrandr --output HDMI-0 --scale 2x2 My intention is to make this permanent and persistent on reboots etcetera. How and in what file can I do this? I have tried to look for an xorg.conf, but this is what I only get: $ locate -i xorg.conf /usr/share/X11/xorg.conf.d /usr/share/X11/xorg.conf.d/10-amdgpu.conf /usr/share/X11/xorg.conf.d/10-nvidia.conf /usr/share/X11/xorg.conf.d/10-quirks.conf /usr/share/X11/xorg.conf.d/10-radeon.conf /usr/share/X11/xorg.conf.d/11-nvidia-prime.conf /usr/share/X11/xorg.conf.d/40-libinput.conf /usr/share/X11/xorg.conf.d/70-wacom.conf /usr/share/doc/xserver-xorg-video-intel/xorg.conf /usr/share/man/man5/xorg.conf.5.gz /usr/share/man/man5/xorg.conf.d.5.gz Note, that I use Nvidia, so these 2 might be relevant: $ cat /usr/share/X11/xorg.conf.d/10-nvidia.conf Section "OutputClass" Identifier "nvidia" MatchDriver "nvidia-drm" Driver "nvidia" Option "AllowEmptyInitialConfiguration" ModulePath "/usr/lib/x86_64-linux-gnu/nvidia/xorg" EndSection and this one: $ cat /usr/share/X11/xorg.conf.d/11-nvidia-prime.conf # DO NOT EDIT. AUTOMATICALLY GENERATED BY gpu-manager Section "OutputClass" Identifier "Nvidia Prime" MatchDriver "nvidia-drm" Driver "nvidia" Option "AllowEmptyInitialConfiguration" Option "IgnoreDisplayDevices" "CRT" Option "PrimaryGPU" "Yes" ModulePath "/x86_64-linux-gnu/nvidia/xorg" EndSection
Did you try the solution in https://enochtsang.com/articles/scaling-two-monitors-differently-on-linux-mint ? In your case, create a file called .xsession in your home directory, in the terminal you can do that with touch ~/.xsession Second, open it with a text editor and paste this: #!/usr/bin/env bash xrandr --output HDMI-0 --scale 2x2 Make the file executable chmod +x ~/.xsession Add it as a startup application in Startup Applications >> ( + ) >> Custom command. Name: Displays Command: /home/your_user_name/.xsession Comment: Startup delay: 0 Also, it wouldn't hurt to make some noise in the Mint forums or on Github, so the Linux Mint team implement this feature once and for all.
Making external display DPI settings permanent
1,545,320,948,000
Under Wayland I used busctl --user set-property org.gnome.Mutter.DisplayConfig /org/gnome/Mutter/DisplayConfig org.gnome.Mutter.DisplayConfig PowerSaveMode off to turn the display off/on; however, after having to go back to X11 due to Wayland being unusable, this command works the same as dpms force off. With X11 I can run sleep 1; xset dpms force off but this only puts the monitor into standby and it will wake as soon as any input is detected, such as mouse movement. This is unwanted behavior; I prefer the ability to wake the display with a specific shortcut. This way I can be sure the display won't turn on on its own or accidentally. So, how do I force the display to turn off in such a way as to prevent user input from waking it again under X11?
I think you possibly misunderstand what DPMS "off" means. Look at the table in Wikipedia: what DPMS actually does is signal the power saving state by turning the horizontal sync and vertical sync signals (or the HDMI equivalent) off, and disabling the DAC in the graphics card, while the rest of the graphics card keeps running. So you are not turning everything completely off, you are entering the "deepest" power saving mode possible. OTOH, using xrandr --off really completely shuts off the output, and disables everything in the graphics card that is used to produce the output, as if the monitor was not connected to anything at all. And of course, if it is your only monitor, this doesn't work, as then there is no more graphics display to draw anything on. This is really for enabling and disabling additional second or third monitors. So you don't want it "completely off", you want the deepest DPMS power saving state, which happens to be called "off". Your busctl command tells Wayland to use PowerSaveMode, i.e. DPMS. And Wayland doesn't seem to re-enable DPMS when it detects mouse or keyboard inputs, so it stays off. In the same way, xset dpms tells the X server to use DPMS. This is completely the same thing. The difference is that the X server re-enables DPMS when it detects inputs. As to "why", it's how the developers decided it should work. In X, xset dpms works even when there is no extra screensaver, which is why the way to turn the screen on again was incorporated in the X server. For Wayland, the designers seem to have decided that you always need an extra screensaver program (whose job it is to communicate the wanted PowerSaveMode to Wayland), so it leaves it to the screensaver to monitor inputs and turn the screen on again. That you are able to fake being a screensaver program using busctl is more or less an accident. It's not a bug, it's different design.
As I said, try grabbing the mouse and keyboard inputs with evtest --grab /dev/input/eventX (use just evtest to see which device is which. Careful, numbers don't necessarily stay the same across boots, look at the udev symlinks) or the equivalent ioctl if you are writing your own screensaver program. If you want to monitor the inputs for a specific combination, you need to do that anyway.
Turn off X11 / Xorg display (not standby)
1,545,320,948,000
I'm pretty new to Linux (running linux mint). I've been messing around with the display resolution and changed it from 1280x720 to 1920x1080. All the icons and taskbar are pretty small. I want to scale everything up, not just the font size. How do I do this? Thanks! If this helps: I think I'm looking for the linux equivalent of this on windows (image below):
This is for the MATE version: Fonts: Click 'Menu', type 'Appearance' and click it, select the 'Fonts' tab, then update the font and/or size as needed. Desktop icon size: Right-click the icon; there is a 'Resize icon' entry, then drag the icon's corner.
How do I scale up everything on my display?
1,545,320,948,000
When I configured VNC server there were strings VNCSERVERS="1:oracle" VNCSERVERARGS[1]="-geometry 800x600" When I connected via VNC and run w command, I saw that current display is :1. [oracle@localhost ~]$ w 06:53:24 up 11 days, 22:15, 2 users, load average: 0.38, 0.16, 0.10 USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT root pts/1 win-73viuifj4th 06:39 23.00s 0.04s 0.04s -bash oracle pts/2 :1.0 09Sep15 0.00s 0.01s 0.00s w I can't get the display meaning. :0 mean that this is physical display of the current machine, but what :1 and :2 mean?
:0, :1, :2 are all display numbers (and also display names for displays on the same host). If multiple X servers are running on the host then their displays are numbered as incremental values like :X starting from :0 to uniquely identify each one. Quoting the DISPLAY NAMES section from man 7 X : Display Names From the user's perspective, every X server has a display name of the form: hostname:displaynumber.screennumber This information is used by the application to determine how it should connect to the server and which screen it should use by default (on displays with multiple monitors): hostname The hostname specifies the name of the machine to which the display is physically connected. If the hostname is not given, the most efficient way of communicating to a server on the same machine will be used. displaynumber The phrase "display" is usually used to refer to a collection of monitors that share a common keyboard and pointer (mouse, tablet, etc.). Most workstations tend to only have one keyboard, and therefore, only one display. Larger, multi-user systems, however, frequently have several displays so that more than one person can be doing graphics work at once. To avoid confusion, each display on a machine is assigned a display number (beginning at 0) when the X server for that display is started. The display number must always be given in a display name. screennumber Some displays share a single keyboard and pointer among two or more monitors. Since each monitor has its own set of windows, each screen is assigned a screen number (beginning at 0) when the X server for that display is started. If the screen number is not given, screen 0 will be used.
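As an illustration of that hostname:displaynumber.screennumber form, here is a small sketch (parse_display is a made-up helper) that splits a display name into its parts, with the screen defaulting to 0 as the man page says:

```shell
# Hypothetical helper: split [host]:display[.screen] into its components.
parse_display() {
    d=$1
    host=${d%%:*}                 # empty for a local display like :1.0
    rest=${d#*:}
    num=${rest%%.*}
    case $rest in
        *.*) screen=${rest#*.} ;;
        *)   screen=0 ;;          # screen number defaults to 0
    esac
    echo "host='${host}' display='${num}' screen='${screen}'"
}

parse_display :1.0        # -> host='' display='1' screen='0'
parse_display remote:0    # -> host='remote' display='0' screen='0'
```

So the :1.0 you see in w is display 1, screen 0, on the local host, i.e. the VNC server's virtual display rather than the physical one at :0.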
What does :0 mean in vnc configuration?
1,545,320,948,000
I'm using CentOS 6, with Xfce as a desktop environment and have switched to xdm from gdm as a display manager. However, after making this change, I am observing a very strange oddity: graphical applications can run without $XAUTHORITY being defined: $ echo $DISPLAY :0.0 $ echo $XAUTHORITY $ zenity --error --text ".........." $ echo $? 0 And yet, when I sudo: $ sudo -s [sudo] password for xxxxxx: # echo $DISPLAY :0.0 # echo $XAUTHORITY # zenity --error --text "........." No protocol specified (zenity:3793): Gtk-WARNING **: cannot open display: :0.0 I thought both $DISPLAY and $XAUTHORITY need to be defined for a GUI to run, but this isn't happening. Does anyone have a clue as to what is going on? EDIT: It was suggested in the comments to inspect and use the value of DBUS_SESSION_ADDRESS_VALUE, but: $ echo $DBUS_SESSION_BUS_ADDRESS unix:abstract=/tmp/dbus-ypE50rEtQu,guid=7e2bc970a8ca43af3f7bb01000000255 $ echo $DISPLAY :0.0 $ sudo -s # export DBUS_SESSION_BUS_ADDRESS="unix:abstract=/tmp/dbus-ypE50rEtQu,guid=7e2bc970a8ca43af3f7bb01000000255" # echo $DISPLAY :0.0 # echo $DBUS_SESSION_BUS_ADDRESS unix:abstract=/tmp/dbus-ypE50rEtQu,guid=7e2bc970a8ca43af3f7bb01000000255 # zenity --error --text "..........." No protocol specified (zenity:16931): Gtk-WARNING **: cannot open display: :0.0
The X(7) overview man page (recommend reading the whole thing, by the way) tells us: The file from which Xlib extracts authorization data can be specified with the environment variable XAUTHORITY, and defaults to the file .Xauthority in the home directory. So no, XAUTHORITY is not mandatory if you have your authorization file in the usual location. It's perfectly normal for X clients to work without it. Switching users can break it because the home directory is different, and setting the environment variable helps in that case.
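A tiny sketch of that fallback rule (xauth_default is a made-up name; it mirrors what Xlib does, per the man page quote):

```shell
# Hypothetical helper: where Xlib will look for the authority file --
# $XAUTHORITY if set and non-empty, otherwise ~/.Xauthority.
xauth_default() {
    echo "${XAUTHORITY:-$HOME/.Xauthority}"
}
```

This also suggests the usual workaround for the sudo case: after sudo, HOME points at root's home, so point the client back at the invoking user's cookie file, e.g. sudo XAUTHORITY=/home/xxxxxx/.Xauthority zenity --error --text "test".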
GUI running without $XAUTHORITY being defined, but not for root
1,545,320,948,000
I am connecting to a Linux CentOS 2.6 computer via SSH with the display forwarded to my screen (using MobaXterm). It is also possible to access this computer "physically". However, I would like a way to inform people that may want to access the computer "physically" that I am currently using it via SSH. One way would be for them to run who in a terminal and see whether someone is connected from another computer. However, I don't want them to have to check this manually each time (they may - and will - forget this check). Thus, I would like a way to open a window on THEIR display (i.e. the screen plugged into the computer) with a message for them. Something like a text in gedit would do the job. Can I do that? Can I open an application on a different display than mine?
Can I do that? Can I open an application on a different display than mine? Yes, if you have the appropriate permissions. For example, on a desktop where you are the only current user with a GUI, try switching to a console (e.g., via alt-ctl-F4), log in as the same user, and try: xterm -display :0.0 Your mileage may vary with regard to the display name (see comments), which is actually a network address. Presuming xterm is installed and there are no errors (note this is a foreground process, so don't ctrl-c or otherwise interrupt this from the console), you should now be able to switch back to the GUI and find an xterm floating somewhere. Most GUI applications should accept this -display option. You can do the same thing via ssh. If there are multiple X servers running, the displays are usually numbered starting with :0, then :1 -- at least, that's how they are if they are all using the same physical card and monitor; multiple screens on the same server are addressed as :0.0, :0.1, and so on. Again, note that you need appropriate permissions to do this. The superuser can start an application on anyone's display, but if you are just normal user bob, you will not be allowed to launch something on normal user sue's desktop. You can also start an X server via ssh and launch applications on it this way.
Acting on a different display when using SSH [duplicate]
1,545,320,948,000
Actions before the GDM failure I was trying to install RT tool in my system. However, the RT tool required Perl 5.10. So, I did yum remove perl to remove the existing version. While the command was getting executed, I was not able to navigate to any other tab or window. The system was stuck. So, I long pressed the power button and turned it on. After Turning ON I got the below error messages (Not exactly in the same order, but after trying various options) Couldn't start X server on card 0. (I think when I did the startx command) Xserver not found. Install x server or correct GDM configuration and restart GDM. I tried to do the below command. cp xorg.conf failsafe xorg.conf However, I did not find the file xorg.conf file itself. I tried to update the GDM by doing yum install GDM after removing GDM. However, I still do not get the display. I want to bring back the display without having to reinstall the entire OS again.
I figured it out. It was the unexpected shutdown which triggered the display problem. Since I had shut it down using a long press while yum remove perl was still running, the transaction had locked some of the files and was left pending. I ran the following commands to bring back the display: yum-complete-transaction --cleanup-only package-cleanup --problems
RHEL - GDM could not be initialized
1,702,159,574,000
I can understand why 100%, 200%, 300%... scalings are technically easier. But when it comes to fractional scaling, why is it limited to 125%, 150%, and 175%? Is it more difficult to implement, for example, 110% than to implement 125%?
Not knowing how this is actually implemented on different systems, I can only guess: it is indeed more efficient to implement bitmap scaling by those values. Think of 150% scaling, where every two source pixels map onto three target pixels: at 100%: 1 2 3 4 ... at 150%: 1' 2' 3' 4' 5' 6' ... Pixel 1' will be the same as pixel 1, pixel 2' will be (pixel 1 + pixel 2) / 2, pixel 3' will be the same as pixel 2, and so on. So you need a simple add+shift operation, which is done efficiently in hardware in any GPU or vector engine. Likewise the double shift that you need for 125% scaling, while other ratios like said 110% require an actual divide operation.
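The averaging step can be sketched in a few lines of shell (scale150 is a made-up helper; it upscales a 1-D row of pixel values by 3/2 using only an add and a right shift, as described above):

```shell
# Hypothetical helper: 150% upscale of a row of pixel values. Every input
# pair (a, b) becomes the triple a, (a+b)>>1, b -- i.e. 2 pixels in, 3 out.
scale150() {
    out=""
    while [ $# -ge 2 ]; do
        a=$1
        b=$2
        out="$out $a $(( (a + b) >> 1 )) $b"
        shift 2
    done
    if [ $# -eq 1 ]; then out="$out $1"; fi
    echo $out   # unquoted on purpose: trims the leading space
}

scale150 10 20 30 40   # -> 10 15 20 30 35 40
```

An odd trailing pixel is just passed through here; a real scaler would handle the edges more carefully, but the point is that no division is ever needed.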
Technical reason why Gnome's fractional scaling is limited to multiples of 1/4?
1,702,159,574,000
I am a teacher and I have some students in the room and others via web, using 3 screens: my laptop + a Wacom One tablet + a video projector screen. Using an HP ZBook 15u G3 (which can drive 3 screens), I need to: 1) sync the video projector screen with [a part of] my Wacom One's screen; 2) map the stylus only to the Wacom One's screen (so I can see what I am writing by just looking at my tablet). So far I was able to do (1) but not (2) with Plasma, and (2) but not (1) with GNOME. With Plasma, even with only 2 screens, I was not able to get a correct mapping using the kde-config-tablet app (the result was always that moving the stylus across the tablet would move the pointer from the left of one screen to the right of the other). I suspect it is bugged, but maybe I am just stupid enough not to make it work... With GNOME the mapping was correct (limited to the tablet) but I could not have anything else than an extended desktop, where the video projector's screen and the Wacom's screen are next to each other and not synced (nor overlapping). Pretty please, if someone knows any solution, I am very willing to try (even with command line tools [xrandr?]). I desperately need this to work!
A good way is simply to use xsetwacom (an example in French: http://doc.ubuntu-fr.org/wacom). - Use xrandr to obtain the list of the screens. - Use xsetwacom --list devices to identify the devices [pen, eraser...]. Then, for each device: xsetwacom --set "Name of Wacom device [pen,eraser...]" MapToOutput "Name of the tablet screen" Cheers
Configuring Wacom one to overlap on a third screen and map it appropriately
1,702,159,574,000
I use two screens: my laptop screen (eDPI-1, 1920x1080) and my external monitor (HDMI-2, 2560x1440). All my applications render just fine on both, at the resolution I would expect, with the sole exception of Zoom (the videoconferencing app) which we use for work. Zoom appears super zoomed in (no pun intended) and doesn't fit in my splits, with all the controls massive and blurry. I'm using X and have had this problem both with PopShell (GNOME) and i3 (which is what I'm mostly using). I'm not sure if Zoom is a QT app, or GTK, or whatever else, so I don't know if this is a framework issue or an individual app issue, but I was wondering if anyone has experienced something like this and knows how to fix it, perhaps by changing the scaling/DPI for a particular app. You can see from the screenshots that the controls are way bigger on the 1080p screen than the 1440p screen, proportionally to the resolution.
I have a similar scaling issue with xfce on an Arch fork, using Zoom version 5.1.412382.0614. It seems to be a recent regression though as it does not occur with Zoom 5.0.418682.0603, so maybe you could try an older version.
Some apps (zoom.us) have incorrect scaling on eDPI (1080p) screen but not HDMI (1440p) screen
1,702,159,574,000
In Debian 10 stable with KDE changing the scaling under the display settings has a lot of bugs. For example the tabs in the bottom panel are too large and have a font-size that's much too large so that only very few open windows are shown there. Furthermore, the text-size of many buttons is too large for the buttons, text and icons are unaligned in the left sidebar in dolphin and the space between icons on the desktop is too large. There also are some other problems. Some of these might be solvable with some other display settings (like adjusting the font size) - I haven't tried it much because I prefer the original scale and instead change the scale within apps as needed. How to increase the maximum zoom of Qt apps like the dolphin file explorer or GTK apps like lutris?
I reported the bug within another issue (a feature request) here. From the replies it appears the issue has been fixed in newer version of KDE Plasma. (I haven't tested it.) I asked what would be needed by Debian to distribute newer versions of KDE Plasma here. Many apps can zoom and have standard font-size and default-zoom settings. For those for which the maximum zoom is too small or which lack them the following can be used: GDK_SCALE=2 lutris For GTK apps like lutris (replace this with the app-name) QT_SCALE_FACTOR=2 dolphin For Qt apps like dolphin (replace this with the app-name)
Scaling bugs in Debian 10/KDE - how to increase the size / zoom of apps instead of scaling the display?
1,575,051,545,000
After installing Antergos with KDE everything is working fine (as far as I can tell) except my HDMI monitor doesn't work. Instead of rendering anything properly it displays whatever the first image that appears on it is but broken into lines with black lines in between. I have an NVidia GPU and I think it's Optimus (I got 2 lines following this) so I assumed the Nouveau drivers just weren't up to scratch and it was trying to render the HDMI with those. Installing the proprietary nvidia drivers with nvidia-installer made the OS unbootable (GRUB loaded, selecting Antergos gave a black screen). Installing Bumblebee (-b with nvidia-installer) had much the same effect. Attempting to revert to Nouveau (-n with nvidia-installer) fixed this however once I logged in KDE Plasma froze on load and I could get no further (particularly strange because that's exactly what I thought I had before when it worked). It's possible that the drivers aren't the issue at all but at this point I have no clue how to make my HDMI screen work. Laptop model: Gigabyte P57v7 CPU: Intel i7-7700HQ GPU: NVidia Geforce GTX 1070 I'll add logs tomorrow because I'll have to boot into non-graphical to get them.
For future readers with a similar problem: the fix that worked for me is here. The solution here of adding acpi_osi=! acpi_osi="Windows 2009" to the kernel parameters both allowed the reinstalled nouveau drivers to finish booting and the nvidia drivers to work at all.
HDMI output broken after Antergos install
1,575,051,545,000
I am running Red Hat Enterprise Linux 7.4. I have done a minimal install so that I don't have X11, xorg, xset or anything x related installed. I am trying to keep the monitor from going blank. When I log in I can run the following command which does the trick: setterm -blank 0 -powerdown 0 -powersave off However, when I put that in a script to run at boot time (I call the script from rc.local for now) it doesn't work. I am trying to keep the screen on even before a login. Any suggestions?
At least the first two parts are realised by sending an escape sequence to the console handler, as can be demonstrated like this: $ setterm -blank 0 -powerdown 0 | od -c 0000000 033 [ 9 ; 0 ] 033 [ 1 4 ; 0 ] 0000015 The easiest way of doing this automatically is to add this output to the file /etc/issue which is sent to the screen before the login: prompt: # setterm -blank 0 -powerdown 0 >> /etc/issue Now the escape sequence is always sent to the screen.
How to keep login screen from going blank with minimal RHEL install?
1,575,051,545,000
On Xubuntu I've used vbetool to turn off/on the display in my laptop, assigned to touchpad key because the display key didn't work. I now just installed Fedora 26 and want to use my script, but It seems that vbetool is not in the repository anymore. How can I install vbetool on Fedora 26? I've tried to install from the source following this article How to install vbetool on CentOS 6.6? but got warnings when running make and got an error make: *** No rule to make target '/usr/local/lib/libpci.a', needed by 'vbetool'. Stop.
I've solved the problem by searching for the libpci.a file; on 64-bit Fedora the file is located in /usr/lib64/, so running: sudo ln -sf /usr/lib64/libpci.a /usr/local/lib/libpci.a and then running make again solved the issue. So the whole solution (based on the CentOS article): sudo dnf install pciutils-devel pciutils-devel-static libx86-devel # if you have a 32bit system, just remove 64 from lib sudo ln -sf /usr/lib64/libpci.a /usr/local/lib/libpci.a # you can check if there is a new version; 1.1 was the latest when writing this wget http://www.codon.org.uk/~mjg59/vbetool/download/vbetool-1.1.tar.gz tar xzvf vbetool-1.1.tar.gz cd vbetool-1.1 ./configure && make && make install
How to install vbetool on 64bit Fedora
1,462,807,659,000
On the Arch-based Antergos Linux, initially Gnome Shell desktop, I have installed Mate on a laptop that normally (in Windows, OpenSuse, Ubuntu) has a dedicated key to switch between displays, which here is not working. Is there a default way of doing that? If not, how to set a key for that in Mate?
Install disper. On Ubuntu-based systems: sudo apt-get install disper In Arch-based systems it is in the AUR. The various commands available with this utility are listed HERE. The command to cycle between clone, extended, internal and external displays should be like this: disper --cycle-stages='-e : -c : -S : -s' --cycle In that case, it would extend to the right. To cycle between the same options but extend to the left: disper --direction=left --cycle-stages='-e : -c : -S : -s' --cycle Disper will detect displays and use the maximal resolutions by default. The command can then be associated with a keyboard shortcut. To add one, go to System > Control Center > Keyboard Shortcuts, Add.
How to cycle displays in Mate desktop?
1,462,807,659,000
I've been running 64-bit Linux Mint Cinnamon for a couple of years now on my current hardware. I upgraded to Qiana a month ago and hadn't had any problems with it. This morning my machine locked up when I accidentally loaded a huge file via a python script that was apparently too big to fit in memory. I couldn't even get to a tty and had to hard reboot. The machine came back up, but now my desktop and all applications are stretched horizontally maybe 50 pixels, which looks ghastly and will seriously interfere with web development work. The login screen looks fine. Adjusting the horizontal size and position of my ViewSonic monitor display via the monitor itself doesn't affect the application stretching. I have an Intel 4 Series integrated graphics controller with no proprietary drivers. I haven't manually installed or updated any new packages for at least a week, although I do run unattended-upgrades for security repos. $ xdpyinfo | grep dimensions dimensions: 1600x1200 pixels (423x318 millimeters) See here for /var/log/Xorg.0.log.
Display Settings: Did you check under System Settings -> Displays (the names might be slightly different)? The screen resolution may have changed to one that isn't the same aspect ratio as your monitor. This setting is per user, which is why your login screen looks fine. Test User: Create a test user (as root): # useradd -m testuser # passwd testuser Log out and log in as this new user and check the screen resolution. If it's good, then the issue is a configuration within your user account.
Screen stretching on Mint Cinnamon after hard reboot
1,462,807,659,000
I'm trying to list the connected monitors using xrandr that is returning the following information: Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192 eDP-1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm 1366x768 60.06*+ 1360x768 59.80 59.96 1280x720 60.00 59.99 59.86 59.74 1024x768 60.04 60.00 960x720 60.00 928x696 60.05 896x672 60.01 1024x576 59.95 59.96 59.90 59.82 960x600 59.93 60.00 960x540 59.96 59.99 59.63 59.82 800x600 60.00 60.32 56.25 840x525 60.01 59.88 864x486 59.92 59.57 800x512 60.17 700x525 59.98 800x450 59.95 59.82 640x512 60.02 720x450 59.89 700x450 59.96 59.88 640x480 60.00 59.94 720x405 59.51 58.99 684x384 59.88 59.85 680x384 59.80 59.96 640x400 59.88 59.98 576x432 60.06 640x360 59.86 59.83 59.84 59.32 512x384 60.00 512x288 60.00 59.92 480x270 59.63 59.82 400x300 60.32 56.34 432x243 59.92 59.57 320x240 60.05 360x202 59.51 59.13 320x180 59.84 59.32 DP-1 disconnected (normal left inverted right x axis y axis) HDMI-1 disconnected (normal left inverted right x axis y axis) But I don't know why the VGA port was labeled as DP-1 instead of VGA-1, while the HDMI port was clearly labeled as HDMI-1. So does the Linux kernel label the VGA, DVI and DisplayPort ports as "DP"?
This answer on AskUbuntu seems relevant to your question. Basically, the VGA port you see is just a built-in adapter for the native DP port. In this case, xrandr correctly shows you the installed hardware which is a DisplayPort.
What does "DP" device stand for?
1,462,807,659,000
https://unix.stackexchange.com/a/503874/674 says: "The display is effectively the X server; there is exactly one display per X server. So multiple X servers can't run simultaneously on the same display, and an X server can't run simultaneously on multiple displays. (Strictly speaking, the latter point isn't correct, but I don't think there's an X server which can serve multiple displays.)" https://www.x.org/archive/X11R6.8.0/doc/X.7.html#sect4 says a display can have multiple screens/monitors, and that $DISPLAY specifies a screen, not just a display, and is used when starting an X server or an X client. So does an X server start in a display or a screen? https://unix.stackexchange.com/a/503884/674 has a diagram that distinguishes screen and monitor, while https://www.x.org/archive/X11R6.8.0/doc/X.7.html#sect4 seems to say they are the same concept when explaining screen numbers. Which one is correct? The diagram also shows an X server covering all the screens in a display. So does a display server start in a display, a screen or a monitor? "Can I specify an arbitrary `$DISPLAY`?" says: "An xserver can use a hardware framebuffer, a dummy framebuffer (Xvfb) or a window on another xserver (Xephyr). The latter two are examples of 'virtual' xserver/display." Is a framebuffer associated with a display, a screen or a monitor? Sorry, I am still confused by the multiple concepts. Thanks.
So does a X server start in a display or a screen? I’m not sure how to say this in a different way than I did previously; for all intents and purposes, the X server is a display (“display” as the X Window concept, which I understand is what we’re discussing here). An X server doesn’t start in a display, it is a display. You can think of this as “an X server starts a display”, and “a display contains one or more screens”. The DISPLAY variable can be confusing since, as you say, it can specify more than the X display. Which one is correct? The diagram; see the explanation below. Does a display server start in a display or a screen or a monitor? In the X Window documentation, “display server” is synonymous with X server, so the above applies. It may help to consider that the X Window documentation was written a long time ago, at a time when virtual displays weren’t used (much, if at all), and when multi-monitor setups were complex and often involved multiple X screens, and sometimes even multiple X servers. So in the X documentation, a screen is usually a monitor. However it quickly became obvious that it was annoying to split multiple monitors into multiple screens, and once graphics cards became capable of handling multiple monitors as a single unit, usage patterns changed so that X screens tended to cover multiple monitors. Is a framebuffer associated with a display or a screen or a monitor? “Framebuffer” is a somewhat nebulous term, with multiple definitions. In the context of the comment you’re quoting, it’s associated with a screen, and you can see this with Xvfb: if you tell it to use memory-mapped files for its framebuffers, and define multiple screens, you’ll see it use one framebuffer file per screen.
Does a display server start in a display or a screen or a monitor?
1,462,807,659,000
I ssh to a remote host (without X forwarding). In the shell created by sshd on the remote host, why can't I start a GUI program on the default $DISPLAY $ eog Unable to init server: Could not connect: Connection refused (eog:31542): Gtk-WARNING **: 23:11:16.793: cannot open display: $ echo "$DISPLAY" $ while specifying explicitly $DISPLAY=:0 creates a window on the remote host? $ DISPLAY=:0 eog (eog:31546): dbind-WARNING **: 23:11:42.415: Error retrieving accessibility bus address: org.freedesktop.DBus.Error.ServiceUnknown: The name org.a11y.Bus was not provided by any .service files Isn't :0 the default value of $DISPLAY?
There is no default value for DISPLAY. If it’s not set, and you don’t specify a target display in some other way, X programs won’t be able to connect to a server. This can be useful, e.g. to start a program with no X connection when you’re running inside an X session: temporarily clearing DISPLAY will ensure the X session isn’t found. See How to change DISPLAY of currently running application for details of how DISPLAY is used, and Open a window on a remote X display (why "Cannot open display")? for details of the information required to connect to an X server.
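To see this in a throwaway shell session (a sketch: the :0.0 value assumes a local X server running on display 0, as in the linked answers):

```shell
# DISPLAY has no built-in default: with it unset, X clients have
# nothing to connect to and fail as in the question.
unset DISPLAY
echo "DISPLAY is: '${DISPLAY-<unset>}'"
# Explicitly target the local server instead:
export DISPLAY=:0.0
echo "DISPLAY is: '$DISPLAY'"
```

With DISPLAY unset, eog fails exactly as shown in the question; exporting a value restores the behaviour of the DISPLAY=:0 invocation.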
Why can't I start a GUI program on the default `$DISPLAY` on a remote ssh server host?
1,462,807,659,000
My laptop has screen resolution of 1366x768 but I want to create an X display with a higher resolution and scroll around it. It was possible in Windows where the drivers and the graphics card allowed it. Can the same be done in Linux?
Yes. Assuming that the screen of your laptop is LVDS-1 (get the real name with xrandr | grep -w connected): xrandr --output LVDS-1 --panning 2732x1536 But simple application windows can be larger than the root window or the screen even without that. You can check with xclock -geometry 2732x1536 if xclock(1) is installed and the window manager doesn't get in the way.
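To avoid typos in the doubled geometry, the panning size can also be computed from the native mode; this is just a sketch using the values from the question (the output name LVDS-1 is the same assumption as in the answer):

```shell
# Double the native 1366x768 resolution to get the panning area
w=1366; h=768
pan="$((w * 2))x$((h * 2))"
echo "$pan"
# Then apply it (uncomment on a real system with an X server running):
# xrandr --output LVDS-1 --panning "$pan"
```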
Can the screen size of an X11 window be set higher than the monitor's resolution and the view panned around the larger display?
1,462,807,659,000
I have a laptop which I want to use only remotely via rdp (xvnc server). I have set up rdp successfully. As I only use the laptop remotely, I want to disable its display. To do that, I have already disabled lightdm. However, now at boot it shows: Antergos Linux 4.14.15-1-ARCH (tty1) simon login: I want to disable this display. However, I have no idea how. I've tried: xset dpms force off but that only gives an error that the display cannot be found. Is the X server even still running? If so, how do I disable it and blank the screen (and disable the laptop backlight)?
Since you're seeing a login prompt on tty1, the local X server has been stopped and the virtual console is in text mode, acting as a terminal emulator. (The xvnc is a separate, "virtual display" X server for incoming VNC/RDP connections. It does not deal with physical display, keyboard or mouse at all.) To force disable it, you need the setterm command: setterm --blank force But if you plan to run it remotely or from a script, you'll need to use it in a bit longer form: setterm --blank force --term linux </dev/tty1 In case you need to re-enable: setterm --blank poke --term linux </dev/tty1 Yes, the redirection is non-intuitive; it's the same special case as with the stty command. With older versions of setterm, you may have to use >/dev/tty1 instead.
How to disable tty1 and backlight using Arch Linux
1,462,807,659,000
Context: I am running a Debian Stretch distribution with the Cinnamon graphical interface. I use this command to turn off the display: xset dpms force off It is useful to me when I want to sleep and just launch a video without being disturbed by the light of my screen. Note that if the mouse pointer is active (moves), then the display turns back on. Problem: If the video is launched by VLC or Totem Movie Player, everything works fine. If the video is launched by mplayer, the display turns off for 12 s and then the video appears, which is not what I expect... I don't know why the "xset dpms force off" command stops working with mplayer.
Run mplayer like this mplayer -nostop-xscreensaver [other options] video-file or add the option into config file ~/.mplayer/config : [default] stop-xscreensaver=0
Debian DPMS - Mplayer
1,462,807,659,000
I am currently using Manjaro (Juhraya 18.1.5) and I use guake (http://guake-project.org/) as a dropdown terminal. I have set transparency on it and wanted to launch it as a startup program. But after logging in, I find no transparency in the background. Everything else works perfectly fine. If I quit it and relaunch it, transparency works fine. Here is some info that might be useful: System: Host: XD Kernel: 5.4.17-1-MANJARO x86_64 bits: 64 Desktop: KDE Plasma 5.17.5 Distro: Manjaro Linux Graphics: Device-1: Intel Skylake GT2 [HD Graphics 520] driver: i915 v: kernel Display: x11 server: X.Org 1.20.7 driver: intel unloaded: modesetting resolution: 1366x768~60Hz OpenGL: renderer: Mesa DRI Intel HD Graphics 520 (Skylake GT2) v: 4.6 Mesa 19.3.3 I found one solution: adding a delay timer to the startup script. But I was wondering if there was anything more I could do to solve this problem. Also, what is causing the problem here? Thanks.
About my system information... I am using Kubuntu with KDE Plasma v5.18.5. Here is what I did. I installed "gcc" from the terminal window. "gcc" is a program that translates C language source code into machine code. I created a text file called "start-guake.c" and then wrote the code in that text document (the extension must be ".c", not ".txt"). After that, in the terminal, I changed the directory to the location of the text file and COMPILED it with "gcc" (on the command line, write gcc followed by the path of the text file that contains the code: "gcc /path/to/file.c"). This action results in the creation of a file named "a.out" (you will find it in the same folder as the text file that contains the code), and this is the actual program. I renamed the file from "a.out" to "start-guake" and moved it to the "/bin/" directory. From there I went to my applications menu, opened "autostart" and added the newly created program there. I restarted the computer after all these steps. This is the code: #include <stdlib.h> #include <stdio.h> #include <string.h> void waitTenSeconds(); int main(){ waitTenSeconds(); system("guake"); return 0; } void waitTenSeconds(){ system("sleep 10"); } Hope this helped. Good luck !
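For what it's worth, the same delay can be had without compiling anything: a two-line shell wrapper registered in autostart does the same job. This is just a sketch; the file path and the 10-second delay are assumptions, and guake must be in PATH.

```shell
# Create a small autostart wrapper that waits, then launches guake
cat > "$HOME/start-guake.sh" <<'EOF'
#!/bin/sh
sleep 10
exec guake
EOF
chmod +x "$HOME/start-guake.sh"
```

Then add ~/start-guake.sh in KDE's Autostart settings instead of the compiled binary.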
Transparency settings for my drop down terminal is not loading at start up
1,462,807,659,000
I installed NVIDIA's Linux x64 (AMD64/EM64T) Display Driver for my Laptop's NVIDIA GeForce GTX 970M/PCIe/SSE2 since I had issues with display a few weeks ago (I don't remember exactly which; I think it was with my [three] multi-monitor setting). Since then every second update or so (and they are presented almost daily with Tumbleweed) this driver is thrown out, TW boots into shell and I have to run: $ sudo ./NVIDIA-Linux-x86_64-525.85.05.run there. What can I do to avoid this (apart from changing to a different distribution, which I'm already thinking of)? UPDATE 1 It became even worse today: So I'm stuck on the shell. That's maximum annoyance. /var/log/nvidia-installer.log contains 72,386 lines including: a lot : error: assignment of read-only member 'vm_flags' ... https://i.sstatic.net/wQAWB.jpg (too large to insert here as image) Thank you, SUSE, for nothing! ☹️ UPDATE 2 I performed what's described in SDB:NVIDIA drivers: # zypper in nvidia-video-G06 # zypper in nvidia-gl-G06 [GL worked at the second try only. There was a curl error at the first try.] # shutdown -r now Still booting into shell AND THERE IS NO /var/lib/nvidia-pubkeys as mentioned in Secureboot! There is /var/lib/nvidia with no sub-dir and the files: 100 ... 104 *105 *... *108 dirs log What now? ☹️ UPDATE 3 I tried installing the latest NVIDIA driver before and after UPDATE 2: Really, what now? ☹️ PS: If you decide to start fiddling around with Linux be sure to always(!) have a second working(!) computer with Internet access(!) at hand. Otherwise you're going to be lost completely sooner or later. That's sad, but that's my experience over the decades since my first own kernel compilation of Slackware ~30 years ago.
Since then every second update or so (and they are presented almost daily with Tumbleweed) this driver is thrown out, TW boots into shell and I have to run: You described a "the hard way" setup as in https://en.opensuse.org/SDB:NVIDIA_the_hard_way The easy and most common approach is to set it up using a repository as in https://en.opensuse.org/SDB:NVIDIA_drivers First uninstall the hard way: sh <NVIDIA*.run> --uninstall Then install the kernel module and drivers: zypper in nvidia-video-G06 nvidia-gl-G06 nvidia-driver-G06-kmp Specifying the closed-source kernel module might avoid the open-source module being installed, which doesn't support your graphics card. You also described 2 network issues / incomplete downloads; that's a separate issue to look into. For instance, after downloading a package as described in update 3, you are expected to run an integrity check to verify the download is complete. zypper does that automatically.
How to convince openSUSE Tumbleweed to NOT throw out NVIDIA display driver on updates
1,462,807,659,000
My WSL defines the DISPLAY variable for X windows in the .profile, checking the IPs I have in resolv.conf. Why? If I am not wrong, resolv.conf is only used to define the DNS servers to use, and DISPLAY should point to my local IP. In fact, I have several (plenty of) IPs on my Windows machine due to the use of VirtualBox, VMware and WSL, which create their own virtual ethernet cards, and the DISPLAY that is defined by default using this profile does not work; I had to change it manually and assign the one I have on my main ethernet card (I had to do export DISPLAY=192.168.1.8:0.0 to have my X working and overwrite the value 192.168.1.1:0.0 that my DISPLAY variable gets automatically when my WSL starts). Note also that if I find my NATed IP (using whatismyip.com to find which public IP I am using when connecting to the Internet) and I try to set my DISPLAY to that IP so I can invoke xterm on remote machines hosted on AWS, it does not work either. Why? It does if I do an ssh -X to that remote machine with my display manually set to 192.168.1.8:0.0. I would like to know the reasons for: a).- DISPLAY being set automatically by the profile incorrectly b).- The public IP of my machine not working if manually set in the DISPLAY variable c).- The IP mentioned previously (192.168.1.8:0.0) working fine instead of the public one
Some additional context data: The current profile I have has: export LIBGL_ALWAYS_INDIRECT=1 export DISPLAY_NUMBER="0.0" export DISPLAY=$(grep -m 1 nameserver /etc/resolv.conf | awk '{print $2}'):$DISPLAY_NUMBER My resolv.conf is : nameserver 192.168.1.1 nameserver 192.168.1.1 nameserver fec0:0:0:ffff::1 search gorostidi-home.lan My ifconfig: andres@DCT00175:~$ ifconfig eth3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.56.1 netmask 255.255.255.0 broadcast 192.168.56.255 inet6 fe80::1bd8:2cef:f202:d18b prefixlen 64 scopeid 0xfd<compat,link,site,host> ether 0a:00:27:00:00:0a (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth4: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.28.48.1 netmask 255.255.240.0 broadcast 172.28.63.255 inet6 fe80::f389:4534:305:8a86 prefixlen 64 scopeid 0xfd<compat,link,site,host> ether 00:15:5d:16:f9:f0 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth5: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.250.1 netmask 255.255.255.0 broadcast 192.168.250.255 inet6 fe80::b10b:16fd:43b6:c962 prefixlen 64 scopeid 0xfd<compat,link,site,host> ether 0a:00:27:00:00:10 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 **eth6:** flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.1.8 netmask 255.255.255.0 broadcast 192.168.1.255 inet6 fe80::2669:762c:cd7f:3da6 prefixlen 64 scopeid 0xfd<compat,link,site,host> ether 70:b3:d5:5c:0c:a1 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth7: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> 
mtu 1500 inet 192.168.115.1 netmask 255.255.255.0 broadcast 192.168.115.255 inet6 fe80::337b:d856:e3cf:dd85 prefixlen 64 scopeid 0xfd<compat,link,site,host> ether 00:50:56:c0:00:01 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.23.1 netmask 255.255.255.0 broadcast 192.168.23.255 inet6 fe80::6ce0:a4cb:44b5:b58f prefixlen 64 scopeid 0xfd<compat,link,site,host> ether 00:50:56:c0:00:08 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 eth11: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 172.24.16.1 netmask 255.255.240.0 broadcast 172.24.31.255 inet6 fe80::6319:6b32:8b9c:feeb prefixlen 64 scopeid 0xfd<compat,link,site,host> ether 00:15:5d:18:6c:7c (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 lo: flags=73<UP,LOOPBACK,RUNNING> mtu 1500 inet 127.0.0.1 netmask 255.0.0.0 inet6 ::1 prefixlen 128 scopeid 0xfe<compat,link,site,host> loop (Local Loopback) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 wifi2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 192.168.223.100 netmask 255.255.255.0 broadcast 192.168.223.255 inet6 fe80::7f7d:9903:bd01:6cb6 prefixlen 64 scopeid 0xfd<compat,link,site,host> ether 36:c9:3d:82:2c:29 (Ethernet) RX packets 0 bytes 0 (0.0 B) RX errors 0 dropped 0 overruns 0 frame 0 TX packets 0 bytes 0 (0.0 B) TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0 My current DNS and also, current router, is 192.168.1.1 (the one is setup on resolv.conf). Thks for your help !
The settings in your .profile are yours to change as you see fit. If the current DISPLAY setting in your .profile is not right for you, then you should change it to something that actually works. a).- DISPLAY being set automatically by the profile incorrectly: Your setup is more complex than expected by the creators of the WSL distribution you're currently using, and so its defaults don't work for you. Getting the local IP address from a nameserver line in resolv.conf would make sense only if the IP referred to the local host that is running both the actual X11 server and a DNS resolver/proxy server. That may have been the default in a generic installation, but your customization may have broken that assumption. Whatever the actual reason is, the fact remains that the current DISPLAY settings in your .profile are not correct for you, and you should adjust them to better fit your actual situation. As Freddy said in the comments, your settings look like they might have been copied from https://superuser.com/a/1476160/990044 or some other source, back when WSL did not yet have graphics support with its own X11 server and had to piggy-back on an X11 server running on the Windows host OS. If your WSL is up to date, having those settings in .profile might be entirely unnecessary with the current WSL 2. b).- Public IP of my machine not working if manually set in the DISPLAY variable: Your NAT does not have a port forwarding rule for port 6000/TCP (port number = display number + 6000), your local X11 server is not listening on the IP address the port forwarding points to, and/or your local software firewall (including Windows Firewall) is blocking incoming traffic to port 6000/TCP for that IP address.
c).- IP mentioned previously (192.168.1.8:0.0) working fine instead of the public one: On that IP address there is no firewall blocking port 6000/TCP, and your local X11 server is listening on that port, so X11 clients can connect to the X11 server and access your display using that address. In the X11 specification, the thing before the colon was originally specified to be the hostname (see man X). The ability to also accept IP addresses was added later, when running X11 servers on personal workstations that did not necessarily have a resolvable hostname became a reasonably common thing. (If you ever need to deal with old X11R5 client software on some legacy system, you may find it still doesn't accept an IP address in DISPLAY.)
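One common alternative sketch for the .profile snippet is to take the host IP from the default route instead of resolv.conf. Whether that yields the right address depends entirely on your (quite complex) network setup, so treat it as an assumption to verify; the sample route line below stands in for real `ip route list default` output.

```shell
# Sketch: derive DISPLAY from the default gateway rather than resolv.conf.
# On a real WSL instance you would capture: ip route list default
route_line='default via 192.168.1.8 dev eth6'
host_ip=$(printf '%s\n' "$route_line" | awk '{print $3}')
export DISPLAY="${host_ip}:0.0"
echo "$DISPLAY"
```

If the derived address is wrong for your topology, hard-coding the known-good interface address (as you already did) is a perfectly valid fix.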
How is DISPLAY variable defined for X access on WSL?
1,462,807,659,000
I have a Slimbook laptop with an external monitor (HP Elitedisplay E243) connected to it using HDMI. Suddenly, the monitor stopped working (it was at a system start). The monitor gets detected by the Linux Mint desktop (21.04 Cinnamon), but no signal is shown. It even seems to wake up when I unplug/plug the HDMI cable, but shows no image. This is the output of the xrandr command: Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 16384 x 16384 eDP-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 309mm x 173mm 1920x1080 60.01*+ 60.01 59.97 59.96 59.93 40.00 1680x1050 59.95 59.88 1600x1024 60.17 1400x1050 59.98 1600x900 59.99 59.94 59.95 59.82 1280x1024 60.02 1440x900 59.89 1400x900 59.96 59.88 1280x960 60.00 1440x810 60.00 59.97 1368x768 59.88 59.85 1360x768 59.80 59.96 1280x800 59.99 59.97 59.81 59.91 1152x864 60.00 1280x720 60.00 59.99 59.86 59.74 1024x768 60.04 60.00 960x720 60.00 928x696 60.05 896x672 60.01 1024x576 59.95 59.96 59.90 59.82 960x600 59.93 60.00 960x540 59.96 59.99 59.63 59.82 800x600 60.00 60.32 56.25 840x525 60.01 59.88 864x486 59.92 59.57 800x512 60.17 700x525 59.98 800x450 59.95 59.82 640x512 60.02 720x450 59.89 700x450 59.96 59.88 640x480 60.00 59.94 720x405 59.51 58.99 684x384 59.88 59.85 680x384 59.80 59.96 640x400 59.88 59.98 576x432 60.06 640x360 59.86 59.83 59.84 59.32 512x384 60.00 512x288 60.00 59.92 480x270 59.63 59.82 400x300 60.32 56.34 432x243 59.92 59.57 320x240 60.05 360x202 59.51 59.13 320x180 59.84 59.32 DP-1 disconnected (normal left inverted right x axis y axis) HDMI-1 disconnected (normal left inverted right x axis y axis) HDMI-2 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 527mm x 296mm 1920x1080 60.00*+ 50.00 59.94 1680x1050 59.88 1600x900 60.00 1280x1024 60.02 1440x900 59.90 1280x800 59.91 1280x720 60.00 50.00 59.94 1024x768 60.00 800x600 60.32 720x576 50.00 720x480 60.00 59.94 640x480 60.00 59.94 720x400 70.08 What I find a bit strange, is that the monitor seems to 
be detected at the HDMI-2 port, but the laptop only has one DisplayPort port, one HDMI port and a USB-C port. The monitor itself works when another computer is connected with the same cable. I tried also with the Ubuntu 22.04 live CD, with the same result... What could be going on?
In the end, after more tests, I moved the laptop and discovered that the cable was pinched between the table and the wall. I don't know why, but that seems to be what made this happen, even though other computers work with that cable pinched. After moving the cable everything went OK!
Monitor detected but with no signal in Linux Mint and Ubuntu
1,462,807,659,000
I have two monitors setup on my machine. I would like to prevent some apps from accessing one of the two monitors while still granting permission to other apps. I would like to restrict an application only to HDMI-1-1 (check xrandr output below) and prevent it from reading eDP-1-1. Is it possible to do so? xrandr output: een 0: minimum 8 x 8, current 3840 x 1080, maximum 32767 x 32767 eDP-1-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 194mm 1920x1080 60.06*+ 60.01 59.97 59.96 59.93 40.04 1680x1050 59.95 59.88 1600x1024 60.17 1400x1050 59.98 1600x900 59.99 59.94 59.95 59.82 1280x1024 60.02 1440x900 59.89 1400x900 59.96 59.88 1280x960 60.00 1440x810 60.00 59.97 1368x768 59.88 59.85 1360x768 59.80 59.96 1280x800 59.99 59.97 59.81 59.91 1152x864 60.00 1280x720 60.00 59.99 59.86 59.74 1024x768 60.04 60.00 960x720 60.00 928x696 60.05 896x672 60.01 1024x576 59.95 59.96 59.90 59.82 960x600 59.93 60.00 960x540 59.96 59.99 59.63 59.82 800x600 60.00 60.32 56.25 840x525 60.01 59.88 864x486 59.92 59.57 800x512 60.17 700x525 59.98 800x450 59.95 59.82 640x512 60.02 720x450 59.89 700x450 59.96 59.88 640x480 60.00 59.94 720x405 59.51 58.99 684x384 59.88 59.85 680x384 59.80 59.96 640x400 59.88 59.98 576x432 60.06 640x360 59.86 59.83 59.84 59.32 512x384 60.00 512x288 60.00 59.92 480x270 59.63 59.82 400x300 60.32 56.34 432x243 59.92 59.57 320x240 60.05 360x202 59.51 59.13 320x180 59.84 59.32 HDMI-1-1 connected 1920x1080+1920+0 (normal left inverted right x axis y axis) 527mm x 296mm 1920x1080 60.00*+ 74.97 50.00 59.94 1680x1050 59.88 1600x900 60.00 1280x1024 60.02 1440x900 59.90 1280x800 59.91 1280x720 60.00 50.00 59.94 1024x768 60.00 800x600 60.32 720x576 50.00 720x480 60.00 59.94 640x480 60.00 59.94 720x400 70.08 Distro: Pop OS 20.04
Under an X.org-only session it's not possible; if you run two X.org servers (e.g. the default one and then :1), programs running on one won't be able to sniff on the other X.org server (not directly, of course; an evil app may relaunch itself with a different DISPLAY variable). So your best bet will be to have two different user accounts using two different X.org sessions. I've never used Wayland - it has a much stricter policy but I don't know how to work with it.
Restrict monitor access to certain apps
1,462,807,659,000
I want to add an additional input language to my Linux. I am using MX Linux and the dwm window manager. How can I do this?
Since MX Linux is Debian based, you can set your keyboard layout using sudo dpkg-reconfigure keyboard-configuration. The keyboard settings file is /etc/default/keyboard if you prefer to do it manually. You can set the layout, the available languages, variants and the key combination to switch layout/language. In Debian the settings in this file are respected by both the console and Xorg. The second part is how to view the selected language in the dwm bar. There are many ways; I'll suggest two: I suppose you've patched dwm with the systray patch. If not, I suggest you do it; many apps use the system tray. In that case you can apt install fbxkb. It's a light app which shows a flag icon for the selected language on the system tray. Easy and nice, but I don't like the flag on the systray :) Use some dwm status bar customization. There are many available on the dwm status monitor page. Most support showing the current keyboard layout; I use dwm-bar. More steps to set up than the first proposal, but it's helpful to have whatever info you like on the bar - since you can add much more than just the current keyboard layout.
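As an illustration, a hypothetical /etc/default/keyboard enabling a second (Greek) layout with an Alt+Shift toggle might look like this; the layouts, model and toggle combination here are assumptions for you to adapt (the file uses shell variable syntax):

```shell
# /etc/default/keyboard (example): two layouts, Alt+Shift to switch
XKBMODEL="pc105"
XKBLAYOUT="us,gr"
XKBVARIANT=","
XKBOPTIONS="grp:alt_shift_toggle"
BACKSPACE="guess"
```

After editing, the change is typically picked up on reboot, or via setupcon for the console and restarting X for the graphical session.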
How to add keyboard layout in dwm
1,462,807,659,000
My roommate has a really old 1280x1024 VGA display that the driver sets to 1600x1200 by default and it causes it to display a message saying it can't display the input. I can ctrl+alt+f1 and use xrandr -d :0 to find out the output that's being used but every time I do xrandr --output CRT1 --mode "1280x1024_60.00" it says that it can't find the display. The mode is displayed when I do xrandr -d :0 so I already know it's been added. I can configure it to work properly if I connect our TV as a secondary display but the second I disconnect it, it resets to 1600x1200. I need to get it set to 1280x1024 all the time so he can use his PC.
So after installing other things to fix the drivers, the crash message went away, and the fix ended up being adding Modes "1280x1024" to the Display SubSection of the Screen section in xorg.conf.
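For anyone hitting the same issue, the relevant fragment of xorg.conf ends up looking roughly like this (the Identifier names are placeholders; match whatever your generated config already uses):

```
Section "Screen"
    Identifier "Screen0"
    Device     "Device0"
    Monitor    "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1280x1024"
    EndSubSection
EndSection
```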
Change display output with xrandr?
1,462,807,659,000
Is there some command (or trick) to detect which DISPLAY is active? I mean active in the sense that it will "eat" all keystrokes, mouse clicks and moves; so it has the input focus. I also mean this for a simple hardware installation where the user has only one physical keyboard and only one mouse (USB or PS/2). The default X session loads at ctrl+alt+f7 (:0) here. I have another X at ctrl+alt+f8/f9 (:1). I want to code a script that, when I go to :1, automatically locks :0, or :1 accordingly. When the mouse is stopped and no key is being pressed, we are just staring at the screen; but I think the currently active X must be watching the input for changes, while the other X is unable to watch such changes; that channel must be uniquely accessed in some way... any tips?
fgconsole (if run as root) should do what you want. Ctrl-Alt-Fx switches to the Linux console #x, and fgconsole tells you the number of the currently active console.
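A polling sketch built on that: compare fgconsole's output with the VT your X session owns and lock when they differ. The VT numbers, the two-second interval, and xdg-screensaver as the lock command are all assumptions, and fgconsole itself needs root, so only the pure decision helper runs here:

```shell
#!/bin/sh
# Prints "yes" when the active VT differs from the one our session owns.
other_vt_active() {
    if [ "$1" != "$2" ]; then echo yes; else echo no; fi
}

other_vt_active 8 7    # prints: yes

# Root-only loop sketch (not run here):
# MY_VT=7
# while sleep 2; do
#     if [ "$(other_vt_active "$(fgconsole)" "$MY_VT")" = yes ]; then
#         DISPLAY=:0 xdg-screensaver lock
#     fi
# done
```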
how to detect what DISPLAY is active/has input focus?
1,462,807,659,000
I have exported my X display to another computer using this command: export DISPLAY=xxx.xxx.xxx.xxx:0.0 How can I undo it?
Just reset DISPLAY to the original value. The details depend on your system but one of these should work: export DISPLAY=:0.0 export DISPLAY=localhost:0.0
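A small sketch of the save/restore pattern, so the original value survives the redirection; the remote address is a placeholder:

```shell
DISPLAY=:0.0                      # pretend this was the original local value
OLD_DISPLAY="$DISPLAY"            # save it before pointing clients elsewhere
export DISPLAY=192.168.1.10:0.0   # hypothetical remote address
# ... run remote X clients here ...
export DISPLAY="$OLD_DISPLAY"     # undo: back to the original value
echo "$DISPLAY"                   # prints :0.0
```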
How to undo DISPLAY export in linux?
1,462,807,659,000
I can adjust the brightness or gamma of my whole screen with xrandr. But is it possible to change these settings only for a given window in Openbox? For instance, if my browser is too bright and I want to tone down its colors without affecting other applications? EDIT: I am using xrandr only as an example here. I don't require the solution to use xrandr; if some other tool can do it, that would be fine.
xrandr is the configuration utility for RandR (Resize and Rotate), part of the X.org server. CompizConfig has a setting for this, but I'm not aware of any similar utility for Openbox, and I can't find any direct way to affect window brightness in the Openbox documentation either. As per the FAQ you can get true 32-bit transparency on any window using xcompmgr or transset, which could be a workaround. For native brightness support on application windows you need a different window manager such as Compiz.
openbox: change brightness or contrast or saturation only for given window / application
1,462,807,659,000
I was recently figuring out how to properly configure 2 monitors with X11 and a NVIDIA card and tried many different options. Many tutorials pointed out that a 2-monitor setup should include 2 screens in the ServerLayout section in /etc/X11/xorg.conf, like so: Section "ServerLayout" Identifier "Main" Screen 0 "Screen0" 0 0 Screen 1 "Screen1" 1920 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection When I tried this, no matter how I configured the Screen, Devices, or Monitor sections, it would result in some sort of strange display or X server error. Eventually what ended up working was using simply 1 screen in my xorg.conf which was generated by using nvidia-xconfig: Section "ServerLayout" Identifier "Main" Screen 0 "Screen0" 0 0 InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" Option "Xinerama" "0" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Lenovo Group Limited LEN C24-10" HorizSync 30.0 - 83.0 VertRefresh 50.0 - 75.0 Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" BoardName "NVIDIA GeForce GTX 1060 6GB" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 Option "Stereo" "0" Option "nvidiaXineramaInfoOrder" "DFP-1" Option "metamodes" "HDMI-1: nvidia-auto-select +1920+0, HDMI-0: nvidia-auto-select +0+0" Option "SLI" "Off" Option "MultiGPU" "Off" Option "BaseMosaic" "off" SubSection "Display" Depth 24 EndSubSection EndSection What I don't understand however is how this works. Most dual-monitor example xorg.conf files I came across specified 2 screens in the server layout. The fact that my config works with just one screen and monitor specified is seemingly contradicting what I have read, unless I am grossly misunderstanding something. Could someone explain to me why this is? 
Would I be safe to assume that the line Option "metamodes" "HDMI-1: nvidia-auto-select +1920+0, HDMI-0: nvidia-auto-select +0+0" has something to do with this?
You don't usually need to manually edit xorg.conf anymore. Even an empty xorg.conf will often get it right (with the exception of monitor placement). The magic is that most of this is now done behind the scenes. The Nvidia drivers (not necessarily nvidia-xconfig) use xorg.conf for some hints, but will do most of the configuration themselves and apply appropriate default values for everything. In the past, things used to be more difficult. There were three main ways to do a multi-monitor setup: Multiple Screens. This would set up several independent desktops. You couldn't move windows from one screen to the other, and if you ran a terminal, you could specify which screen should run each application by setting DISPLAY=. TwinView: This Nvidia feature allowed a single screen to span multiple monitors. However, X11 didn't really recognize the seam between the monitors. Therefore, instead of a "primary" monitor with your status bar, your status bar would span all monitors. If you full-screened an application, it would span all monitors. Xinerama: This X11 extension solved my problems with TwinView. Now you could full-screen an application on only one monitor, and still move windows between the two. I'm not sure if it's an independent extension, or if it works on top of TwinView. The "hints" provided to the Nvidia drivers via xorg.conf are indeed related to a few of the Option lines: Option "nvidiaXineramaInfoOrder" "DFP-1" Option "metamodes" "HDMI-1: nvidia-auto-select +1920+0, HDMI-0: nvidia-auto-select +0+0" These hints define which physical monitor is which, and where they should be positioned on the desktop. You can see that there is a little Xinerama in use here.
How does my dual-monitor X configuration work with just one screen specified? (Nvidia)
1,651,686,969,000
I have an MSI Optix MAG245R 23.8" display. It features USB connections. When I start up my Debian-based machine (BunsenLabs), I get warnings that say: May 4 19:41:51 localname kernel: [ 240.573980] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=4060, sector_sz=512) May 4 19:41:51 localname kernel: [ 240.573986] sd 9:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK May 4 19:41:51 localname kernel: [ 240.573988] sd 9:0:0:0: [sdc] tag#0 CDB: Read(10) 28 00 00 00 00 00 00 00 08 00 May 4 19:41:51 localname kernel: [ 240.573990] print_req_error: I/O error, dev sdc, sector 0 May 4 19:41:51 localname kernel: [ 240.673838] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=2012, sector_sz=512) May 4 19:41:51 localname kernel: [ 240.673846] sd 9:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK May 4 19:41:51 localname kernel: [ 240.673850] sd 9:0:0:0: [sdc] tag#0 CDB: Read(10) 28 00 00 00 00 00 00 00 04 00 May 4 19:41:51 localname kernel: [ 240.673852] print_req_error: I/O error, dev sdc, sector 0 May 4 19:41:51 localname kernel: [ 240.673859] Buffer I/O error on dev sdc, logical block 0, async page read May 4 19:41:51 localname kernel: [ 240.860367] Buffer I/O error on dev sdc, logical block 1, async page read May 4 19:41:52 localname kernel: [ 241.500663] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=4060, sector_sz=512) May 4 19:41:52 localname kernel: [ 241.612517] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=4060, sector_sz=512) May 4 19:41:52 localname kernel: [ 241.697840] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=2012, sector_sz=512) May 4 19:41:52 localname kernel: [ 241.785872] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=2012, sector_sz=512) May 4 19:41:52 localname kernel: [ 241.818899] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=16348, sector_sz=512) May 4 19:41:52 localname kernel: [ 241.909838] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=2012, 
sector_sz=512) May 4 19:41:52 localname kernel: [ 241.993839] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=2012, sector_sz=512) May 4 19:41:55 localname kernel: [ 244.842973] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=16348, sector_sz=512) May 4 19:41:55 localname kernel: [ 244.942219] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=2012, sector_sz=512) May 4 19:41:55 localname kernel: [ 245.034140] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=2012, sector_sz=512) May 4 19:41:56 localname kernel: [ 245.473992] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=2012, sector_sz=512) May 4 19:41:56 localname kernel: [ 245.557874] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=2012, sector_sz=512) May 4 19:41:57 localname kernel: [ 246.425872] sd 9:0:0:0: [sdc] Unaligned partial completion (resid=2012, sector_sz=512) May 4 19:41:57 localname kernel: [ 246.425881] sd 9:0:0:0: [sdc] tag#0 FAILED Result: hostbyte=DID_ERROR driverbyte=DRIVER_OK May 4 19:41:57 localname kernel: [ 246.425884] sd 9:0:0:0: [sdc] tag#0 CDB: Read(10) 28 00 00 00 00 00 00 00 04 00 Then, I went ahead to launch GParted to inspect my storage volumes. I know what /dev/sda and /dev/sdb are; these are what I just expected. Thus I was surprised why a /dev/sdc was detected. There, Device Information says its an MSI Optix Driver. I have tried isolating only the display's USB connection (no keyboard connections, dongles, whatever), and this is being detected as /dev/sdc. What can I do to get rid of those errors? My non-expert idea is to find some way to tell my machine that this device with UUID0123456789 is not a storage device, so don't bother testing it as a storage device. How do I do that? Or any other way?
Thanks to @telcoM for the clues. I have fixed the problem. tl;dr: de-authorize the offending USB device using udev rules. How to do it: Run dmesg in follow mode: $ dmesg -w Disconnect and reconnect the display's USB cable. In the output of dmesg, take note of the bus number of the newly connected device you want to disable. Exit dmesg with CTRL+C. Now run the following to get the device attributes, substituting xxxusbxxx with the bus number you got from dmesg: $ udevadm info -a -p /sys/bus/usb/devices/xxxusbxxx In the first part of the output (the device itself, before its parents), take note of the following attributes and the device path: DRIVER=="usb-storage" ATTR{bInterfaceClass}=="aa" ATTR{bInterfaceNumber}=="bb" ATTR{bInterfaceProtocol}=="cc" ATTR{bInterfaceSubClass}=="dd" Now create a new file /etc/udev/rules.d/10-disable-MSI-optix-storage.rules containing a rule that matches those attributes (substitute the values accordingly). The number 10 makes the rule run before rules with higher numbers. Keep the .rules extension; the rest of the filename does not matter. Reboot your computer and the read error messages should no longer appear. What this actually does: when a USB device that matches the attributes specified in the udev rules file is detected, its authorized flag is immediately set to 0 and the device can no longer be used.
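The rule contents were shown as a screenshot in the original post; here is a hedged reconstruction of what such a rules file might look like, using the placeholder attribute values from above (the exact match keys may differ on your system — verify against your own udevadm output):

```
# /etc/udev/rules.d/10-disable-MSI-optix-storage.rules (sketch)
ACTION=="add", SUBSYSTEM=="usb", DRIVER=="usb-storage", ATTR{bInterfaceClass}=="aa", ATTR{bInterfaceNumber}=="bb", ATTR{bInterfaceProtocol}=="cc", ATTR{bInterfaceSubClass}=="dd", ATTR{authorized}="0"
```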
USB connection from display detected as storage device
1,651,686,969,000
I am using Ubuntu 20.04 version on Lenovo Ideapad Whenever I lock my laptop (or it gets locked in inactivity) and try to unlock, my display is crashing. I see a screen as below I have another external monitor connected to this laptop and that display looks fine. This started happening only since a couple of days. I also tried to see if there are any recent updates which caused this. But I couldn't see anything related. Here's the log. Start-Date: 2022-01-09 11:22:08 Commandline: /usr/bin/unattended-upgrade Remove: linux-headers-5.11.0-41-generic:amd64 (5.11.0-41.45~20.04.1), linux-hwe-5.11-headers-5.11.0-41:amd64 (5.11.0-41.45~20.04.1) End-Date: 2022-01-09 11:22:10 Start-Date: 2022-01-17 06:57:33 Commandline: /usr/bin/unattended-upgrade Upgrade: libsystemd0:amd64 (245.4-4ubuntu3.13, 245.4-4ubuntu3.15), systemd-timesyncd:amd64 (245.4-4ubuntu3.13, 245.4-4ubuntu3.15), systemd-sysv:amd64 (245.4-4ubuntu3.13, 245.4-4ubuntu3.15), libpam-systemd:amd64 (245.4-4ubuntu3.13, 245.4-4ubuntu3.15), systemd:amd64 (245.4-4ubuntu3.13, 245.4-4ubuntu3.15), libnss-systemd:amd64 (245.4-4ubuntu3.13, 245.4-4ubuntu3.15) End-Date: 2022-01-17 06:57:40 Start-Date: 2022-01-17 06:57:44 Commandline: /usr/bin/unattended-upgrade Upgrade: libgs9:amd64 (9.50~dfsg-5ubuntu4.4, 9.50~dfsg-5ubuntu4.5), ghostscript:amd64 (9.50~dfsg-5ubuntu4.4, 9.50~dfsg-5ubuntu4.5), ghostscript-x:amd64 (9.50~dfsg-5ubuntu4.4, 9.50~dfsg-5ubuntu4.5), libgs9-common:amd64 (9.50~dfsg-5ubuntu4.4, 9.50~dfsg-5ubuntu4.5) End-Date: 2022-01-17 06:57:46 Start-Date: 2022-01-17 06:57:50 Commandline: /usr/bin/unattended-upgrade Upgrade: linux-libc-dev:amd64 (5.4.0-92.103, 5.4.0-94.106) End-Date: 2022-01-17 06:57:50 Start-Date: 2022-01-17 06:57:54 Commandline: /usr/bin/unattended-upgrade Install: linux-image-5.11.0-46-generic:amd64 (5.11.0-46.51~20.04.1, automatic), linux-modules-5.11.0-46-generic:amd64 (5.11.0-46.51~20.04.1, automatic), linux-headers-5.11.0-46-generic:amd64 (5.11.0-46.51~20.04.1, automatic), 
linux-modules-extra-5.11.0-46-generic:amd64 (5.11.0-46.51~20.04.1, automatic), linux-hwe-5.11-headers-5.11.0-46:amd64 (5.11.0-46.51~20.04.1, automatic) Upgrade: linux-headers-generic-hwe-20.04:amd64 (5.11.0.44.48~20.04.22, 5.11.0.46.51~20.04.23), linux-image-generic-hwe-20.04:amd64 (5.11.0.44.48~20.04.22, 5.11.0.46.51~20.04.23), linux-generic-hwe-20.04:amd64 (5.11.0.44.48~20.04.22, 5.11.0.46.51~20.04.23) End-Date: 2022-01-17 06:58:33 Start-Date: 2022-01-17 06:58:36 Commandline: /usr/bin/unattended-upgrade Upgrade: python3-pil:amd64 (7.0.0-4ubuntu0.4, 7.0.0-4ubuntu0.5) End-Date: 2022-01-17 06:58:37 Start-Date: 2022-01-17 06:58:41 Commandline: /usr/bin/unattended-upgrade Upgrade: udev:amd64 (245.4-4ubuntu3.13, 245.4-4ubuntu3.15), libudev1:amd64 (245.4-4ubuntu3.13, 245.4-4ubuntu3.15) End-Date: 2022-01-17 06:58:57 Start-Date: 2022-01-17 06:59:00 Commandline: /usr/bin/unattended-upgrade Upgrade: firefox:amd64 (95.0.1+build2-0ubuntu0.20.04.1, 96.0+build2-0ubuntu0.20.04.1) End-Date: 2022-01-17 06:59:05 Start-Date: 2022-01-17 06:59:09 Commandline: /usr/bin/unattended-upgrade Upgrade: firefox-locale-en:amd64 (95.0.1+build2-0ubuntu0.20.04.1, 96.0+build2-0ubuntu0.20.04.1) End-Date: 2022-01-17 06:59:09 Start-Date: 2022-01-17 06:59:13 Commandline: /usr/bin/unattended-upgrade Upgrade: libexiv2-27:amd64 (0.27.2-8ubuntu2.6, 0.27.2-8ubuntu2.7) End-Date: 2022-01-17 06:59:13 Start-Date: 2022-01-18 06:46:44 Commandline: /usr/bin/unattended-upgrade Remove: linux-image-5.11.0-43-generic:amd64 (5.11.0-43.47~20.04.2), linux-modules-extra-5.11.0-43-generic:amd64 (5.11.0-43.47~20.04.2), linux-modules-5.11.0-43-generic:amd64 (5.11.0-43.47~20.04.2) End-Date: 2022-01-18 06:46:50 Start-Date: 2022-01-18 06:46:53 Commandline: /usr/bin/unattended-upgrade Remove: linux-headers-5.11.0-43-generic:amd64 (5.11.0-43.47~20.04.2) End-Date: 2022-01-18 06:46:54 Start-Date: 2022-01-18 06:46:58 Commandline: /usr/bin/unattended-upgrade Remove: linux-hwe-5.11-headers-5.11.0-43:amd64 (5.11.0-43.47~20.04.2) 
End-Date: 2022-01-18 06:46:58 Start-Date: 2022-01-19 06:44:47 Commandline: /usr/bin/unattended-upgrade Install: linux-modules-5.13.0-25-generic:amd64 (5.13.0-25.26~20.04.1, automatic), linux-headers-5.13.0-25-generic:amd64 (5.13.0-25.26~20.04.1, automatic), linux-modules-extra-5.13.0-25-generic:amd64 (5.13.0-25.26~20.04.1, automatic), linux-image-5.13.0-25-generic:amd64 (5.13.0-25.26~20.04.1, automatic), linux-hwe-5.13-headers-5.13.0-25:amd64 (5.13.0-25.26~20.04.1, automatic) Upgrade: linux-headers-generic-hwe-20.04:amd64 (5.11.0.46.51~20.04.23, 5.13.0.25.26~20.04.12), linux-image-generic-hwe-20.04:amd64 (5.11.0.46.51~20.04.23, 5.13.0.25.26~20.04.12), linux-generic-hwe-20.04:amd64 (5.11.0.46.51~20.04.23, 5.13.0.25.26~20.04.12) End-Date: 2022-01-19 06:45:27 Start-Date: 2022-01-20 10:25:32 Commandline: /usr/bin/unattended-upgrade Remove: linux-headers-5.11.0-44-generic:amd64 (5.11.0-44.48~20.04.2), linux-hwe-5.11-headers-5.11.0-44:amd64 (5.11.0-44.48~20.04.2) End-Date: 2022-01-20 10:25:34 Start-Date: 2022-01-20 10:25:37 Commandline: /usr/bin/unattended-upgrade Remove: linux-modules-extra-5.11.0-44-generic:amd64 (5.11.0-44.48~20.04.2) End-Date: 2022-01-20 10:25:38 Start-Date: 2022-01-20 10:25:42 Commandline: /usr/bin/unattended-upgrade Remove: linux-modules-5.11.0-44-generic:amd64 (5.11.0-44.48~20.04.2), linux-image-5.11.0-44-generic:amd64 (5.11.0-44.48~20.04.2) End-Date: 2022-01-20 10:25:46 Start-Date: 2022-01-20 10:25:51 Commandline: /usr/bin/unattended-upgrade Upgrade: linux-libc-dev:amd64 (5.4.0-94.106, 5.4.0-96.109) End-Date: 2022-01-20 10:25:51 Start-Date: 2022-01-20 10:25:55 Commandline: /usr/bin/unattended-upgrade Install: linux-modules-extra-5.13.0-27-generic:amd64 (5.13.0-27.29~20.04.1, automatic), linux-modules-5.13.0-27-generic:amd64 (5.13.0-27.29~20.04.1, automatic), linux-headers-5.13.0-27-generic:amd64 (5.13.0-27.29~20.04.1, automatic), linux-image-5.13.0-27-generic:amd64 (5.13.0-27.29~20.04.1, automatic), linux-hwe-5.13-headers-5.13.0-27:amd64 
(5.13.0-27.29~20.04.1, automatic) Upgrade: linux-headers-generic-hwe-20.04:amd64 (5.13.0.25.26~20.04.12, 5.13.0.27.29~20.04.13), linux-image-generic-hwe-20.04:amd64 (5.13.0.25.26~20.04.12, 5.13.0.27.29~20.04.13), linux-generic-hwe-20.04:amd64 (5.13.0.25.26~20.04.12, 5.13.0.27.29~20.04.13) End-Date: 2022-01-20 10:26:34 Start-Date: 2022-01-21 09:54:26 Commandline: /usr/bin/unattended-upgrade Remove: linux-headers-5.11.0-46-generic:amd64 (5.11.0-46.51~20.04.1), linux-hwe-5.11-headers-5.11.0-46:amd64 (5.11.0-46.51~20.04.1) End-Date: 2022-01-21 09:54:28 Start-Date: 2022-01-21 09:54:32 Commandline: /usr/bin/unattended-upgrade Remove: linux-image-5.11.0-46-generic:amd64 (5.11.0-46.51~20.04.1), linux-modules-5.11.0-46-generic:amd64 (5.11.0-46.51~20.04.1), linux-modules-extra-5.11.0-46-generic:amd64 (5.11.0-46.51~20.04.1) End-Date: 2022-01-21 09:54:37 I have a dual boot. So tried the same on Windows and it works perfectly fine even after locking the screen. I don't know if any video drivers are disturbed in Ubuntu. Any help please!!
I have a Lenovo laptop (AMD architecture) with the same issue. The problem is in the newest kernel updates, 5.13 and newer. Check your kernel version by typing this command: uname -r If you have version 5.13 or newer, you can fix the issue by downgrading to a lower version such as 5.11. It is not a final solution, but it can help you until a new patch is released. There are some tutorials which can be useful: install and boot an older kernel; remove an old (unused) kernel.
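The version check can be scripted; a sketch that flags kernel releases 5.13 and newer (the threshold comes from the answer above, the sample release strings are hypothetical):

```shell
#!/bin/sh
# Prints "affected" for kernel releases >= 5.13, "ok" otherwise.
kernel_affected() {
    major=${1%%.*}          # everything before the first dot
    rest=${1#*.}            # everything after the first dot
    minor=${rest%%.*}       # up to the next dot
    if [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 13 ]; }; then
        echo affected
    else
        echo ok
    fi
}

kernel_affected "5.13.0-25-generic"   # prints: affected
kernel_affected "5.11.0-46-generic"   # prints: ok
# In practice: kernel_affected "$(uname -r)"
```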
Display crashes after screen lock
1,651,686,969,000
I have installed Debian 11 (bullseye) on a new Lenovo LEGION 5i Pro with Nvidia RTX 3050. After installing the Nvidia drivers: sudo apt-get install nvidia-driver firmware-misc-nonfree I connected an external monitor using the HDMI port, but it was not recognized, it does not show up in the Displays settings. I tried searching about the issue and I found somewhere someone fixing a similar problem with xrandr. ~$ xrandr --listproviders Providers: number : 2 Provider 0: id: 0x4a cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 4 outputs: 7 associated providers: 0 name:modesetting Provider 1: id: 0x2af cap: 0x2, Sink Output crtcs: 4 outputs: 6 associated providers: 0 name:NVIDIA-G0 This command fixed the problem, but honestly I don't know what it does: xrandr --setprovideroutputsource 1 0 But the problem is that the changes did not persist after reboot and I had a lot of lagging and Xorg was using about 30-40% CPU as shown using top. So I have uninstalled the drivers and started all over again. 
Next I tried creating an /etc/X11/xorg.conf file using nvidia-xconfig, which created a file with these contents: # nvidia-xconfig: X configuration file generated by nvidia-xconfig # nvidia-xconfig: version 460.32.03 Section "ServerLayout" Identifier "Layout0" Screen 0 "Screen0" InputDevice "Keyboard0" "CoreKeyboard" InputDevice "Mouse0" "CorePointer" EndSection Section "Files" EndSection Section "InputDevice" # generated from default Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/psaux" Option "Emulate3Buttons" "no" Option "ZAxisMapping" "4 5" EndSection Section "InputDevice" # generated from default Identifier "Keyboard0" Driver "kbd" EndSection Section "Monitor" Identifier "Monitor0" VendorName "Unknown" ModelName "Unknown" Option "DPMS" EndSection Section "Device" Identifier "Device0" Driver "nvidia" VendorName "NVIDIA Corporation" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 SubSection "Display" Depth 24 EndSubSection EndSection The good thing is that the external monitor was recognized and I started using it and it was showing in the Displays settings, but I couldn't use the built-in display, and if I try to use the laptop without the external display I get a blank screen and I had to delete the /etc/X11/xorg.conf file and reboot to be able to use the built-in display. How can I configure my system to be able to use both the built-in and the external display? 
Update: $ nvidia-xconfig --query-gpu-info Number of GPUs: 1 GPU #0: Name : GeForce RTX 3050 Laptop GPU UUID : GPU-5f21a5b3-2add-7b3d-aa6b-1cfe5dd7085e PCI BusID : PCI:1:0:0 Number of Display Devices: 1 Display Device 0 (TV-4): EDID Name : LG Electronics 24MP56 Minimum HorizSync : 30.000 kHz Maximum HorizSync : 83.000 kHz Minimum VertRefresh : 56 Hz Maximum VertRefresh : 61 Hz Maximum PixelClock : 150.000 MHz Maximum Width : 1920 pixels Maximum Height : 1080 pixels Preferred Width : 1920 pixels Preferred Height : 1080 pixels Preferred VertRefresh : 60 Hz Physical Width : 510 mm Physical Height : 290 mm Listing the monitors using xrandr: $ xrandr --listmonitors Monitors: 1 0: +*eDP-1 1920/345x1200/215+0+0 eDP-1 After using this command xrandr --setprovideroutputsource 1 0 I get this output: $ xrandr --listmonitors Monitors: 2 0: +*eDP-1 2560/345x1600/215+0+0 eDP-1 1: +HDMI-1-0 1920/510x1080/290+2560+0 HDMI-1-0 But the problem is high CPU usage by the Xorg process (30-40%).
Notebooks with separate dedicated and integrated graphics cards will try to balance which one is used, to improve battery life. Check nvidia-settings and your BIOS settings to see if there is an option to specify which one you would like to use.
How to configure multiple displays on Lenovo LEGION 5 Pro (Nvidia RTX 3050)
1,651,686,969,000
I have two monitors (one HDMI, one DVI) connected to an Nvidia GT710 GPU on my Linux box. When I am logged in to my box over ssh, if I run xeyes it will always run on the same screen. I know the name of this screen because echo $DISPLAY returns :0 Based on what I have read, I expected to be able to target X windows to my left and right monitor using :0.0 and :0.1 respectively. Same for :1. DISPLAY=:0.0 xeyes indeed does bring up xeyes on the left screen, but: ~$ DISPLAY=:0.1 xeyes Error: Can't open display: :0.1 I thought that maybe I could see what the name of my right display is by listing /tmp/.X11-unix/, but: ~$ ls /tmp/.X11-unix/ X0 So how do I address this secondary display, and what do I address it as? PS. I don't care if I can't move windows between screens, as described here in the ArchWiki: https://wiki.archlinux.org/title/Multihead#Separate_screens. This would be fine for me, but it is not clear to me how to achieve that.
I expected to be able to target X windows to my left and right monitor using :0.0 and :0.1 This would only be true if your left and right monitors actually used two X screens, which isn't something you'd normally see unless you've configured it yourself. Out of the box, most of today's systems use Xinerama, which means you get a single X screen with two xrandr outputs reading from the same framebuffer in different locations. But only you can tell us how your system is configured (read /var/log/Xorg.0.log to find out). Error: Can't open display: :0.1 That confirms the assumption made above: you don't have two X screens. So how do I address this secondary display and know what to address it as? Look at the output of xrandr and see if you have two outputs attached to the same framebuffer (i.e., all are listed under Screen 0). If yes, this means you need to place a window at a certain position to have it appear on the left or on the right monitor (or on both, one half on the left, the other half on the right). Your window manager (WM), which on most modern distributions is integrated into your desktop environment, can influence the placement of windows, and by configuring it correctly, it can help place them on the position (and therefore monitor) you want. Many (but not all) X applications also support the -geometry option (read the man page), which again places the window at a certain position, but the WM is free to override that, so if it doesn't work, that's why. If you are not running a modern desktop system, and if you don't even plan to use a window manager (which even decades ago everyone did; X is meant to have a window manager), then you need to position and size each window individually via the command line (and that will also determine on which screen it appears in your current setup); you won't be able to resize or move windows, etc. (And if you can do this, then you have some WM somewhere, even if you don't realize it.)
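To place a window on a given monitor with -geometry, you need that monitor's x-offset on the shared framebuffer. A sketch that extracts it from xrandr-style output; the sample lines and the output names HDMI-1/DVI-0 are stand-ins for a real `xrandr` call on your machine:

```shell
#!/bin/sh
# Sample xrandr lines; in real use: sample=$(xrandr)
sample='HDMI-1 connected primary 1920x1080+0+0 (normal left inverted right)
DVI-0 connected 1920x1080+1920+0 (normal left inverted right)'

# x-offset of DVI-0: second "+"-separated field of its geometry token
xoff=$(printf '%s\n' "$sample" |
    awk '/^DVI-0 connected/ {
             for (i = 1; i <= NF; i++)
                 if ($i ~ /^[0-9]+x[0-9]+\+/) { split($i, a, "+"); print a[2]; exit }
         }')
echo "$xoff"    # prints 1920

# Then (the WM may still override the placement):
# xterm -geometry 80x24+${xoff}+0
```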
You also can set up X so it uses two screens, one for each output. You need a custom xorg.conf file. Here's the file I used for Intel hardware with one screen for HDMI3 (when I attached my TV a while back, that's why having two screens made sense): Section "Device" Identifier "intel0" Driver "intel" Option "AccelMethod" "sna" Option "ZaphodHeads" "VGA1,HDMI1,DP1" Screen 0 EndSection Section "Device" Identifier "intel1" Driver "intel" Option "AccelMethod" "sna" Option "ZaphodHeads" "HDMI3" Screen 1 EndSection Section "Screen" Identifier "screen0" Device "intel0" EndSection Section "Screen" Identifier "screen1" Device "intel1" EndSection Section "ServerLayout" Identifier "default" Screen "screen0" 0 0 Screen "screen1" Below "screen0" EndSection You'll need to adapt this to your hardware and requirements; different hardware might need different options (for example, back then when I did this, Intel hardware needed the AccelMethod and ZaphodHeads). Expect to spend quite a bit of time to make this work right.
How can I target an X window to a secondary monitor? (and know what it is called in order to do that)
1,651,686,969,000
I recently acquired an older motherboard with SiS built-in graphics and installed Xubuntu on it. I wanted to challenge myself and see if I could figure out how to install a graphics driver for SiS. After a bit of work, I believe I actually managed to do just that by using this guide with a few tweaks. I have come to the conclusion that I was successful because originally, /var/log/Xorg.0.log was showing an error saying that the "sis" module I was trying to use failed to load. With a final tweak, /var/log/Xorg.0.log was showing the "sis" module being successfully loaded, and the resolution of the display also increased. However, when I checked the output of lspci -v and lshw -c video, there was no reference to the sis driver. Do these commands always show the graphics driver being used? Am I wrong in thinking that I did this successfully?
lspci and lshw only show driver information for hardware handled by a kernel driver. In your case, the graphics hardware is managed by a X.org driver, not a kernel driver, so the driver doesn’t show up in the output of lspci or lshw. The fact that your screen resolution increased is a strong indicator that you succeeded, as is the successful loading of the sis module traced in the X.org server logs.
Do "lspci" and "lshw" commands always show graphics driver?
1,651,686,969,000
At work, we are developing programs, each developer having his own private VM under CentOS. I disconnect my session at the end of the day, then the next morning I start Remote Desktop under Windows and join it again. But today, I used a different Windows workstation, with a lower-resolution display. I started my remote session with Remote Desktop as usual towards the CentOS machine, and it led me to another, old, still-open session of mine that I had started a month before... I was troubled. I disconnected, came back to my previous workstation and retrieved the one I was expecting. Question 1: How does this behavior explain itself? Question 2: How do I drop my old session from the main one? When I do a who command from my main session, I see myself + a number of entries for me equal to the number of shells I have opened with Konsole. How can I list how many other sessions like the one I found today are still open, and how can I close them?
Have a look at the xrdp settings (xrdp.ini or sesman.ini). By default it will start a new session for each new user and for each color resolution. There are different settings which will cause different behaviours. pstree will show your different sessions. All of them will probably be based on a window manager. If you kill that parent process from your active session you will terminate the other session. who am i will show you your current session.
Using Remote Desktop on my private VM, I enter another existing session than the previous one, depending on... my screen resolution?
1,651,686,969,000
I have a setup where I connect to two different displays depending on where I am. I would like to use a mode which one of the monitors doesn't allow, but that won't be loaded when any other display is connected. To be more concrete, I would like to overclock the external display at home to 120 Hz, but not overclock a different display at work that I connect to the same HDMI port. Is this possible using the Nvidia driver? I can't seem to be able to override the EDID with xrandr while at the same time using the EDID settings by default.
OK, so if using Nvidia, you can refer to monitors by their EDID. Here is an example from my 99-nvidia.conf (reference I used here): Section "Module" Load "modesetting" EndSection Section "Screen" Identifier "Screen0" Device "Device0" Monitor "Monitor0" DefaultDepth 24 #this is the important part!!! Option "CustomEDID" "DPY-EDID-d2ee947f-cde6-694c-4099-2f7aa520eb75: /home/myName/Documents/monitors/edid-120.bin" SubSection "Display" Depth 24 EndSubSection EndSection Section "Device" Identifier "nvidia" Driver "nvidia" BusID "PCI:1:0:0" #Option "DPI" "96 x 96" #adjust this value as needed to fix scaling Option "AllowEmptyInitialConfiguration" EndSection Section "Extensions" Option "Composite" "Enable" EndSection
Custom EDID for specfic physical monitor, not output
1,651,686,969,000
I'm running some UI automated testing in which the pointer is moved to perform/simulate user actions; it executes after some time when the display is OFF. I want to leave the screen OFF while running these tests, but stop the tests if the user interacts with the keyboard or mouse (or just the keyboard). I know how to turn off the mouse & keyboard, and how to turn the display on and off, but that is not what I need here. I would like to prevent waking up the screen during such tests. I'm using Ubuntu 18.04, but if I can get a more generic solution, it would be great. Note: I found that vbetool can turn off the display without it waking up on mouse or keyboard input, but it's not working for me (the display is not turning off). Update: My best shot so far is to use xrandr --output HDMI-0 --off (for example), as it completely disables the screen.
This is what I use daily, which goes through the usual X11 driver rather than the VGA BIOS; no root access needed: xrandr --output LVDS1 --off If your display is not a built-in LCD screen, substitute LVDS1 with the relevant port you see from a plain xrandr command without parameters. It is usually something along the lines of VGA1, HDMI1, or DP1. This doesn't "standby" the screen in the sense of DPMS-style power saving, however; it actually disables the specified video output and detaches it from your display server. The side effect of this headless state is that your "desktop" would shrink to a minuscule size, around 320x200 pixels; you can press the PrintScreen key to see what it looks like. This probably won't work for your usability tests, so... To prevent the shrinkage, add the --fb option to set the virtual "desktop" size after your video output is off: xrandr --output LVDS1 --off --fb 1024x768 Substitute LVDS1 with the relevant output port, and 1024x768 with your current resolution. Once your video output is disabled and the virtual "desktop" size is set, you can commence your tests. When you would like to come back, re-enable your output: xrandr --output LVDS1 --auto Substitute LVDS1 with the relevant output port. This will set the output to the default monitor-native resolution. If you would like to restore a specific resolution, substitute --auto with something like --mode 1024x768 (replace 1024x768 with your desired resolution). P.S. My answer was tested on a Debian 7.0 32-bit GNU/Linux system, Xorg 1.12.4 display server, and Intel i915 graphics.
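Capturing the current resolution before disabling the output makes the --fb and restore steps scriptable. A sketch, with a sample line standing in for the real `xrandr` output (LVDS1 as in the answer; the current mode is the one marked with "*"):

```shell
#!/bin/sh
# Sample stands in for: sample=$(xrandr)
sample='   1366x768      60.02*+
   1280x720      59.86'

res=$(printf '%s\n' "$sample" | awk '/\*/ { print $1; exit }')
echo "$res"    # prints 1366x768

# xrandr --output LVDS1 --off --fb "$res"    # disable output, keep desktop size
# ...run the tests...
# xrandr --output LVDS1 --mode "$res"        # restore afterwards
```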
Prevent the mouse to wake up display (UI Testing)
1,651,686,969,000
I have a console program:

#include <iostream>
#include <stdio.h>
using namespace std;

int main()
{
    printf("please num1:");
    int a;
    cin >> a;
    printf("please num2:");
    int b;
    cin >> b;
    cout << "see the result" << endl;
    return a + b;
}

The executable is named test. When I put the line /path/to/test & inside home/user/.config/openbox/autostart, I cannot see anything at startup; there is only a blank screen. How can I see the terminal that runs this app at startup? I should say I have tested the above method with executables of other apps that show an image on the LCD (using GTK+) or say something through the speaker (using espeak); they do these things at startup automatically. But for a console app this method doesn't work; I can't see a terminal shell at startup! How should I solve this problem?
Since your program is a console program and not a graphical one, as you stated and as your code shows, you need to launch it in a console, i.e. in a terminal. E.g.:

gnome-terminal -- test.sh

In this case I used gnome-terminal and the executable was test.sh. This is the command to launch at startup.
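Applied to the Openbox autostart file from the question, the entry would look something like the fragment below. xterm is an assumption here (any installed terminal emulator works, substituting its own flag for running a command), and /path/to/test is the questioner's binary:

```
# ~/.config/openbox/autostart
# launch the console program inside a terminal emulator so its stdin/stdout
# are visible; a bare "/path/to/test &" has no terminal attached at all
xterm -e /path/to/test &
```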
How to start a console program at startup(inside ../openbox/autostart)
1,651,686,969,000
I am running a Linux Mint 17.3 machine with a DVI and USB (DVI-to-USB) monitor connection plugged in to the same desktop. Upon booting up, I get strange display misalignment that I found out was due to the USB connection - half of one screen is scrunched up on one of the monitors. However, if I simply go into Settings -> Display and click "Apply" without making any changes, the display problem goes away. Is there some sort of command that I can add to a startup script in /etc/init.d/ or in a crontab command that will automatically apply/refresh monitor settings in this way for me at startup? Thanks in advance.
My solution, which works great, was the following: Create a startup.sh file in my home directory ~/ (that way I can port it over during a backup of files later) with the following command saved in it:

pkill -HUP "cinnamon --replace"

Make it executable with chmod +x ~/startup.sh. Then open a terminal session, type crontab -e to edit the crontab, enter the following, and save it:

@reboot /home/donkey/startup.sh

Reboot the computer and note that on every boot-up now, the Cinnamon session refreshes and removes the boot-up bugs on the monitors I am using, making the resolution span perfectly across all monitors. Hooray :)
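One fragility in an @reboot job is that it can fire before the X session exists. A small retry helper makes the refresh more robust; this is a sketch rather than part of the answer, and note that in the commented usage I've added -f to pkill (my adjustment, since a quoted pattern containing a space can only match against the full command line, not the process name):

```shell
#!/bin/sh
# retry_cmd TRIES DELAY CMD...
# run CMD until it succeeds, at most TRIES times, sleeping DELAY between tries
retry_cmd() {
    tries=$1; delay=$2; shift 2
    i=0
    while [ "$i" -lt "$tries" ]; do
        if "$@"; then return 0; fi
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}

# Hypothetical startup.sh body: wait up to 30s for the display, then refresh.
# retry_cmd 30 1 sh -c 'DISPLAY=:0 xset q >/dev/null 2>&1' \
#     && DISPLAY=:0 pkill -HUP -f "cinnamon --replace"
```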
Startup Script for Refreshing Screen Settings
1,651,686,969,000
I am trying to open Firefox on my Red Hat machine. I am doing:

export DISPLAY=:0.0

and then typing firefox. I am getting:

Error: cannot open display: :0.0

Is there any way to know what packages are missing or need to be installed, or any log files I can refer to, to be able to load Firefox from the terminal?
OK, I figured it out. I had to use X11 forwarding with Xming. The link below provides more details. https://wiki.utdallas.edu/wiki/display/FAQ/X11+Forwarding+using+Xming+and+PuTTY
Error: cannot open display: :0.0 - Red Hat Enterprise Linux Server [duplicate]
1,651,686,969,000
How to set the maximum laptop brightness after startup on my ASUS laptop? When I power on my laptop, it only has about 30% display brightness set. Naturally, I want to set the maximum laptop screen brightness after startup on my Asus N750JV. I am running Linux Mint 17.3 Cinnamon. I presume there will have to be some startup script written in order to achieve that...
My own solution, which worked for me: Go to the directory where this information is stored:

cd /sys/class/backlight/intel_backlight/

Find out the maximum brightness of your display:

cat max_brightness

Write a script, e.g. named set-max-brightness.sh, that echoes your maximum brightness (mine is 5273) into the brightness file:

#!/bin/sh
echo 5273 >/sys/class/backlight/intel_backlight/brightness

Let's say we now have this file stored as /home/user/set-max-brightness.sh. Now we assign it to root by running:

sudo chown root:root /home/user/set-max-brightness.sh

Then we make it executable and limit user rights with:

sudo chmod 744 /home/user/set-max-brightness.sh

Finally, we make the script run at every boot using cron:

sudo crontab -e

This brings up root's crontab for editing; just add this at the bottom of the file:

@reboot /home/user/set-max-brightness.sh
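The hard-coded 5273 can be avoided by reading max_brightness at run time, which also makes the script portable between machines. A sketch (the backlight directory is parameterised so it can point at intel_backlight or any other driver's directory, which is an assumption about your hardware):

```shell
#!/bin/sh
# set-max-brightness.sh -- write the panel's reported maximum into brightness.
# BACKLIGHT_DIR defaults to the intel path used in the answer above.
BACKLIGHT_DIR="${BACKLIGHT_DIR:-/sys/class/backlight/intel_backlight}"

set_max_brightness() {
    dir=$1
    # read the driver-reported maximum, fail if the file is unreadable
    max=$(cat "$dir/max_brightness") || return 1
    echo "$max" > "$dir/brightness"
}

# set_max_brightness "$BACKLIGHT_DIR"
```

The same @reboot crontab entry from the answer would then work unchanged on any machine, whatever its maximum brightness value happens to be.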
How to set the maximum laptop brightness after startup on my ASUS laptop?
1,651,686,969,000
I have an AMD Radeon HD 4650 graphics card and an LCD monitor with an odd native resolution of 1440x900. I use Debian stable on this system. When not using the firmware-linux-nonfree package, the display offers few resolution choices, and they don't look very good on this monitor since they are lower than the monitor's native resolution. The best resolution I can get is 1152x864, which is one of the VBE resolutions. So I want to know: which one is odd, the graphics hardware, the monitor, or both? If I use the same graphics hardware with some widespread 1366x768 (16:9) monitor, can I get a VBE resolution of 1366x768? Or if I use the same 1440x900 monitor with some Intel HD graphics hardware with open-source drivers, can I get a VBE resolution of 1440x900?
If your hardware requires firmware to be loaded onto the chip, then you should use it; otherwise, do not wonder why your resolution resets to something not native to your monitor. Without the firmware, the graphics driver probably fails to detect the graphics chip at all, or can't access the advanced features you mentioned, and falls back to the VESA resolutions provided by your monitor as basic (always available) failsafe modes. VBE modes are provided by the video BIOS of the graphics chip itself and usually contain definitions only for the most common modes.
Is framebuffer/vbe resolution a property of monitor or graphics hardware?
1,651,686,969,000
/dev/dri/renderD128 should only be accessible to root or users in the render group as per: project_kohli% ls -lha /dev/dri/renderD128 crw-rw----+ 1 root render 226, 128 May 26 09:43 /dev/dri/renderD128 However I can see a number of applications which are NOT running as root accessing the file descriptor directly: USER PID ACCESS COMMAND /dev/dri/renderD128: root 3389 F.... Xorg project_kohli 6392 F.... xfwm4 project_kohli 9472 F.... firefox-esr project_kohli 9364 F.... totem Interestingly these are all applications that need to render Video directly (such as Totem Video Player). Anyway, getting back to the question, application firefox-esr is running as user project_kohli. project_kohli% ps -aux | grep firefox-esr project_kohli 9472 5.3 5.2 12545580 74632 ? Sl 13:13 3:23 /usr/lib/firefox-esr/firefox-esr User project_kohli is not in the render group. project_kohli% cat /etc/group | grep render render:x:115: Why is /dev/dri/renderD128 accessible to applications running with my user?
crw-rw----+ Did you notice the plus sign at the end of the permissions string? It means the device node has an Access Control List on it, and so you'll need the getfacl /dev/dri/renderD128 command to see the complete set of permissions on it. The result will probably be similar to this: getfacl: Removing leading '/' from absolute path names # file: dev/dri/renderD128 # owner: root # group: render user::rw- user:project_kohli:rw- group::rw- mask::rw- other::--- indicating that in addition to the classic owner/group/other permissions, there is an ACL granting read/write access to a named user, in this case project_kohli. Typically this is caused by a TAG+="uaccess" in the udev rule for the device. It causes the login/logout mechanism (specifically systemd-logind on systems using systemd) to add read+write permissions to this device for locally logged-in users, and to remove those permissions on logout. If your GUI desktop environment includes the capacity of switching between multiple locally logged-in users, this mechanism is extended to grant access to the user that is currently actively holding the local seat (= the group of devices under control of the active locally logged-in user). Remote sessions, like SSH sessions, won't normally get this kind of access at all: you don't want other users to be able to remotely peek at your display or manipulate it, do you? If you do want to allow multiple users to share access to the renderD128 device, that's what the render group is for. Once the system administrator adds someone's account to the render group, that user will always be able to access that device: from remote sessions, from cron jobs, or from switched-out local GUI sessions.
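For reference, the udev mechanism this answer describes looks roughly like the fragment below in the rules shipped for DRM devices. This is an illustrative paraphrase, not copied from any specific distribution's rules file; check your own /usr/lib/udev/rules.d/ for the exact wording:

```
# illustrative udev rule excerpt for DRM render nodes
# TAG+="uaccess" tells systemd-logind to grant the active local seat
# an ACL entry on the node at login, and remove it at logout
SUBSYSTEM=="drm", KERNEL=="renderD*", GROUP="render", MODE="0660", TAG+="uaccess"
```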
How can Firefox access /dev/dri/renderD128 when it should be only root accessible
1,651,686,969,000
I recently experienced a very bizarre phenomenon. As I woke my computer from suspend, I got back to my session manager's lock screen. However, the content of the windows of the applications running in the session was displayed intermittently, and with a 180° rotation. As a picture is worth a thousand words, here is a photograph (not a screen capture) of my screen displaying the content of Thunderbird's window (the tessellation was added afterwards for obvious privacy reasons):

Notice that the bottom part of the screen, where the taskbar would go, is unaffected and correctly displays (not upside down) the background of the lock screen. Also note that Thunderbird was not the only application to be displayed. The whole thing flickers, sometimes displaying a given window for less than one second. The system is otherwise responsive. However, unlocking the session did not end the behaviour. In fact, I had already experienced it some months ago without session locking being involved: it was then an annoyance, but its happening while the session is locked is a more important privacy problem. Closing and then reopening the session (without restarting the whole computer) put an end to the phenomenon.

I am at a loss because it is difficult for me to describe the phenomenon concisely in English, and therefore to search for the topic; I also do not know where to report this bug, considering the many elements involved. Namely: the OS is KUbuntu 22.04 (up to date), with kernel 5.15.0-82-generic, KDE Plasma 5.24.7, KDE Frameworks 5.92.0, Qt 5.15.3, Xorg-server 21.1.4; the GPU is an NVIDIA GeForce GTX 1650/PCIe/SSE2, and the GPU driver is nvidia 525.125.06. Also, the Steam window was running at the time (it sometimes has display artefacts, so this may be relevant).
I would tentatively classify that as a failure to restore GPU state on resume from suspend. More specifically, it appears that the display composition information did not get properly restored. It is possible that an intermittent hardware failure (e.g. an unreliable spot in GPU memory) might be involved too. The flickering and the fact that you said Steam sometimes has display artifacts also hints at that possibility. Since you are using NVidia's proprietary driver, this should be reported to NVidia only. If you want to know more, you might post to NVidia's Linux driver development forum: you might get a more detailed answer there. As far as I've understood, a modern GPU has usually plenty of video RAM to hold most or all windows (even ones that are currently hidden) as separate bitmaps, with a separate display composition information telling it how the windows should be placed on screen (potentially overlapping each other) and which of them should and shouldn't be displayed at any given moment. Since the window bitmaps are not so different from textures, the GPU hardware options to mirror/resize/rotate/otherwise warp textures can be equally applicable to them. It seems to me that your situation was most likely caused by the display composition information getting corrupted, resulting in the GPU hardware semi-randomly rendering various window bitmaps that should have been off-screen at the time, with an incorrect mirroring/rotation applied to them. Closing and reopening the session causes the X display server to reset, which should cause a fairly major reinitialization of the GPU state, so that might explain why the problem went away.
Application window contents get displayed (upside down!) while session is locked
1,692,995,896,000
After a fresh Ubuntu Server install, the console (no GUI) output is not aligned to the left side of my 4k screen. Instead, each line starts at a third of the screen width away from the right side, and then overflows to the left side of the screen. This doesn't seem to be a problem with lower-resolution screens (tested on 1440p). My GPU is an AMD Radeon HD 8490. I have a picture here:
Your card might not be able to drive that monitor. 4K seems to be a common barrier, even for some 3-y.o. cards.
Console output not aligned on 4k screen
1,692,995,896,000
I have recently installed Ubuntu 22.10 and lost external monitor connectivity. I have tried a few things but am stuck.

OS: Ubuntu 22.10

xrandr:

HDMI-1-1 disconnected (normal left inverted right x axis y axis)
	Identifier: 0xa7
	Timestamp: 5598
	Subpixel: unknown
	Clones:
	CRTCs: 4 5 6 7
	Transform: 1.000000 0.000000 0.000000 0.000000 1.000000 0.000000 0.000000 0.000000 1.000000 filter:
	PRIME Synchronization: 1 supported: 0, 1
	dithering depth: auto supported: auto, 6 bpc, 8 bpc
	dithering mode: auto supported: auto, off, static 2x2, dynamic 2x2, temporal
	scaling mode: None supported: None, Full, Center, Full aspect
	color vibrance: 150 range: (0, 200)
	vibrant hue: 90 range: (0, 180)
	underscan vborder: 0 range: (0, 128)
	underscan hborder: 0 range: (0, 128)
	underscan: off supported: auto, off, on
	link-status: Good supported: Good, Bad
	CTM: 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1
	CONNECTOR_ID: 101 supported: 101
	non-desktop: 0 range: (0, 1)

I turned off the Wayland setting and turned on X11, and whenever I go to Settings > About, my laptop freezes. Additionally, I don't see a display layout section to set up the monitor. I have a Dell Dock and tried both DP and HDMI cables, but it didn't work. I appreciate your time.
I tried multiple tricks to find a solution inside the Linux system itself. Afterwards, I looked for a driver for the Dock connector to get my external monitor displaying, and it works. Here is what I did:

Download the Nvidia drivers, as mentioned by kepler-22: https://www.nvidia.com/download/index.aspx

Go to a terminal and run the file downloaded from the above link: sudo ./filename.run (in my case, sudo ./NVIDIA-Linux-x86_64-525.85.05.run)

Download the Ubuntu drivers for the Dell Dock from https://www.synaptics.com/products/displaylink-graphics/downloads/ubuntu

Open the downloaded folder; it has a .run file. In my case: sudo ./displaylink-driver-5.6.1-59.184.run

sudo reboot

Additionally, I had to install: sudo apt-get install evdi-dkms

Afterwards, I got an external display (LG) working through the Dell Dock with my MSI laptop.
Ubuntu 22.10 External monitor connection issue
1,692,995,896,000
I am trying to get a USB 3.0 to HDMI adapter that supports DisplayLink drivers working in Arch Linux with the Xorg server. What I have done:

I installed evdi for the in-development kernel module and the DisplayLink driver: yay -S evdi displaylink

I enabled displaylink.service.

I used the modesetting driver with AccelMethod "none" and MatchDriver "evdi" (in /etc/X11/xorg.conf.d/20-evdi.conf):

Section "OutputClass"
    Identifier "DisplayLink"
    MatchDriver "evdi"
    Driver "modesetting"
    Option "AccelMethod" "none"
EndSection

This is the official procedure from the Arch Wiki. However, when starting or enabling displaylink.service, I get the error:

● displaylink.service - DisplayLink Manager Service
     Loaded: loaded (/usr/lib/systemd/system/displaylink.service; disabled; vendor preset: d>
     Active: activating (auto-restart) (Result: exit-code) since Tue 2021-10-05 12:06:37 EDT>
    Process: 24554 ExecStartPre=/sbin/modprobe evdi (code=exited, status=1/FAILURE)
        CPU: 2ms
lines 1-5/5 (END)

Checking journalctl, I get:

The job identifier is 33183.
Oct 05 12:07:34 minnow modprobe[24572]: modprobe: FATAL: Module evdi not found in directory
Oct 05 12:07:34 minnow systemd[1]: displaylink.service: Control process exited, code=exited,>
Subject: Unit process exited

But I have confirmed that evdi is correctly installed.
Reviewing the problem, I noticed that the latest comments on the displaylink AUR package page say:

djallits commented on 2022-10-13 21:33 (UTC): DisplayLink 5.6.1-3 breaks on Linux 6.0.1-arch1-1 x86_64. I am digging into the issue now, but I just wanted to warn anyone else.

And:

jmcld commented on 2022-10-14 19:21 (UTC) (edited on 2022-10-14 19:22 (UTC) by jmcld): @djallits Linux 6.0.1-arch1-1 x86_64 breaking this is actually a problem with evdi. Fix is mainlined in evdi-git. See https://aur.archlinux.org/packages/evdi#comment-884724 for fix.

So the solution is now to uninstall evdi and install evdi-git instead.
Unable to init displaylink service for HDMI adapter
1,692,995,896,000
Simple. I have the file "longname.server" on a remote PC and I want to copy it to my PC, but... I don't remember the name because it is long, so I use tab completion.

\rsync -avP remote:^[\\\[0\\\;longname.server^[\\\[0m\\\^M

Strange characters appear on the display after I press the Tab key to complete the name. What should I check? The distro is Slackware 15.0. I see this problem happen on Slackware 15.0 but not on 14.2.
Workaround found. I start bash without reading the rc files:

bash --norc

and now the command works. Then I edited my bashrc and deleted the lines which caused the problem:

# Append any additional sh scripts found in /etc/profile.d/:
for profile_script in /etc/profile.d/*.sh ; do
  if [ -x $profile_script ]; then
    . $profile_script
  fi
done
unset profile_script

So the problem was in one of the profile scripts; the bash-completion script, I think.

Edit 2: on another Slackware 15.0 PC the problem doesn't appear, so probably the PC showing the problem is misconfigured.
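Rather than deleting the whole profile.d loop, the culprit can be narrowed down by checking each script individually. The sketch below only syntax-checks them with bash -n, so it catches parse errors but not runtime misbehaviour; for the latter, sourcing one script at a time in a bash --norc shell is the manual equivalent.

```shell
#!/bin/sh
# report profile.d scripts that fail a bash syntax check
check_profile_dir() {
    dir=$1
    for s in "$dir"/*.sh; do
        [ -e "$s" ] || continue
        if ! bash -n "$s" 2>/dev/null; then
            echo "broken: $s"
        fi
    done
}

# check_profile_dir /etc/profile.d
```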
when I use tab completion with rsync to a remote pc, magic characters appear on the display
1,692,995,896,000
I've a bunch of SSD1351 OLED displays like this one: Those are driven over an SPI interface, and I use them on the Raspberry Pi and other ARM SBCs like the NanoPi with the following libraries: https://github.com/rm-hull/OPi.GPIO https://luma-oled.readthedocs.io/en/latest/ https://pillow.readthedocs.io/en/stable/ Those ARM SBCs have integrated SPI that is exposed in Linux at /dev/spidevX.Y. Is it possible to drive this screen from a standard x86 computer? I own an FT4232H Mini Module (GPIO/UART/SPI to USB) and also found the AK-MCP2210 (USB-to-SPI bridge); however, I'm not sure they will expose the screen in /dev/, nor whether luma-oled can be used with them. Thank you.
Apparently this was already done using a FT232H. https://github.com/rm-hull/luma.oled/issues/185
x86: Drive a SSD1351 OLED Display
1,692,995,896,000
Is there some sort of display keep-awake command that could be scheduled? I have a machine I'd like to run a dashboard display during certain hours without having to interact with it every day to wake and sleep. This means I'd like only the display to wake and sleep on a schedule of my choosing without locking the desktop session and requiring a password to wake and without sleeping the entire machine (the machine itself must continue to process events 24/7). I intend to have an unprivileged "kiosk" account always signed in for this, and an unused pen-driven display for interacting with the dashboard without hauling out the keyboard/mouse. This set of requirements precludes a lot of the usual allowances afforded by standard screen savers, screen-locking, power management, etc. in that I want the display to truly sleep during its off hours unless specifically awoken by a click of the tablet's buttons, a keyboard, etc., at which point it would simply auto-sleep after a period of inactivity as usual. Note: I didn't bother specifying "I'm using Ubuntu Desktop" because I'm hoping for a generic solution that is independent of the GUI environment, even if I have to build and install it. I'd also like to avoid periodically faking keyboard/mouse events so they don't interfere with console use. Please let me know in comments if you need more specific information.
It looks like a broad solution isn't available (or easily discoverable), so I think Caffeine for Ubuntu* is my best option. Apart from the GUI utility, there is a CLI interface that could be scheduled to achieve my goals. If someone has a better answer, I'll gladly review and accept it instead. *Note: StackExchange doesn't allow alternate schemes, so I can't link apt://caffeine directly.
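As a lower-level, DE-independent alternative under X11, the raw display on/off could also be scheduled directly with cron and xset. This is a sketch I haven't verified against this setup: the display number :0 and the hours are assumptions, it requires the display to honour DPMS, and depending on the display manager the cron jobs may additionally need XAUTHORITY pointed at the session's authority file.

```
# illustrative crontab for the always-signed-in kiosk user
# 23:00 -- force the display into its DPMS "off" state
0 23 * * * DISPLAY=:0 xset dpms force off
# 07:00 -- wake the display again
0 7  * * * DISPLAY=:0 xset dpms force on
```

Any keyboard/mouse/pen input between those times would still wake the display as usual, after which the normal inactivity timeout applies, which matches the requirements above.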
Display Sleep / Wake on Schedule
1,692,995,896,000
I have two monitors. I disabled my primary monitor(eDP-1) by running this command on startup to display on secondary monitor(DP-1) only. xrandr --output eDP-1 --off The primary monitor is disabled and everything is shown on the secondary monitor. But only about three-quarters of the secondary monitor is used and the rest of the screen is black. The windows are not positioned on the top left corner only. I can hover my mouse over the black area but my window manager only covers the top left portion as show in the image. The display looks like this. Only the white portion is used and the rest is black. Display information: $ xrandr Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 16384 x 16384 eDP-1 connected primary (normal left inverted right x axis y axis) 1600x900 60.00 + 59.99 59.94 59.95 59.82 1440x900 59.89 1400x900 59.96 59.88 1440x810 60.00 59.97 1368x768 59.88 59.85 1360x768 59.80 59.96 1280x800 59.99 59.97 59.81 59.91 1152x864 60.00 1280x720 60.00 59.99 59.86 59.74 1024x768 60.04 60.00 960x720 60.00 928x696 60.05 896x672 60.01 1024x576 59.95 59.96 59.90 59.82 960x600 59.93 60.00 960x540 59.96 59.99 59.63 59.82 800x600 60.00 60.32 56.25 840x525 60.01 59.88 864x486 59.92 59.57 800x512 60.17 700x525 59.98 800x450 59.95 59.82 640x512 60.02 720x450 59.89 700x450 59.96 59.88 640x480 60.00 59.94 720x405 59.51 58.99 684x384 59.88 59.85 680x384 59.80 59.96 640x400 59.88 59.98 576x432 60.06 640x360 59.86 59.83 59.84 59.32 512x384 60.00 512x288 60.00 59.92 480x270 59.63 59.82 400x300 60.32 56.34 432x243 59.92 59.57 320x240 60.05 360x202 59.51 59.13 320x180 59.84 59.32 DP-1 disconnected (normal left inverted right x axis y axis) HDMI-1 disconnected (normal left inverted right x axis y axis) DP-2 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 477mm x 268mm panning 1920x1080+0+0 1920x1080 60.00*+ 1600x900 60.00 1280x1024 75.02 60.02 1152x864 75.00 1024x768 75.03 60.00 800x600 75.00 60.32 640x480 75.00 59.94 720x400 70.08 HDMI-2 disconnected 
(normal left inverted right x axis y axis)

I use bspwm on Ubuntu.
Okay... I finally fixed it by making the secondary monitor the primary one before disabling the internal display, using the following command:

xrandr --output "DP-2" --primary --above "eDP-1"
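To avoid hard-coding DP-2 (the port changes between docks and cables), the connected outputs can be picked out of xrandr's text first. A sketch; the parsing only relies on the `<name> connected` lines visible in listings like the one in the question:

```shell
#!/bin/sh
# list_connected XRANDR_TEXT -- print the names of connected outputs
list_connected() {
    printf '%s\n' "$1" | awk '$2 == "connected" { print $1 }'
}

# Usage against the live server (eDP is assumed to be the internal panel):
#   ext=$(list_connected "$(xrandr)" | grep -v '^eDP' | head -n 1)
#   xrandr --output "$ext" --primary --above eDP-1
#   xrandr --output eDP-1 --off
```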
xrandr - can't set fullscreen
1,692,995,896,000
So, if I boot with only one screen or use cinnamon or another desktop environment, everything looks nice, but if i boot with two screens, it looks like this: Notice how everything looks normal in the browser, however the task bar and widgets look insanely large. I suspect they are being scaled as if both screens are actually one, so it would make sense to be this big. Xrandr gives me: Screen 0: minimum 8 x 8, current 1920 x 2160, maximum 32767 x 32767 DVI-D-0 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 160mm x 90mm 1360x768 60.02 + 1920x1080 60.00* 59.94 29.97 23.98 60.05 60.00 1280x720 60.00 59.94 1024x768 75.03 70.07 60.00 800x600 75.00 72.19 60.32 720x480 59.94 640x480 75.00 72.81 59.94 HDMI-0 connected primary 1920x1080+0+1080 (normal left inverted right x axis y axis) 480mm x 270mm 1920x1080 60.00*+ 59.94 50.00 60.05 60.00 50.04 1680x1050 59.95 1600x900 60.00 1440x900 59.89 1400x1050 59.98 1280x1024 75.02 60.02 1280x800 59.81 1280x720 60.00 59.94 50.00 1152x864 75.00 1024x768 75.03 60.00 800x600 75.00 60.32 720x576 50.00 720x480 59.94 640x480 75.00 59.94 59.93 DP-0 disconnected (normal left inverted right x axis y axis) DP-1 disconnected (normal left inverted right x axis y axis) DP-2 disconnected (normal left inverted right x axis y axis) DP-3 disconnected (normal left inverted right x axis y axis) DP-4 disconnected (normal left inverted right x axis y axis) DP-5 disconnected (normal left inverted right x axis y axis) So it seems that the resolution IS correct. How can I fix this? This is how it looks if i boot with a single display, and then connect the second one after KDE has already loaded
This took me forever and many days of figuring out the problem. It turns out my DVI monitor has a broken EDID: it was telling my PC that its display size is significantly smaller than it actually is, while maintaining the resolution, which created this huge DPI difference. I'm not exactly sure what fixed it, but something did. Here's what I did:

I wanted to pass my own EDID file to the computer. I first grabbed the wrong EDID file; in order to do this, I used the NVIDIA settings tool and saved it to a temporary folder. I then followed these instructions to correct the EDID file. Specifically:

I opened the wrong EDID file using a hex editor. The display size lives at offsets 0x15 and 0x16 (width and height respectively) of the binary file. These values are in cm, which means 160 mm = 16 cm = 0x10 and 90 mm = 9 cm = 0x09. I corrected them manually and saved the file to a different place. However, this changes the checksum, so we need to fix that too. I ran edid-checksum.py < correct-edid.bin, which told me where to fix the EDID (You need to fix checksum on offset 0x7f. ox75 is BAD, should be 0x65). Then I opened the file in the hex editor again, updated the value to what is supposed to be correct, and saved it. Running parse-edid < dvi-d.bin showed me that the checksum was indeed correct.

Now, here I did two different things and I have no idea which one solved it:

First I ran nvidia-xconfig --custom-edid="DVI-D-0:/path/to/correct/edid.bin", which added the EDID to my xorg.conf file in /etc/X11.

Then I also updated it on the kernel side. In order to do that, I saved the correct EDID to /lib/firmware/edid/DVI.bin, then went to /etc/default and added the following to /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet drm.edid_firmware=DVI-D-0:edid/DVI.bin"

I then updated the bootloader configuration using grub-mkconfig -o /boot/grub/grub.cfg and restarted the computer.
Somehow, it still doesn't work on GNOME, but on Plasma everything looks fine, so that's a plus.

UPDATE: I had been using gdm3, and it worked out fine after the solution given above. Today I tried switching to sddm and everything failed again. I don't really have time to chase this bug, so if anyone is having the same problem and the solution above STILL doesn't seem to work, then try:

sudo dpkg-reconfigure gdm3
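The manual checksum step in this answer can be scripted. An EDID base block is 128 bytes whose sum must be 0 modulo 256, so byte 0x7F is simply the two's complement of the sum of the first 127 bytes. The sketch below uses only od/dd; treat it as illustrative and keep a backup of the original binary before running it on a real EDID.

```shell
#!/bin/sh
# fix_edid_checksum FILE -- recompute byte 0x7F of a 128-byte EDID base block
fix_edid_checksum() {
    f=$1
    # sum of bytes 0..126, reduced modulo 256
    sum=$(head -c 127 "$f" | od -An -v -tu1 |
          awk '{ for (i = 1; i <= NF; i++) s += $i } END { print s % 256 }')
    # checksum byte makes the total sum of all 128 bytes divisible by 256
    cs=$(( (256 - sum) % 256 ))
    # write the single checksum byte in place at offset 127 (0x7F)
    printf "$(printf '\\%03o' "$cs")" |
        dd of="$f" bs=1 seek=127 count=1 conv=notrunc 2>/dev/null
}
```

After editing the size bytes at 0x15/0x16 in a hex editor, running this on the file should leave parse-edid reporting a valid checksum.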
KDE Plasma's widgets look comically large when using two screens
1,621,168,052,000
I am running Pop!OS 20.04, on an Asus Tuf FX505DU with an Nvidia 1660TI, and sometimes when I boot up the OS, instead of the login screen I get a glitched screen (horizontal bars of pixels of different colours) and it keeps turning on and off for a bit. If I press Ctrl+Alt+F2, I can enter TTY and login, I can run sudo systemctl restart gdm and everything returns to normal. I suppose it happens because GDM gets executed before the nvidia driver is loaded? Is there any way to fix this?
I fixed this issue by simply reinstalling GDM. Originally, I thought it's a graphical driver loading issue, but apparently this did the trick. sudo apt install --reinstall gdm3 pop-desktop gnome-shell sudo systemctl reboot Edit: Here's the link to System76's article that I used: https://support.system76.com/articles/login-loop-pop/
Pop!OS 20.04 - Glitched screen instead of login screen
1,621,168,052,000
I'm trying to automate the connection of a VPN tunnel via cron upon system boot. I have a bash script triggered by root's cron, sudo crontab -e (because some other commands need elevation). I'd be fine using piactl (the CLI), but when trying to connect I get a message saying the client needs to be started, so...

One of the steps I'd like to perform is to start a GUI application (pia-client) on my non-root user's display. The following commands work directly from a terminal:

sudo su
runuser -l $username -c 'DISPLAY=:0 /opt/piavpn/bin/pia-client &> /dev/null &'

However, when I put that into my script it doesn't work. The script is executable (chmod +x /path/to/script.sh) and other commands in the same script are working. Anyone have a solution here?
This is embarrassing, as I already asked this exact question only days ago and didn't even realize...

The solution is to use the full path to the command, as the command is not in cron's PATH. I changed the code to:

sudo su
/sbin/runuser -l $username -c 'DISPLAY=:0 /opt/piavpn/bin/pia-client &> /dev/null &'

Source/credit: @steeldriver's comment on this question
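The underlying lesson, cron's minimal PATH, can also be guarded against once at the top of any cron-run script rather than hard-coding the path of each binary. A sketch (the PATH value is a typical default, adjust to taste):

```shell
#!/bin/sh
# First thing in a cron script: set an explicit PATH so that commands like
# runuser resolve the same way they do in an interactive shell.
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
export PATH

# From here on, a bare "runuser ..." works even under cron's sparse environment.
```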
starting a GUI application on another user's display via root's cron
1,621,168,052,000
I am trying to get an external monitor (Dell) working on my laptop (Dell) via a USB-C docking station (Dell), and I am having some issues. I'm running an up-to-date Manjaro (5.4.28-1).

Basically, while in KDE the second monitor is detected correctly and a signal is going out, but I have no image. I can't figure out what is going on. If I move to a tty (Ctrl+Alt+F2), the second screen comes to life. If I boot into my Windows session (same laptop), everything works fine.

I have tried the xf86-video-intel driver, but it is just broken: KDE is unusable, so I had to revert back to modesetting. Any suggestions most welcome. Many thanks!!
So, in desperation, I wiped the system and went for Kubuntu 20.04. I have a Dell developer edition, so I thought that might help. It did initially, but after a few updates it stopped working again. That made me start looking into KDE, and I finally found it: apparently something (I don't know if it is the laptop, the screen, or the docking station) doesn't like OpenGL. By changing the rendering backend to something other than OpenGL, it worked just fine. So I am assuming it would be the same fix in Manjaro. For the time being I am happy in Kubuntu; much better battery life ;) I hope this can be useful to somebody.
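For anyone wanting to make the same change outside System Settings: on Plasma 5 the compositor backend is stored in ~/.config/kwinrc. The fragment below reflects a typical Plasma 5 install, but the exact key names are my assumption; verify them against your own file before editing, and note that newer Plasma releases dropped the XRender backend entirely.

```
# ~/.config/kwinrc (illustrative excerpt)
[Compositing]
Backend=XRender
```

After changing it, restarting the compositor (e.g. logging out and back in) applies the new backend.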
XF86-video-intel vs Modesetting for a 2nd Monitor [Manjaro]