date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,413,880,624,000 |
First of all, I'm a Linux user and I know that df and tons of other CLI tools exist. I'm looking only for a GUI tool, which is what makes this question relevant, so please don't mention CLI tools.
In the GUI arena we have tons and tons of disk analyzer tools. They don't work for me: what I need is a simple OVERVIEW of all media attached to my system, in a GUI, basically what df does but with a visual usage bar. Windows does this well, as the picture below shows, and I racked my brain and the Internet to find such a tool; so far, zero.
The disk analyzers (the "where is your space used" kind) are overkill and do not allow showing the total for several disks. Also, I don't want a tool that bogs down the system when a simple check of disk space totals is enough.
|
Both GNOME Disks and GNOME System Monitor on my Ubuntu provide this:
GNOME System Monitor:
GNOME Disks:
The theme makes the view a bit unclear, but the darker orange part shows the occupied space.
| How to see system-wide hard disk free space with a GUI tool? |
1,413,880,624,000 |
How can we print the disks from lsblk in gigabytes?
lsblk -io KNAME,TYPE,SIZE,MODEL
dm-0 lvm 50G
dm-1 lvm 16G
dm-2 lvm 100G
sdb disk 1.8T AVAGO
sdc disk 1.8T AVAGO
sdd disk 1.8T AVAGO
sde disk 1.8T AVAGO
We need to print all disks in gigabytes.
We have the -b option to print sizes in bytes, but we prefer gigabytes.
|
Try this:
lsblk -b -io KNAME,TYPE,SIZE,MODEL | awk 'BEGIN{OFS="\t"} {if (FNR>1) print $1,$2,$3/1073741824"G",$4; else print $0}'
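If you don't have suitable block devices handy to test against, the conversion itself can be checked on a captured sample of lsblk -b output. The device names and byte counts below are made-up examples, not output from a real system:

```shell
# Hypothetical capture of `lsblk -b -io KNAME,TYPE,SIZE,MODEL` output;
# the SIZE column is in bytes because of -b.
sample='KNAME TYPE SIZE MODEL
dm-0 lvm 53687091200
sdb disk 1998998994944 AVAGO'

# Divide the SIZE column by 2^30 and append "G", leaving the header alone.
echo "$sample" | awk 'FNR > 1 { printf "%s %s %.1fG %s\n", $1, $2, $3 / 1073741824, $4; next } { print }'
```

Here 53687091200 bytes comes out as 50.0G and 1998998994944 bytes as 1861.7G. Using printf with %.1f gives one fixed decimal place instead of awk's default %.6g formatting.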
| how to print the disks from lsblk in GIGA [closed] |
1,413,880,624,000 |
My cloud server has an external device formatted as XFS. After expanding the volume, I will need to run the command:
$ sudo xfs_growfs /dev/sdb
Since it is critical for a service to keep writing to the device, would it be possible to run xfs_growfs without stopping the service? The usual "safe" way is to stop the service, run xfs_growfs, then start the service again. I was just wondering whether stopping the service is really needed, or whether xfs_growfs was designed to be able to resize even while reads and writes to the disk are ongoing.
|
xfs_growfs works online; there is no need to stop anything. In fact, XFS can only be grown while the filesystem is mounted.
As long as the larger /dev/sdb device size is detected properly, and you're actually using XFS on the full disk rather than on a partition, you can grow it directly. If partitions are involved, you have to grow the partition first.
| Should the storage device be stopped before using xfs_growfs? |
1,413,880,624,000 |
I have a Linux production virtual machine and I need to increase the space in the /usr/local folder. I have already attached a partitioned and formatted disk with a capacity of 300 GB, as follows.
[root@CentOS6-4SVR ~]# df -hT
Filesystem Type Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root
ext4 433G 373G 39G 91% /
tmpfs tmpfs 3.9G 0 3.9G 0% /dev/shm
/dev/xvda1 ext4 477M 46M 406M 11% /boot
/dev/mapper/VolGroup-lv_home
ext4 41G 10G 29G 26% /home
[root@CentOS6-4SVR ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvdb 202:16 0 400G 0 disk
└─VolGroup-lv_root (dm-0) 253:0 0 440G 0 lvm /
xvda 202:0 0 100G 0 disk
├─xvda1 202:1 0 500M 0 part /boot
└─xvda2 202:2 0 99.5G 0 part
├─VolGroup-lv_root (dm-0) 253:0 0 440G 0 lvm /
├─VolGroup-lv_swap (dm-1) 253:1 0 7.8G 0 lvm [SWAP]
└─VolGroup-lv_home (dm-2) 253:2 0 41.7G 0 lvm /home
xvdc 202:32 0 300G 0 disk
[root@CentOS6-4SVR ~]# vgdisplay
--- Volume group ---
VG Name VolGroup
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 7
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 2
Act PV 2
VG Size 499.50 GiB
PE Size 4.00 MiB
Total PE 127873
Alloc PE / Size 125314 / 489.51 GiB
Free PE / Size 2559 / 10.00 GiB
VG UUID jH8M8M-P8ey-apiK-888r-OhDj-ERSf-bn13N9
What methods are available to do that?
Edit 01
I tried to extend as follows, but I got the following errors. Please advise.
[root@csc-akaza-test-01 ~]# pvcreate /dev/xvdc
Couldn't find device with uuid gBPzEJ-Ir84-Q8pw-IJ8J-Bh10-ISIy-1snfWG.
Couldn't find device with uuid d4eTfV-HR7R-TLeH-tYgy-e2JC-B6sl-d5ysHa.
Physical volume "/dev/xvdc" successfully created
[root@csc-akaza-test-01 ~]# vgextend VolGroup /dev/xvdc
Couldn't find device with uuid gBPzEJ-Ir84-Q8pw-IJ8J-Bh10-ISIy-1snfWG.
Couldn't find device with uuid d4eTfV-HR7R-TLeH-tYgy-e2JC-B6sl-d5ysHa.
Volume group "VolGroup" successfully extended
[root@csc-akaza-test-01 ~]# lvresize -r -l 100%FREE VolGroup/lv_root
Couldn't find device with uuid gBPzEJ-Ir84-Q8pw-IJ8J-Bh10-ISIy-1snfWG.
Couldn't find device with uuid d4eTfV-HR7R-TLeH-tYgy-e2JC-B6sl-d5ysHa.
Cannot change VG VolGroup while PVs are missing.
Consider vgreduce --removemissing.
Cannot process volume group VolGroup
|
Since you're already using LVM and don't have a separate filesystem mounted at /usr/local, you can simply add the new disk (actually a partition on it) to the volume group and then increase the size of the root logical volume and its filesystem:
pvcreate /dev/xvdc1
vgextend VolGroup /dev/xvdc1
lvresize -r -l +100%FREE VolGroup/lv_root
WARNING
You're dealing with a production system. Make sure you understand the commands shown above BEFORE you run them. By following my advice you take full responsibility for whatever may happen.
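If the three commands succeed, you can confirm the result. The following is a sketch; pvs and vgs need root and a real LVM setup, so they are shown commented, and the VG/LV names match the question rather than any particular system:

```shell
# Verification sketch after extending the volume group and the root LV.
# pvs                  # the new PV (/dev/xvdc1) should now appear under VolGroup
# vgs VolGroup         # VFree should be back near zero after the lvresize
df -h /                # the root filesystem should show the added capacity
```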
| What are the methods to increase capacity for local in usr in linux? [closed] |
1,413,880,624,000 |
We want to verify whether RAID1 is configured on our Linux machine.
We found an answer at https://serverfault.com/questions/110843/how-to-determine-if-a-centos-system-is-raid-1,
but from the output, how can we be sure that RAID1 is configured?
And how do we interpret the output of the command dmesg | grep raid,
both when no RAID is configured and when RAID1 is configured?
|
The output in the image you attached isn't necessarily showing you what you think it is. The "megaraid_sas" bits aren't specifying any RAID level in the OS, they're specifying the use of the "megaraid_sas" driver to access the disk. Given that, you are very likely using a MegaRAID controller, which is capable of various RAID levels, but to see the configuration you would need to be in the MegaRAID BIOS screen. That information usually isn't available from the OS. When configured for RAID-1, the MegaRAID card will hide the two physical disks from the system, and will present the kernel and OS with only a single disk device.
In short: check your MegaRAID BIOS configuration.
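One thing you can check from the running system is whether Linux software RAID (md) is in use. This is only a sketch: it rules software RAID in or out, and says nothing about what a hardware MegaRAID card does internally:

```shell
# md arrays and their level (raid1, raid5, ...) are listed in /proc/mdstat.
# A hardware MegaRAID volume will NOT show up here; to the kernel it looks
# like a plain single disk.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
else
    echo "no /proc/mdstat: the md (software RAID) driver is not loaded"
fi
```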
| how to identify if raid1 or raid mirror configure on OS disk |
1,413,880,624,000 |
I'm new to Linux and how it works. I recently installed a Kali Linux image onto a 32 GB flash drive, plugged it into my PC and booted from it. All went well, so I ran the df command and got this result:
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 788M 18M 770M 3% /run
/dev/sdb1 4.0G 2.7G 1.4G 67% /lib/live/mount/persistence/sdb1
/dev/loop0 2.4G 2.4G 0 100% /lib/live/mount/rootfs/filesystem.squashfs
tmpfs 3.9G 0 3.9G 0% /lib/live/mount/overlay
/dev/sdb2 20G 5.2G 13G 29% /lib/live/mount/persistence/sdb2
overlay 20G 5.2G 13G 29% /
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 98M 3.8G 3% /tmp
tmpfs 788M 12K 788M 1% /run/user/131
tmpfs 788M 40K 788M 1% /run/user/0
total 67G 16G 49G 25% -
So how is the total size 67 GB?
Also, what are all of these partitions? I would have thought there would be just two: the system and the persistence file. Is there a reason that Kali is partitioned into this many pieces, and is there documentation on what each of them does?
Thanks!
|
Welcome to Linux... it will be good to get to know your man pages; there is a lot of information there. Also do a little web research: all of this information is 'out there somewhere'.
You are not looking at disk usage but at total filesystem usage.
If you read around a little you will see that all of your tmpfs filesystems exist in memory, and that your overlay is a mapping of persistence onto non-writable layers.
udev is a particular type of tmpfs.
Since you know that your flash drive is mounted from /dev/sdb, your usage on that device is just the sum of the writable 'persistence' allocations on that device (/dev/sdb1 and sdb2, 4.0G + 20G) plus the non-writable squashfs image, which is also on /dev/sdb but is mounted via loop0 (2.4G).
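One way to see what the flash drive itself uses, without the in-memory filesystems inflating the total, is to exclude those filesystem types. This is a sketch using standard GNU df options:

```shell
# Exclude the RAM-backed filesystem types before totalling, so the "total"
# line reflects real storage rather than tmpfs/devtmpfs mounts.
df -h --total -x tmpfs -x devtmpfs
```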
| Kali on 32gb flash drive displays 67gb storage |
1,413,880,624,000 |
We want to identify bad blocks or a disk problem with the following:
umount /grid/sdd
badblocks -n -vv /dev/sdd
Checking for bad blocks in non-destructive read-write mode
From block 0 to 20971519
Checking for bad blocks (non-destructive read-write test)
Testing with random pattern: 14.38% done, 2:46 elapsed. (0/0/0 errors)
The problem is that the verification takes a long time;
for a 5 TB disk it would need more than 30 hours.
Is there another option or tool that does this faster?
Checking a 20 GB disk took 30 minutes:
badblocks -n -vv /dev/sdd
Checking for bad blocks in non-destructive read-write mode
From block 0 to 20971519
Checking for bad blocks (non-destructive read-write test)
Testing with random pattern: done
Pass completed, 0 bad blocks found. (0/0/0 errors)
|
First of all, you can halve the running time of badblocks by using destructive (-w) mode instead of non-destructive (-n) mode.
You may also want to tune block size and number of blocks:
-b block_size
Specify the size of blocks in bytes. The default is 1024.
-c number of blocks
is the number of blocks which are tested at a time. The default
is 64.
The number of blocks is limited only by the available memory. The block size should match the block size of the disk, which nowadays is normally 4096. You can check this with:
lsblk -o NAME,PHY-SEC
As for detecting disk problems, the usual method nowadays is SMART. Modern disks will remap failing sectors, and those won't even show up in badblocks. You can either let SMART run its course and check it from time to time (smartctl -H /dev/sda), or you can force a test, e.g. smartctl -t long /dev/sda. Such a test will not (or only to a lesser degree) interfere with the normal operation of the disk. In other words, badblocks has been superseded by SMART.
| how to identify bad block or disk problem |
1,413,880,624,000 |
I have been searching for a long time for a tool that runs on the CLI on a Linux machine
and identifies which RAID configuration is defined on the disk or disks (according to the list from lsblk).
The reason is that we can't shut down the Linux machines (they are production machines) and look at the RAID controller while the OS is down.
I just can't understand why it is so difficult to capture the RAID configuration while the OS is up; there must be a tool somewhere that can identify it.
|
What's the make and model of your hardware?
At least some Fujitsu PRIMERGY servers use the megaraid_sas kernel module for their RAID controller. And Fujitsu provides a tool called ServerView RAID Manager which includes a command-line tool and a web GUI, both of which can view and modify the RAID configuration. I think amCLI was the name of their command-line tool.
Dell has a MegaCli command-line tool for their LSI MegaRAID controllers.
You should check your hardware vendor's support pages for your specific OS + hardware model combination, and see if they offer any useful downloadables.
| linux + tool that can identify RAID configuration [duplicate] |
1,413,880,624,000 |
We have a RHEL 7.2 machine, and the machine is a VM.
After one of the machine's disks failed, we ran xfs_repair /dev/sdb (in single-user mode).
Finally, after one hour, we got the following message:
could not find valid secondary superblock
Does this mean that we can't repair the disk?
|
It's impossible to answer your question since you provide zero detail about your issue. What kind of disk failure was it, and what else was done? And are you sure XFS was on /dev/sdb (the full disk) rather than on a partition? That would be slightly unusual.
So I'd just like to point out that the output you quote from xfs_repair is what you get when running it on an all-zero device that isn't, and never has been, XFS:
# truncate -s 40M foobar.img
# losetup --find --show foobar.img
/dev/loop0
# xfs_repair /dev/loop0
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!
attempting to find secondary superblock...
...........................Sorry, could not find valid secondary superblock
Exiting now.
So, in case you're using the wrong device or making a similar mistake, the output doesn't mean a thing.
Tools like xfs_repair, fsck, etc. should be used with caution; they can cause more damage. In a data recovery situation, you should always have a full disk copy or a copy-on-write layer to experiment with.
| xfs_repair /dev/sdb + could not find valid secondery superblock |
1,413,880,624,000 |
How do I verify the size of the disk that the OS is installed on?
We have Red Hat 7.2.
I will give an example:
# disk_os_size=` lsblk | grep sda `
sda 8:0 0 150G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 149.5G 0 part
# disk_os_size=` lsblk | grep sda | awk '{print $4}' `
# echo $disk_os_size
150G 500M 149.5G
So in that case the results are no good, because we also get the OS partitions,
and what we want is only the size of the OS disk, which should be 150G.
|
disk_os_size=$(lsblk -dn -o SIZE /dev/sda)
The -d (--nodeps) option prints the device itself without its partitions, so no head is needed, and -n suppresses the header. Add -b if you want the size in bytes rather than the human-readable form.
| how to find the size of the disk that OS is installed on |
1,413,880,624,000 |
I want to ask a question about the memory buff/cache.
Let's say we have a Linux machine with a disk, and some application that writes data to the disk under /var/data.
From my understanding, when the application writes data to disk, the data is also kept in the memory cache
for some time, and after a while it is evicted from the memory cache.
Is it possible to force specific data to be kept in the memory cache rather than on the disk?
The reason for my question is that we want fast reads/writes, and disks are very slow in that regard.
|
saved on memory cache
There is no such thing, really. Start from this buff/cache column:
]# free -g
total used free shared buff/cache available
Mem: 7 0 3 0 3 6
Swap: 0 0 0
This shows the amount of RAM currently used as buffers or cache. The kernel does this exactly because disks are slow, and there is often (as shown) enough RAM.
If you want to reserve part of your RAM for certain files, you can put them on a RAM disk (mount -t tmpfs ...). But then you have to remember to copy them back to disk, since a tmpfs does not survive a reboot.
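Most systems already have several such RAM-backed mounts, and creating one is a single mount call. The path and size below are examples, and the mount itself needs root, so it is shown commented:

```shell
# tmpfs mounts already present on the system (these live in RAM, not on disk):
grep tmpfs /proc/mounts

# Sketch: a dedicated 512 MB RAM disk for hot files (run as root):
# mkdir -p /mnt/fastdata
# mount -t tmpfs -o size=512m tmpfs /mnt/fastdata
# Anything placed there vanishes on unmount or reboot, so copy results back.
```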
| Linux and memory buff/cache and disk storage |
1,413,880,624,000 |
My question is about Linux and common distributions (e.g. Fedora/Debian): learning disk management in practice. I mean the layout, maybe including the file system, although my focus here is not the theory/technology behind the implementation but practical usage. For example, when I install a distribution (say Fedora), it asks me (now it is mostly automated, but I'd like to do it myself or at least know what's going on) whether I want to create boot, home, etc. partitions, and then apparently it creates some more (probably "virtual", but I don't know how). I'd like to know about that whole process; for example, if I install different distributions, is the /boot directory shared by all? Why are different types of file system recommended for different partitions? What are the differences? Tools to probe and manage disk usage and file systems (df, fdisk, parted, etc.), and so on.
I mostly search the web, but the information is very scattered and mostly shallow. What is the most appropriate and up-to-date resource for this kind of material? Again, my focus is not so much on Linux kernel internals as on usage, but with depth. Thanks!
|
I have read the rules, and they do seem to indicate that open-ended questions like this are not encouraged.
Here is a quick link to them: https://unix.stackexchange.com/help
However, in case it is allowed, I would suggest reading the CompTIA Linux+ courseware or any of the very popular certification courses. You will need more than just an understanding of disks to become conversant with how to design a partition scheme.
Things like the purpose of the box and what kind of network it is in (in the case of NAS/SAN etc.) matter, and I am sure there are many more questions you would need to understand before you could properly "know" what to design.
PS: I assume this question will be deleted, as it asks for opinions and could be seen as open-ended.
| Official resources about disc management on linux / distributions [closed] |
1,393,210,618,000 |
Suppose I press the A key in a text editor and this inserts the character a in the document and displays it on the screen. I know the editor application isn't directly communicating with the hardware (there's a kernel and stuff in between), so what is going on inside my computer?
|
There are several different scenarios; I'll describe the most common ones. The successive macroscopic events are:
Input: the key press event is transmitted from the keyboard hardware to the application.
Processing: the application decides that because the key A was pressed, it must display the character a.
Output: the application gives the order to display a on the screen.
GUI applications
The de facto standard graphical user interface of unix systems is the X Window System, often called X11 because it stabilized in the 11th version of its core protocol between applications and the display server. A program called the X server sits between the operating system kernel and the applications; it provides services including displaying windows on the screen and transmitting key presses to the window that has the focus.
Input
+----------+ +-------------+ +-----+
| keyboard |------------->| motherboard |-------->| CPU |
+----------+ +-------------+ +-----+
USB, PS/2, … PCI, …
key down/up
First, information about the key press and key release is transmitted from the keyboard to the computer and inside the computer. The details depend on the type of hardware. I won't dwell more on this part because the information remains the same throughout this part of the chain: a certain key was pressed or released.
+--------+ +----------+ +-------------+
-------->| kernel |------->| X server |--------->| application |
+--------+ +----------+ +-------------+
interrupt scancode keysym
=keycode +modifiers
When a hardware event happens, the CPU triggers an interrupt, which causes some code in the kernel to execute. This code detects that the hardware event is a key press or key release coming from a keyboard and records the scan code which identifies the key.
The X server reads input events through a device file, for example /dev/input/eventNNN on Linux (where NNN is a number). Whenever there is an event, the kernel signals that there is data to read from that device. The device file transmits key up/down events with a scan code, which may or may not be identical to the value transmitted by the hardware (the kernel may translate the scan code from a keyboard-dependent value to a common value, and Linux doesn't retransmit the scan codes that it doesn't know).
X calls the scan code that it reads a keycode. The X server maintains a table that translates key codes into keysyms (short for “key symbol”). Keycodes are numeric, whereas keysyms are names such as A, aacute, F1, KP_Add, Control_L, … The keysym may differ depending on which modifier keys are pressed (Shift, Ctrl, …).
There are two mechanisms to configure the mapping from keycodes to keysyms:
xmodmap is the traditional mechanism. It is a simple table mapping keycodes to a list of keysyms (unmodified, shifted, …).
XKB is a more powerful, but more complex mechanism with better support for more modifiers, in particular for dual-language configuration, among others.
Applications connect to the X server and receive a notification when a key is pressed while a window of that application has the focus. The notification indicates that a certain keysym was pressed or released as well as what modifiers are currently pressed. You can see keysyms by running the program xev from a terminal. What the application does with the information is up to it; some applications have configurable key bindings.
In a typical configuration, when you press the key labeled A with no modifiers, this sends the keysym a to the application; if the application is in a mode where you're typing text, this inserts the character a.
Relationship of keyboard layout and xmodmap goes into more detail on keyboard input. How do mouse events work in linux? gives an overview of mouse input at the lower levels.
Output
+-------------+ +----------+ +-----+ +---------+
| application |------->| X server |---····-->| GPU |-------->| monitor |
+-------------+ +----------+ +-----+ +---------+
text or varies VGA, DVI,
image HDMI, …
There are two ways to display a character.
Server-side rendering: the application tells the X server “draw this string in this font at this position”. The font resides on the X server.
Client-side rendering: the application builds an image that represents the character in a font that it chooses, then tells the X server to display that image.
See What are the purposes of the different types of XWindows fonts? for a discussion of client-side and server-side text rendering under X11.
What happens between the X server and the Graphics Processing Unit (the processor on the video card) is very hardware-dependent. Simple systems have the X server draw in a memory region called a framebuffer, which the GPU picks up for display. Advanced systems such as found on any 21st century PC or smartphone allow the GPU to perform some operations directly for better performance. Ultimately, the GPU transmits the screen content pixel by pixel every fraction of a second to the monitor.
Text mode application, running in a terminal
If your text editor is a text mode application running in a terminal, then it is the terminal which is the application for the purpose of the section above. In this section, I explain the interface between the text mode application and the terminal. First I describe the case of a terminal emulator running under X11. What is the exact difference between a 'terminal', a 'shell', a 'tty' and a 'console'? may be useful background here. After reading this, you may want to read the far more detailed What are the responsibilities of each Pseudo-Terminal (PTY) component (software, master side, slave side)?
Input
+-------------------+ +-------------+
----->| terminal emulator |-------------->| application |
+-------------------+ +-------------+
keysym character or
escape sequence
The terminal emulator receives events like “Left was pressed while Shift was down”. The interface between the terminal emulator and the text mode application is a pseudo-terminal (pty), a character device which transmits bytes. When the terminal emulator receives a key press event, it transforms this into one or more bytes which the application gets to read from the pty device.
Printable characters outside the ASCII range are transmitted as one or more bytes depending on the character and the encoding. For example, in the UTF-8 encoding of the Unicode character set, characters in the ASCII range are encoded as a single byte, while characters outside that range are encoded as multiple bytes.
Key presses that correspond to a function key or to a printable character with modifiers such as Ctrl or Alt are sent as an escape sequence. Escape sequences typically consist of the character escape (byte value 27 = 0x1B = \033, sometimes represented as ^[ or \e) followed by one or more printable characters. A few keys or key combinations have a control character corresponding to them in ASCII-based encodings (which is pretty much all of them in use today, including Unicode): Ctrl+letter yields a character value in the range 1–26, Esc is the escape character seen above and is also the same as Ctrl+[, Tab is the same as Ctrl+I, Return is the same as Ctrl+M, etc.
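The Ctrl+letter correspondence is easy to verify from a shell. POSIX printf prints a character's numeric value when its argument starts with a quote, which lets us inspect the control bytes directly:

```shell
# Tab, carriage return and escape are ordinary control bytes: 9, 13 and 27.
# Ctrl+I sends the same byte as Tab, Ctrl+M the same as Return.
printf '%d %d %d\n' "'$(printf '\t')" "'$(printf '\r')" "'$(printf '\033')"
# -> 9 13 27
```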
Different terminals send different escape sequences for a given key or key combination. Fortunately, the converse is not true: given a sequence, there is in practice at most one key combination that it encodes. The one exception is the character 127 = 0x7f = \0177 which is often Backspace but sometimes Delete.
In a terminal, if you type Ctrl+V followed by a key combination, this inserts the first byte of the escape sequence from the key combination literally. Since escape sequences normally consist only of printable characters after the first one, this inserts the whole escape sequence literally. See key bindings table? for a discussion of zsh in this context.
The terminal may transmit the same escape sequence for some modifier combinations (e.g. many terminals transmit a space character for both Space and Shift+Space; xterm has a mode to distinguish modifier combinations but terminals based on the popular vte library don't). A few keys are not transmitted at all, for example modifier keys or keys that trigger a binding of the terminal emulator (e.g. a copy or paste command).
It is up to the application to translate escape sequences into symbolic key names if it so desires.
Output
+-------------+ +-------------------+
| application |-------------->| terminal emulator |--->
+-------------+ +-------------------+
character or
escape sequence
Output is rather simpler than input. If the application outputs a character to the pty device file, the terminal emulator displays it at the current cursor position. (The terminal emulator maintains a cursor position, and scrolls if the cursor would fall under the bottom of the screen.) The application can also output escape sequences (mostly beginning with ^[ or ^]) to tell the terminal to perform actions such as moving the cursor, changing the text attributes (color, bold, …), or erasing part of the screen.
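For instance, the attribute and cursor sequences can be emitted with printf. This is just a sketch of a few common ECMA-48 sequences; what the bytes actually do depends on the terminal:

```shell
# \033 is the escape byte; "CSI ... m" sequences select graphic rendition (SGR).
printf '\033[1mbold\033[0m and \033[31mred\033[0m text\n'
# Cursor movement and erasing use similar sequences, e.g. clear-and-home:
# printf '\033[2J\033[H'
printf '\033[31m' | wc -c    # the whole "red" sequence is only 5 bytes
```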
Escape sequences supported by the terminal emulator are described in the termcap or terminfo database. Most terminal emulators nowadays are fairly closely aligned with xterm. See Documentation on LESS_TERMCAP_* variables? for a longer discussion of terminal capability information databases, and How to stop cursor from blinking and Can I set my local machine's terminal colors to use those of the machine I ssh into? for some usage examples.
Application running in a text console
If the application is running directly in a text console, i.e. a terminal provided by the kernel rather than by a terminal emulator application, the same principles apply. The interface between the terminal and the application is still a byte stream which transmits characters, with special keys and commands encoded as escape sequences.
Remote application, accessed over the network
Remote text application
If you run a program on a remote machine, e.g. over SSH, then the network communication protocol relays data at the pty level.
+-------------+ +------+ +-----+ +----------+
| application |<--------->| sshd |<--------->| ssh |<--------->| terminal |
+-------------+ +------+ +-----+ +----------+
byte stream byte stream byte stream
(char/seq) over TCP/… (char/seq)
This is mostly transparent, except that sometimes the remote terminal database may not know all the capabilities of the local terminal.
Remote X11 application
The communication protocol between applications and the X server is itself a byte stream that can be sent over a network protocol such as SSH.
+-------------+ +------+ +-----+ +----------+
| application |<---------->| sshd |<------>| ssh |<---------->| X server |
+-------------+ +------+ +-----+ +----------+
X11 protocol X11 over X11 protocol
TCP/…
This is mostly transparent, except that some acceleration features such as movie decoding and 3D rendering that require direct communication between the application and the display are not available.
| How do keyboard input and text output work? |
1,393,210,618,000 |
I have a Dell XPS 13 9343 2015 with a resolution of 3200x1800 pixels.
I am trying to use i3 windows manager on it but everything is tiny and hardly readable.
I managed to scale every application (Firefox, terminal, etc.) using .Xresources:
! Fonts {{{
Xft.antialias: true
Xft.hinting: true
Xft.rgba: rgb
Xft.hintstyle: hintfull
Xft.dpi: 220
! }}}
but the i3 interface still does not scale...
I have understood that xrandr --dpi 220 may solve the problem, but I don't know how or where to use it.
Can somebody enlighten me on this issue?
|
Since version 4.13 i3 reads DPI information from Xft.dpi (source). So, to set i3 to work with high DPI screens you'll probably need to modify two files.
Add this line to ~/.Xresources with your preferred value:
Xft.dpi: 120
Make sure the settings are loaded properly when X starts in your ~/.xinitrc (source):
xrdb -merge ~/.Xresources
exec i3
Note that it will affect other applications (e.g. your terminal) that read DPI settings from X resources.
| How do I scale i3 window manager for my HiDPI display? |
1,393,210,618,000 |
A few months ago, Samsung announced the Ativ Book 9 Plus, a pretty cool ultrabook with a screen resolution of 3200 x 1800 pixels (QHD+).
The device ships with Windows 8 until Windows 8.1 is released and Samsung declared that only Windows 8.1 will be able to deal with this ultra high resolution.
Now I ask myself if any Linux distribution is able to deal with such a high resolution. Especially font rendering is a point to regard. According to some early reviews of the Ativ Book 9 Plus, Windows 8 is not able to render fonts properly so that you can read text without having to put the screen just in front of your nose. That's why they say Windows 8.1 will be able to do better.
But what about Linux? Can Linux deal better with this ultra-high resolution? Maybe somebody has experience with other ultrabooks with comparable resolutions.
|
The Gnome / Wayland / X developers are working on this. As with OS X and Windows, the solution will probably involve decoupling applications' idea of a "pixel" from physical pixels. This is kind of silly, but solves the problem for software that makes assumptions about DPI and the relative size of a pixel.
There's an update on this from Gnome developer Alexander Larsson here: HiDPI support in Gnome.
| Can Linux deal with ultra high resolution displays? |
1,393,210,618,000 |
My question is simple, but I am finding it hard to frame/explain it easily.
I log into several Unix boxes with different accounts. I see two different behaviors for user1 and user2 while editing text files in vim.
user1
When I type vim filename, vim opens and I edit the file. When I close it, the complete text from the file is gone, and I see the terminal's commands/output that were previously present.
user2
When I type vim filename, vim opens and I edit the file. When I close it, the part of the file that was on the display while I was in vim still shows up on the display, and all the previous terminal output gets scrolled up. Even if the file was just one line, after exiting vim the display shows that first line, with ~ on the remaining lines, and the command prompt at the bottom of the screen.
Details
$ bash --version
GNU bash, version 3.2.25(1)-release (x86_64-redhat-linux-gnu)
Copyright (C) 2005 Free Software Foundation, Inc.
$ vim --version
VIM - Vi IMproved 7.0 (2006 May 7, compiled Jun 12 2009 07:08:36)
I compared the vimrc files for both users; I am aware of all the settings and don't find any setting/config related to this behavior.
Is this behavior related to the shell config? How do I set things up so that I get the behavior shown in the user1 scenario?
I am not able to describe this easily, and I'm finding it hard to google, as I don't know what keyword to search for this behavior. Let me know if I should elaborate further.
|
One of the reasons for that behaviour is the terminal setting for each user.
For example:
User1 is using TERM=xterm; in this case, when you exit vim it will clear the terminal.
User2 is using TERM=vt100; in this case, when you exit vim it will not clear the terminal.
Check what terminal user1 is using with echo $TERM and set the same for user2. (The underlying difference is the terminal's "alternate screen" capability, smcup/rmcup in terminfo, which xterm has and vt100 lacks.)
for bash:
TERM=xterm; export TERM
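The underlying mechanism is the terminal's "alternate screen" support: terminfo entries with the smcup/rmcup capabilities let full-screen programs like vim switch to a separate screen and restore the old contents on exit. Assuming the ncurses infocmp tool is available, you can compare the two terminal types:

```shell
# xterm's terminfo entry has the alternate-screen pair, so vim can switch
# to a separate screen and restore the old contents on exit:
infocmp -1 xterm | grep -E 'smcup|rmcup'
# vt100's entry has neither, so the last vim screen simply stays put:
infocmp -1 vt100 | grep -E 'smcup|rmcup' || echo "none defined"
```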
| How to set the bash display to not show the vim text after exit? |
1,393,210,618,000 |
What is DISPLAY=:0 and what does it mean?
It isn't a command, is it? (gnome-panel is a command.)
DISPLAY=:0 gnome-panel
|
DISPLAY=:0 gnome-panel is a shell command that runs the external command gnome-panel with the environment variable DISPLAY set to :0. The shell syntax VARIABLE=VALUE COMMAND sets the environment variable VARIABLE for the duration of the specified command only. It is roughly equivalent to (export VARIABLE=VALUE; exec COMMAND).
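A minimal demonstration of that one-command scoping (MYVAR is just an example name):

```shell
# The one-command scope of VARIABLE=VALUE COMMAND, demonstrated:
unset MYVAR
result=$(MYVAR=hello sh -c 'echo "$MYVAR"')
echo "$result"                     # the child process saw the variable
echo "${MYVAR:-<not set>}"         # the calling shell did not
```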
The environment variable DISPLAY tells GUI programs how to communicate with the GUI. A Unix system can run multiple X servers, i.e. multiple displays. These displays can be physical displays (one or more monitors), or remote displays (forwarded over the network, e.g. over SSH), or virtual displays such as Xvfb, etc. The basic syntax to specify displays is HOST:NUMBER; if you omit the HOST part, the display is a local one.
Displays are numbered from 0, so :0 is the first local display that was started. On typical setups, this is what is displayed on the computer's monitor(s).
Like all environment variables, DISPLAY is inherited from parent process to child process. For example, when you log into a GUI session, the login manager or session starter sets DISPLAY appropriately, and the variable is inherited by all the programs in the session. When you open an SSH connection with X forwarding, SSH sets the DISPLAY environment variable to the forwarded connection, so that the programs that you run on the remote machine are displayed on the local machine. If there is no forwarded X connection (either because SSH is configured not to do it, or because there is no local X server), SSH doesn't set DISPLAY.
Setting DISPLAY explicitly causes the program to be displayed in a place where it normally wouldn't be. For example, running DISPLAY=:0 gnome-panel in an SSH connection starts a Gnome panel on the remote machine's local display (assuming that there is one and that the user is authorized to access it). Explicitly setting DISPLAY=:0 is usually a way to access a machine's local display from outside the local session, such as over a remote access or from a cron job.
| What is DISPLAY=:0? [duplicate] |
1,393,210,618,000 |
From https://unix.stackexchange.com/a/17278/674
If you run ssh -X localhost, you should see that $DISPLAY is
(probably) localhost:10.0. Contrast with :0.0, which is the value
when you're not connected over SSH. (The .0 part may be omitted;
it's a screen number, but multiple screens are rarely used.) There are
two forms of X displays that you're likely to ever encounter:
Local displays, with nothing before the :.
TCP displays, with a hostname before the :.
With ssh -X localhost, you can access the X server through both
displays, but the applications will use a different method: :NUMBER
accesses the server via local sockets and shared memory, whereas
HOSTNAME:NUMBER accesses the server over TCP, which is slower and
disables some extensions.
What are the relations and differences between X server, display and
screen?
What does "the X server through both display" mean? Does a "display"
means a display server, i.e. an X server, so two "displays" means
two display servers, i.e. two X servers.
What does "multiple screens" mean? Does a "screen" mean a display
monitor?
Thanks.
|
I will give you a visual example to explain the basics of X11 and what is going on in the background:
(diagram: hosts A, B and C with their X servers, screens and terminals; original image and source link lost)
In this example you have a local X11-server with two "screens" on your hostA. Usually there would be only one server with one screen (:0.0), which spans across all your monitors (makes multi-monitor applications way easier). hostB has two X servers, where the second one has no physical display (e.g. virtual framebuffer for VNC). hostC is a headless server without any monitors.
terminal 1a, 2a, 5a, 6a:
If you open a local terminal, and set the display to :0.0 (default) or :0.1, the drawing calls for your graphical programs will be sent to the local X server directly via the memory.
terminal 1b, 5b:
If you ssh onto some server, usually the display will be set automatically to the local X server, if there is one available. Otherwise, it will not be set at all (for the reason, see terminal 3).
terminal 2b, 6b:
If you ssh onto a server, and enable X11-forwarding via the "-X" parameter, a tunnel is automatically created through the ssh-connection. In this case, TCP Port 6010 (6000+display#) on hostB is forwarding the traffic to Port 6000 (X server #0) on hostA. Usually the first 10 displays are reserved for "real" servers, therefore ssh remaps display #10 (next user connecting with ssh -X while you're logged in, would then get #11). There is no additional X server started, and permissions for X-server #0 on hostA are handled automatically by ssh.
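The 6000+display# mapping can be checked with plain parameter expansion and shell arithmetic (the display string here is just an example value):

```shell
# Deriving the TCP port behind a forwarded display such as "localhost:10.0":
d='localhost:10.0'
num=${d#*:}            # drop the host part  -> "10.0"
num=${num%%.*}         # drop the screen part -> "10"
port=$((6000 + num))
echo "$port"           # 6010, matching the tunnel on hostB above
```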
terminal 4:
If you add a hostname (e.g. localhost) in front of the display/screen#, X11 will also communicate via TCP instead of the memory.
terminal 3:
You can also directly send X11 commands over the network, without setting up a ssh-tunnel first. The main problem here is that your network/firewall/etc. needs to be configured to allow this (beware X11 is practically not encrypted), and permissions for the X server need to be granted manually (xhost or Xauthority).
To answer your questions
What are the relations and differences between X server, display and screen?
A display just refers to some X server somewhere. The term "both displays" was referring to ":0.0" on the local computer ("local display") being equal to "localhost:10.0" on the ssh-target ("TCP display"). "Screens" refers to the different virtual monitors (framebuffers) of the X server. "localhost:10.0" only redirects to the local X server; there is no X server started on the ssh-target (see scenario terminal 2b/6b).
| What are X server, display and screen? |
1,393,210,618,000 |
If I set 175% scaling in Gnome Settings, the value is saved as 1.7518248558044434 in ~/.config/monitors.xml:
<monitors version="2">
<configuration>
<logicalmonitor>
<x>0</x>
<y>0</y>
<scale>1.7518248558044434</scale>
<primary>yes</primary>
<monitor>
<monitor spec>
<connector>DP-3</connector>
Why is it so? At first, I thought it could be due to floating point rounding error, but 1.75 is one of those happy numbers whose exact value can be represented.
Gnome Wayland 43.3
|
The preset scale factors (100%, 125%, etc.) get adjusted to the closest values that give a whole number of pre-scaling virtual pixels both horizontally and vertically for your resolution; judging by your value of 1.7518248558044434 this is probably 2192 x 1233 virtual resolution and you have a 3840 x 2160 display.
Also as to why the width you would calculate with that value, 3840/1.7518248558044434 = 2191.9999520937613, is only accurate to about four places after the decimal point, clearly the scale has been converted from single-precision floating point (IEEE-754 32-bit). The double precision approximation of 3840/2192 is more like 1.7518248175182483, but if you convert that value to single-precision and back to double-precision you get 1.7518248558044434 precisely. I did it with Python, as suggested by the answer https://stackoverflow.com/a/43405711/60422:
>>> struct.unpack('f', struct.pack('f', 1.7518248175182483))[0]
1.7518248558044434
Stéphane Chazelas suggests the corresponding one-liner in Perl:
perl -e 'printf "%.17g\n", unpack "f", pack "f", 1.7518248175182483'
Why converting a floating point number to a higher precision gives a decimal representation with more digits that are of no use is the kind of floating point rounding error the question is alluding to -- the internal representation of the floating point number is in binary, and so the digits after the floating point internally (the "binary point" since it's binary) represent power of 2 fractions (1/2, 1/4, 1/8, and so on). A number you can express in a finite number of places in decimal does not necessarily have a finite representation in binary, and vice versa. For more on this see: https://stackoverflow.com/a/588014/60422
Single precision is generally said to be good for about 7 decimal significant figures and that's what we're seeing here.
To get an idea of how the adjustment of the scale factor that comes up with this number actually works, the get_closest_scale_factor_for_resolution function in mutter calculates the virtual width and height from the scale factor, and then if these aren't whole numbers, starting from the calculated width rounded down it tries whole number widths around the calculated one on both sides, expanding outward from it one pixel at a time, until it finds a width that gives an adjusted scale factor that would also make the virtual height a whole number, or until it gives up because the scale has gone out of range or out of the search threshold. https://gitlab.gnome.org/GNOME/mutter/-/blob/176418d0e7ac6a0418eea46669f33c8e3b03c4bd/src/backends/meta-monitor.c#L1960
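To see how such an adjusted value can arise, here is a toy Python sketch of that search. It is an illustration of the idea, not mutter's actual code: starting from the virtual width implied by the requested scale, it walks outward one pixel at a time until both virtual dimensions are whole numbers.

```python
from fractions import Fraction

def closest_scale(width, height, target):
    # Hypothetical helper loosely mirroring the search described above:
    # start at the virtual width implied by the requested scale, rounded
    # down, then try widths on both sides of it.
    base = int(width / target)
    for delta in range(200):
        for w in (base - delta, base + delta):
            h = Fraction(height * w, width)
            if h.denominator == 1:        # virtual height is whole too
                return w, int(h), width / w

print(closest_scale(3840, 2160, 1.75))    # (2192, 1233, 1.7518248175182483)
```

For a 3840x2160 display and a requested 175%, it lands on the 2192x1233 virtual resolution and the 1.7518248175182483 double-precision scale mentioned above.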
If you want to know why the developers decided to round the scale factors to get whole numbers of pixels, I don't have the answer for that, but my guess is backwards compatibility: developers are used to people's monitors having whole numbers of pixels, and so this is what the existing software out there is designed for.
| Why is Gnome fractional scaling 1.7518248558044434 instead of 1.75? |
1,393,210,618,000 |
Is there a Linux graphics program that displays man commands in a browser?
I need a program that allows me to display all man commands in a browser, or in some graphics program, so that they can be up all the time, rather than having to view them through terminal windows.
|
There is xman, a graphical utility for displaying manpages.
I don't know anyone who has ever used it though. It was old and archaic already 20 years ago. For your stated use case of having manual pages displayed all the time, you'd probably be better off just opening a new terminal window and typing man something than by using xman.
| Is there a Linux graphics program that displays man commands in a browser? |
1,393,210,618,000 |
Something that's always bugged me and I've been unable to find good information on:
How can you or why can't you forward an entire desktop over SSH (ssh -X)?
I'm very familiar with forwarding individual windows using ssh -X. But there are times when I'd like to use my linux laptop as a dumb terminal for another linux machine.
I've always thought that it should be possible to shut down the desktop environment on my laptop and then, from the command line, ssh into another machine and start up a desktop environment forwarded to my laptop.
Searches online come up with a bunch of third party tools such as VNC and Xephyr, or they come up with the single window ssh commands and config. But that's NOT what I'm looking for. I'm looking to understand a little of the anatomy (of xwindows?, wayland?, gdm?) to understand how you'd go about doing this, OR why it's not possible.
Note:
Xephyr isn't what I'm looking for because it tries to run the remote desktop in a window
VNC isn't what I'm looking for, for a whole bunch of reasons, not least because it's not X11 forwarding but forwarding bitmaps.
|
How can you?
I've been using the below method from the (now suspended) Xmodulo site to remote into my entire Raspberry Pi desktop from any Ubuntu machine. Works with my original RPi, RPi2 and RPi3. Of course you have to mod sshd_config to allow X11 forwarding on the remote machine (I'd say client/host, but I believe they are different in X11 from other uses and I may confuse myself). Mind the spaces -- they break this procedure frequently when I can't type.
You then have the entire desktop and can run the machine as if physically connected. I switch to Ubuntu using CTRL+ALT+F7, then back to RPi with CTRL+ALT+F2. YMMV. A quirk: You must physically release CTRL+ALT before hitting another function key when switching back and forth.
Original link: http://xmodulo.com/2013/12/remote-control-raspberry-pi.html
Original work attributed to: Kristophorus Hadiono. The referenced pictures are, sadly, lost.
UPDATE (20230411): Looks like this site is back up.
Method #3: X11 Forwarding for Desktop over SSH
With X11+SSH forwarding, you can actually run the entire desktop of Raspberry Pi remotely, not just standalone GUI applications.
Here I will show how to run the remote RPi desktop in the second virtual terminal (i.e., virtual terminal 8) via X11 forwarding. Your Linux desktop is running by default on the first virtual terminal, which is virtual terminal #7. Follow instructions below to get your RPi desktop to show up in your second virtual terminal.
Open your konsole or terminal, and change to root user.
sudo su
Type the command below, which will activate xinit in virtual terminal 8. Note that you will be automatically switched to virtual terminal 8. You can switch back to the original virtual terminal 7 by pressing CTRL+ALT+F7.
xinit -- :1 &
After switching to virtual terminal 8, execute the following command to launch the RPi desktop remotely, substituting your RPi's address. Type the pi user password when asked (see picture below).
DISPLAY=:1 ssh -X pi@<rpi-address> lxsession
You will bring to your new virtual terminal 8 the remote RPi desktop, as well as a small terminal launched from your active virtual terminal 7 (see picture below).
Remember, do not close that terminal. Otherwise, your RPi desktop will close immediately.
You can move between first and second virtual terminals by pressing CTRL+ALT+F7 or CTRL+ALT+F8.
To close your remote RPi desktop over X11+SSH, you can either close a small terminal seen in your active virtual terminal 8 (see picture above), or kill su session running in your virtual terminal 7.
| Forwarding an entire desktop over SSH without third party tools |
1,393,210,618,000 |
When you type control characters in the shell they get displayed using what is called "caret notation". Escape for example gets written as ^[ in caret notation.
I like to customize my bash shell to make it look cool. I have for example changed my PS1 and PS2 to become colorized. I now want control characters to get a unique appearance as well to make them more distinguishable from regular characters.
$ # Here I type CTRL-C to abort the command.
$ blahblah^C
^^ I want these two characters to be displayed differently
Is there a way to make my shell highlight control characters differently?
Is it possible to make it display them in a bold font or maybe make them appear in different colors from regular text?
I am using bash shell here but I did not tag the question with bash because maybe there is a solution that applies to many different shells.
Note that I do not know at what level highlighting of control characters takes place. I first thought it was in the shell itself. Now I have heard that it is readline that controls how control characters are in shells like bash. So the question is now tagged with readline and I am still looking for answers.
|
When you press Ctrl+X, your terminal emulator writes the byte 0x18 to the master side of the pseudo-terminal pair.
What happens next depends on how the tty line discipline (a software module in the kernel that sits in between the master side (under control of the emulator) and the slave side (which applications running in the terminal interact with)) is configured.
A command to configure that tty line discipline is the stty command.
When running a dumb application like cat that is not aware of and doesn't care whether its stdin is a terminal or not, the terminal is in a default canonical mode where the tty line discipline implements a crude line editor.
Some interactive applications that need more than that crude line editor typically change those settings on start-up and restore them on leaving. Modern shells, at their prompt are examples of such applications. They implement their own more advanced line editor.
Typically, while you enter a command line, the shell puts the tty line discipline in that mode, and when you press enter to run the current command, the shell restores the normal tty mode (as was in effect before issuing the prompt).
If you run the stty -a command, you'll see the current settings in use for the dumb applications. You're likely to see the icanon, echo and echoctl settings being enabled.
What that means is that:
icanon: that crude line editor is enabled.
echo: characters you type (that the terminal emulator writes to the master side) are echoed back (made available for reading by the terminal emulator).
echoctl: instead of being echoed asis, the control characters are echoed as ^X.
So, let's say you type A B Backspace-aka-Ctrl+H/? C Ctrl+X Backspace Return.
Your terminal emulator will send: AB\bC\x18\b\r. The line discipline will echo back: AB\b \bC^X\b \b\b \b\r\n, and an application that reads the input from the slave side (/dev/pts/x) will read AC\n.
All the application sees is AC\n, and only when you press Enter, so it can't have any control over the output for ^X there.
You'll notice that for echo, the first ^H (^? with some terminals, see the erase setting) resulted in \b \b being sent back to the terminal. That's the sequence to move the cursor back, overwrite with space, move cursor back again, while the second ^H resulted in \b \b\b \b to erase those two ^ and X characters.
The ^X (0x18) itself was being translated to ^ and X for output. Like B, it didn't make it to the application, as we deleted it with Backspace.
\r (aka ^M) was translated to \r\n (^M^J) for echo, and \n (^J) for the application.
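That walk-through can be reproduced against a real tty line discipline with Python's pty module. One assumption is made explicit in the code: the erase character is set to ^H for the demo, since the default erase on Linux ptys is ^? (0x7f).

```python
import os, pty, termios

# Reproduce the AB\bC^X example above with a real line discipline.
master, slave = pty.openpty()
attrs = termios.tcgetattr(slave)
attrs[6][termios.VERASE] = b'\x08'         # make Backspace = ^H for this demo
termios.tcsetattr(slave, termios.TCSANOW, attrs)

# the "terminal" types: A B ^H C ^X ^H Enter
os.write(master, b'AB\x08C\x18\x08\r')

app_sees = os.read(slave, 100)             # what a dumb application reads
echoed = b''
while not echoed.endswith(b'\r\n'):        # collect the echo sent back
    echoed += os.read(master, 100)

print(app_sees)                            # b'AC\n'
print(echoed)                              # the caret-notation echo, with
                                           # ^X echoed and then erased
```

The slave side reads only AC\n, while the bytes echoed back to the master side contain the ^X caret notation followed by the erase sequence, as described above.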
So, what are our options for those dumb applications:
disable echo (stty -echo). That effectively changes the way control characters are echoed, by... not echoing anything. Not really a solution.
disable echoctl. That changes the way control characters (other than ^H, ^M... and all the other ones used by the line editor) are echoed. They are then echoed as-is. That is, for instance, the ESC character is sent as the \e (^[/0x1b) byte (which is recognised as the start of an escape sequence by the terminal), for ^G you send a \a (a BEL, making your terminal beep)... Not an option.
disable the crude line editor (stty -icanon). Not really an option as the crude applications would become a lot less usable.
edit the kernel code to change the behaviour of the tty line discipline so the echo of a control character sends \e[7m^X\e[m instead of just ^X (here \e[7m usually enables reverse video in most terminals).
An option could be to use a wrapper like rlwrap that is a dirty hack to add a fancy line editor to dumb applications. That wrapper in effect tries to replace simple read()s from the terminal device to calls to readline line editor (which do change the mode of the tty line discipline).
Going even further, you could even try solutions like this one that hijacks all input from the terminal to go through zsh's line editor (which happens to highlight ^Xs in reverse video) relying on GNU screen's :exec feature.
Now for applications that do implement their own line editor, it's up to them to decide how the echo is done. bash uses readline for that which doesn't have any support for customizing how control characters are echoed.
For zsh, see:
info --index-search='highlighting, special characters' zsh
zsh does highlight non-printable characters by default. You can customize the highlighting with for instance:
zle_highlight=(special:fg=white,bg=red)
For white on red highlighting for those special characters.
The text representation of those characters is not customizable though.
In a UTF-8 locale, 0x18 will be rendered as ^X, \u378, \U7fffffff (two unassigned unicode code points) as <0378>, <7FFFFFFF>, \u200b (a not-really printable unicode character) as <200B>.
\x80 in an iso8859-1 locale would be rendered as ^�... etc.
| How to display control characters (^C, ^D, ^[, ...) differently in the shell |
1,393,210,618,000 |
I am generating random data and trying to convert it to a PNG image using :
head -c 1MB < /dev/urandom | hexdump -e '16/1 "_x%02X"' | sed 's/_/\\/g; s/\\x //g; s/.*/ "&"/' | tr -d "\"" | display -depth 8 -size 1000x1000+0 rgb:-
This command always shows a greyish image with some RGB pixels. What am I doing wrong?
My final goal is to generate at least one image with random data.
|
Firstly, you need to feed display RGB:- raw bytes, not an encoded hex string like you're building with that hexdump | sed | tr pipeline.
Secondly, you aren't giving it enough bytes: you need 3 bytes per pixel, one for each colour channel.
This does what you want:
mx=320;my=256;head -c "$((3*mx*my))" /dev/urandom | display -depth 8 -size "${mx}x${my}" RGB:-
To save directly to PNG, you can do this:
mx=320;my=256;head -c "$((3*mx*my))" /dev/urandom | convert -depth 8 -size "${mx}x${my}" RGB:- random.png
Here's a typical output image:
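If ImageMagick isn't available, the same 3-bytes-per-pixel layout can be written directly as a binary PPM (P6) file with nothing but the standard library. This is a sketch, not part of the original answer:

```python
import os

# A binary PPM (P6) is just a tiny text header followed by raw
# 3-bytes-per-pixel RGB data, so random bytes make a valid image.
mx, my = 320, 256
ppm = b'P6\n%d %d\n255\n' % (mx, my) + os.urandom(3 * mx * my)
with open('random.ppm', 'wb') as f:
    f.write(ppm)
```

Most image viewers and converters (including ImageMagick's convert random.ppm random.png) accept PPM directly.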
If you'd like to make an animation, there's no need to create and save individual frames. You can feed a raw byte stream straight to ffmpeg / avconv, eg
mx=320; my=256; nframes=100; dd if=/dev/urandom bs="$((mx*my*3))" count="$nframes" | avconv -r 25 -s "${mx}x${my}" -f rawvideo -pix_fmt rgb24 -i - random.mp4
| Random image generator |
1,393,210,618,000 |
I'm not able to start any GUI applications as a root user:
# pgrep -lf Xorg
1590 /usr/bin/Xorg -br -nolisten tcp :0 vt7 -auth /var/lib/xdm/authdir/authfiles/A:0-PNnJzp
# echo $DISPLAY
:0
# xeyes
No protocol specified
Error: Can't open display: :0
# firefox
No protocol specified
No protocol specified
Error: cannot open display: :0
# xcalc
No protocol specified
Error: Can't open display: :0
#
Distribution is openSUSE 11.2(2.6.31.5-0.1-default) and X.Org X Server version is 1.6.5. My DISPLAY variable is set correctly, isn't it? Any ideas what might cause this problem?
|
:0 should work, as should :0.0 (the normal default) and localhost:0, etc. Permissions are the most likely problem.
Try disabling xhost with: xhost +
(This is unlikely to work, but it is easier to try than the following, which is required if it didn't.)
So if that fails it's probably xauth.
Follow the first answer on here:
How to use xauth to run graphical application via other user on linux | Server Fault
To add the xauth key from your user logged into X to the root user.
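The linked answer boils down to copying the session's magic cookie to root. A sketch with placeholder values (your host name, display key and cookie will differ, so take them from what xauth list actually prints):

```shell
# as the normal user who owns the X session:
xauth list "$DISPLAY"
#   myhost/unix:0  MIT-MAGIC-COOKIE-1  1a2b3c...      (example output)

# as root, add that same cookie for display :0:
xauth add myhost/unix:0 MIT-MAGIC-COOKIE-1 1a2b3c...
```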
| X "Can't open display: :0" while DISPLAY variable is correct [duplicate] |
1,393,210,618,000 |
I run this command to allow me to move windows between screens:
xrandr --auto
This magic command fixes my screen for me (before I run this my 2nd monitor is just an empty space where I can move my mouse). How can I make whatever this command does stick when I reboot? I'm more interested in fixing my configuration than just re-running this command, but I'm clueless as to how to make this happen.
I have 2 monitors, DFP 5 and DFP 6. Running xrandr results in this:
DFP1 disconnected (normal left inverted right x axis y axis)
DFP2 disconnected (normal left inverted right x axis y axis)
DFP3 disconnected (normal left inverted right x axis y axis)
DFP4 disconnected (normal left inverted right x axis y axis)
DFP5 connected 1680x1050+1680+0 (normal left inverted right x axis y axis) 474mm x 296mm
1680x1050 60.0*+
1400x1050 60.0
1280x1024 75.0 60.0
1440x900 60.0
1280x960 75.0 60.0
1280x800 75.0 60.0
1152x864 60.0 75.0
1280x768 75.0 60.0
1280x720 75.0 60.0
1024x768 75.0 60.0
800x600 75.0 60.3
640x480 75.0 59.9
DFP6 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 474mm x 296mm
1680x1050 60.0*+
1400x1050 60.0
1280x1024 75.0 60.0
1440x900 60.0
1280x960 75.0 60.0
1280x800 75.0 60.0
1152x864 60.0 75.0
1280x768 75.0 60.0
1280x720 75.0 60.0
1024x768 75.0 60.0
800x600 75.0 60.3
640x480 75.0 59.9
CRT1 disconnected (normal left inverted right x axis y axis)
I have already set up DFP 6 to be right of DFP 5 using the Displays menu in debian. Here is my xorg.conf file:
Section "ServerLayout"
Identifier "aticonfig Layout"
Screen 0 "aticonfig-Screen[0]-0" 0 0
EndSection
Section "Module"
EndSection
Section "Monitor"
Identifier "aticonfig-Monitor[0]-0"
Option "VendorName" "ATI Proprietary Driver"
Option "ModelName" "Generic Autodetecting Monitor"
Option "DPMS" "true"
EndSection
Section "Device"
Identifier "aticonfig-Device[0]-0"
Driver "fglrx"
BusID "PCI:4:0:0"
EndSection
Section "Screen"
Identifier "aticonfig-Screen[0]-0"
Device "aticonfig-Device[0]-0"
Monitor "aticonfig-Monitor[0]-0"
DefaultDepth 24
SubSection "Display"
Viewport 0 0
Depth 24
virtual 3360 1050
EndSubSection
EndSection
It seems to be configured for everything to be one screen, and xrandr --auto somehow fixes it. Is there some way of taking a peek at what this command is doing to save the result to xorg.conf? How do you normally use xrandr to get the results to persist?
If I search for this I either get told to modify my xorg.conf file (which I don't know how to do because I don't know what xrandr --auto is actually doing) or instructions on how to run xrandr on startup, which I'm guessing isn't necessary, but I may be wrong.
|
I created the following file:
/etc/X11/Xsession.d/45custom_xrandr-settings and placed this line into it:
xrandr --output DFP6 --primary
This had the effect of making the correct monitor the primary one, and it launches on login.
| How can I make xrandr changes persist? |
1,393,210,618,000 |
Goal: I'm writing a very simple image viewer for framebuffer /dev/fb0 (something like fbi).
Current state:
My software takes the pixel resolution from /sys/class/graphics/fb0/virtual_size (such as 1920,1080).
And then (for each row) it will write 4 bytes (BGRA) for each 1920 row-pixels (total 4x1920=7680 bytes) to /dev/fb0.
This works perfectly fine on my one laptop with a 1920x1080 resolution.
More precisely: setting a pixel at y-row x-col => arr[y * 1920 * 4 + x * 4 + channel] where the value channel is 0,1,2,3 (for B, G, R, and A, respectively).
Problem:
When I try the same software on my old laptop with (/sys/.../virtual_size -> 1366,768 resolution), the image is not shown correctly (a bit skewed). So I played around with the pixel-width value and found out the value was 1376 (not 1366).
Questions:
Where do these 10 extra bytes come from?
And, how can I get this value of 10 extra bytes on different machines (automatically, not manually tuning it)?
Why do some machines need these extra 10 bytes, when some machines don't need them?
|
Programmatically, to retrieve information about a framebuffer you should use the FBIOGET_FSCREENINFO and FBIOGET_VSCREENINFO ioctls:
#include <fcntl.h>
#include <linux/fb.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
int main(int argc, char **argv) {
    struct fb_fix_screeninfo fix;
    struct fb_var_screeninfo var;
    int fb = open("/dev/fb0", O_RDWR);
    if (fb < 0) {
        perror("Opening fb0");
        exit(1);
    }
    if (ioctl(fb, FBIOGET_FSCREENINFO, &fix) != 0) {
        perror("FSCREENINFO");
        exit(1);
    }
    if (ioctl(fb, FBIOGET_VSCREENINFO, &var) != 0) {
        perror("VSCREENINFO");
        exit(1);
    }
    printf("Line length: %u\n", fix.line_length);
    printf("Visible resolution: %ux%u\n", var.xres, var.yres);
    printf("Virtual resolution: %ux%u\n", var.xres_virtual, var.yres_virtual);
}
line_length gives you the line stride: the number of bytes from the start of one row to the start of the next. It can be larger than xres times bytes-per-pixel because drivers may pad rows for alignment, which is where your 10 extra pixels per row come from; pixel offsets should therefore be computed from line_length rather than from the visible width.
| How can I get the number bytes to write per row for FrameBuffer? |
1,393,210,618,000 |
I want to delete a display setting which I set up via xrandr on my Linux Mint system.
xrandr --rmmode 1368x768_60.00
gives me this error:
X Error of failed request: BadAccess (attempt to access private
resource denied)
Major opcode of failed request: 140 (RANDR)
Minor opcode of failed request: 17 (RRDestroyMode)
Serial number of failed request: 32
Current serial number in output stream: 33
I also tried it with sudo, which prints out the same error.
Besides that I also deleted my monitors.xml file, with no effect.
How can I properly delete my display setting 1368x768_60.00 ?
|
As ctac_ already mentioned, the mode has to be unused. So I switched to another mode and ran xrandr --rmmode 1368x768_60.00, which nevertheless threw an error and still showed the mode in the display settings.
Rebooting after that helped, and the mode wasn't available anymore.
| Deleting specific display setting created with xrandr |
1,393,210,618,000 |
Days ago I broke my laptop display by accident; the right side of the screen is damaged, but most of the left side is usable. I did some research trying to find a way to modify the dimensions of the screen to fit into the area with no damage, and I found xrandr.
I found the following .sh script, but I can't find a way to put the screen on the left side, neither by modifying the --transform parameters nor the --fb command.
#!/bin/bash
#change these 4 variables accordingly
ORIG_X=1280
ORIG_Y=800
NEW_X=1160
NEW_Y=800
###
X_DIFF=$(($NEW_X - $ORIG_X))
Y_DIFF=$(($NEW_Y - $ORIG_Y))
ORIG_RES="$ORIG_X"x"$ORIG_Y"
NEW_RES="$NEW_X"x"$NEW_Y"
ACTIVEOUTPUT=$(xrandr | grep -e " connected [^(]" | sed -e "s/\([A-z0-9]\+\) connected.*/\1/")
MODELINE=$(cvt $NEW_X $NEW_Y | grep Modeline | cut -d' ' -f3-)
xrandr --newmode $NEW_RES $MODELINE
xrandr --addmode $ACTIVEOUTPUT $NEW_RES
xrandr --output $ACTIVEOUTPUT --fb $NEW_RES --panning $NEW_RES --mode $NEW_RES
xrandr --fb $NEW_RES --output $ACTIVEOUTPUT --mode $ORIG_RES --transform 1,0,$X_DIFF,0,1,$Y_DIFF,0,0,1
I also tried to do it without the .sh script, running the following line:
xrandr --output LVDS-1 --fb 800x768 --mode 800x768 --transform 1,0,566,0,1,0,0,0,1
The screen took the position I want but after running that command a black border on the left side of the screen appears and I can't remove it.
Any idea of what's going wrong here?
|
Just set the screen size with xrandr --fb (no --mode, --transform, whatever).
$ xrandr --fb 800x768
xrandr will complain about the screen size being too small, but will apply the settings nonetheless.
Example:
$ xrandr --fb 1520x1080
xrandr: specified screen 1520x1080 not large enough for output VGA-0 (1920x1080+0+0)
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 140 (RANDR)
Minor opcode of failed request: 29 (RRSetPanning)
Serial number of failed request: 43
Current serial number in output stream: 43
# from the xtruss output
--- ConfigureNotify(event=w#000004A8, window=w#000004A8, x=0, y=0, width=1520, height=1080, border-width=0, above-sibling=None, override-redirect=False)
$ xwininfo -root | grep geo
-geometry 1520x1080+0+0
That should probably be a warning rather than an error; there are situations where it makes perfect sense to set the screen size to something smaller than the actual display(s).
Update:
Multi-head enabled window managers get the info about the screen(s) via the Xrandr(3) and Xinerama(3) extensions, and do not clamp their dimensions inside the root window rectangle.
A temporary workaround would be to prevent them from using the Xrandr and Xinerama extensions via a LD_PRELOAD hack. That could be improved by turning the dummy functions into wrappers that trim the returned rectangles.
This worked for me on vanilla debian 9.5 with the mate desktop environment and either the lightdm or gdm3 display manager:
root# apt-get install mate-desktop-environment lightdm
root# apt-get install gcc
root# cat <<'EOT' | cc -fPIC -x c - -shared -o /etc/X11/no_xrr.so
int XineramaIsActive(void *d){ return 0; }
void *XineramaQueryScreens(void *dpy, int *n){ *n = 0; return 0; }
int XineramaQueryExtension(void *d, int *i, int *j){ return 0; }
int XRRQueryExtension(void *d, int *i, int *j){ return 0; }
EOT
root# cat <<'EOT' >/etc/X11/Xsession.d/98-no_xrr
export LD_PRELOAD=/etc/X11/no_xrr.so
case $STARTUP in
/usr/bin/ssh-agent*)
STARTUP="/usr/bin/ssh-agent env LD_PRELOAD=$LD_PRELOAD ${STARTUP#* }";;
esac
EOT
Then, from the session menu of lightdm choose "MATE", and as the logged-in user:
$ LD_PRELOAD= xrandr --fb 800x768
I wasn't able to get it to work though with either plasma or gnome3/gnome-shell/mutter yet.
| Xrandr problem trying to avoid broken display |
1,393,210,618,000 |
I want to display xclock on another computer.
On my computer (111) I am able to ping the other computer (222) inside my home network:
$ ifconfig wlan0
wlan0 Link encap:Ethernet HWaddr 44:55:66:77:88:99
inet addr:192.168.0.111 Bcast:192.168.0.255 Mask:255.255.255.0
$ ping 192.168.0.222
The router is a D-Link DIR-655 Wireless N Gigabit Router.
$ xclock -display 192.168.0.111:0
Displays the xclock on my computer (111) as expected. On the other computer (222):
$ xhost +
But then back on my computer (111), it still displays on my own screen when changed to:
$ xclock -display 192.168.0.222:0
To attempt to verify the use of the -display switch:
$ xclock -display 192.168.0.111:0.1
Error: Can't open display: 192.168.0.111:0.1
$ ping 192.168.0.333
ping: unknown host 192.168.0.333
$ xclock -display 192.168.0.333:0
I would expect it to fail, but it also displays on my computer (111), albeit with a bit of a delay. These results tell me that the display argument is getting to xclock.
$ uname -a
Linux mycomputer 3.2.0-27-generic #43-Ubuntu SMP Fri Jul 6 14:46:35 UTC 2012 i686 i686 i386 GNU/Linux
I am using LXDE rather than GNOME or KDE.
I am only attempting to get xclock to display and am not interested in addressing security issues in this question.
|
On my computer (111):
ssh -X 192.168.0.222
followed simply by:
xclock
will run xclock on the other computer (222) and display on my computer (111).
Note: for this to work, X11Forwarding must be enabled in /etc/ssh/sshd_config on computer (222)
| How to display xclock on another computer? |
1,393,210,618,000 |
This is what I find in man X:
The phrase "display" is usually used to refer to a collection of monitors that share a common set of input devices (keyboard, mouse, tablet, etc.). Most workstations tend to only have one display. Larger, multi-user systems, however, frequently have several displays so that more than one person can be doing graphics work at once. To avoid confusion, each display on a machine is assigned a display number (beginning at 0) when the X server for that display is started. The display number must always be given in a display name.
My question is: Do we need to start multiple X servers if we want to use multiple displays, or all those displays can be handled by a single X server? Is it possible to share keyboards, mice and monitors across different displays?
Edit. The display here refers to the concept defined by the X window system, not a single monitor. I know there are Xinerama and XRandR technologies that support multi-head configurations.
|
Quoting X(7):
From the user's perspective, every X server has a display name of the form:
hostname:displaynumber.screennumber
Each X server has one display (which may include multiple monitors, or even no monitors at all). Using multiple displays (in the X sense) requires multiple X servers; that's how you get multiple seats too.
As far as sharing goes, I think each X server expects to "own" the devices it's using at any given time, so you can't have input from a single keyboard going to multiple X servers simultaneously, or the output of multiple X servers combined on a single monitor. X servers can hand hardware off, which allows you to run X servers on multiple VTs and switch between them (this is how simultaneous logins are handled e.g. in GNOME). You can also nest some X servers (Xephyr, xpra...), so input goes to your main current X server, and gets passed on to the nested X server in a window; and the output of the nested X server is displayed in a window by the main X server.
On Linux, you could write a multiplexing input driver in the input layer to share input devices, but that's a different layer altogether than the X server.
| Is it possible for a X server to have multiple displays? |
1,393,210,618,000 |
I just got a new display (Samsung LC27JG50QQU, 1440p, 144hz) which is plugged into my AMD Radeon HD 6950 (DVI-D, DVI-I, HDMI 1.4, 2x Mini DisplayPort) graphics card using HDMI. However, it only lets me set 1080p max in my display settings. Cable and monitor were fine on 1440p with my MacBook Pro.
I am running Linux Mint 19.1 Tessa
This is the output xrandr gives:
Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 16384 x 16384
DisplayPort-3 disconnected (normal left inverted right x axis y axis)
DisplayPort-4 disconnected (normal left inverted right x axis y axis)
HDMI-3 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 597mm x 336mm
1920x1080 60.00* 50.00 59.94
1680x1050 59.88
1600x900 60.00
1280x1024 75.02 60.02
1440x900 59.90
1280x800 59.91
1152x864 75.00
1280x720 60.00 50.00 59.94
1024x768 75.03 70.07 60.00
832x624 74.55
800x600 72.19 75.00 60.32 56.25
720x576 50.00
720x480 60.00 59.94
640x480 75.00 72.81 66.67 60.00 59.94
720x400 70.08
DVI-0 disconnected (normal left inverted right x axis y axis)
DVI-1 disconnected (normal left inverted right x axis y axis)
VGA-1-1 disconnected (normal left inverted right x axis y axis)
HDMI-1-1 disconnected (normal left inverted right x axis y axis)
DP-1-1 disconnected (normal left inverted right x axis y axis)
HDMI-1-2 disconnected (normal left inverted right x axis y axis)
HDMI-1-3 disconnected (normal left inverted right x axis y axis)
DP-1-2 disconnected (normal left inverted right x axis y axis)
DP-1-3 disconnected (normal left inverted right x axis y axis)
lspci -k | grep -EA3 'VGA|3D|Display':
00:02.0 Display controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
Subsystem: Gigabyte Technology Co., Ltd 2nd Generation Core Processor Family Integrated Graphics Controller
Kernel driver in use: i915
Kernel modules: i915
--
01:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Cayman PRO [Radeon HD 6950]
Subsystem: Hightech Information System Ltd. Cayman PRO [Radeon HD 6950]
Kernel driver in use: radeon
Kernel modules: radeon
glxinfo | grep -i vendor:
server glx vendor string: SGI
client glx vendor string: Mesa Project and SGI
Vendor: X.Org (0x1002)
OpenGL vendor string: X.Org
EDID:
00ffffffffffff004c2d560f4d325530
071d0103803c22782a1375a757529b25
105054bfef80b300810081c081809500
a9c0714f0101565e00a0a0a029503020
350055502100001a000000fd00324b1b
5919000a202020202020000000fc0043
32374a4735780a2020202020000000ff
0048544f4d3230303034340a2020014d
02031bf146901f041303122309070783
01000067030c0010008032023a801871
|
First create the appropriate modeline with cvt
$ cvt 2560 1440
# 2560x1440 59.96 Hz (CVT 3.69M9) hsync: 89.52 kHz; pclk: 312.25 MHz
Modeline "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync
Then add the mode using xrandr --newmode
$ xrandr --newmode "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync
Finally set your display to that particular mode:
$ xrandr --addmode HDMI-3 2560x1440_60.00
$ xrandr --output HDMI-3 --mode 2560x1440_60.00
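As a sanity check on the numbers above: the pixel clock in a CVT modeline is simply the horizontal total times the vertical total times the refresh rate, so recomputing it from the totals in that modeline (3488 and 1493) should reproduce the 312.25 MHz figure:

```shell
# Recompute the CVT modeline's pixel clock from its horizontal and
# vertical totals (3488, 1493) and the 59.96 Hz refresh rate.
awk 'BEGIN { printf "%.2f MHz\n", 3488 * 1493 * 59.96 / 1e6 }'
# → 312.25 MHz
```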
EDIT 1:
Going by the OP's EDID, his monitor is reported as C27JG5x. edid-decode also reports the following:
EDID version: 1.3
Manufacturer: SAM Model f56 Serial Number 810889805
Made in week 7 of 2019
Digital display
Maximum image size: 60 cm x 34 cm
Gamma: 2.20
DPMS levels: Off
RGB color display
First detailed timing is preferred timing
Display x,y Chromaticity:
Red: 0.6523, 0.3408
Green: 0.3203, 0.6083
Blue: 0.1455, 0.0654
White: 0.3134, 0.3291
Established timings supported:
720x400@70Hz 9:5 HorFreq: 31469 Hz Clock: 28.320 MHz
640x480@60Hz 4:3 HorFreq: 31469 Hz Clock: 25.175 MHz
640x480@67Hz 4:3 HorFreq: 35000 Hz Clock: 30.240 MHz
640x480@72Hz 4:3 HorFreq: 37900 Hz Clock: 31.500 MHz
640x480@75Hz 4:3 HorFreq: 37500 Hz Clock: 31.500 MHz
800x600@56Hz 4:3 HorFreq: 35200 Hz Clock: 36.000 MHz
800x600@60Hz 4:3 HorFreq: 37900 Hz Clock: 40.000 MHz
800x600@72Hz 4:3 HorFreq: 48100 Hz Clock: 50.000 MHz
800x600@75Hz 4:3 HorFreq: 46900 Hz Clock: 49.500 MHz
832x624@75Hz 4:3 HorFreq: 49726 Hz Clock: 57.284 MHz
1024x768@60Hz 4:3 HorFreq: 48400 Hz Clock: 65.000 MHz
1024x768@70Hz 4:3 HorFreq: 56500 Hz Clock: 75.000 MHz
1024x768@75Hz 4:3 HorFreq: 60000 Hz Clock: 78.750 MHz
1280x1024@75Hz 5:4 HorFreq: 80000 Hz Clock: 135.000 MHz
1152x870@75Hz 192:145 HorFreq: 67500 Hz Clock: 108.000 MHz
Standard timings supported:
1680x1050@60Hz 16:10 HorFreq: 64700 Hz Clock: 119.000 MHz
1280x800@60Hz 16:10
1280x720@60Hz 16:9
1280x1024@60Hz 5:4 HorFreq: 64000 Hz Clock: 108.000 MHz
1440x900@60Hz 16:10 HorFreq: 55500 Hz Clock: 88.750 MHz
1600x900@60Hz 16:9
1152x864@75Hz 4:3 HorFreq: 67500 Hz Clock: 108.000 MHz
Detailed mode: Clock 241.500 MHz, 597 mm x 336 mm
2560 2608 2640 2720 hborder 0
1440 1443 1448 1481 vborder 0
+hsync -vsync
VertFreq: 59 Hz, HorFreq: 88786 Hz
Monitor ranges (GTF): 50-75Hz V, 27-89kHz H, max dotclock 250MHz
Monitor name: C27JG5x
Serial number: HTOM200044
Has 1 extension blocks
Checksum: 0x4d (valid)
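The "Manufacturer: SAM" line in that dump can be reproduced by hand from the raw EDID hex quoted in the question: bytes 8-9 pack three 5-bit letters (A=1). A hedged sketch (the decode_mfg helper name is made up for illustration):

```shell
# Decode the 3-letter PNP manufacturer ID from an EDID hex dump:
# bytes 8-9 (hex chars 17-20 of the dump) hold three 5-bit letters.
decode_mfg() {
    id=$(printf '%d' "0x$(printf '%s' "$1" | cut -c17-20)")
    for s in 10 5 0; do
        printf "\\$(printf '%03o' $(( 64 + (id >> s & 31) )))"
    done
    echo
}
decode_mfg 00ffffffffffff004c2d560f4d325530
# → SAM
```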
While this error might just as likely be caused by the radeon driver (namely the drmmode_do_crtc_dpms cannot get last vblank counter error reported in Xorg.log; a fix I am in the process of putting together in EDIT 2), in OP's case the monitor might be able to produce an output with the following modeline as reported by edid-decode:
Modeline "2560x1440" 241.500 2560 2608 2640 2720 1440 1443 1448 1481 +hsync -vsync
and then again using xrandr as follows:
$ xrandr --newmode "2560x1440" 241.500 2560 2608 2640 2720 1440 1443 1448 1481 +hsync -vsync
$ xrandr --addmode HDMI-3 "2560x1440"
$ xrandr --output HDMI-3 --mode 2560x1440
This might very well work, as both cvt and gtf fail to produce a modeline limited by the EDID-reported max dotclock of 250MHz. My own monitor (only capable of 1080p) actually tries to produce the impossible 2560x1440 resolution when given a modeline properly limited by the EDID max dotclock, unlike with the cvt modeline, which completely shuts the monitor down into standby mode with a message on the screen that says "input not available".
In OP's case it was necessary to further drop the refresh rate through limiting the dotclock so the following two modelines may need to be used instead of the one above.
xrandr --newmode "2560x1440_54.97" 221.00 2560 2608 2640 2720 1440 1443 1447 1478 +HSync -VSync
xrandr --newmode "2560x1440_49.95" 200.25 2560 2608 2640 2720 1440 1443 1447 1474 +HSync -VSync
One additional important point is to make sure that the GPU clock as specified by the driver is also capable of the chosen bandwidth by checking the value reported by:
grep -iH PixClock /var/log/Xorg.*
, and even more importantly that the revision of the cable standard you are using (HDMI or DisplayPort) supports the required bandwidth.
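The dotclock comparison itself can be sketched numerically, using values from this answer (the cvt mode's 312.25 MHz pixel clock versus the EDID-reported 250 MHz maximum; the fits_dotclock helper name is made up):

```shell
# Compare a modeline's pixel clock against the EDID max dotclock (MHz).
# The cvt-generated mode exceeds this monitor's 250 MHz limit, which is
# why the reduced-clock modelines above were needed.
fits_dotclock() {
    awk -v m="$1" -v max="$2" 'BEGIN { exit !(m <= max) }'
}
fits_dotclock 312.25 250 && echo "mode fits" || echo "mode exceeds EDID limit"
fits_dotclock 241.50 250 && echo "mode fits" || echo "mode exceeds EDID limit"
```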
| How to enable 2560x1440 option for display in Linux Mint? |
1,393,210,618,000 |
What does Settings → Display → Adjust for TV do in GNOME? I'm using Wayland in case that makes any difference. A web search came up with nothing concrete other than bug reports related to or mentioning this feature. The GNOME wiki page doesn't explain it.
|
If you search for "HDTV overscan" you will find that, historically, there was no standard for the size of CRT screens (the bulky non-flat displays nobody uses anymore). This made it difficult to show an image that filled the screen cleanly, so a trick called "overscan" was used. Overscan essentially "cuts off the edges" of a picture with the goal of fitting it (via resizing) to your screen. This led to the first attempts to define standards (title-safe/action-safe/underscan) for which parts of the picture could be cut, the last of which (underscan) means "cut nothing".
Nowadays, you want "cut nothing" because everyone has a flat TV with some standard resolution (e.g. 1080p), and furthermore when you connect a gaming console (PlayStation, Xbox, Switch) or a computer (laptop) you most definitely don't want the edges of your screen cut off (you want "underscan").
Now, if you connect a PC to a TV and notice the corners missing (e.g. the start menu of Windows at the bottom is not visible or slightly cut off) it usually means your TV is clipping the image and/or resizing it. Usually you would go into your TV settings to "turn this off" (i.e. switch to "underscan"). This is different per manufacturer, for example:
Samsung TV: go to Menu / Picture / Picture Options / Size / Screen Fit (instead of 16:9).
LG TV: go to Settings / Picture / Aspect Ratio / Just Scan (instead of 16:9)
Sony TV: hit Home button, go to Settings / Screen / Display Area / Full Pixel
Sharp TV: hit View Mode button, select "Dot by Dot" or "Full screen"
However, what if you have a TV where you can't do this, or don't know how to? GNOME HAS YOU COVERED! Turn "Adjust for TV" on and Gnome will resize the image to create a blank space around it (the thing that the TV cuts off) so that when the TV does its thing you get back the full image!
Basically, 99% of the time you're better off figuring out how to tell your TV to just display the exact whole thing rather than using this (IMHO), but it's there if you need it...
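For a concrete sense of the geometry such compensation produces (hedged: the 5% margin per edge is an assumed example, not GNOME's actual value), shrinking a 1920x1080 framebuffer by that margin gives:

```shell
# Shrink a 1920x1080 framebuffer by a 5% overscan margin on each edge,
# the kind of scaled-and-offset geometry "Adjust for TV" has to produce.
awk 'BEGIN {
    w = 1920; h = 1080; pct = 5          # assumed margin per edge, in %
    printf "scaled: %dx%d, offset: +%d+%d\n",
        w - 2*w*pct/100, h - 2*h*pct/100, w*pct/100, h*pct/100
}'
# → scaled: 1728x972, offset: +96+54
```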
| What does the "Adjust for TV" option in GNOME do? |
1,393,210,618,000 |
I can blank/turn off display on my laptop with following command:
xset dpms force off
then, any mouse movement or keyboard press "wakes up" the display.
Is it possible to ignore mouse movements, and only unblank the screen on keyboard action?
If this is not possible in xset at the moment, I would welcome any suggestion how to patch the sourcecode.
I am using Debian 10.
|
In Debian 10, gnome-screensaver-command -a can be used to blank the screen (I set this up as a shortcut on Win-Shift-L). Mouse movement and modifier keys will turn the monitors back on but won't exit gnome-screensaver unless a normal key is pressed or a mouse button is clicked (or dragged?). The monitors turn back off after a very short delay while gnome-screensaver is active.
Blank screen can also be done automatically after a delay the same way you would normally lock the screen automatically, just disable Automatic Screen Lock in Gnome Settings
Unfortunately the curtain appears to go away on mouse movements alone (and even other reasons?) in Gnome 3.38 (Debian 11 Bullseye); the net effect is that if anything unblanks the screen I get back to the desktop with normal blank timeout (set at 10 minutes for me, and this happens quite often even while I'm not at my computer at all) instead of blanking back after just a few seconds. I'm still looking for a way to re-enable it.
| xset: ignore mouse movement when display is blanked |
1,393,210,618,000 |
I've noticed this when trying to watch movies on that laptop running eOS. After 10 minutes or so the display is turned off.
I've looked for settings against this and found the following:
Power setting: put the computer to sleep: I set that to 'Never'. But it couldn't be this setting, my problem being that the display is shut, not that the computer is put to sleep.
Brightness and lock: Brightness: Turn screen off when inactive for: set that to 'Never'. That should be it but it does not work.
Because I'd experienced a similar issue with GUI settings for display not being followed in another Ubuntu based distro - Xfce - reported here - I imagined also that a screensaver setting was the matter. I've found a situation similar to that and tried that solution. Only that, unlike in Xfce, now a gnome-screensaver was installed but without accessible GUI settings for it. So, it looked like a certain blank-screen screensaver was active in the background. To get a GUI for screensaver I installed xscreensaver. When starting that I was prompted that gnome-screensaver was already running and asked to shut it down. Said yes and then disabled screensaver in Xscreensaver.
Afterwards I also uninstalled gnome-screensaver, but the same problem would still reappear.
|
Background
There are 2 solutions that were determined for this particular problem. The 1st involved launching xscreensaver, and disabling it so that no screensaver is configured.
The 2nd method involved completely disabling the screensaver in X altogether, through the use of the xset command.
Solution #1
A solution with a narrow scope (by cipricus) is that of adding a fourth step to those included in the answer.
Install xscreensaver
Remove gnome-screensaver
Set Xscreensaver NOT to use any screensaver ('Disable screensaver')
Add xscreensaver in the startup programs list. The command to add is:
xscreensaver -no-splash
This solution was suggested by the fact that this message appeared when starting xscreensaver before adding the fourth step:
Further instructions came from this source.
NOTE: To add a program to the startup list in eOS, go to System Settings > Startup Applications > Add
Solution #2
A solution with a wider scope by slm:
xset
Check to see what the xset setting is for screen blanking as well. You can check using this command:
$ xset q
We're specifically interested in this section of the output from the above command:
$ xset q
...
Screen Saver:
prefer blanking: yes allow exposures: yes
timeout: 600 cycle: 600
...
Disabling screensaver
You can change these settings like this:
$ xset s off
$ xset s noblank
Confirm by running xset q again:
$ xset q
...
Screen Saver:
prefer blanking: no allow exposures: yes
timeout: 0 cycle: 600
...
DPMS
You might also need to disable power management as well, that's the DPMS settings in the xset q output:
$ xset q
...
DPMS (Energy Star):
Standby: 0 Suspend: 0 Off: 0
DPMS is Enabled
Monitor is On
...
Disable it like so:
$ xset -dpms
Confirm:
$ xset q
...
DPMS (Energy Star):
Standby: 0 Suspend: 0 Off: 0
DPMS is Disabled
...
Re-enabling features
You can re-enable these features at any time with these commands
$ xset s blank # blanking screensaver
$ xset s 600 600 # five minute interval
$ xset +dpms # enable power management
Confirming changes:
$ xset q
...
Screen Saver:
prefer blanking: yes allow exposures: yes
timeout: 600 cycle: 600
...
...
DPMS (Energy Star):
Standby: 0 Suspend: 0 Off: 0
DPMS is Enabled
Monitor is On
...
| Display shuts down while watching a movie after 10 minutes no matter the settings in Elementary OS |
1,393,210,618,000 |
I have a BenQ gw2765 monitor with 2560x1440 resolution... but my computer will only give it a maximum of 1920x1080 resolution.
The monitor is connected to my Lenovo Thinkpad X1 laptop via an HDMI-MiniDP connector. The laptop is running a brand-new installation of KDE's Neon (based on Ubuntu): KDE neon 5.11, KDE Plasma Version 5.11.2, KDE Fameworks Version 5.29.0, Qt Version 5.9.1.
A few years ago I tried in vain to get this working with some xrandr stuff. I was hoping that with this new installation would just work.
When I dig around the internet I'm surprised how little I see about this; I saw recommendations to "just use Gnome3 because it works" on one end of the spectrum and on the other end of the spectrum were questions by people who actually know what xrandr is.
I don't know what the x-server is or how it works, but if I need to edit some xorg.conf file or something I'm ready to try. I'd just love a little guidance, or a point-in-the-right-direction, in case your wisdom might help me avoid breaking things :)
edit:
$ sudo lshw -c video
description: VGA compatible controller
product: 3rd Gen Core processor Graphics Controller
vendor: Intel Corporation
physical id: 2
bus info: pci@0000:00:02.0
version: 09
width: 64 bits
clock: 33MHz
capabilities: msi pm vga_controller bus_master cap_list rom
configuration: driver=i915 latency=0
resources: irq:28 memory:f0000000-f03fffff memory:e0000000-efffffff ioport:4000(size=64) memory:c0000-dffff
Also:
When I sudo apt-get install xserver-xorg-video-all it wants to remove a package named "neon-desktop" which sounds dangerous as I'm using https://neon.kde.org/. ((and I'm ultra-cautious of removing packages right now because last week apt-get install ruby-dev uninstalled so much stuff that it resulted in kernel panic every time I tried to boot {hence the brand-new os now}))
I found https://forum.kde.org/viewtopic.php?f=309&t=141545#p380311
which sounds quite identical to the issue I'm facing, but that solution is not working for me
$ cvt 2560 1440 60
# 2560x1440 59.96 Hz (CVT 3.69M9) hsync: 89.52 kHz; pclk: 312.25 MHz
Modeline "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync
$ sudo xrandr --newmode "2560x1440_60.00" 312.25 2560 2752 3024 3488 1440 1443 1448 1493 -hsync +vsync
$ sudo xrandr --addmode HDMI-1 2560x1440_60.00
This appears like it would work: it adds a display setting that's listed by xrandr and then the new size is available in KDE's system settings... but when I select it in the system settings gui and click 'apply' it resets back to the previously selected setting. I've tried toggling back and forth between different sizes but won't display at the proper large size.
The current xrandr with the 2560x1440_60.00 at the bottom:
Screen 0: minimum 320 x 200, current 1920 x 1980, maximum 8192 x 8192
LVDS-1 connected primary 1600x900+160+1080 (normal left inverted right x axis y axis) 309mm x 174mm
1600x900 59.97*+
1440x900 59.89
1360x768 59.80 59.96
1152x864 60.00
1024x768 60.04 60.00
960x720 60.00
928x696 60.05
896x672 60.01
960x600 60.00
960x540 59.99
800x600 60.00 60.32 56.25
840x525 60.01 59.88
800x512 60.17
700x525 59.98
640x512 60.02
720x450 59.89
640x480 60.00 59.94
680x384 59.80 59.96
576x432 60.06
512x384 60.00
400x300 60.32 56.34
320x240 60.05
VGA-1 disconnected (normal left inverted right x axis y axis)
HDMI-1 connected 1920x1080+0+0 (normal left inverted right x axis y axis) 597mm x 336mm
1920x1080 60.00* 50.00 59.94
1920x1080i 60.00 50.00 59.94
1680x1050 59.88
1600x900 60.00
1280x1024 75.02 60.02
1280x800 59.91
1152x864 75.00
1280x720 60.00 50.00 59.94
1024x768 75.03 60.00
832x624 74.55
800x600 75.00 60.32
720x576 50.00
720x576i 50.00
720x480 60.00 59.94
720x480i 60.00 59.94
640x480 75.00 60.00 59.94
720x400 70.08
2560x1440_60.00 59.96
DP-1 disconnected (normal left inverted right x axis y axis)
And then when I reboot the computer the 2560x1440_60.00 is no longer listed by xrandr.
|
This isn't an answer, strictly speaking, but I didn't want it to get buried in the existing comment section:
This appears like it would work: it adds a display setting that's listed by xrandr and then the new size is available in KDE's system settings... but when I select it in the system settings gui and click 'apply' it resets back to the previously selected setting.
I experienced this exact problem and struggled to find other reports of this issue online. I double-checked my GPU's spec sheet, which claimed that its 4K display support was just a few pixels shy of my actual display resolution (3280x2160 display vs. 3280x2000 supported by GPU). This spec conflicted with the computer manufacturer's spec sheet, which claimed support for resolutions up to 4096x2160.
In a last-ditch effort, I replaced my HDMI cable with a DisplayPort cable and now it works up to 3280x2160@30Hz. Oddly, the HDMI cable was not the problem—I had used the very same cable to connect the monitor to my laptop with no problems whatsoever.
If anyone else experiences this problem, I would encourage you to:
Find the spec sheet for your GPU online. In my case, I followed the instructions on this official Intel support page, which led me to the first spec sheet linked above.
If all else fails, try connecting via DisplayPort rather than HDMI (or vice versa).
As the comment suggests, the converter may be the problem. If going from MiniDP (on your computer) to HDMI (monitor) is problematic, going from MiniDP to DisplayPort might work (hopefully the monitor has DisplayPort).
| how to enable full resolution on large monitor (in KDE)? |
1,393,210,618,000 |
My secondary monitor is partially broken in the top 1/3 of the screen, so I want to add black bars and have something like in the picture below.
The top part shows how everything is now; the bottom part is what I want to achieve. The resolutions listed there are the maximum for each screen.
This is the xrandr output for the second screen:
VGA1 connected 1920x1080+1366+12 (normal left inverted right x axis y axis) 890mm x 500mm
1920x1080 60.00*+
1600x1200 60.00
1680x1050 59.95
1280x1024 75.02 60.02
1440x900 74.98 59.89
1280x960 60.00
1360x768 60.02
1280x800 59.81
1152x864 75.00
1024x768 75.08 70.07 60.00
832x624 74.55
800x600 72.19 75.00 60.32
640x480 75.00 72.81 66.67 60.00
720x400 70.08
|
Possible workarounds:
Use empty panels covering the damaged region to force windows to use the remaining space. For example, xfce4-panel can be configured for this. How well this works depends on your desktop environment: Xfce and LXDE will work fine; Gnome will make problems, I assume.
This does not help for fullscreen applications that would cover the panel, too, for example, firefox+F11, or VLC in fullscreen.
Workaround for fullscreen applications: Starting Xephyr with the desired screen size, positioning it and starting applications inside it. Automate this with a script and xdotool:
Xephyr :1 -screen 1500x800x24 &
xdotool search --name Xephyr windowmove 0 437
Start applications in Xephyr window with DISPLAY=:1 firefox. Xephyr does not support hardware acceleration, but virtualgl can help here.
Best workaround:
Use weston with Xwayland. It supports hardware acceleration and fullscreen applications.
Use a quite lightweight window manager like openbox at startup (or even better, one without window decorations at all like evilwm). It serves as background environment only, weston will cover it.
Create a custom myweston.ini file like this one (see man weston.ini):
[core]
shell=desktop-shell.so
idle-time=0
[shell]
panel-location=none
locking=false
[output]
name=X1
mode=1366x768
[output]
name=X2
mode=1500x768
Create a script like this one to start weston in evilwm and Xwayland in Weston (customize positions of 2 weston windows). Finally, start your desired desktop environment:
# start weston with custom config and two output windows
weston --socket=wayland-1 --config=$HOME/myweston.ini --output-count=2 >$HOME/weston.log 2>&1 &
sleep 1 # wait until weston is ready
# get window id's from logfile and move windows at target (xwininfo could give id's, too)
xdotool windowmove 0x$(printf '%x\n' $(cat $HOME/weston.log | grep 'window id' | head -n1 | rev | cut -d' ' -f1 | rev)) 0 0
xdotool windowmove 0x$(printf '%x\n' $(cat $HOME/weston.log | grep 'window id' | tail -n1 | rev | cut -d' ' -f1 | rev)) 1369 400
# start X server Xwayland in weston
WAYLAND_DISPLAY=wayland-1 Xwayland :1 &
sleep 1 # wait until Xwayland is ready
# start your desired desktop environment
DISPLAY=:1 startlxde
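The rev | cut | rev pipeline in the script above just grabs the last whitespace-separated field of the log line (the decimal window id), which printf '%x' then converts to the hex form xdotool expects. Demonstrated on a made-up sample line (the actual weston log format may differ):

```shell
# Extract the trailing decimal window id from a sample log line
# (format is an assumption) and convert it to hex for xdotool.
line='weston: created window id 58720263'
dec=$(printf '%s\n' "$line" | rev | cut -d' ' -f1 | rev)
printf '0x%x\n' "$dec"
# → 0x3800007
```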
The start script above does not set up cookie authentication for X clients. Instead, you can use x11docker to have cookie authentication, too:
# start weston and Xwayland, read X environment
read Xenv < <(x11docker --weston-xwayland --gpu --output-count=2 --westonini=$HOME/myweston.ini)
# move weston windows to target locations
xdotool windowmove $(xwininfo -name "Weston Compositor - X1" | grep "Window id" | cut -d' ' -f4) 0 0
xdotool windowmove $(xwininfo -name "Weston Compositor - X2" | grep "Window id" | cut -d' ' -f4) 1367 400
# start desired desktop environment
env $Xenv startlxde
Xwayland appears as a client "window" of weston. Unfortunately, due to a bug in Weston or Xwayland, it does not always sit at position 0:0. You can move Xwayland to desired position with [META]+left-mouse-button.
I wrote a bug report, but got no response.
| My monitor is partially broken. How can I decrease the viewable space using xrandr or any similar tool? |
1,393,210,618,000 |
In a dual monitor system is it possible to mirror a single window and not the entire screen?
Put it another way, can I ask an X application to open simultaneously on DISPLAY :0.0 and :0.1?
Basically this can be useful for presentations, where one needs to send to the projector just a copy of the PDF (or the like) window.
My window manager is Openbox.
|
For this specific requirement with X11 and Openbox, I don't know if such hackery is possible, but with VNC it is quite easy to achieve what you are after.
Mirroring a single X application with x11vnc
Get id of window that you want to mirror: xwininfo
x11vnc -id {replace-by-window-id}
Probably you will have to install x11vnc but you can use whatever VNC client is installed already on your pc (Remmina on Ubuntu), just mind the port number given by x11vnc.
Mirroring the whole screen with default apps
Most of the popular linux distros already have a vnc server and client installed. On Ubuntu the VNC server is vino and Remmina the client, installing them is straight forward, something like sudo apt-get install vino remmina or the equivalent sudo yum install vino remmina.
To check if Vino is installed, launch vino-preferences; if you get a preferences window, you already have it. Enable the sharing, and in the security section require a confirmation dialog on connection, or a password.
Once this is done you can start the Vino server by /usr/lib/vino/vino-server (at least for Ubuntu this is working). Enter this command in your startup apps if you wish vino to start automatically.
Then you just need to connect with Remmina: select VNC as the connection type, enter 0.0.0.0:5900 in the address input box and press Connect! If you have set it so in vino-preferences, a dialog should pop up asking whether you allow the remote connection. After you allow it you'll get one of the dual screens mirrored on the other one. Problem solved.
If you are on a secure network, so that speed and quality can be your utmost priority, you could enable connections to your vino server without encryption: gsettings set org.gnome.Vino require-encryption false
| Mirror a single X application on a dual-monitor setup with Openbox |
1,393,210,618,000 |
I often run simulations or other processes on my desktop for hours or days where I don't touch the machine.
During this time, I'd like the display to sleep and I want the discrete graphics card to sleep or turn off with it, using as little power as possible. (edit: I can physically power off the monitor, so that's why I'm asking about the GPU.)
How should I do it? Is it possible to force the graphics card to either turn off or idle until I come back to the computer and move the mouse / bang the keyboard?
If it matters, the current card is an AMD Radeon, I'm running Arch.
|
Preface: Whether this works is highly dependent on your hardware.
Since you're using a Radeon card, and all more recent cards (starting with GCN 1.0 circa 2011) support something called ZeroPower, the first step would be to check if it's maybe just a DPM issue. Try forcing your card into a low power state. I assume that you're using the open source drivers, since fglrx is pretty much useless on Arch. Try
echo low > /sys/class/drm/card0/device/power_dpm_force_performance_level
to force the card into its lowest power state, turn off your screen and see if the fan turns off. For more information about DPM you can look here.
If the above fails or you simply want to try it out, you can also try removing/disabling the card. A word of caution: Playing with PCIe hotplug can be a very entertaining way to crash your system. To do so, stop Xorg and unload the kernel module your GPU driver uses (probably radeon in your case). Afterwards, find out how your GPU is identified (lspci; some line will contain something like 01:00.0 VGA compatible controller). Using this number, you can remove the GPU from the bus by doing (adjust the numbers)
echo 1 > /sys/bus/pci/devices/0000\:01\:00.0/remove
This will not turn off the power but hopefully cause the GPU to power down since it's no longer attached. To reattach it you can try to redetect it using
echo 1 > /sys/bus/pci/rescan
via SSH or rebooting the machine (probably also via SSH).
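Note that the backslashes in the remove command above are only shell escaping; the sysfs path itself contains plain colons. A small hedged helper (the function name is made up) that builds the path from an lspci-style address so the step can be scripted:

```shell
# Build the sysfs device path for a PCI address as printed by lspci
# ("01:00.0"); domain 0000 is assumed, as on most single-domain systems.
pci_sysfs_path() { printf '/sys/bus/pci/devices/0000:%s' "$1"; }
pci_sysfs_path 01:00.0
# → /sys/bus/pci/devices/0000:01:00.0
```

Removal would then be, as root: echo 1 > "$(pci_sysfs_path 01:00.0)/remove"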
| Display sleep: "sleep" graphics card? |
1,377,304,267,000 |
Currently, when I am not using the screen utility, the Vim content is wiped from the display when I quit Vim.
However, when I am using the GNU Screen utility and open a file in one of the screen windows and then close it, the file content remains on the display. It is not wiping the file content from the display the way it does when I am not using GNU Screen.
I found the below post where it has been discussed without GNU Screen.
How to set the bash display to not show the vim text after exit?
In my case, in both the scenario [with and without GNU Screen] the Terminal Type is “xterm”. But the behavior is different when I close a VIM file.
Kindly help.
|
GNU screen supports the xterm alternate-screen feature using the altscreen setting in your .screenrc file. According to the manual:
— Command: altscreen state
(none)
If set to on, "alternate screen" support is enabled in virtual terminals, just like in xterm. Initial setting is ‘off’.
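For example, adding this line to your ~/.screenrc enables the behaviour (restart screen, or open a new window, for it to take effect):

```
altscreen on
```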
A quick check shows that screen is actually simulating the feature, because it clears and/or restores the screen contents itself without sending the control sequence used by xterm. The screen feature works whether or not the actual terminal (or its terminal description) supports the alternate screen feature. You can test this by setting TERM to "vt100" before running screen.
You can read more about the alternate screen feature in the xterm FAQ Why doesn't the screen clear when running vi?
| Set the bash display [while using GNU Screen utility] to not show the vim text after exit |
1,377,304,267,000 |
I have a project that needs to detect the DISPLAY environment variable from a shell (bash) script, to be able to display some GUI elements on the local machine.
Or, better, a solution (dbus?) to open GUI applications from a non-interactive shell without trying to figure out DISPLAY and XAUTHORITY at all.
I can set DISPLAY=:0 but that will fail if a user uses another session.
Since I'm not in interactive mode, what I tried (works well, but only as root) is:
strings /proc/$(pidof Xorg)/environ | grep -Eo 'DISPLAY=:[0-9]+(:[0-9])*'
or as user:
ps uww $(pidof Xorg) | grep -oE '[[:blank:]]:[0-9]+(:[0-9])*\b'
But I don't know if it's reliable on any Linux (Unixes?)
Is there a more reliable/portable way?
|
Final solution, not requiring root, accessible from a non-interactive shell in an automated way, and more advanced and usable than the possible-duplicate link provided earlier:
- XAUTHORITY:
ps -u $(id -u) -o pid= |
xargs -I{} cat /proc/{}/environ 2>/dev/null |
tr '\0' '\n' |
grep -m1 '^XAUTHORITY='
- DISPLAY:
ps -u $(id -u) -o pid= |
xargs -I{} cat /proc/{}/environ 2>/dev/null |
tr '\0' '\n' |
grep -m1 '^DISPLAY='
The snippet lists all of the user's PIDs, iterates over them, then stops at the first match
Based on this
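The two snippets can be folded into one reusable helper; a sketch (the function name is mine, and it assumes a Linux-style /proc filesystem):

```shell
# find_session_var VAR: print the first VAR=value pair found in any of
# the current user's process environments, or nothing if no match.
find_session_var() {
    ps -u "$(id -u)" -o pid= |
        xargs -I{} cat "/proc/{}/environ" 2>/dev/null |
        tr '\0' '\n' |
        grep -m1 "^$1=" || true     # don't fail the script on no match
}

find_session_var DISPLAY
find_session_var XAUTHORITY
```

You could then use it as eval "export $(find_session_var DISPLAY)" before launching GUI programs.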
| What is the best way to find the current DISPLAY and XAUTHORITY in non interactive shell for the current user? [duplicate] |
1,377,304,267,000 |
Can the output of a find command include the files' permissions? For example, when executing find / -name filename 2> /dev/null and I get results, is it possible to have my results include the file permissions? Thanks in advance!
|
Check out this command:
$ find . -name '*.sh' -printf 'depth=%d/sym perm=%M/perm=%m/size=%s/user=%u/group=%g/name=%p/type=%Y\n'
depth=1/sym perm=-rwxr-xr-x/perm=755/size=1678/user=root/group=root/name=./yadpanned.sh/type=f
depth=1/sym perm=-rwxr-xr-x/perm=755/size=154/user=root/group=root/name=./remove.sh/type=f
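If you only need the permissions, a shorter -printf format works too; a self-contained sketch (GNU find assumed), demonstrated on a throwaway directory so it is safe to run anywhere:

```shell
# Print symbolic perms, octal perms, owner, group and path for each match.
tmp=$(mktemp -d)
touch "$tmp/deploy.sh"
chmod 755 "$tmp/deploy.sh"
find "$tmp" -name '*.sh' -printf '%M %m %u %g %p\n'
```

For your original command, that becomes find / -name filename -printf '%M %m %p\n' 2>/dev/null. Alternatively, find / -name filename -ls 2>/dev/null prints ls-style long listings.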
| List permissions with find command [duplicate] |
1,377,304,267,000 |
I am running Linux (Debian). I recently installed the x11VNC server on my computer.
I found the command to start the server is:
x11vnc -display :0
I have been searching but did not find any information on how to restart and shutdown the x11VNC server. Is there a command(s) to do this?
|
If you're using systemd you should be able to set it up as a service. I found this thread which shows a similar task of setting up x11vnc as a Systemd service. The thread is titled: Index» Newbie Corner» how to enable x11vnc at startup using systemd ?.
From a comment in that thread
Create the file: /etc/systemd/system/x11vnc.service
[Unit]
Description=VNC Server for X11
Requires=display-manager.service
After=display-manager.service
[Service]
Type=forking
ExecStart=/usr/bin/x11vnc -norc -forever -shared -bg -rfbauth /etc/x11vnc.pass -allow 192.168.1. -autoport 5900 -o /var/log/x11vnc.log
Create the file: /etc/systemd/system/graphical.target
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
[Unit]
Description=Graphical Interface
Documentation=man:systemd.special(7)
Requires=multi-user.target
After=multi-user.target
Conflicts=rescue.target
Wants=display-manager.service
Wants=x11vnc.service
AllowIsolate=yes
[Install]
Alias=default.target
Enable Systemd service
$ sudo systemctl enable graphical.target
This should create a link like this:
/etc/systemd/system/default.target -> /etc/systemd/system/graphical.target
Reboot
| Is there a shutdown/restart command for x11VNC? - Linux (Debian) |
1,377,304,267,000 |
I am having problems with this and I don't know why. There are many related questions but none of them helped me.
I have two VMs:
CentOS 7 with GNOME 192.168.1.53
Mint 17.1 Rebbeca with XFCE 192.168.1.54
I know that by default exporting the display should be straightforward, like:
#While I am Logged in on the desktop on the MINT:
user@mint:~$ xhost +
#I am SSHing to the Centos from the MINT
user@mint:~$ ssh -XY [email protected]
#At the CentOS I export the display
[root@cent ~]$ export DISPLAY=192.168.1.54:0.0
[root@cent ~]$ echo $DISPLAY
192.168.1.54:0.0
#Trying to start a simple program but I get an error message instead:
[root@cent ~]$ xclock
Error: Can't open display: 192.168.1.54:0.0
What I am doing wrong?
I tried the suggestions on a number of forums but I still get the error message. I also tried to export the display from the Mint to the CentOS (the opposite way) and I still get the same error, but this time on the Mint.
Could it be that the error is because one system has XFCE and the other GNOME?
I am thinking that there may be some default security settings in effect on one/both of the distros for which I am not aware of.
I also tried to edit the /etc/gdm/custom.conf on the CentOS as explained here:
http://www.softpanorama.org/Xwindows/Troubleshooting/can_not_open_display.shtml
|
You're trying to create an X tunnel through SSH then overriding it by specifying an IP address which bypasses the SSH tunnel. This doesn't work. When SSH tunnelling, SSH deals with transferring data between the local and remote IP addresses by opening a port on localhost on each machine it speaks to. You don't get to specify the IP address of either computer.
You need to export the display that is tunnelled through SSH, and that means export DISPLAY=localhost:x.y, which should have been done for you automatically when you connect using ssh -X.
| Why I can't export the Linux DISPLAY? |
1,377,304,267,000 |
After window resizing, font size changes, etc., how can I easily and quickly check what is the current display width of my terminal?
|
This has been answered (and mis-answered) repeatedly. But:
tput cols provides information that the operating system can tell you about the width.
the COLUMNS variable may be set by your shell, but (a) it is unreliable (it is only set by certain shells) and (b) it has the drawback that, if exported, it will interfere with full-screen applications.
the resize program can tell you the size for special cases where the terminal cannot negotiate its window-size with the operating system.
Further reading: COLUMNS in the ncurses manual page.
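A quick illustration of the tput approach; the -T flag (which queries a named terminfo entry, here xterm) is useful when no interactive terminal is attached, in which case tput falls back to that entry's default width:

```shell
# Interactively, plain `tput cols` prints the live width of your terminal.
# With -T you can ask about a specific terminfo entry instead; detached
# from a real window, tput reports that entry's default width
# (80 columns for the stock xterm entry):
tput -T xterm cols
```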
| How can I quickly check exactly how many columns my terminal has? [duplicate] |
1,377,304,267,000 |
When starting up my machine, I am prompted with a terminal asking me to login, rather than the nice GUI that I am used to. When I log in, I am able to run startx and everything works smoothly.
I added the following to my ~/.zprofile but it only ran once I was logged in.
if [[ ! $DISPLAY && $XDG_VTNR -eq 1 ]]; then
startx
fi
How can I get the login screen that I am used to, to appear again?
|
The GUI is loaded by systemd when the init system is systemd, as is the case on Ubuntu.
Here is a nice answer about the subject
systemctl get-default lets you see which target is set for startup, either multi-user.target or graphical.target
To enable x at startup time you can use:
sudo systemctl enable graphical.target --force
sudo systemctl set-default graphical.target
And to disable it
sudo systemctl enable multi-user.target --force
sudo systemctl set-default multi-user.target
Note that /etc/X11/default-display-manager contains the default display manager (this file is not required, though)
Also find here how to setup the default display manager, this is required as well
For a detailed answer, more information about the setup is required (which desktop you are using, KDE or GNOME, and which display manager, lightdm or sddm, etc.)
| Startx not automatically running on reboot |
1,377,304,267,000 |
On my Dell latitude e6540, the WMI hotkeys Fn+Up and Fn+Down are not working. I have all necesary modules compiled in my kernel:
CONFIG_DELL_LAPTOP=m
CONFIG_DELL_WMI=m
CONFIG_DELL_WMI_AIO=m
On the predecessor model (Latitude e6520), all worked fine, without any need for additional setup. I am using the same (custom build) kernel 3.16.6 on both laptops. On e6520 wmi works, on e6540 it doesn't.
I can still change the brightness with echo:
echo 35 > /sys/class/backlight/acpi_video0/brightness
but only as root, obviously.
Pressing Fn+Up and Fn+Down does not change the value in /sys/class/backlight/acpi_video0/brightness. On the previous model, it does change the value.
One thing I noticed, on the older model, the max value is 15. On the new model it is 95. Looks like something might have changed inside this mechanism.
Thus my question:
How can I make WMI hotkeys work on my new laptop?
I am using Debian wheezy with custom kernel 3.16.6. I have also tried distribution kernel 3.16 (linux-image-3.16-0.bpo.2-amd64 from Wheezy-backports) and the wmi keys don't work either.
UPDATE:
I have just noticed that the WMI hotkeys work fine when I am in BIOS !!!
That is quite surprising, given that they don't work when I boot into Linux.
Following is the output of dmesg. The mention of dell_wmi: Received unknown WMI event looks relevant to my problem, but I get the same messages on the old laptop, where the WMI hotkeys are working. So this alone does not seem to be the issue.
dmesg | egrep -i '(dell|wmi)'
[Tue Apr 15 22:04:30 2014] DMI: Dell Inc. Latitude E6540/05V0V4, BIOS A05 09/03/2013
[Tue Apr 15 22:04:30 2014] ACPI: RSDP 00000000000eee60 00024 (v02 DELL )
[Tue Apr 15 22:04:30 2014] ACPI: XSDT 00000000d8fe0080 0007C (v01 DELL CBX3 01072009 AMI 00010013)
[Tue Apr 15 22:04:30 2014] ACPI: FACP 00000000d8fed7e8 0010C (v05 DELL CBX3 01072009 AMI 00010013)
[Tue Apr 15 22:04:30 2014] ACPI: DSDT 00000000d8fe0188 0D659 (v02 DELL CBX3 00000014 INTL 20091112)
[Tue Apr 15 22:04:30 2014] ACPI: APIC 00000000d8fed8f8 00072 (v03 DELL CBX3 01072009 AMI 00010013)
[Tue Apr 15 22:04:30 2014] ACPI: FPDT 00000000d8fed970 00044 (v01 DELL CBX3 01072009 AMI 00010013)
[Tue Apr 15 22:04:30 2014] ACPI: HPET 00000000d8feed38 00038 (v01 DELL CBX3 01072009 AMI. 00000005)
[Tue Apr 15 22:04:30 2014] ACPI: MCFG 00000000d8fef148 0003C (v01 DELL CBX3 01072009 MSFT 00000097)
[Tue Apr 15 22:04:38 2014] dcdbas dcdbas: Dell Systems Management Base Driver (version 5.6.0-3.2)
[Tue Apr 15 22:04:39 2014] wmi: Mapper loaded
[Tue Apr 15 22:04:39 2014] input: Dell WMI hotkeys as /devices/virtual/input/input10
[Wed Apr 16 18:30:04 2014] dell_wmi: Received unknown WMI event (0x0)
[Fri Apr 18 17:09:41 2014] dell_wmi: Received unknown WMI event (0x0)
[Fri Apr 18 17:09:41 2014] dell_wmi: Received unknown WMI event (0x0)
[Fri Apr 18 17:09:49 2014] dell_wmi: Received unknown WMI event (0x0)
UPDATE2
After patching the WMI module, I get the following messages for Fn+Up and Fn+Down:
2014-04-18 19:00:49 kernel: [ 120.731480] dell_wmi: WMBU = 0002 0010 0048
2014-04-18 19:00:49 kernel: [ 120.731496] wmi: DEBUG Event GUID: 9DBB5994-A997-11DA-B012-B622A1EF5492
2014-04-18 19:00:53 kernel: [ 123.935400] dell_wmi: WMBU = 0002 0010 0050
2014-04-18 19:00:53 kernel: [ 123.935415] wmi: DEBUG Event GUID: 9DBB5994-A997-11DA-B012-B622A1EF5492
UPDATE3
Also interesting is, that the laptop came with pre-installed Ubuntu 12.04, and the wmi keys are working in Ubuntu.
|
You could install xbacklight, a utility for managing your brightness using RandR. Then, to activate it, use a simple script along these lines, bound to your two keys:
#!/usr/bin/env bash

up() {
    xbacklight -inc 10
}

down() {
    xbacklight -dec 10
}

notify() {
    bright=$(</sys/class/backlight/acpi_video0/actual_brightness)
    # 95 is this laptop's maximum brightness value; scale to a percentage
    if [[ "$bright" -eq 95 ]]; then
        score=100
    else
        score=$(( bright * 100 / 95 ))
    fi
    printf '%s\n' "Backlight set to ${score}%" | dzen2 -p 3
}

if [[ $1 = up ]]; then
    up && notify
elif [[ $1 = down ]]; then
    down && notify
fi
Swap out your notification method for whatever you use as part of your normal setup, eg., notify-send.
| WMI-based hotkeys not working |
1,377,304,267,000 |
I would like to do screenshots of the active monitor (the one on which my mouse is) or screenshots of always the same monitor ; and not the two monitors at the same time.
Is there any command/option that would let me do this?
I'm not interested in cropping each of my screenshot, even with a script (since I'm using them in parallel and I do a lot of them), neither grabing a selected area, (since I need them to be taken from the same place, with the same size).
For now, I unplug my second monitor but this is very uncomfortable.
|
Well, I just found shutter, a nifty tool that can do this. You can install it on Debian-based systems with
sudo apt-get install shutter
Then, once you launch shutter, take your screenshot limiting it to the active monitor only:
I just checked and it works perfectly on my LMDE running Cinnamon, it correctly took screenshots of the monitor where my mouse was displayed.
| screenshot only the active monitor on ubuntu/debian |
1,377,304,267,000 |
Is there an application I can use to obtain the RGB values of a particular pixel on the screen? I'm thinking something similar to paint programs where you can use an "eyedropper" to sample a color.
|
You can find Gpick (package name gpick) and gcolor2 in many repositories.
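If you'd also like a scriptable alternative, ImageMagick (assumed installed) can sample a single pixel; on a live X session you would capture from the root window, while the sketch below uses a generated image as a stand-in, since the coordinates and color here are just illustrative:

```shell
# On a live X session you could sample the screen pixel at (100,200) with:
#   import -window root -crop 1x1+100+200 -depth 8 txt:-
# Headless demo: a generated 1x1 image stands in for the screen capture;
# the txt: output format prints the pixel's coordinates and RGB value.
convert -size 1x1 xc:'#336699' -depth 8 txt:-
```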
| X11 "eyedropper" application to inspect pixel color? |
1,377,304,267,000 |
I want to display a text above the user screen(as an upper layer). I know that there is solutions like xmessages that could display the text in a box, But need it to be displayed without a box on the entire screen if possible
I am running Raspbian
Is there any solution/software that could do this ?
|
xosd, which is available in Raspbian, can display text on top of the current X screen. It takes its input from a file or from the standard input:
echo Hello | osd_cat -p middle -A center
It's an old-style X11 application so its configuration can be verbose; changing the font in particular looks like
echo Hello | osd_cat -p middle -A center -f '-*-lucidatypewriter-bold-*-*-*-*-240'
or even strictly speaking
echo Hello | osd_cat -p middle -A center -f '-*-lucidatypewriter-bold-*-*-*-*-240-*-*-*-*-*-*'
You can customise the colour, add a shadow and/or outline, change the delay, even add a progress bar.
| How display a text for users in the entire screen |
1,377,304,267,000 |
OS: GNOME 3.30.2 on Debian GNU/Linux 10 (64-bit)
My laptop has no output from the HDMI port. The monitor shows "NO INPUT DETECTED". Previously I had Kubuntu installed, and before that Windows 10; both worked fine, which means this is not a hardware issue.
I have tried:
Using the package "ARandR" to scan for new displays.
Plugging in different monitors and HDMI cords.
Booting the machine with the display plugged in.
SPECS:
LAPTOP: Acer Nitro 7 (AN715-51)
GPU: GeForce GTX 1650
CPU: Intel Core i7-9750H
Output of xrandr:
Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 8192 x 8192
eDP-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
1920x1080 60.01*+ 60.01 59.97 59.96 59.93
1680x1050 59.95 59.88
1600x1024 60.17
1400x1050 59.98
1600x900 59.99 59.94 59.95 59.82
1280x1024 60.02
1440x900 59.89
1400x900 59.96 59.88
1280x960 60.00
1440x810 60.00 59.97
1368x768 59.88 59.85
1360x768 59.80 59.96
1280x800 59.99 59.97 59.81 59.91
1152x864 60.00
1280x720 60.00 59.99 59.86 59.74
1024x768 60.04 60.00
960x720 60.00
928x696 60.05
896x672 60.01
1024x576 59.95 59.96 59.90 59.82
960x600 59.93 60.00
960x540 59.96 59.99 59.63 59.82
800x600 60.00 60.32 56.25
840x525 60.01 59.88
864x486 59.92 59.57
800x512 60.17
700x525 59.98
800x450 59.95 59.82
640x512 60.02
720x450 59.89
700x450 59.96 59.88
640x480 60.00 59.94
720x405 59.51 58.99
684x384 59.88 59.85
680x384 59.80 59.96
640x400 59.88 59.98
576x432 60.06
640x360 59.86 59.83 59.84 59.32
512x384 60.00
512x288 60.00 59.92
480x270 59.63 59.82
400x300 60.32 56.34
432x243 59.92 59.57
320x240 60.05
360x202 59.51 59.13
320x180 59.84 59.32
Output of xrandr --listproviders:
Providers: number : 1
Provider 0: id: 0x43 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 1 associated providers: 0 name:modesetting
Output of lspci -nn | grep VGA:
00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 630 (Mobile) [8086:3e9b]
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1f91] (rev a1)
Output of aplay -l:
card 0: PCH [HDA Intel PCH], device 0: ALC255 Analog
[ALC255 Analog]
Subdevices: 0/1
Subdevice #0: subdevice #0
Output of lshw -c video:
*-display
description: VGA compatible controller
product: NVIDIA Corporation
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:01:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress vga_controller bus_master cap_list rom
configuration: driver=nvidia latency=0
resources: irq:154 memory:a3000000-a3ffffff memory:90000000-9fffffff memory:a0000000-a1ffffff ioport:5000(size=128) memory:a4000000-a407ffff
*-display
description: VGA compatible controller
product: Intel Corporation
vendor: Intel Corporation
physical id: 2
bus info: pci@0000:00:02.0
version: 00
width: 64 bits
clock: 33MHz
capabilities: pciexpress msi pm vga_controller bus_master cap_list rom
configuration: driver=i915 latency=0
resources: irq:128 memory:a2000000-a2ffffff memory:b0000000-bfffffff ioport:6000(size=64) memory:c0000-dffff
|
You have a laptop with two GPUs, using Nvidia's "Optimus" technology.
The low-power CPU-integrated Intel iGPU is physically wired to output to the laptop's internal display, while the HDMI output is wired to the more powerful Nvidia discrete GPU. The device ID 10de:1f91 indicates the Nvidia GPU is GeForce GTX 1650 Mobile / Max-Q. The Nvidia codename for that GPU is TU117M.
The laptop may or may not have the capability of switching the outputs between GPUs; if such a capability exists, vga_switcheroo is the name of the kernel feature that can control it. You would then need to have a driver for the Nvidia GPU installed (either the free nouveau or Nvidia's proprietary driver; since the Nvidia GPU model is pretty new, the support for it in nouveau is still very much work-in-progress), then trigger the switch to Nvidia before starting up the X server.
If there is no output switching capability (known as "muxless Optimus"), then you would need to pass the rendered image from the active GPU to the other one in order to use all the outputs. With the drivers (and any required firmware) for both the GPUs installed, the xrandr --listproviders should list two providers instead of one, and then you could use xrandr --setprovideroutputsource <other GPU> <active GPU> to make the outputs of the other GPU available for the active GPU.
Unfortunately, the Nvidia proprietary driver seems to be able to participate in this sharing only in the role of the active GPU, so when using that driver, you might want to keep two different X server configurations to be used as appropriate.
One configuration would be for use with external displays (and probably with the power adapter plugged in too), with the Nvidia GPU as the active one, feeding data through the iGPU for the laptop's internal display
The other configuration would be appropriate when you are on battery power and don't need maximum GPU performance: in this configuration, you would use the Intel iGPU as the active one, and might want to entirely shut down the Nvidia GPU to save power (achievable with the bumblebee package). If you want some select programs to have more GPU performance, you could use the primus package to use the Nvidia GPU with no physical screen attached to render graphics, and then pass the results to the Intel iGPU for display.
With Kubuntu, you probably were asked about using proprietary drivers on installation and answered "yes", so it probably set up one of the configurations described above for you. But Debian tends to be more strict about principles of Open Source software, so using proprietary drivers is not quite so seamless.
Generally, the combination of the stable release of Debian (currently Buster) and the latest-and-greatest Nvidia GPU tends not to be the easy way to happy results, because the Debian-packaged versions of Nvidia's proprietary drivers tend to lag behind Nvidia's own releases: currently the driver version in the non-free section of Debian 10 is 418.116, and the minimum version required to support GeForce GTX 1650 Mobile seems to be 430.
However, the buster-backports repository has version 440 available. To use it, you'll need to add the backports repository to your APT configuration. In short, add this line to /etc/apt/sources.list file:
deb http://deb.debian.org/debian buster-backports non-free
Then run apt-get update as root. Now your regular package management tools should have the backports repository available, and you could use
apt-get -t buster-backports install nvidia-driver
to install a new enough version of the Nvidia proprietary driver to support your GPU.
| Debian 10 [Buster]: HDMI Input Not detected |
1,377,304,267,000 |
I regularly use my notebook for teaching, with the full screen shown on the projector. With projectors getting better, I often find the projectors resolution to be higher or different than the ones offered by my LCD screen (in the past I just used 1024x768).
What I now would like to do is use the best resolution of the external display, while having the same content down-scaled on the notebook screen (without panning). Alternatively, if the resolution of the projector is smaller in one dimension, black bars would be ok on the LCD. I don't worry about aliasing artefacts on the LCD as long as the external projector uses the highest quality possible.
For example, I recently had:
LVDS1 connected 1280x800+0+0
1280x800 60.2*+ 50.0
...
VGA1 connected 1280x720+0+0
1280x720 60.0*+
...
I tried:
xrandr --output VGA1 --mode 1280x720 --output LVDS1 --mode 1280x800
but then the bottom of a full screen presentation was clipped on the projector. In this case, I would like a black bar or vertical rescaling on the laptop screen. How can I achieve that?
I played with the scale option (can't reproduce this here without projector) but was unsuccessful.
How can I achieve this behaviour?
|
I figured out that the --scale-from option does what I need:
xrandr --output VGA1 --mode 1280x720 --output LVDS1 --primary --scale-from 1280x720
| xrandr: clone and scale |
1,377,304,267,000 |
In the picture is shown what works and what doesn't work (also doesn't work for warning descriptions in the code).
Current version: Gnome 3.14.1. I tried to change window style settings with gnome-tweak-tool, but it did not change the looks. What else could I try? It worked fine with Debian Wheezy, i.e. Gnome 3.4 (and probably some days ago, before installing the many everyday upgrades coming with Debian testing; I'm going to check the aptitude logs).
|
It is a well-known issue with Gnome >=3.10. Unfortunately there is no fix or workaround for it yet.
EDIT (link is broken) Source: http://www.mathworks.in/matlabcentral/answers/125436-why-will-warning-error-messages-not-show-up-in-a-pop-up-window-when-i-hover-over-the-text-in-the-mat
The only solution suggested by the answers on the Mathworks forum is to switch off Gnome and turn on some other desktop environment (Xfce, for example.)
| Blank matlab code tooltips in Gnome 3, Debian Jessie |
1,377,304,267,000 |
Everything about the backlight settings and controls work for me except:
Backlight resets to max. on every reboot/boot.
The backlight minimum goes on to complete black screen, instead of the minimum brightness setting elsewhere.
NB: I see that 1. above is answered in a lot of places with different answers, so I am actually looking for someone or someplace where I can read and understand how it all works.
I have 2 different backlight folders on my laptop and many conf files to edit, so I need to understand which settings in those files impact what on the system.
|
At the heart of backlighting is a Linux kernel parameter that's exposed under /sys. You can manipulate it by setting the value to something between 1 and 15. For example:
$ echo 5 | sudo tee /sys/class/backlight/acpi_video0/brightness
Sets the brightness to 5. Manipulating this kernel parameter is abstracted away so that when you're changing the value with your keyboard or a desktop applet, you're manipulating it through D-Bus and HAL.
D-Bus allows you to manipulate this structure, org.freedesktop.Hal.Device.KeyboardBacklight, and HAL grants the privilege to do so. You can see this on my Fedora 14 system like this:
$ grep -i backlight /etc/dbus-1/system.d/*
/etc/dbus-1/system.d/hal.conf: send_interface="org.freedesktop.Hal.Device.KeyboardBacklight"/>
/etc/dbus-1/system.d/hal.conf: send_interface="org.freedesktop.Hal.Device.KeyboardBacklight"/>
In the file hal.conf:
<!-- Only allow users at the local console to manipulate devices -->
<policy at_console="true">
...
<allow send_destination="org.freedesktop.Hal"
send_interface="org.freedesktop.Hal.Device.KeyboardBacklight"/>
You can query the current value, through D-Bus like so:
$ dbus-send \
--print-reply \
--system \
--dest=org.freedesktop.Hal \
/org/freedesktop/Hal/devices/computer_backlight \
org.freedesktop.Hal.Device.LaptopPanel.GetBrightness | \
tail -1 | \
awk '{print $2}'
Which returns the value:
15
You can also manipulate it from the command line like so (the int32:10 bit below sets the brightness to 10):
$ dbus-send \
--print-reply \
--system \
--dest=org.freedesktop.Hal \
/org/freedesktop/Hal/devices/computer_backlight \
org.freedesktop.Hal.Device.LaptopPanel.SetBrightness \
int32:10
You can see that we changed the brightness:
$ cat /sys/class/backlight/acpi_video0/brightness
10
So how do I fix this?
One idea would be to save the current brightness out to a file prior to a shutdown or reboot, and then add to your startup (perhaps ~/.xinitrc) the dbus-send ... command above, passing in the brightness value you previously saved to the file.
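A minimal sketch of that save/restore idea (the sysfs node and state-file location are assumptions; adjust acpi_video0 and the paths to match your system). save_brightness would be called from a shutdown hook, restore_brightness from something like ~/.xinitrc:

```shell
# Hypothetical helpers; SYSFS and STATE can be overridden via the
# environment. Writing the real sysfs node typically needs root.
SYSFS=${SYSFS:-/sys/class/backlight/acpi_video0/brightness}
STATE=${STATE:-$HOME/.last-brightness}

save_brightness() {
    cat "$SYSFS" > "$STATE"
}

restore_brightness() {
    [ -r "$STATE" ] && cat "$STATE" > "$SYSFS"
}
```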
Why do I have multiple files under /sys/class/backlight?
I came across this Q&A on askubuntu.com titled: Why there are two brightness control file (/sys/class/) in my system. In the answer to this there was this comment:
If the system starts with the kernel parameter acpi_backlight=vendor,
the item acpi_video0 is replaced by the item intel, but then the
Fn-Keys can not change the value of this item.
I also came across this documentation for the Kernel, titled: Kernel Parameters. In this doc the following aCPI options are mentioned:
acpi_backlight= [HW,ACPI]
acpi_backlight=vendor
acpi_backlight=video
If set to vendor, prefer vendor specific driver
(e.g. thinkpad_acpi, sony_acpi, etc.) instead
of the ACPI video.ko driver.
I think the intel_backlight referenced in /sys/class/backlight is part of the backlighting for the video card drivers provided for Intel graphics cards.
References
Save brightness setting on reboot in Debian Squeeze / Wheezy
DBUS Backlight Brightness
| How does the screen backlight work? |
1,377,304,267,000 |
I have 2 monitors plugged in via DisplayPort*, named DP1 and DP2.
I configure them next to each other like this:
xrandr --output DP1 --pos 0x0 --output DP2 --pos 3840x0
Problem: Sometimes, they are detected in the opposite order on startup, so the left monitor is labeled DP2 and the right one becomes DP1 instead. This is random, so after every startup I need to check and eventually reconfigure the layout.
Therefore, I'm looking for a way to reliably detect which monitor is which, across reboots. E.g., is there a way to determine which port ID corresponds to which assigned monitor name?
I'm on Arch. FWIW, Windows 10 remembers the order correctly.
*I'm using a Dell docking station connected via Thunderbolt, if that matters.
|
You might be able to use the EDID block for the monitors. For example, set up the system in the desired way, then run
$ xrandr --prop | grep -A2 EDID > desired-setup.txt
Thereafter, each time after a setup is done, your run the similar
$ xrandr --prop | grep -A2 EDID > current-setup.txt
Then, if current-setup.txt is the same as desired-setup.txt, all is fine, and otherwise you'll need the alternative set up with swapped DP1 and DP2.
This scheme only works if the monitors' EDID report is distinctive, where the first 18 bytes includes manufacturer id, product code and serial number (bytes 12-15), as well as week and year of manufacture. It of course also only works for the particular monitors. (If you need more flexibility, you'll need more advanced decision logic, and a "library" of EDID captures)
The output from xrandr shows the EDID bytes in hex lines of 16 bytes, which is why you may need -A2 to get its first 32 bytes for each monitor. (See e.g. Wikipedia for a description of the EDID block.)
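Building on that, a small sketch (the function name is mine, and it assumes xrandr's usual --prop layout) that pairs each connected output name with the first hex line of its EDID, which covers the header, manufacturer id, product code and serial number, so a startup script can decide which physical monitor is currently DP1 and which is DP2:

```shell
# Read `xrandr --prop` output on stdin; print "<output> <first EDID line>"
# for each connected output.
edid_fingerprint() {
    awk '$2 == "connected" { out = $1 }
         /EDID:/ { getline line; gsub(/[ \t]/, "", line); print out, line }'
}
# Usage: xrandr --prop | edid_fingerprint
```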
| Uniquely identify DP monitors for use in xrandr |
1,377,304,267,000 |
I'm trying to start a VMWare Workstation VM from the command line. I can do this under my user (not root) by running the following
/usr/bin/vmplayer /home/myUser/vmware/myVm.vmx
Now, I want to set this VM to start on boot, so I created a service /lib/systemd/system/myService.service with the following:
[Unit]
Description=my vm service
[Service]
User=myUser
ExecStart=/usr/bin/vmplayer /home/myuser/vmware/myVm.vmx
Environment=DISPLAY=:0
[Install]
WantedBy=multi-user.target
If I run the service from the terminal (i.e. sudo systemctl restart myService) I can see the VM window popping up and starting correctly. However, if I reboot the system the VM doesn't start and this is the status I get
Jan 23 12:55:59 home systemd[1]: Started myService service.
Jan 23 12:56:00 home truenas.sh[848]: [AppLoader] Use shipped Linux kernel AIO access library.
Jan 23 12:56:00 home truenas.sh[848]: An up-to-date "libaio" or "libaio1" package from your system is preferred.
Jan 23 12:56:00 home vmplayer[848]: cannot open display: :0
Jan 23 12:56:00 home systemd[1]: myService.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 12:56:00 home systemd[1]: myService.service: Failed with result 'exit-code'.
I thought DISPLAY=:0 on the environment would fix the issue but that's the error I got and I'm not able to fix it.
|
As @Stephen Boston pointed out in the comments, you should use a systemd user service, but you have to change, remove and add some directives in your unit:
Change: WantedBy=multi-user.target to WantedBy=graphical-session.target
Add the directives: PartOf=graphical-session.target and After=graphical-session.target in [Unit] section.
Add in [Service] the directive: Type=exec
You can remove Environment=DISPLAY=:0
You may also want to add this directive in the [Service] section: Restart=no
So your unit would become:
[Unit]
Description=my vm service
PartOf=graphical-session.target
After=graphical-session.target
[Service]
Type=exec
Restart=no
ExecStart=/usr/bin/vmplayer /home/myuser/vmware/myVm.vmx
[Install]
WantedBy=graphical-session.target
Finally, place the unit under the $HOME/.config/systemd/user/ directory; to enable/start it you should use systemctl --user ...:
systemctl --user enable myService.service
systemctl --user start myService.service
You can also run apps on StartUp by using .desktop files and place them in /etc/xdg/autostart or in ~/.config/autostart. Check this answer for more details.
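For comparison, the equivalent autostart entry would be a small .desktop file. A minimal sketch — the Exec line reuses the path from the question, and the file name is arbitrary:

```ini
# ~/.config/autostart/myVm.desktop
[Desktop Entry]
Type=Application
Name=myVm
Exec=/usr/bin/vmplayer /home/myuser/vmware/myVm.vmx
```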
| How to open GUI from systemd service on startup? |
1,377,304,267,000 |
I am trying to run a Samsung SyncMaster 226 NW display with an HDMI to VGA adapter on Debian 10. The GPU is an RTX 2060 Super, with the proprietary Nvidia drivers of version 440.64.
In Linux, the only resolutions that are detected as usable are 1280x720, 1024x768, 800x600, and 640x480.
However, the actual native resolution is 1680x1050, and when dual booting Windows, this resolution can be set and used.
I have attempted to use xrandr to add a custom resolution, first using cvt to generate the modeline. The command used to make a new mode for xrandr was
xrandr --newmode "1680x1050_60.00" 146.25 1680 1784 1960 2240 1050 1053 1059 1089 -hsync +vsync
After doing this, running xrandr returned
Screen 0: minimum 8 x 8, current 1024 x 768, maximum 32767 x 32767
DP-0 disconnected (normal left inverted right x axis y axis)
DP-1 disconnected (normal left inverted right x axis y axis)
HDMI-0 connected primary 1024x768+0+0 (normal left inverted right x axis y axis) 304mm x 228mm
1024x768 60.00*+ 60.00
1280x720 60.00
800x600 60.32
640x480 59.94
DP-2 disconnected (normal left inverted right x axis y axis)
DP-3 disconnected (normal left inverted right x axis y axis)
DP-4 disconnected (normal left inverted right x axis y axis)
DP-5 disconnected (normal left inverted right x axis y axis)
USB-C-0 disconnected (normal left inverted right x axis y axis)
1680x1050_60.00 (0x1e4) 146.250MHz -HSync +VSync
h: width 1680 start 1784 end 1960 total 2240 skew 0 clock 65.29KHz
v: height 1050 start 1053 end 1059 total 1089 clock 59.95Hz
However, when attempting to use
xrandr --addmode HDMI-0 "1680x1050_60.00"
the error
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 140 (RANDR)
Minor opcode of failed request: 18 (RRAddOutputMode)
Serial number of failed request: 43
Current serial number in output stream: 44
was returned. Using
xrandr --output HDMI-0 --mode "1680x1050_60.00"
returned
xrandr: cannot find mode 1680x1050_60.00 as an error.
How do I properly set the output resolution to 1680x1050?
|
I just spent 2-3 Hours on exactly the same problem. So annoying, xrandr seems not to work at all with the new nvidia-drivers. Now, after getting crazy and mad, I finally came up with a solution, hoping it will work for you as well.
Start nvidia-settings in a terminal, switch to "X Server Display Configuration", click the "Save to X Configuration File" button at the bottom, then "Show Preview". In this preview, go to the block 'Section "Monitor" ... EndSection' and save it for later. That's how I found out about my monitor settings for xorg.conf. Note, there are other ways, but this one should be quite safe and convenient for nvidia users.
Then get the "Modeline" for your resolution, type in terminal:
cvt 1680x1050
and save the output for later.
Ok, now you just have to add all this stuff to an xorg.conf file, e.g. /etc/X11/xorg.conf.d/10-monitor.conf (that's at least the path for my distro). As I have neither your Monitor section nor your Modeline, I will give you an example with my monitor section and my modeline (my desired/undetected resolution was 1920x1080):
Section "Monitor"
Identifier "Monitor1"
VendorName "Unknown"
ModelName "Acer B246HL"
HorizSync 30.0 - 80.0
VertRefresh 55.0 - 76.0
Option "DPMS"
Modeline "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
EndSection
Section "Device"
Identifier "Card0"
Driver "nvidia"
Option "HDMI-0" "Monitor1"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Card0"
Monitor "Monitor1"
DefaultDepth 24
SubSection "Display"
Depth 24
Modes "1920x1080_60.00"
EndSubSection
EndSection
So in the Monitor-section you just keep the first line(Identifier) and replace the rest with your saving from nvidia-settings plus the last line is the output of your cvt-command.
Device-Section should be fine for you.
Screen-Section just needs the "Modes"-line changed to the name of your Mode, so probably something like Modes "1680x1050_60.00"
My Device section also contains the line 'BusID "PCI:39:0:0"', but I think you don't need that. However, I got that line from executing X -configure (xorg must not be running); the BusID should be in the generated xorg.conf.new. Strangely, in my case it was different from the BusID reported by lspci.
Additionally, if you would like to run several monitors(like me) just add new Monitor-Sections with Identifier "Monitor2" and so on, then in Device-Section add e.g. 'Option "HDMI-1" "Monitor2"' accordingly, and finally add the Monitor in Screen-Section like 'Monitor "Monitor2"'.
The strange part in my case was, that I have 3 exactly identical monitors and one of them was always not recognized by nvidia-modeset. It has something to do with EDID and the error can be found with:
dmesg | grep EDID
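If that dmesg check does turn up EDID complaints, one hedged workaround with the proprietary driver is to relax mode validation in the Device section. The option token below comes from the NVIDIA driver README; treat it as an experiment and verify it against your driver version:

```
Section "Device"
    Identifier "Card0"
    Driver     "nvidia"
    Option     "ModeValidation" "AllowNonEdidModes"
EndSection
```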
| Set undetected resolution with xrandr |
1,377,304,267,000 |
I want to be able to run (just) a program/few programs under a test user named "test" and at the same time benefit of the GUI of the program.
I need this because I want to be able to save test settings without conflicting with my own settings.
What I found so far is that I can either:
use su test to switch to the user, but then I can not run programs with GUI... they complain about not having a display:
No protocol specified
** (gedit:17086): WARNING **: Could not open X display
No protocol specified
(gedit:17086): Gtk-WARNING **: cannot open display: :0
use the dm-tool switch-to-user test to actually switch over to that user
Any Idea how I can run programs as another user without having to change users and desktops each time?
|
I finally found a solution as provided in this answer
All I had to do was run the following command as root:
xhost si:localuser:test
This command adds a server-interpreted address entry that allows the local user test to connect to the X display server; xhost -si:localuser:test removes the entry again.
Please edit this answer if you know more about this issue.
| Run a program under another user with X server display |
1,377,304,267,000 |
I have just installed Debian Wheezy for the first time and have had mixed success getting my system up and running. Currently my main issue is that I am unable to extend my desktop to my second screen.
I have an ATI radeon HD 7700 series graphics card connected to 2 displays. Running lspci results in this line, among others:
04:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI Cape Verde [Radeon HD 7700 Series]
Currently, both are cloned. I initially tried to follow the instructions for installing proprietary ATI drivers, which resulted in the 2nd display being detected but I was unable to extend the desktop rather than clone it (something about my virtual screen not being big enough).
I gathered that support might be better for the free version so I followed these instructions for removing said drivers, followed by these instructions to install the free ones.
As per the troubleshooting step in that page I ran this command:
dmesg | grep -E 'drm|radeon' | grep -iE 'firmware|microcode'
which produced this output:
[ 4.925773] [drm] Loading VERDE Microcode
[ 4.990158] platform radeon_cp.0: firmware: agent loaded radeon/VERDE_pfp.bin into memory
[ 5.152647] platform radeon_cp.0: firmware: agent loaded radeon/VERDE_me.bin into memory
[ 5.236165] platform radeon_cp.0: firmware: agent loaded radeon/VERDE_ce.bin into memory
[ 5.260082] platform radeon_cp.0: firmware: agent loaded radeon/VERDE_rlc.bin into memory
[ 5.376566] platform radeon_cp.0: firmware: agent loaded radeon/VERDE_mc.bin into memory
That's different to the outputs on that page but then they don't really give anymore information so I just assumed everything had worked.
Anywho, now when I go to System Tools -> Preferences -> System Settings -> Display I just see a single display called Unknown, which clones across both of my monitors.
running xrandr -q produces this output:
xrandr: Failed to get size of gamma for output default
Screen 0: minimum 1680 x 1050, current 1680 x 1050, maximum 1680 x 1050
default connected 1680x1050+0+0 0mm x 0mm
   1680x1050       0.0*
FYI 1680x1050 is the native resolution of both monitors. I do not have a /etc/X11/xorg.conf file, and my /etc/X11/xorg.conf.d/ directory seems to be empty.
This is my first time running a linux system so I'm totally confused and would appreciate kind words and idiot-proof guidance.
|
I found this answer on AU in a Q&A titled: How can I set up dual monitor display with ATI driver?.
excerpt
Open a terminal and type:
$ gksudo gedit /etc/X11/xorg.conf
In the sub-section "display" add this code or modify if already exist:
virtual 2880 1024
Where 2880 and 1024 are the value returned by the error: required
virtual size does not fit available size: requested=(2880, 1024),
minimum=(320, 200), maximum=(1600, 1600).
Restart the computer.
Then you will be able to extend your desktop without issue.
In the OP's configuration he opted to use this:
virtual 3360 1050
| How do I extend my desktop to my 2nd monitor rather than just cloning it? |
1,377,304,267,000 |
In short
I have a laptop with an external DVI display which is hooked up to its Display Port (via adapter). When I rotate the internal screen using xrandr, it will forget about the secondary display. Calling xrandr multiple times does not have any effect. I have to power off the external display, wait for a while, call xrandr again and it finds the display.
The details
I use Kubuntu 13.10 with Awesome WM instead of the regular KWin Windows Manager. Therefore, there are already two systems which do stuff with displays.
The laptop is a ThinkPad X220 Tablet, which has a Display Port on the device itself, and another one on the UltraBase docking station, which I use currently. The display is a Samsung SyncMaster 2443BW with a DVI to Display Port adapter.
Then, I use my own think-rotate to rotate the internal (LVDS1) screen when I rotate the physical display (via ACPI hook). Said script uses xrandr to rotate the screen.
When I call think-rotate, it will glitch and give me the following xrandr output afterwards:
Screen 0: minimum 320 x 200, current 1366 x 768, maximum 32767 x 32767
LVDS1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 277mm x 156mm
1366x768 60.0*+
1360x768 59.8 60.0
1024x768 60.0
800x600 60.3 56.2
640x480 59.9
VGA1 disconnected (normal left inverted right x axis y axis)
HDMI1 disconnected (normal left inverted right x axis y axis)
DP1 disconnected (normal left inverted right x axis y axis)
HDMI2 disconnected (normal left inverted right x axis y axis)
HDMI3 disconnected (normal left inverted right x axis y axis)
DP2 disconnected (normal left inverted right x axis y axis)
DP3 disconnected (normal left inverted right x axis y axis)
You see, that the HDMI2 is completely missing. After letting all the power drain from the external display, calling xrandr again gave me the screen back and the following output:
Screen 0: minimum 320 x 200, current 1366 x 768, maximum 32767 x 32767
LVDS1 connected primary 1366x768+0+0 (normal left inverted right x axis y axis) 277mm x 156mm
1366x768 60.0*+
1360x768 59.8 60.0
1024x768 60.0
800x600 60.3 56.2
640x480 59.9
VGA1 disconnected (normal left inverted right x axis y axis)
HDMI1 disconnected (normal left inverted right x axis y axis)
DP1 disconnected (normal left inverted right x axis y axis)
HDMI2 connected (normal left inverted right x axis y axis)
1920x1200 60.0 +
1600x1200 60.0
1280x1024 60.0
1280x960 60.0
1024x768 60.0
800x600 60.3 56.2
640x480 60.0
HDMI3 disconnected (normal left inverted right x axis y axis)
DP2 disconnected (normal left inverted right x axis y axis)
DP3 disconnected (normal left inverted right x axis y axis)
VIRTUAL1 disconnected (normal left inverted right x axis y axis)
I looked into the /var/log/syslog and found the following within the debug output of the think-rotate script:
Jan 6 20:20:26 Martin-X220 colord: device removed: xrandr-Samsung Electric Company-SyncMaster-H9XS113172
Jan 6 20:20:26 Martin-X220 colord: Device added: xrandr-Samsung Electric Company-SyncMaster-H9XS113172
How can I triage this issue? I got other DVI and VGA displays around, I got another Display Port on the ThinkPad itself.
I assume that the problem is the interaction between something in KDE, Awesome WM (which reloads its config on every display change) and my xrandr calls.
Follow-up
Without Awesome WM, plain KDE with KWin
I just started KDE without Awesome WM. It not only forgot about the external display when I rotated, but it also caused severe compositing glitches. I found that think-rotate is called twice by the ACPI hook, so I am currently implementing a guard against that. Running it twice simultaneously might be the cause of the glitches. The display is forgotten even when I call think-rotate manually, so that part is not caused by running it twice.
Without KDE, plain Awesome WM
Now I am running just Awesome WM, with no KDE started. There is no problem. I saw that think-rotate is called twice here as well and worked on that guard; it is still called twice, but only runs once now. Apparently, the issue does not persist here. Although I do not like to admit it, with the information I currently have, KDE seems to cause the issue.
Update 2014-01-06 21:37+0100
Well, it seems like rotation works without problems with plain Awesome WM. When I put it onto the docking station, it does not see the external display right away. I have to power it off and call xrandr manually to get it working.
Update 2014-01-21 21:53+0100
I tried another DVI display, and I did not have any problems with it. When I go back to my Samsung SyncMaster, I get the issue again. Awesome WM alone caused problems, KDE with Awesome WM causes problems as well. It is still broken!
Update 2014-03-10 18:17+0100
The issue persists. I used a VGA cable to test it, and that did not appear to have the issue. My current workaround is to switch off the display, call xrandr and then turn it on again. Maybe Ubuntu 14.04 will fix that issue.
|
The issue also persists when it is hooked up to different computers, also a Windows 7 machine. It is a hardware defect, I bought a new display a while ago.
| xrandr call makes it forget about secondary display |
1,377,304,267,000 |
I've been having this issue for at least 3 years where my display will be blurry while using the proprietary Nvidia driver. Nouveau doesn't fix anything either.
Any screenshots that I take using the monitor will show up crystal clear, but visually looking at it, everything seems fuzzy, blurry and sometimes even ghosts if I move a window around. I have a second monitor plugged in via a DP->VGA adaptor and, even when my main monitor is blurry, that looks perfectly fine. Sometimes my monitor will work flawlessly, but that's once in a blue moon.
I'm currently running Void Linux with the latest Linux kernel and KDE Plasma.
I am using an Nvidia RTX 2060, and the monitor is getting the signal via HDMI. I am unsure of the monitor's control board and anything like that, as it's a seemingly random brand that has no official datasheets (and it doesn't even seem to be sold anymore). What I do know is it's a 32" 1080p@60Hz display, and it won't go higher than that.
I don't have this issue on Windows 10, and everything is displayed crisp and clear as day.
Here are some steps I have taken:
Tried GNOME, KDE Plasma and even XFCE - all produce the same output
Swapped the HDMI cable and even bought a whole new one
Tried changing the HDMI port that I use on the monitor
The issue has persisted across two GPUs now (I used to use a GTX1050)
Reinstalling the Nvidia Linux drivers (and reconfiguring them). They show up in lsmod and X11 is set to use them - nouveau is blacklisted and I confirmed this with lsmod
The issue has persisted over many distros, such as Ubuntu, Pop, Arch, Gentoo and Void.
I have looked in the monitor settings and found nothing relevant, and changing everything I can see does nothing to fix the clarity either
I've looked high and low in Plasma's and Nvidia's settings, and tried things such as forced anti-aliasing, text rendering, and even vsync changes. They make no difference even after a save + reboot.
I've installed every (even seemingly) relevant package across every Linux distribution that I've used, and none of them made a difference.
I re-tried POP_OS and ensured that I was using the Nvidia ISO and installs to root out any misconfiguration, to no avail.
Ensured that FXAA was disabled in the Nvidia settings
I can view this image found by @ArtemS.Tashkinov in Firefox while using F11, with no vertical/horizontal scroll bars
I have checked through the nvidia log file (the one you're able to generate manually) and found nothing that seems to relate to my issue.
It's probably important to note that, in a live Linux ISO, if I change my output to a lower resolution and then back to native a good 5-8 times, the monitor will clear up. This is not a permanent fix but I think it has something to do with the connection being re-initialised, although I'm not too sure about the reason.
Another odd thing I have noticed is that, occasionally, on first boot the monitor will look perfectly fine (as it does on Windows). If the monitor goes into standby mode and turns back on however, things go back to the usual blurry state.
I have looked around for others that have experienced something similar to me and all of the issues turned out to be misconfiguration or things that had nothing to do with what I'm experiencing (mostly anti-aliasing issues).
I find it hard to think of things that could be at fault as it works perfectly fine in Windows, which is baffling to me. In addition to this, when I ran Windows under Linux via KVM+QEMU (with Single GPU Passthrough), all the issues went away as the drivers were handled by Windows.
Here is an output of xrandr:
Screen 0: minimum 8 x 8, current 3360 x 1179, maximum 32767 x 32767
HDMI-0 connected primary 1920x1080+1440+0 (normal left inverted right x axis y axis) 376mm x 301mm
1920x1080 60.00*+ 59.94 50.00 23.98
1680x1050 59.95
1440x900 59.89
1280x720 60.00 59.94 50.00
1152x864 60.00
1024x768 60.00
800x600 60.32 56.25
720x576 50.00
720x480 59.94
640x480 75.00 72.81 59.94 59.93
DP-0 connected 1440x900+0+279 (normal left inverted right x axis y axis) 408mm x 255mm
1440x900 59.89*+
1280x1024 75.02 60.02
1280x960 60.00
1152x864 75.00
1024x768 75.03 70.07 60.00
800x600 75.00 72.19 60.32 56.25
640x480 75.00 72.81 59.94
DP-1 disconnected (normal left inverted right x axis y axis)
DP-2 disconnected (normal left inverted right x axis y axis)
DP-3 disconnected (normal left inverted right x axis y axis)
DP-4 disconnected (normal left inverted right x axis y axis)
DP-5 disconnected (normal left inverted right x axis y axis)
Here are two previews of google.com, this is windows and this is Linux. I know the differences aren't too perceivable on camera, but they are to the naked eye. Looking at it more, it seems that the output is also heavily under-saturated while in Linux.
Just as an additional comparison, I ran this sharpness test on both Windows and Linux. Here is Windows, and here is Linux.
I'm completely out of ideas now, so any help or insight as to how I'd go about even debugging this would be greatly appreciated!
I have also asked this question on the Nvidia Linux forums, as suggested by a commenter, but have had zero help there.
|
Try:
Change the permanent resolution
On and off 3d acceleration
"Vesa" video driver instead of nvidia and nouveau.
Every kernel driver (module) has its own parameters. See the modinfo nvidia output for examples (the "parm" records). Load your driver with different "parms". I can't recommend particular ones, but just play with them for some time.
Install these nvidia drivers, or something like them with version 390 from your distro repository. These are the Nvidia drivers for old video cards; you may currently have a driver for modern cards only.
| Display Constantly "Fuzzy" with Nvidia Driver |
1,498,585,056,000 |
When I run Xvfb server directly, x11vnc can attach to the display fine as per Wikipedia page.
However I'd like to achieve the same by running the X app using xvfb-run.
Here is my attempt (to run wine explorer as an example):
$ xvfb-run -l --server-args="-screen 0 1024x768x24" wine explorer
$ ps x | grep Xvfb
19536 pts/2 Sl 0:00 Xvfb :99 -screen 0 1024x768x24 -auth /tmp/xvfb-run.nJKLnF/Xauthority
However when I'm trying to run x11vnc it fails:
$ x11vnc -display :99.0 -usepw -forever -autoport 5900
24/11/2016 22:51:29 XOpenDisplay(":99.0") failed.
24/11/2016 22:51:29 Trying again with XAUTHLOCALHOSTNAME=localhost ...
No protocol specified
24/11/2016 22:51:29 ***************************************
24/11/2016 22:51:29 *** XOpenDisplay failed (:99.0)
*** x11vnc was unable to open the X DISPLAY: ":99.0", it cannot continue.
*** There may be "Xlib:" error messages above with details about the failure.
I also tried the command suggested from x11vnc troubleshooting page, but with no luck.
How can I run X command via xvfb-run so it display can be accessible by x11vnc?
|
As you can see in your ps output, the Xvfb server is run with parameter -auth followed by the name of a temporary file. To connect to this server you therefore need to provide a copy of the MIT-MAGIC-COOKIE that is held in this file.
Usually this is done by simply setting the XAUTHORITY variable in the environment of the command, e.g.
XAUTHORITY=/tmp/xvfb-run.nJKLnF/Xauthority x11vnc ...
To simplify, your xvfb-run script might accept an option -f followed by the name of a file of your choice in which to save the cookie.
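To avoid copying the Xauthority path by hand, the -auth argument can be scraped from the Xvfb command line. A minimal sketch — the sample line is the ps output quoted in the question; in a live session you would feed it from ps -C Xvfb -o args= instead:

```sh
line='Xvfb :99 -screen 0 1024x768x24 -auth /tmp/xvfb-run.nJKLnF/Xauthority'
auth=$(printf '%s\n' "$line" | sed -n 's/.*-auth *\([^ ]*\).*/\1/p')
echo "$auth"
```

and then x11vnc can be started as XAUTHORITY="$auth" x11vnc -display :99 -usepw -forever.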
| How to connect to X app via VNC which was by xvfb-run? |
1,498,585,056,000 |
I'm having a problem with my system that I've been hitting for some time, and I looked for people with a similar problem but found none. I use 3 monitors in portrait mode, 24" each (1920x1080, 1920x1200, 1920x1080).
What I want What I have
+------++--------++------+ +------+ +--------+ +------+
| || || | | | | | | |
| || || | | | | | | |
| 1 || 2 || 3 | | 1 | | 2 | | 3 |
| || || | | | | | | |
| || || | | | | | | |
+------++--------++------+ +------+ +--------+ +------+
I keep on getting a weird virtual gap between them and I can lose icons and opened window apps there. I have a GTX1080Ti for GPU and My DE: MATE 1.18.2.. I want to have the gap gone like the monitors are really close to each other, I would like the reverse action of this post Stretch window over two monitors with "gap" in between . I think something might be done with xrandr here but I just can't figure it out, here is my config:
Screen 0: minimum 8 x 8, current 4920 x 1920, maximum 32767 x 32767
DVI-D-0 connected primary 1200x1920+1920+0 left (normal left inverted right x axis y axis) 519mm x 324mm
1920x1200 59.95*+
1680x1050 59.95
1600x1200 60.00
1280x1024 75.02 60.02
1152x864 75.00
1024x768 75.03 60.00
800x600 75.00 60.32
640x480 75.00 59.94
HDMI-0 connected 1080x1920+0+0 left (normal left inverted right x axis y axis) 531mm x 299mm
1920x1080 60.00*+
1680x1050 59.95
1440x900 59.89
1280x1024 75.02 60.02
1280x960 60.00
1280x800 59.81
1280x720 60.00
1152x864 75.00
1024x768 75.03 70.07 60.00
800x600 75.00 72.19 60.32 56.25
640x480 75.00 72.81 59.94
HDMI-1 connected 1080x1920+3840+0 right (normal left inverted right x axis y axis) 368mm x 207mm
1920x1080 60.00*+
1680x1050 59.95
1600x900 60.00
1400x1050 59.98
1280x1024 75.02 60.02
1280x960 60.00
1280x800 59.81
1280x720 60.00
1152x864 60.00
1024x768 75.03 70.07 60.00
800x600 75.00 72.19 60.32 56.25
640x480 75.00 72.81 59.94
DP-0 disconnected (normal left inverted right x axis y axis)
DP-1 disconnected (normal left inverted right x axis y axis)
DP-2 disconnected (normal left inverted right x axis y axis)
DP-3 disconnected (normal left inverted right x axis y axis)
|
1920 is the width in landscape mode, so becomes the height in portrait mode. Your config starts each monitor 1920 pixels from the start of the last monitor, so there's a 700-900 pixel gap between them:
+0 +1920 +3840
+------+ +--------+ +------+ total: 4920w x 1920h
| | | | | |
|1080w | |1200w | |1080w |
| 1 | | 2 | | 3 |
| | | | | |
+------+ +--------+ +------+
| | | |
840 gaps 720
The overall geometry is in the xrandr output. Notice the starting X positions are multiples of 1920 -- you'd expect that in landscape mode, but not in portrait where the widths are smaller:
Screen 0: minimum 8 x 8, current 4920 x 1920 [<== total]
DVI-D-0 connected primary 1200x1920+1920+0 left [<== WidthxHeight+StartX+StartY]
...
HDMI-0 connected 1080x1920+0+0 left [<== WidthxHeight+StartX+StartY]
...
HDMI-1 connected 1080x1920+3840+0 right [<== WidthxHeight+StartX+StartY]
...
What you want:
+0 +1080 +2280
+------++--------++------+ total: 3360w x 1920h
| || || |
|1080w ||1200w ||1080w |
| 1 || 2 || 3 |
| || || |
+------++--------++------+
You don't provide an exact xrandr command that you use to achieve what you have, but I think this will get what you want (I'm not certain about the rotations; they're from your xrandr output):
xrandr --output HDMI-0 --rotate left --pos 0x0 \
--output DVI-D-0 --rotate left --right-of HDMI-0 \
--output HDMI-1 --rotate right --right-of DVI-D-0
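If you'd rather pass explicit --pos values than chain --right-of, the X offsets are just cumulative portrait widths. A sketch with the widths from the question (1080, 1200, 1080):

```sh
w1=1080 w2=1200   # portrait widths of the first two heads
x1=0
x2=$((x1 + w1))
x3=$((x2 + w2))
echo "--pos ${x1}x0  --pos ${x2}x0  --pos ${x3}x0"
```

which yields --pos 0x0, --pos 1080x0 and --pos 2280x0 for HDMI-0, DVI-D-0 and HDMI-1 respectively.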
| Multiple monitor with virtual gap between them |
1,498,585,056,000 |
I know how to change brightness and gamma with xrandr:
xrandr --output eDP1 --brightness 0.8 --gamma "0.90:0.85:0.80"
but how can I change saturation, ie decrease the amount of color and move on the spectrum closer to black/white ?
I need a way to change this on the command line, not on the hardware settings of my monitor.
I am using Debian 10
|
The most convenient way (as of 2022) is via a tool named vibrant-cli (which should work on any X11 setup). The syntax is:
vibrant-cli OUTPUT [SATURATION]
Get or set saturation of output.
OUTPUT is the name of the X11 output. You can find this by running xrandr.
SATURATION is a floating point value between (including) 0.0 and (including) 4.0.
0.0 or 0 means monochrome
1.0 or 1 is normal color saturation (100%)
if empty the saturation will not be changed
e.g. to reduce saturation to 30% on my laptop I'd run
vibrant-cli eDP-1 0.3
libvibrant version 1.0.2
Saturation of eDP-1 is 0.300000
libvibrant identifies your graphics chipset and attempts to change the saturation via the known methods supported by the driver for that particular GPU. If your hardware/drivers don't support changing color vibrance, you'll get an error.
Note that Color Management for the DRM (Direct Rendering Manager) layer is rather recent:
Color Manager framework defines a color correction property for color space
transformation and Gamut mapping.
This property is called CTM (Color Transformation Matrix).
This patch adds a new structure in DRM layer for CTM. This structure can be used
by all user space agents to configure CTM coefficients for color correction.
So, in order to determine whether your platform supports color management via the open-source drivers[1] (i915 and amdgpu), you would run xrandr --properties. If there is no mention of CTM, or if it says CTM: 0, then your setup doesn't support changing saturation via CTM (maybe via other methods, see the note at the bottom of the post concerning nVidia).
If you have a line like CTM: 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1 e.g. for eDP-1 output:
..................
eDP-1 connected primary.....
..................
link-status: Good
supported: Good, Bad
CTM: 0 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 1
CONNECTOR_ID: 78
supported: 78
then your setup supports color management i.e. the property Color Transform Matrix can be set... This is still a job of vibrant-cli unless, of course, you want to do the math yourself... for instance, to set saturation to 0 (grayscale) the command is
xrandr --output eDP-1 --set CTM '1431655765,0,1431655765,0,1431655765,0,1431655765,0,1431655765,0,1431655765,0,1431655765,0,1431655765,0,1431655765,0'
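As for where those long integers come from: they look like 64-bit S31.32 fixed-point coefficients split into two 32-bit words, low word first — an assumption inferred from the DRM CTM property format, so verify it against your driver. Under that assumption each matrix entry f encodes as f x 2^32:

```sh
# Encode one CTM coefficient as "low,high" 32-bit words
# (assumed S31.32 fixed point).
encode() {
    awk -v f="$1" 'BEGIN {
        v = int(f * 4294967296)                 # f * 2^32
        printf "%d,%d\n", v % 4294967296, int(v / 4294967296)
    }'
}
encode 0.333333333333   # every entry of the grayscale matrix above
encode 1                # a diagonal entry of the identity matrix
```

The 1/3 entry comes out as 1431655765,0 — the pair repeated nine times in the grayscale command — and 1.0 comes out as 0,1, matching the CTM: 0 1 0 0 ... default shown by xrandr --properties.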
[1]: I only have access to those two platforms (and I have not tried the AMD proprietary driver - no idea if it supports CTM). For the nVidia GPUs, apparently there's a property called digital vibrance that can be set (when using the nVidia driver) via the nVidia control panel or in a terminal, running e.g. nvidia-settings -a [gpu:0]/DigitalVibrance[DFP-1]=235 (consult the manual for proper syntax). I don't know if nouveau supports the same property or not...
| xrandr: change saturation (less color, more black/white) |
1,498,585,056,000 |
I installed XQuartz on my Mac (Big Sur, v. 11.5.2) using the download available on https://www.xquartz.org/ (XQuartz-2.8.1.dmg), but I have been unable to use it. I've been failing to fix this issue for the past couple of weeks, and I'm really hoping someone can help point me in the right direction.
I've been using xeyes to test the installation and get the following response:
(base) magnoliafork ~ % xeyes
Error: Can't open display: :0.0
(base) magnoliafork ~ % echo "$DISPLAY"
:0.0
One of the ways in which I have tried fixing it is to set my DISPLAY inside my .zshrc file to :0, localhost=0, and just the number 0. I also tried putting my IP address in front of the :0.0 at one point. None of those solutions worked.
##### DISPLAY, for plotting
PATH="/opt/:$PATH"
export DISPLAY
DISPLAY=":0.0"
Someone in another thread recommended changing the default options in the sshd_config file, so I updated the X11 options as shown below:
#AllowAgentForwarding yes
#AllowTcpForwarding yes
#GatewayPorts no
X11Forwarding yes
X11UseForwarding yes
X11DisplayOffset 10
X11UseLocalhost yes
#PermitTTY yes
#PrintMotd yes
#PrintLastLog yes
#TCPKeepAlive yes
#PermitUserEnvironment no
#Compression delayed
#ClientAliveInterval 0
#ClientAliveCountMax 3
#UseDNS no
#PidFile /var/run/sshd.pid
#MaxStartups 10:30:100
#PermitTunnel no
#ChrootDirectory none
#VersionAddendum none
It's still not working, and I suspect the previous solution was to fix the case where you can get XQuartz to work locally but not through ssh. I can't even get it to work locally, and at this point, I have no idea what to try next. Any ideas would be really helpful!
Don't know if this is helpful, but I copied this from my console:
X11.app: do_start_x11_server(): argc=7
argv[0] = /opt/X11/bin/Xquartz
argv[1] = :0
argv[2] = -nolisten
argv[3] = tcp
argv[4] = -iglx
argv[5] = -auth
argv[6] = /Users/magnoliafork/.serverauth.2211
More info for comments:
When I run xeyes from the Apple terminal, the XQuartz icon does not pop up.
If I try to run XQuartz directly from the Apple terminal using the xquartz command, the XQuartz icon pops up, and then I get a problem report from Apple that says, "Cannot establish any listening sockets - Make sure an X server isn't already running"
I can run xeyes from the XQuartz terminal "xterm", but I would really prefer the Apple terminal since it is more functional.
|
Note: In an attempt to clearly differentiate between the Finder (and associated applications) and binaries run from the terminal, I am using Bold and code formatting respectively.
You seem to have two issues:
Launching XQuartz installation from the terminal
Setting the DISPLAY env var correctly
The first issue is that you installed XQuartz using a .dmg, rather than via brew, and installed it in the Finder. This means that you need to launch it either via the Finder or on the command line using
$ /Applications/Utilities/XQuartz.app/Contents/MacOS/X11 &
Note: My version of XQuartz was also installed via a .dmg and maybe because of this, I don't appear to have an xquartz command available to launch XQuartz via the Terminal - hence the usage of the X11 binary from within the XQuartz application bundle.
As it now appears that you are trying to run xeyes from an instance of the Terminal application, rather than from the xterm running inside XQuartz, the solution is quite simple.
You need to get the DISPLAY of your XQuartz and assign it to the DISPLAY of your Terminal application.
First launch XQuartz either from the Finder, or using the command line that I specified above.
In the xterm in XQuartz type
echo $DISPLAY
This should give you something like
bash-3.2$ echo $DISPLAY
/private/tmp/com.apple.launchd.8cSMuyvAKe/org.macosforge.xquartz:0
bash-3.2$
Now in the Terminal, type (substituting in your full DISPLAY value)
$ export DISPLAY=/private/tmp/com.apple.launchd.8cSMuyvAKe/org.macosforge.xquartz:0
and then
$ xeyes
and then xeyes should appear in the XQuartz session.
Note this works on XQuartz 2.7.11 running on High Sierra 10.13.6. You could have a Big Sur specific issue.
| How can I get xeyes to work? - XQuartz Display Error on Local System |
1,498,585,056,000 |
I'm trying to start cheese such that it runs on a specific head on a multi-head display setup. The application options include a --display=DISPLAY setting:
$ cheese --help
Usage:
cheese [OPTION...]
...
Application Options:
-w, --wide Start in wide mode
-d, --device=DEVICE Device to use as a camera
-v, --version Output version information and exit
-f, --fullscreen Start in fullscreen mode
--display=DISPLAY X display to use
I'm thinking that I can set the head with something like --display=:0.1, however only ":0" allows the app to start (on the wrong head).
What argument parameter would force this to start on another head / display on the same workstation?
|
The syntax :0.NUMBER specifies a screen number on display 0. The concept of screen was intended to describe multiple monitors on the same display, but an application can't be moved from one display to another, so it's been pretty much abandoned. Your monitors are all on screen 0, i.e. :0.0, which is equivalent to :0 since the only screen is the default screen.
All the monitors are placed on a rectangular canvas. In a two-monitor configuration, one monitor has its top left corner at position 0x0 and the other has its top left corner at position 0xH (vertical arrangement) or Wx0 (horizontal arrangement) where WxH is the size of the first monitor.
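So the x offset that places a window on monitor N in a horizontal row is just the sum of the widths of the monitors to its left. Trivial, but worth spelling out (the monitor widths below are made up):

```python
def x_offsets(widths):
    """Left-edge x coordinate of each monitor laid out left to right."""
    offsets, x = [], 0
    for w in widths:
        offsets.append(x)
        x += w
    return offsets

# Two 1600-wide monitors plus a 1920-wide one (made-up sizes):
print(x_offsets([1600, 1600, 1920]))  # -> [0, 1600, 3200]
```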
Well-behaved applications take an option -geometry or --geometry that allows the user to specify the position and size of the application's main window. For example, with two 1600x1200 monitors in a horizontal arrangement, xterm -geometry +1600+0 launches xterm at the top left of the right-hand monitor. Unfortunately Cheese is not well-behaved¹.
If you always want to run the application at a specific position, you can use Devil's Pie to make it reposition the window when it's created. With devilspie2, create a file ~/.config/devilspie2/cheese-geometry.lua containing
if (get_application_name() == "Cheese" and get_window_name() == "Cheese") then
set_window_position(1600, 0);
end
With the original Devil's Pie, create a file ~/.devilspie/cheese-geometry.ds containing
(if (and (is (application_name) "Cheese") (is (window_name) "Cheese"))
(geometry "+100+1"))
Note that your window manager may override the position — the window manager has final say when positioning windows. If your window manager overrides the position, hopefully it has a way to configure it.
¹ It's a GNOME application. GNOME believes in removing any ability for users to choose how programs behave.
| How to start an app on specific head? |
1,498,585,056,000 |
I have a problem with my console drawing lines. The problem is that when I connect through ssh to the server, everything draws OK. But when I use the up arrow key to show the last used commands, it often leaves a few characters after the $ sign:
user@host:~$ cd /var/www_vhosts/
user@host:/var/www_vhosts$ ls -la instalator-paczek/
razem 16
drwxrwxr-x 3 root root 4096 02-20 10:48 .
drwxr-xr-x 13 root root 4096 05-17 11:11 ..
-rwxrw-r-- 1 root developers 3380 05-29 11:03 instalator-paczek.sh
drwxrwxr-x 2 user developers 4096 05-29 11:03 logi
user@host:/var/www_vhosts$ cd instalator-paczek/
user@host:/var/www_vhosts/instalator-paczek$ nano instalator-paczek.sh
user@host:/var/www_vhosts/instalator-paczek$ cp -r inst
user@host:/var/www_vhosts/instalator-paczek$ nano /etc/issue
user@host:/var/www_vhosts/instalator-paczek$ uname
Linux
user@host:/var/www_vhosts/instalator-paczek$ uname -a
Linux host 2.6.32-5-amd64 #1 SMP Sun Sep 23 10:07:46 UTC 2012 x86_64 GNU/Linux
user@host:/var/www_vhosts/instalator-paczek$ cat /etc/issue
Debian GNU/Linux 6.0 \n \l
# here I use up arrow key to display last used commands and everything is ok until it show last: "cp -r in..." command
# "cp -r inst" is shown and next last used commands are shown after those 10 characters. I displays them like:
# $ cp -r instls -la
# $ cp -r instcd directory
# etc.
# I tried using backspace but it clear characters just until "cp -r inst", not even one more
# prompt is shown correctly only after i press enter or ^C
user@host:/var/www_vhosts/instalator-paczek$ cp -r instnano /etc/issue
user@host:/var/www_vhosts/instalator-paczek$ ls
instalator-paczek.sh logi
What is even more irritating is what happens when I type more characters than the console width. The cursor somehow goes back to the beginning of the line and overwrites the prompt:
/d/asd/as/d/asd/asd/as/d/asd/asdww_vhosts/instalator-paczek$ ls -la asdkasdasdasd/asdasdasd/asdasdasdasdasdas/dasdsdaas/d/asd/as/d/asd/as
I can't use any long commands because the display overwrites the beginning and I can't see if I typed everything correctly. A long command pasted into the console works even if the display is corrupted. It's as if only the display is corrupted, but the command itself is fine.
I have Ubuntu 12.10 and use Konsole as my console application. On the server there is Debian GNU/Linux 6.0 and xterm.
user@host:/var/www_vhosts/instalator-paczek$ echo $TERM
xterm
It only happens with this server; other servers I connect to work fine with Konsole.
How do I fix this?
EDIT
Is it possible that those errors are occurring because there is no xterm installed on the server and there is no resize command?
user@host:~$ stty -a
speed 38400 baud; rows 57; columns 151; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon ixoff -iuclc -ixany -imaxbel -iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke
user@host:~$ stty -a
speed 38400 baud; rows 57; columns 172; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V;
flush = ^O; min = 1; time = 0;
-parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts
-ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon ixoff -iuclc -ixany -imaxbel -iutf8
opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke
It seems that resizing the console window also changes the stty settings.
EDIT2
I've installed the xterm package on the server. I've logged out and logged in again, but the problem is still the same. Should I restart the server after installing xterm, or something?
EDIT3
Solution here
|
OK, I finally found the solution. The problem was that $PS1 didn't have its color sequences enclosed in \[ and \]. Before correcting, $PS1 was:
export PS1='\e[1;32m\u@\h:\w$ \e[m'
after fix:
export PS1='\[\e[1;32m\]\u@\h:\w$ \[\e[m\]'
I found solution here: https://stackoverflow.com/questions/2024884/commandline-overwrites-itself-when-the-commands-get-to-long
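For intuition about why the \[ \] markers matter: readline tracks the cursor using the prompt's length, and only bytes bracketed by \[ \] are counted as zero-width. Without them, readline thinks the prompt is longer than it looks, which produces exactly the misplaced-redraw behaviour above. A small illustration (plain Python; the regex is mine and only covers color escapes):

```python
import re

# CSI "m" (color) escape sequences such as "\x1b[1;32m" occupy no columns
# on screen, but are ordinary characters to len().
ANSI_COLOR = re.compile(r'\x1b\[[0-9;]*m')

def visible_len(prompt: str) -> int:
    """Prompt length as it appears on screen, with color escapes stripped."""
    return len(ANSI_COLOR.sub('', prompt))

broken = '\x1b[1;32muser@host:~$ \x1b[m'   # the pre-fix prompt, no \[ \]
print(len(broken), visible_len(broken))    # -> 23 13
```

The ten-character difference between those two numbers matches the ten stray characters ("cp -r inst") left behind after the prompt.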
| incorrect lines display in ssh console |
1,498,585,056,000 |
I'm trying to open netbeans as a different user, but it isn't working.
I'm running a Kubuntu 12.04 LTS with KDE.
And I tried the following:
Open a terminal, type su - myotheruser, type the myotheruser password.
Then I tried to open netbeans: /opt/netbeans/7.3/bin/netbeans (netbeans is installed in /opt/netbeans/7.3).
I got the following:
Erro: Can't connect to X11 window server using ':0' as the value of the DISPLAY variable.
I tried some commands like export DISPLAY=":0.0", xhost +, xhost +local:all and other commands related here and here. None worked.
It is important to me that myotheruser not be in the sudoers file.
If I end the session with mycurrentuser and logon with the myotheruser I can easily open netbeans.
I need to open netbeans as myotheruser because I would like to work on a project that belongs to this user and to it alone. If I change permissions, it looks like the user will lose that exclusivity when I put the project under version control.
So, how can I solve this? How can I open netbeans as a different user inside of another user display?
|
You could always try the following:
ssh -Y otheruser@localhost "/opt/netbeans/7.3/bin/netbeans"
:)
| How to open a program as another user inside a logged display? |
1,498,585,056,000 |
I am running a database server on my computer. Sometimes it takes up so much memory that the system stops responding to keyboard and mouse input. Although I can move the mouse pointer and toggle Caps Lock, I can't do anything else beyond that.
Surprisingly, I am able to SSH to the computer, run the top command, kill a few processes and shut down the database server to reclaim memory space. But even having done those things, the display remains in a semi-frozen state (mouse still animated).
Having reclaimed most of my memory, is it possible to regain usage of the system without having to reboot?
|
Freeing up resources should generally return the system to a normal functioning state, so it sounds to me like the system is still struggling to free up resources or hasn't fully followed through on killing these processes. I'd investigate further to find out if that's in fact the case. You can see, for example, if something is still writing data to the HDD. There are several tools that can assist with this; I'd start with fatrace to see if you can identify a process that's trying to finish writing data to the disk.
Example
$ sudo fatrace | head -10
chrome(29486): W /home/saml/.config/google-chrome/Default/Extension State/017912.log
chrome(29486): CW /home/saml/.config/google-chrome/Default/File System/000/p/.usage
chrome(29486): W /home/saml/.config/google-chrome/Default/Extension State/017912.log
chrome(29486): W /home/saml/.config/google-chrome/Default/Extension State/017912.log
chrome(29486): W /home/saml/.config/google-chrome/Default/History-journal
chrome(29486): W /home/saml/.config/google-chrome/Default/History
chrome(29486): W /home/saml/.config/google-chrome/Default/History
chrome(29486): W /home/saml/.config/google-chrome/Default/History
chrome(29486): W /home/saml/.config/google-chrome/Default/History-journal
chrome(29486): W /home/saml/.config/google-chrome/Default/Extension State/017912.log
You'll want to run that command without the | head -10, that's just to show you the example here.
So what's wrong?
If you've ever looked at the output of free you've likely noticed the columns buffers and cache.
$ free
total used free shared buffers cached
Mem: 7969084 6673652 1295432 0 118588 893916
-/+ buffers/cache: 5661148 2307936
Swap: 8011772 3104804 4906968
These are files that the system aggressively loads into memory to maximize performance, using as much RAM as it can for this task. When the DB process (or whichever one is consuming RAM) grew, these files were pushed to swap (I'm assuming) and now cannot come back in, since this other task is occupying the HDD's I/O.
What can be done to mitigate this?
One trick is to adjust the VM dirty ratio and VM dirty background ratio, which forces the system to start writing dirty pages of memory out to disk sooner. This activity is often what causes a system to seemingly hang, especially in the UI. There are other reasons, but this is one of the more frequent ones.
excerpt
By default the VM dirty ratio is set to 20% and the background dirty ratio is set to 10%. This means that when 10% of memory is filled with dirty pages (cached data which has to be flushed to disk), the kernel will start writing out the data to disk into the background, without interrupting processes. If the amount of dirty pages raises up to 20%, processes will be forced to write out data to disk and cannot continue other work before they have done so.
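To put those percentages into rough byte figures, here is a quick sketch assuming, purely for illustration, a machine with 8 GiB of RAM (the kernel's exact accounting is based on available rather than total memory, so treat these as ballpark numbers):

```python
GiB = 1024 ** 3
total_ram = 8 * GiB   # illustrative figure only, not read from the system

def dirty_threshold(ratio_percent: int, total_bytes: int) -> int:
    """Bytes of dirty page cache at which the given ratio takes effect."""
    return total_bytes * ratio_percent // 100

# Background writeback starts at 10%, blocking (foreground) writeback at 20%.
print(dirty_threshold(10, total_ram) // 2**20, 'MiB')  # -> 819 MiB
print(dirty_threshold(20, total_ram) // 2**20, 'MiB')  # -> 1638 MiB
```

So on such a box, well over a gigabyte of dirty data can pile up before processes are forced to stop and flush, which is a lot of pending I/O for a struggling disk.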
Here's how you can check on your system's current settings:
$ sudo sysctl -a | grep 'dirty.*ratio'
vm.dirty_background_ratio = 10
vm.dirty_ratio = 20
To override these settings you can create the following file, /etc/sysctl.d/dirty_ratio.conf with the following content:
vm.dirty_ratio = 10
vm.dirty_background_ratio = 5
This will cause your system to be more aggressive about writing changes out as the occur. You can activate these changes immediately like so:
$ sudo sysctl -p /etc/sysctl.d/dirty_ratio.conf
Will this resolve the issue?
In my experience you can tweak these values but the true issue is your system is likely just not up to the task(s) you're asking it to perform.
References
LINUX PERFORMANCE IMPROVEMENTS
| How to regain control of my computer? |
1,498,585,056,000 |
I am developing a personal project/idea for a headless Raspberry Pi that works without a GUI display. I am working on a text graphicsesque design.
As the Raspberry Pi allows one to plug into almost any screen, I want to be able to determine the screen's resolution so I can create an optimal display.
My problem lies in actually getting the screen's resolution. I have tried a few methods in Python, such as messing with xrandr and the Tkinter module, but both have the same problem: there technically isn't a screen, just a console.
Text editors like vim seem to be able to fit themselves without any problem.
Can I get a screen's resolution without having to load up a display?
All suggestions welcome, but my proficiencies are in the Python, C++, Java and Bash range.
|
You can look in /sys/class/drm/card*/*/modes:
for card in /sys/class/drm/card*/* ; do
echo "$card: $(head -n 1 $card/modes)"
done
should output something like
/sys/class/drm/card0/card0-LVDS-1: 1024x768
/sys/class/drm/card0/card0-VGA-1: 1280x1024
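Since the question asks for Python: the same sysfs files can be read directly without any display. A sketch (path layout as in the shell loop above; connector names like card0-VGA-1 will vary per machine):

```python
from pathlib import Path

def connector_modes(drm_root='/sys/class/drm'):
    """Return {connector_name: preferred_mode} for every connector that
    exposes a non-empty 'modes' file, e.g. {'card0-VGA-1': '1280x1024'}.
    The first line of 'modes' is the preferred mode."""
    result = {}
    for modes_file in sorted(Path(drm_root).glob('card*/card*-*/modes')):
        lines = modes_file.read_text().splitlines()
        if lines:
            result[modes_file.parent.name] = lines[0]
    return result

# Example: print what the kernel reports; no X server needed.
for connector, mode in connector_modes().items():
    print(f'{connector}: {mode}')
```

Connectors with nothing plugged in typically have an empty modes file and are simply skipped.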
| Finding screen resolutions without $DISPLAY through python or shell scripting |
1,498,585,056,000 |
Running an Ubuntu-based OS on my laptop, I have an external HDMI display that is not detected. I tried lspci and xrandr, but the display stays "disconnected".
How can I detect the display and extend my desktop ?
Thanks
lspci -d ::0300 -nnk
00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 620 [8086:5917] (rev 07)
DeviceName: Onboard IGD
Subsystem: Hewlett-Packard Company UHD Graphics 620 [103c:83b2]
Kernel driver in use: i915
Kernel modules: i915
lspci -d ::0302 -nnk
No result
sudo dmidecode -s system-product-name
HP EliteBook 850 G5
|
OK, so you have an HP Elitebook 850 G5, which does not seem to have the optional discrete Radeon R59 GPU installed, and so it's working with just the Intel GPU that's integrated into the processor. So it should be pretty straightforward, with none of the complications of multiple GPUs.
If the HDMI port is listed as "disconnected" in the xrandr output, (e.g. as HDMI-1), then please verify:
the cable is good (i.e. it works with some other computer/display)
the display is powered on and (if applicable) has the correct HDMI input selected
the cable is firmly plugged in at both ends
If xrandr still shows the HDMI display as disconnected, then there might be a hardware problem.
But if it shows the HDMI as connected but disabled, it's waiting for activation. In that case, you would need to do something like xrandr --output HDMI-1 --auto (or the equivalent action in the GUI configuration panel of your desktop environment) to enable it. If working on the command line, you might want to add an option like --left-of eDP-1 to specify the location of the new display relative to the laptop's built-in display, or --same-as eDP-1 if you want the same view on both displays.
| Ubuntu, detect new display |
1,498,585,056,000 |
I've installed Arch Linux and Plasma Desktop on one of my old PCs (12 years old or more): an AMD Phenom II quad core with an ATI Radeon 3000 and 4 GB RAM. On a Samsung SyncMaster S19B150 (max resolution 1366x768) it works, but fonts are choppy, so I wanted to test it on a Lenovo ThinkVision L1900p 19-inch monitor, BUT it displays the message "input signal out of range"! The Lenovo actually works fine on another old (15 years or more) Windows machine with an Intel dual core.
How can I get rid of that error on the Lenovo and make Plasma Desktop work on that monitor?
EDIT
I'm not sure whether it's an Arch-specific problem. If I plug in a live Ubuntu USB stick or a live KDE neon USB stick, it works and fonts everywhere seem alright, BUT the screen resolution is low!
I have no problem with this monitor on Windows at 1280x1024.
Edit
It must be an Arch-specific problem, or else I don't know how to resolve this issue on Arch yet! Void Linux with Plasma Desktop works fine on this monitor, except for that low 1024x768 resolution.
|
The solution to this problem was to change the VGA connector. I don't know why it didn't come to mind sooner: I just took the VGA connector off the SyncMaster and plugged it into the Lenovo! Now I can see Arch with Plasma Desktop at its optimum resolution of 1280x1024, and on this monitor I don't have the choppy font issue.
| Input Signal Out of Range |
1,498,585,056,000 |
I'm trying to unset or change the default Windows+P (or to be more exact Meta+P) "re-detect monitors" cinnamon keybinding via gsettings, but having a hard time finding it in the gsettings tree.
In the keyboard settings GUI it is shown under System > Devices > Re-detect display (roughly translated from German).
Already went through nearly all of the org.cinnamon* and also org.cinnamon.settings-daemon* trees, also looked at org.gnome.settings-daemon* but can't find it. (Gladly gsettings supports tab completion, otherwise this would be pretty difficult).
|
After searching a bit more using gsettings list-recursively, it seems I've found the shortcut path. This is it, with the default shortcuts, under Cinnamon 2.8:
gsettings list-recursively org.cinnamon.desktop.keybindings.media-keys | grep -i display
org.cinnamon.desktop.keybindings.media-keys video-outputs ['<Super>p', 'XF86Display']
Removed the pesky Super+P default shortcut via
gsettings set org.cinnamon.desktop.keybindings.media-keys video-outputs '["XF86Display"]'
but the change is not reflected in the shortcuts GUI until you close it and reopen it.
For completeness' sake, most of the default Cinnamon keybindings, sorted alphabetically by internal name/function, can be obtained via
gsettings list-recursively org.cinnamon.desktop.keybindings.media-keys | sort -k2
and the ones related to muffin (the Cinnamon window manager) can be obtained via
gsettings list-recursively org.cinnamon.desktop.keybindings.wm | sort -k2
EDIT: Mappings of internal names to the names of the shortcuts shown in the GUI and the mapping of the names from the settings tree in the GUI to the gsettings paths and other valuable information can be obtained from the python script that actually does all the work under the hood:
/usr/share/cinnamon/cinnamon-settings/modules/cs_keyboard.py
| Unset or change default "re-detect monitors" windows+p cinnamon keybinding using gsettings |
1,498,585,056,000 |
I have two GPUs, a GTX 1070 and a GT 710. I have only one display, and I would like this display to run off the GT 710 so that I can continue to work while I am training models using CUDA. I have been at this for quite a few hours, and the furthest I have been able to get is to boot into Mint in "fallback mode" with the monitor connected to the GT 710.
I have been following the instructions here:
https://forums.developer.nvidia.com/t/how-do-i-set-one-gpu-for-display-and-the-other-two-gpus-for-cuda-computing/49113
My system information is as follows
I have tried two methods.
1) First attempt: As suggested by user "birdie" in the link above, I created the file nvidia.conf in the directory /etc/X11/xorg.conf.d, with the following content:
Section "Device"
Identifier "GT710"
BusID "PCI:5:0:0" # my Bus ID for gt710
Driver "nvidia"
VendorName "NVIDIA"
EndSection
Then I went to xorg.conf in /etc/X11/ and modified the entry for screen as follows
Section "Screen"
Identifier "Screen0"
Device "GT710" #modified here
Monitor "Monitor0"
DefaultDepth 24
Option "Stereo" "0"
Option "nvidiaXineramaInfoOrder" "DFP-6"
Option "metamodes" "2560x1440_75 +0+0"
Option "SLI" "Off"
Option "MultiGPU" "Off"
Option "BaseMosaic" "off"
SubSection "Display"
Depth 24
EndSubSection
EndSection
By doing this I was able to boot into mint in fallback mode with my display connected to the GT710.
2) Second attempt: I created a second device entry in /etc/X11/xorg.conf:
Section "Device"
Identifier "Device1"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BusID "PCI:5:0:0"
BoardName "GeForce GT 710"
option "AllowEmptyInitialConfiguration"
EndSection
Then I edited the Screen entry in /etc/X11/xorg.conf as follows:
Section "Screen"
Identifier "Screen0"
Device "Device1" #edited here
Monitor "Monitor0"
DefaultDepth 24
Option "Stereo" "0"
Option "nvidiaXineramaInfoOrder" "DFP-6"
Option "metamodes" "2560x1440_75 +0+0"
Option "SLI" "Off"
Option "MultiGPU" "Off"
Option "BaseMosaic" "off"
SubSection "Display"
Depth 24
EndSubSection
EndSection
Again I was able to boot into mint but only in fallback mode when connected to the GT710.
I would appreciate any help in making this work.
thank you
|
I solved the problem. The solution is to use approach #2 and edit /etc/X11/xorg.conf to add the second GPU as shown above. Then, under Section "Screen", change "MultiGPU" to "on".
More details can be seen here.
I will post my new xorg.con in case it helps anyone in the future
# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings: version 440.82
Section "ServerLayout"
Identifier "Layout0"
Screen 0 "Screen0" 0 0
InputDevice "Keyboard0" "CoreKeyboard"
InputDevice "Mouse0" "CorePointer"
Option "Xinerama" "0"
EndSection
Section "Files"
EndSection
Section "Module"
Load "dbe"
Load "extmod"
Load "type1"
Load "freetype"
Load "glx"
EndSection
Section "InputDevice"
# generated from default
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/psaux"
Option "Emulate3Buttons" "no"
Option "ZAxisMapping" "4 5"
EndSection
Section "InputDevice"
# generated from default
Identifier "Keyboard0"
Driver "kbd"
EndSection
Section "Monitor"
# HorizSync source: edid, VertRefresh source: edid
Identifier "Monitor0"
VendorName "Unknown"
ModelName "Philips PHL 325E1"
HorizSync 114.0 - 114.0
VertRefresh 48.0 - 75.0
Option "DPMS"
EndSection
#makes gtx 1070 work on display
Section "Device"
Identifier "Device0"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BoardName "GeForce GTX 1070"
EndSection
#added to make GT710 run my Display doesnt work
Section "Device"
Identifier "Device1"
Driver "nvidia"
VendorName "NVIDIA Corporation"
BusID "PCI:5:0:0"
option "AllowEmptyInitialConfiguration"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Device1"
Monitor "Monitor0"
DefaultDepth 24
Option "Stereo" "0"
Option "nvidiaXineramaInfoOrder" "DFP-6"
Option "metamodes" "2560x1440_75 +0+0"
Option "SLI" "Off"
Option "MultiGPU" "on" #"Off" #CHANGE APPLIED HERE
Option "BaseMosaic" "off"
SubSection "Display"
Depth 24
EndSubSection
EndSection
| How do I use my secondary GPU for display output and primary only for computation? |
1,498,585,056,000 |
I'm trying to troubleshoot something for my infrastructure team around the use of EGL as a backend for the VirtualGL program. I believe my issues come from a missing /dev/dri/renderD128 device file on CentOS 7. What is supposed to be done to create this renderD128 file? All I see in /dev/dri is card0.
The gpu we are using is Nvidia and the most current driver is installed for the Tesla P100. I see all of the typical GPU device files in /dev/nvidia* such as nvidia-uvm nvidiactl ... If more specifics are needed I can try to find them out from the rest of the team, such as what flags were passed when the Nvidia driver's .run file was executed.
I'm not 100% convinced there's a problem with the driver install, because I read at https://forums.unraid.net/topic/72829-hardware-transcoding-plex-transcoding-not-working-renderd128-missing/ that BIOS settings were what needed amending before renderD128 showed up.
|
I had this issue on Ubuntu 20.04 LTS, and it turned out that I was running an older version of the Linux kernel (5.4) that didn't fully support my CPU (Rocket Lake). Only after updating the Linux kernel to a newer version (5.11), along with a reboot, did /dev/dri/renderD128 finally appear.
So I would recommend making sure you're on the latest Linux kernel. Hope that helps.
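A quick check you might script for this (a sketch; the 5.11 threshold is just what happened to work for my Rocket Lake box, not a documented minimum):

```python
import platform

def kernel_at_least(required, release=None):
    """True if the kernel release string (e.g. '5.4.0-91-generic')
    is at least the given (major, minor) tuple."""
    release = release or platform.release()
    major, minor = (int(part) for part in release.split('.')[:2])
    return (major, minor) >= tuple(required)

print(kernel_at_least((5, 11)))
```

On an older kernel this prints False, which is a hint to upgrade before digging further into driver or BIOS settings.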
| Unclear about why /dev/dri/renderD128 is missing |
1,498,585,056,000 |
I use Linux Mint. I did something and now my screen is too big and it moves as I move the mouse. How can I resize the screen such that it returns to the original size where everything fits onto the screen? I'm using XFCE.
Ctrl+- doesn't solve the issue,
and neither does Ctrl + mouse scroll wheel.
I was only able to change the size of web pages in Firefox.
Ctrl+Alt+F6 gives a console with good size.
|
I found the solution at https://www.makeuseof.com/tag/refresh-linux-desktop-without-rebooting/. Just open a terminal and run xfce4-panel -r && xfwm4 --replace
| Linux Mint screen too big |
1,498,585,056,000 |
I'd like to have a redshift-style effect on my laptop screen but where I truly remove all green and blue light, leaving only red. I've experimented with redshift, which doesn't seem to take it all the way, only redder, but neither it seems does xcalib... I had thought I could get this by running this:
xcalib -blue 1 0 1 -a
xcalib -green 1 0 1 -a
But that seems to turn the screen orange rather than red -- perhaps it reduces the contrast in both directions (so makes the bottom end brighter too?). I'm confident that I'm right, and that that isn't a trick of the light, because if i run instead
xcalib -red 1 99 1 -a
...then my black terminal turns definitely the sort of red that I wanted.
Is there any way that I can achieve what I want -- effectively, cut out all blue and green light?
|
You can.
I'm using xrandr & the following script to adjust gamma and brightness on all connected monitors, depending of time of day.
#!/bin/bash
current_hour=$(date +%k)
red_gamma=1.0
green_gamma=1.0
blue_gamma=1.0
brightness=1
if (( current_hour >= 6 && current_hour <= 18 )); then
# Daytime: normal blue light emissions, slight reduction in brightness
red_gamma=1.0
green_gamma=1.0
blue_gamma=1.0
brightness=0.9
else
# Nighttime: reduce blue light emissions & brightness
red_gamma=1.0
green_gamma=0.7
blue_gamma=0.4
brightness=0.5
fi
xrandr | grep " connected" | awk '{print $1}' | xargs -i xrandr --output {} --gamma $red_gamma:$green_gamma:$blue_gamma --brightness $brightness
You can also use "xgamma" for the same thing if you don't want to use xrandr.
usage: xgamma [-options]
where the available options are:
-display host:dpy or -d
-quiet or -q
-screen or -s
-version or -v
-gamma f.f Gamma Value
-rgamma f.f Red Gamma Value
-ggamma f.f Green Gamma Value
-bgamma f.f Blue Gamma Value
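For intuition about what these per-channel gamma values do: the driver builds a lookup table that is roughly out = in ** (1/gamma), so a gamma below 1.0 darkens that channel (a sketch; the exact curve the driver applies may differ):

```python
def apply_gamma(value: float, gamma: float) -> float:
    """Map a normalized channel intensity (0.0-1.0) through a gamma curve,
    roughly what the driver's lookup table does: out = in ** (1/gamma)."""
    return value ** (1.0 / gamma)

# With the night setting above (blue gamma 0.4), a mid-level blue
# component of 0.5 drops to about 0.18 -- hence the warm tint.
print(round(apply_gamma(0.5, 0.4), 2))  # -> 0.18
```

Note that a gamma curve always maps 0.0 to 0.0 and 1.0 to 1.0, which is why gamma alone can only reshape a channel, not remove it entirely; combining it with a low --brightness gets much closer to cutting the channel out.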
| How to remove *all* green and blue light: xcalib and redshift seem not to? |
1,498,585,056,000 |
I have installed libgtk-3-dev and wrote and compiled this code successfully(without errors I mean):
#include <gtk/gtk.h>
void destroy(void) {
gtk_main_quit();
}
int main (int argc, char** argv) {
GtkWidget* window;
GtkWidget* image;
gtk_init (&argc, &argv);
window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
image = gtk_image_new_from_file(argv[1]);
g_signal_connect(G_OBJECT (window), "destroy",
G_CALLBACK (destroy), NULL);
gtk_container_add(GTK_CONTAINER (window), image);
gtk_widget_show_all(window);
gtk_main();
return 0;
}
But after trying to run the executable it says:
(process:5771): Gtk-WARNING **: Locale not supported by C library.
Using the fallback 'C' locale.
Failed to connect to Mir: Failed to connect to server socket: No such file or directory
Unable to init server: Could not connect: Connection refused
(img:5771): Gtk-WARNING **: cannot open display:
I should say I use Ubuntu Server 16.04 and have installed the xorg, xserver-xorg-video-fbdev, and openbox packages too. I have a gray blank screen with a black mouse pointer and a right-click menu after boot.
EDIT:
I used this command to connect to my board: ssh [email protected] -X. Then the program worked and it opened the image via ./img 1.png, but on the laptop that I used for SSH! I wanted to open the image on my board's LCD, not on my laptop!
Also it gives me this message in terminal:
(process:1909): Gtk-WARNING **: Locale not supported by C library.
Using the fallback 'C' locale.
SOLUTION: I attached a keyboard to my board, opened its terminal (by
right-clicking inside its openbox window), and executed my program
successfully, and it showed my picture on the SPI LCD!
|
I attached a keyboard to my board, opened its terminal (by right-clicking inside its openbox window), and executed my program successfully, and it showed my picture on the SPI LCD!
| Failed to connect to Mir: Failed to connect to server socket: No such file or directory |
1,498,585,056,000 |
Is it possible to have xrandr set --right-of for the screen in workspace 1 and --same-as in workspace 2? (I'm using openbox.)
If you interested why I want to do this:
I'm giving a presentation, and I need to show my slides on the screen set --right-of mine while at the same time having access to my slide notes on my own monitor. The other thing is: I need to switch between the slides and my IDE, so I can write code and describe what it does while both I and the audience can see what's happening.
|
So I have a simple solution.
You can add a key binding to your 'rc.xml' that executes the GoToDesktop command and also execute the xrandr command.
Here is an example for the key combination Windows key + F1; it switches to desktop one and sets the monitors with --same-as.
<keybind key="W-F1">
<action name="GoToDesktop"><to>1</to></action>
<action name="Execute">
<command>xrandr --output DP1-2 --same-as eDP1</command>
</action>
</keybind>
Something similar can be done for Desktop two.
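For instance, the desktop-two counterpart for the asker's presentation case might use --right-of instead (the output names DP1-2/eDP1 are carried over from the example above; substitute your own from xrandr):

```xml
<keybind key="W-F2">
  <action name="GoToDesktop"><to>2</to></action>
  <action name="Execute">
    <command>xrandr --output DP1-2 --auto --right-of eDP1</command>
  </action>
</keybind>
```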
Sadly this only works when you press the specific key binding and not when you use another method for switching the desktop/workspace.
| Is it possible to have xrandr in different modes based on workspace? |
1,498,447,047,000 |
I am running an Ubuntu Server that boots to the console. After a long while, the screen goes blank and I cannot type or unblank the screen. Even the Caps Lock / Num Lock keys do not change state. I am, however, able to SSH into the computer.
Is there a way I can unblank the tty1 over SSH?
Is it possible to do this from the keyboard + not logged in + blank screen?
|
I found a working answer on Server Fault from reddit.
Even though I did disable blanking, the screen still goes blank, but I can bring it back with a key press. I can live with that for now.
The trick was to install console-tools and configure it from /etc/console-tools/config to have
BLANK_TIME=0
POWERDOWN_TIME=0
Added on 2013-11-15:
After an update, this broke down again! The display would blank and never unblank. This time I had to update the file in /etc/kbd/config. After fixing the file the computer remained on all night without blanking the display.
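On current systems the console-tools/kbd config files often no longer exist; a commonly used replacement (a hedged suggestion, as file paths vary by distro) is the kernel's consoleblank parameter, set via the bootloader config in /etc/default/grub:

```sh
# /etc/default/grub -- consoleblank=0 disables virtual-console blanking
GRUB_CMDLINE_LINUX_DEFAULT="quiet consoleblank=0"
```

Then regenerate the GRUB config (sudo update-grub on Debian/Ubuntu) and reboot; the active value can be read from /sys/module/kernel/parameters/consoleblank, where 0 means never blank.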
| Unblanking a Linux Terminal Display |
1,498,447,047,000 |
I have installed Fedora after using Windows 7. I have two screens connected to my computer: a TV and a 19" monitor.
I only want one screen active at a time. I'd like to be able to switch which screen is active using a simple mouse-click or keyboard combination. From the settings window, all I can see is how to clone/extend the screen, which is not what I want to do.
In Windows 7 when I wanted to switch between connected displays I typed Win+p.
|
It's not clear from your question that you know that you can toggle monitors on and off using the "on" slider in the "Displays" dialog you show. This is the standard way of enabling and disabling monitors in Gnome Shell.
You can create a quick key combination to toggle your monitors by using this script on github that was designed exactly for this purpose (see this article for more information). Copy the script and place it in your home directory. Make it executable by
chmod +x ~/toggle.sh
You can then bind this script to a custom keyboard shortcut using the keyboard system settings dialog. Click on the "Shortcuts" tab and then "Custom Shortcuts" in the left panel. Click the "+" sign to add a new shortcut and fill in the dialog like so:
Click "Apply" and then click on the "Disabled" word and press the key combination you want to map to toggling monitors.
| How to quickly switch between displays in fedora 18 |
1,498,447,047,000 |
I just installed Debian 11.6 and see that everything looks scaled up a lot.
I checked the size of the display and it is 800 x 600, which is quite big on my 4k monitor. But my having an NVIDIA RTX 3060 Ti should not trigger this, because that is some good hardware.
So obviously, I opened settings and went to the "Display" section to change the resolution, but I can only see the resolution, not change it.
How can I change the resolution of the display?
|
I just needed to install the drivers for my graphics card, and now I can get more display sizes in the settings Display section. Silly me!
| How to change Debian display resolution |
1,498,447,047,000 |
I pressed CtrlAltF1 to enter TTY1.
I worked a while mainly on vim, then my shell (bash) stops echoing.
If I type echo abc it does not get display.
Instead I get a screen like this:
How do I fix the problem?
|
ttys are complex beasts, which can work in several different modes. E.g. when running vi(1), you don't want characters typed to show up on screen; the editor is in charge of what is displayed. This is called "raw" mode. Usually you are in "echo" mode, in which the kernel sends what is typed to the screen directly. If a program which took over the details of the display crashes and doesn't restore the mode before exiting, all sorts of weird stuff gets displayed when typing. Another popular way to screw up the settings is to send a binary file (e.g. an executable or an image) to the screen; binary files are prone to contain the key sequences that change settings...
The way to restore the tty settings to normal is to run the command reset, which is done by ^Jreset^J (the ^J is ctrl-J, press the ctrl and J keys simultaneously).
ctrl-J is what C calls '\n', NEWLINE, it ends the previous line the shell was reading (if any); reset is the command; ctrl-J ends the line and makes the shell run the command. This nonsense is needed since the return key generates '\r', CARRIAGE RETURN, which the normal mode translates into '\n' for convenience.
Welcome to the intricacies of Unix roots.
| Why not display characters which I typed in tty? [duplicate] |
1,498,447,047,000 |
The machine I'm working from has many active X displays (one standard X server and many VNC displays). It is also running a handful of GUI applications, which appear on an X display.
Assuming I have the PID (using ps) is there a method to determine which X screen the process is using, or even the value the DISPLAY variable held when launched?
Even better if there's a method to show the value of DISPLAY for the process and all of its children process, in case some processes spawn their GUI as a child process.
|
If you have root access (or sudo ps) then you can display the environment of a process with the e option. Inside here you should be able to see the DISPLAY variable (if it's set). You probably need ww to ensure the output doesn't get truncated.
e.g.
% ps wwep $$ | tr ' ' '\012' | grep DISPLAY
DISPLAY=:0
So my current shell is talking to :0.
Many OS's protect the environment from other users (because it may leak sensitive information), so a normal user can only see their own process environments. root can see every user's.
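Where sudo is not available, the same information can be read on Linux straight from procfs (only your own processes are readable as a normal user). A sketch:

```shell
# Entries in /proc/<pid>/environ are NUL-separated; tr makes them one per line.
# $$ (the current shell) stands in for whatever PID you are inspecting.
tr '\0' '\n' < "/proc/$$/environ" | grep '^DISPLAY=' \
    || echo "no DISPLAY in the environment of PID $$"
```

As with ps, this shows the environment as it was when the process started, not any changes the process made later.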
| Check which display an application is using |
1,498,447,047,000 |
I have Ubuntu 14.04 and Fedora 20 as a test environment. I am trying to send X programs from Fedora to Ubuntu through an SSH session. My setup is as follows:
On Ubuntu ran:
xhost +<IP address of Fedora>
This returns:
10.10.24.153 being added to access control list
From the Fedora system I then run:
DISPLAY=10.10.25.168:0.0
This step redirects Fedora's Display to the Ubuntu system.
Then I ssh to Fedora from Ubuntu. At this point when I try to launch a program, such as gedit from the ssh prompt I get this: (and no gedit window appears)
Unable to init server: Could not connect: Connection refused (gedit:7358): Gtk-WARNING **: cannot open display:
HOWEVER, when I launch gedit directly from Fedora it appears on my Ubuntu system, albeit errors on the Fedora side:
** (gedit:7372): WARNING **: Couldn't connect to accessibility bus: Failed to connect to socket /tmp/dbus-77RFAr0MHI: Connection refused
I should be able to launch the X based program from the ssh session and it appear on screen. Anyone know what I am doing wrong?
|
If you want to run the X apps over your ssh session you just need to tell ssh to carry the traffic. Don't run xhost and don't override DISPLAY.
ssh -Yf remote-host some-x-application
The -Y flag tells ssh to create a DISPLAY environment variable on the remote-host and carry the resulting traffic across the encrypted connection. The -f flag tells ssh to do all this in the background so that it looks like the session has terminated immediately.
If you prefer to have an interactive shell visibly running on the end of the ssh connection you can also do this sort of thing:
ssh -Y remote-host
...log in as necessary...
some-x-application &
another-x-application &
...etc...
wait; exit
| Displaying X programs on remote systems |
1,498,447,047,000 |
I am running a Debian stable with the Cinnamon graphical interface 3.6.7 and my computer is connected to a multimedia projector. I have an Intel graphics card.
The projected image is too big and I can't change neither the place of my multimedia projector nor the place of my wall to reduce the size of the projected image.
Thus I would like to find a command line so that the resolution of the projected image is the same but such that a black band is at the border of my screen (see Figures below). I expect then that the projected image will have a smaller size.
Solution (@Ipor Sircer)
xrandr --output HDMI1 --fb 1620x880
Current configuration:
Expected configuration
|
Use xrandr to detect the default output. Then you can make a black border:
xrandr --output LVDS --set underscan on --set "underscan vborder" 100 --set "underscan hborder" 100
(not working with intel graphic card)
| Reduce size of my screen with a command line |
1,498,447,047,000 |
I was using the amdgpu-pro driver for a while on my (incompatible) elementaryOS 0.4.1 Loki computer before it bricked my machine and I had to reinstall in an attempt to revert back to the Nouveau drivers.
However, running sudo lshw -c video shows that I'm still running the AMD driver for some reason.
*-display
description: VGA compatible controller
product: Advanced Micro Devices, Inc. [AMD/ATI]
vendor: Advanced Micro Devices, Inc. [AMD/ATI]
physical id: 0
bus info: pci@0000:09:00.0
version: cf
width: 64 bits
clock: 33MHz
capabilities: pm pciexpress msi vga_controller bus_master cap_list rom
configuration: driver=amdgpu latency=0
resources: irq:238 memory:e0000000-efffffff memory:f0000000-f01fffff ioport:e000(size=256) memory:fe900000-fe93ffff memory:c0000-dffff
Most of my programs like Blender 3D are completely unusable due to this, despite the rest of the operating system displaying just fine.
When I reinstalled eOS, I was prompted to wipe my old drive partition and do a completely clean install, which I did, so maybe the computer still thinks I'm using the AMD driver from some data that isn't located on the main OS partition.
What went wrong? What should I do?
|
lshw says you’re using the amdgpu driver, which is the non-proprietary driver for recent AMD GPUs. It’s not the same as AMDGPU PRO, the proprietary driver.
Nouveau is the non-proprietary driver for NVIDIA GPUs, not for AMD GPUs.
| lshw says I'm using proprietary driver even though I reinstalled my OS to revert back to nouveau |
1,498,447,047,000 |
When I try to export the remote display I do:
In my Ubuntu 17.04:
xhost +
access control disabled, clients can connect from any host
Then I make the connection to the server:
ssh user@server
Once in the server get this error:
user@server:~$ export DISPLAY=my_ip:0.0
user@server:~$ xeyes
Error: Can't open display: my_ip:0.0
My gdm configuration is:
root@my_ip:/etc/gdm3# cat custom.conf
# GDM configuration storage
#
# See /usr/share/gdm/gdm.schemas for a list of available options.
[daemon]
# Uncoment the line below to force the login screen to use Xorg
#WaylandEnable=false
# Enabling automatic login
# AutomaticLoginEnable = true
# AutomaticLogin = user1
# Enabling timed login
# TimedLoginEnable = true
# TimedLogin = user1
# TimedLoginDelay = 10
[security]
DisallowTCP=false
[xdmcp]
Enable=true
DisplaysPerHost=10
[chooser]
[debug]
# Uncomment the line below to turn on debugging
# More verbose logs
# Additionally lets the X server dump core if it crashes
#Enable=true
GDM3 Version
gdm3 --version
GDM 3.24.0
netstat -puta
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:x11-1           0.0.0.0:*               LISTEN      1477/Xorg
ps fax | grep X
1211 tty1 Sl+ 0:01 | \_ /usr/lib/xorg/Xorg vt1 -displayfd 3 -auth /run/user/120/gdm/Xauthority -background none -noreset -keeptty -verbose 3
1477 tty2 Sl+ 0:35 \_ /usr/lib/xorg/Xorg vt2 -displayfd 3 -auth /run/user/1000/gdm/Xauthority -listen tcp -background none -noreset -keeptty -verbose 3
tcpdump
11:41:20.065425 IP server.41874 > my_ip.x11: Flags [S], seq 1478700027, win 29200, options [mss 1460,sackOK,TS val 22197350 ecr 0,nop,wscale 7], length 0
11:41:20.065447 IP my_ip.x11 > server.41874: Flags [R.], seq 0, ack 1478700028, win 0, length 0
|
Solved. The problem was with my DISPLAY number. I supposed that my DISPLAY was 0.0 but :
xdpyinfo | grep display
name of display: :1
So, the export DISPLAY must be:
export DISPLAY=my_ip:1
| Export Display is not working on Ubuntu Gnome (gmd3) |
1,498,447,047,000 |
On Ubuntu MATE 20.04, which I use as a live TV laptop OS, there seems to be a problem with some settings that make the screensaver run or the screen turn off; I don't know if both apply, or which one.
What is the goal:
To disable the screensaver forever or generally leave the screen on all the time.
I normally ssh to this machine, so I prefer script or command solutions.
What I did already: I looked everywhere in the standard settings, but no clue, nor any solution.
|
Autostart script solution
Before you start reading, make sure that in your Power Management preferences the display is set to never be put to sleep when inactive.
So, I did a lot of digging, and thanks to the official forum (source link) I got my answer:
Place the below script to this location:
~/.config/autostart/
Script:
#!/bin/sh
sleep 10 && xset -dpms s off s noblank s 0 0 s noexpose
Obviously, you can name it to your liking, e.g. disable_screensaver and you need to chmod 775 it.
Note that the sleep 10 can be adjusted to how fast your desktop loads; mine is slow, so...
| How to completely disable screensaver / turning off screen on Ubuntu MATE 20.04? |
1,498,447,047,000 |
I have 2 laptops, one Linux and one Windows. I want to use the Linux one as a second display but I can't find anything about it. I found this:
Using Linux machine as a monitor for a Windows machine
but Spacedesk doesn't support Linux anymore.
|
It looks to me like you have two different computers: one is running Windows and the other is running Linux. And you want to share the mouse and keyboard between them. Synergy looks like the application you need. You may find more applications which can share mouse, keyboard and clipboard between computers.
| Use Linux laptop as second display |
1,498,447,047,000 |
Should I imagine it like a TCP/UDP port (per-machine rather than per-user)?
Can I connect to another user's DISPLAY? Is it protected somehow?
Can I list the currently used DISPLAY numbers for one user?
Is it possible to find one free DISPLAY number that I can still use?
Where could I find out more about these?
|
Should I imagine it like a TCP/UDP port (per-machine rather than per-user)?
It actually is a unix domain socket for local users, and a TCP port (if enabled, on modern X servers it's disabled by default).
Can I connect to another user's DISPLAY?
Yes, with proper authorization. See xauth and xhost.
Can I list the currently used DISPLAY numbers for one user?
Display numbers are per X server, and not per user.
Is it possible to find one free DISPLAY number that I can still use?
ps axu | grep Xorg should list all X servers, you can see which display number they use. Or look at /tmp/.X11-unix/ to see the unix domain sockets. Possibly there are variations for this among distros.
In general you should have an idea how many X servers are running on your system, if you have root rights and configured it ...
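Following on from the /tmp/.X11-unix listing: a free display number can be probed with a small loop, assuming the conventional socket and lock-file locations (a sketch, not the only way):

```shell
# Each running X server normally owns /tmp/.X11-unix/X<n> and /tmp/.X<n>-lock;
# count upward from 0 until neither exists.
n=0
while [ -e "/tmp/.X11-unix/X$n" ] || [ -e "/tmp/.X$n-lock" ]; do
    n=$((n + 1))
done
echo "first free display number: :$n"
```

This is essentially what display managers and tools like xvfb-run -a do when picking a display automatically.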
| What are Xorg DISPLAY numbers? |
1,498,447,047,000 |
I want to write a linux driver which maps my specified memory address space to /dev/fb0.
Which part of Linux should the driver belong to: DRM, the framebuffer subsystem, the X server, or something else? Which properties should my driver have?
|
The driver is a linux kernel module.
Download the source of the linux kernel, have a look at the code of the existing framebuffer drivers in drivers/video/fbdev (github here) and the documentation in Documentation/fb (github). Google for tutorials how to write kernel modules, practice with a simple module first.
Just mapping memory won't be enough, you'll have to implement a few ioctls.
Writing kernel drivers is not easy. If you have to ask these kinds of questions (and you asked a lot in the past few days), you probably won't be able to do it.
X is a server for the X protocol. It can use hardware via the DRM kernel modules, and it can also use hardware via framebuffer drivers (with the fbdev X driver). Details about that are easy to find online, google. /dev/fb0 is a framebuffer device, so you don't need to concern yourself with X or DRM.
| mapping linux /dev/fb0 to DDR for displaying |
1,498,447,047,000 |
Sorry if I used wrong subject, but my problem is explained completely here: https://forum.armbian.com/topic/5561-how-to-configure-scriptbincustomfex-for-spi-lcd/?tab=comments#comment-42545
I did create .conf files within the /etc/modprobe.d and /etc/modules-load.d directories, and also in /usr/share/X11/xorg.conf.d, and after booting the system I can see my LCD turn on (only a blank black screen), just like when I ran this command:
sudo modprobe fbtft_device custom name=fb_ili9341 gpios=reset:1,dc:201,led:6 speed=16000000 rotate=90 bgr=1
But nothing more... I have to run the startx command manually to get a gray blank screen with a black mouse pointer and right-click ability!
But I would like this to happen automatically after booting! I searched my OS documents and found this:
script.bin/fex file
The settings in the [disp_init] section of the script.bin/fex file define the display output enabled at boot.
An example configuration for HDMI:
[disp_init]
disp_init_enable = 1
disp_mode = 0
screen0_output_type = 3
screen0_output_mode = 4
fb0_framebuffer_num = 2
fb0_format = 10
fb0_pixel_sequence = 0
fb0_scaler_mode_enable = 0
disp_mode selects single-screen output or different dual screen modes. Generally this is 0, which means use screen0 with fb0 (one screen).
screen0_output_type = 3 means HDMI output.
screen0_output_mode selects the video/monitor mode to use (resolution and refresh rate). See the table in the Fex guide.
fb0_framebuffer_num selects the number of buffers for fb0, generally you need 2 or more for video acceleration or Mali (3D), 3 is better.
fb0_format and fb0_pixel_sequence determine the pixel format in the framebuffer. The above example (values of 10 and 0) selects the most common variant of 32bpp truecolor (ARGB).
fb0_scaler_mode_enable selects whether the scaler should be enabled. Enabling it does not really scale pixels, it configures the scaler to scale pixels 1-to-1 which can fix screen refresh-related problems at 1080p resolution. See the section below.
Similar parameters are defined for screen1 (which is usually disabled in practice).
But I don't know how to change it. Also, I know my LCD uses fb8 as its framebuffer, and my OS is armbian-5.30 (Ubuntu server 16.04 ported to the allwinner-h3 NanoPi-M1 board).
Also, there is a guide here but I couldn't fully understand it: http://linux-sunxi.org/Fex_Guide#spi_configuration
|
Sunxi is not Ubuntu. Just put the startx command into /etc/rc.local to run it at boot time.
| How to force "startx" at startup? |
1,498,447,047,000 |
I have an ARM-based board(http://wiki.friendlyarm.com/wiki/index.php/NanoPi_M1) and use Ubuntu-server 16.04 on it. I have a 2.2" TFT-LCD with SPI connection, and use this framebuffer driver(https://github.com/notro/fbtft) to launch it. I can setup my LCD with this command:
sudo modprobe fbtft_device custom name=fb_ili9341 gpios=reset:1,dc:201,led:6 speed=16000000 rotate=90 bgr=1
And before, when I had Ubuntu desktop, I could switch the display with this command:
FRAMEBUFFER=/dev/fb8 startx
But in Ubuntu-server I get this error message:
X.Org X Server 1.18.4
Release Date: 2016-07-19
X Protocol Version 11, Revision 0
Build Operating System: Linux 4.4.0-97-generic armv7l Ubuntu
Current Operating System: Linux nanopim1 3.4.113-sun8i #16 SMP PREEMPT Tue Jun 13 14:15:57 CEST 2017 armv7l
Kernel command line: root=UUID=10b3b795-f372-4ea9-b78a-93ae9355c20c rootwait rootfstype=ext4 console=tty1 console=ttyS0,115200 hdmi.audio=EDID:0 disp.screen0_output_mode=1920x1080p60 panic=10 consoleblank=0 loglevel=1 ubootpart=bd75a2d6-01 ubootsource=mmc sunxi_ve_mem_reserve=0 sunxi_g2d_mem_reserve=0 sunxi_fb_mem_reserve=16 cgroup_enable=memory swapaccount=1
Build Date: 13 October 2017 01:59:44PM
xorg-server 2:1.18.4-0ubuntu0.7 (For technical support please see http://www.ubuntu.com/support)
Current version of pixman: 0.33.6
Before reporting problems, check http://wiki.x.org
to make sure that you have the latest version.
Markers: (--) probed, (**) from config file, (==) default setting,
(++) from command line, (!!) notice, (II) informational,
(WW) warning, (EE) error, (NI) not implemented, (??) unknown.
(==) Log file: "/var/log/Xorg.0.log", Time: Thu Oct 26 16:44:04 2017
(==) Using config directory: "/etc/X11/xorg.conf.d"
(==) Using system config directory "/usr/share/X11/xorg.conf.d"
(EE)
Fatal server error:
(EE) no screens found(EE)
(EE)
Please consult the The X.Org Foundation support
at http://wiki.x.org
for help.
(EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
(EE)
(EE) Server terminated with error (1). Closing log file.
And I have only a blank screen. I want to know how I can set this LCD as my board's default display and make it run.
In addition I must say I have installed xorg and openbox too.
EDIT:
I found this Raspberry Pi configuration file and tried it as a replacement in this file: /etc/X11/xorg.conf.d/01-armbian-defaults.conf :
# FBTFT xorg config file
#
# startx -- -layout TFT
# startx -- -layout HDMI
#
# startx
# When -layout is not set, the first is used: TFT
#
Section "ServerLayout"
Identifier "TFT"
Screen 0 "ScreenTFT"
EndSection
Section "ServerLayout"
Identifier "HDMI"
Screen 0 "ScreenHDMI"
EndSection
Section "Screen"
Identifier "ScreenHDMI"
Monitor "MonitorHDMI"
Device "DeviceHDMI"
Endsection
Section "Screen"
Identifier "ScreenTFT"
Monitor "MonitorTFT"
Device "DeviceTFT"
Endsection
Section "Monitor"
Identifier "MonitorHDMI"
Endsection
Section "Monitor"
Identifier "MonitorTFT"
Endsection
Section "Device"
Identifier "DeviceHDMI"
Driver "fbturbo"
Option "fbdev" "/dev/fb0"
Option "SwapbuffersWait" "true"
EndSection
Section "Device"
Identifier "DeviceTFT"
Option "fbdev" "/dev/fb1"
EndSection
But it didn't work.
This is the contents before replacing:
Section "Monitor"
Identifier "Monitor0"
Option "DPMS" "false"
EndSection
Section "ServerFlags"
Option "BlankTime" "0"
Option "StandbyTime" "0"
Option "SuspendTime" "0"
Option "OffTime" "0"
EndSection
|
If the command dpkg -l '*xserver-xorg-video*' | grep ^ii had no results, that means you didn't install the necessary video drivers. I just looked up that the modesetting driver is part of the core, so it should already be installed.
So install the framebuffer video driver with your favorite package manager, e.g. from the command line as root:
apt-get install xserver-xorg-video-fbdev
Also make sure /usr/lib/xorg/modules/drivers/ contains the modesetting driver (just in case).
Restart X, and see if the log output changes.
| How can I set my LCD as default LCD? |
1,498,447,047,000 |
I need to run a Java JAR Swing GUI executable in a Raspbian Wheezy Debian distribution inside an ARM device during boot time.
I am following this as a reference, with myapp, myapp-start.sh and myapp-stop.sh, and this with possible solutions (and others quite similar). But there is no reference to the DISPLAY variable there.
I've checked a lot of alternatives:
Tried and not applicable Options:
/usr/bin/java -jar -Djava.awt.headless=true $myapp.jar
unset DISPLAY (inside myapp-start.sh, above the java -jar sentence)
Errors:
java.awt.HeadlessException: No X11 DISPLAY variable was set, but this program performed an operation which requires it.
Tried Options (inside myapp-start.sh, above the java -jar sentence):
export DISPLAY=:0
export DISPLAY=:0.0
export DISPLAY=localhost:0.0
Errors:
Can't connect to X11 window server using ':0.0' as the value of the DISPLAY variable...
Client is not authorized to connect to ServerException in thread stack...
Untried Options
ssh -X localhost: how should I do an ssh to the X11 server? Where should I execute that under an init.d process? Is that the standard solution for running a Java program with a GUI?
USER=root inside myapp-start.sh: the init.d script stops and requests a password, so the process doesn't start.
Should any of the options above be included somewhere other than the myapp-start.sh code? Where?
Shouldn't it be simpler to run a single piece of code at startup?
Any other option will be appreciated.
EDIT 2015-04-12
New Options
In the following options, i am adding a code inside this location /etc/xdg/lxsession/LXDE-pi/autostart for execution after the default user pi logs and X11 starts (see jlliagre suggestion):
usr/bin/java -jar /home/pi/Embedded/bin/PowerBar.jar (no ampersand)
export DISPLAY=:0.0
usr/bin/java -jar /home/pi/Embedded/bin/PowerBar.jar (no ampersand)
/bin/bash /home/pi/Embedded/bin/powerbarstart.sh (no ampersand)
All of them start the application in the background, that is, the background music is played and the graphics are available only through a VNC at :0 (using TightVNC).
As a side effect, the screensaver activates and the application freezes every 60 seconds approx. Please note this same location is also used to disable the screensaver.
Is there an option or symbol I am missing?
Solution
The device was configured as :1.0 instead of :0.0. Changing this in the myapp-start.sh solved the issue.
|
If your application is not interactive, you might launch a virtual X11 server and set the DISPLAY variable for your application to use it.
Possible X11 servers that can be used that way are:
Xvfb
Xdummy
Xvnc
The latter allows you to connect later to see and interact with the screen with a VNC client (vncviewer).
If you Raspberry pi (or similar) is configured to autologin the pi user under a graphical environment, you can start your application as the pi user and use the :0 display. Beware that you have to make sure X11 has completed its startup before doing it.
Edit: It looks like your configuration is launching a Xvnc server first as the pi user then is launching the frame buffer main X server as the root user. In that case, as you figured out, your application has to be started as root and using :1 as its display.
Alternatively, if what you really want is not to start your application once at boot time but whenever a user (typically pi) logs in under a graphic environment, add it to the rc file applicable to this graphic environment. For example /etc/xdg/lxsession/LXDE-pi/autostart.
| How to setup DISPLAY to run a Java JAR Swing Executable from Init.d |
1,498,447,047,000 |
Let's assume the following:
main-playbook.yml
- name: Play-1
hosts: localhost
connection: local
gather_facts: no
roles:
- role: my-role
vars:
newhost: 192.168.1.1
generated_playbook.yml
- name: Play-1
hosts: newhost
gather_facts: yes
tasks:
- name: Task1
- name: Task2
- name: Task3
main task from the Role:
- name: "Role MAIN-1"
add_host:
name: newhost
ansible_host: "{{newhost}}"
- include: generated_playbook.yml
Error:
ERROR! conflicting action statements: hosts, tasks
The error appears to be in 'generated_playbook.yml': line 1, column 3, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- name: Generated Playbook
^ here
I create a new role and include this role in the main playbook.
In this role I add a new host to the in-memory inventory and then generate a new playbook from a j2 template; an example of the output is generated_playbook.yml above.
Question:
Is there a way to run this newly generated playbook only on the newly added host, without adding anything else to the main playbook?
I was trying to use import_playbook or include inside the role, but this fails.
|
It's not possible. Quoting from ansible.builtin.import_playbook
Files with a list of plays can only be included at the top level.
The example explicitly shows this
- name: This DOES NOT WORK
hosts: all
tasks:
- debug:
msg: task1
- name: This fails because I'm inside a play already
import_playbook: stuff.yaml
Notes
There is no include_playbook. See what include_* and import_* modules are available.
shell> ansible-doc -t module -l | grep include_
include_role Load and ...
include_tasks Dynamically inclu...
include_vars Load variables from files, dynamically...
shell> ansible-doc -t module -l | grep import_
import_playbook Imp...
import_role Import a ro...
import_tasks Impo...
Quoting note from import_module:
This is a core feature of Ansible, rather than a module, and cannot be overridden like a module.
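Since a playbook cannot be included from inside a role, the usual pattern for add_host is a second top-level play in main-playbook.yml itself. A sketch reusing the question's names (the debug task is only a placeholder):

```yaml
# main-playbook.yml: the role adds the host, a second play then targets it.
- name: Play-1
  hosts: localhost
  connection: local
  gather_facts: no
  roles:
    - role: my-role
      vars:
        newhost: 192.168.1.1

- name: Play-2
  hosts: newhost          # the in-memory host added by the role via add_host
  gather_facts: yes
  tasks:
    - name: Task1
      ansible.builtin.debug:
        msg: "running on {{ inventory_hostname }}"
```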
| Ansible: Run new playbook from task |
1,498,447,047,000 |
My question is related to a python error, but I suspect that it is more a Linux question than a python one. Thus I post it first here.
I am running a python script which does a calculation and then produces a plot and saves it in a PDF file. The script runs through on my local machine (Mac OS), but when I run it on the cluster of my workplace (Linux) it crashes when trying to produce the plot on the PDF with the following error:
Traceback (most recent call last):
File "<my_python_script>.py", line 496, in <module>
if __name__ == "__main__": main()
File "<my_python_script>.py", line 487, in main
plot(model, obsdata, popt, pdf_file)
File "<my_python_script>.py", line 455, in plot
plt.figure(figsize=(11.69, 8.27))
File "/usr/lib/python3/dist-packages/matplotlib/pyplot.py", line 535, in figure
**kwargs)
File "/usr/lib/python3/dist-packages/matplotlib/backends/backend_tkagg.py", line 81, in new_figure_manager
return new_figure_manager_given_figure(num, figure)
File "/usr/lib/python3/dist-packages/matplotlib/backends/backend_tkagg.py", line 89, in new_figure_manager_given_figure
window = Tk.Tk()
File "/usr/lib/python3.5/tkinter/__init__.py", line 1880, in __init__
self.tk = _tkinter.create(screenName, baseName, className, interactive, wantobjects, useTk, sync, use)
_tkinter.TclError: no display name and no $DISPLAY environment variable
Here I substituted <my_python_script> to abbreviate the unimportant path and name of my script.
It may or may not be related to the problem, but I should also mention, that the script is not run manually from the command line, but submitted to a slurm queue.
Unfortunately I really don't know enough about Unix/Linux to make this work on the cluster. Since it runs through on my local machine, I suspect it must have something to do with the settings on the cluster, and in particular with the settings related to my user. I suspect the latter because I have colleagues for whom the script runs fine on the very same cluster.
Does anyone have an idea?
|
This is an error with the python code using a library called “tk”. That’s a library usually used for showing a GUI so it expects to be able to access your display (xserver or similar).
If you are running your code on a “headless” server then this just won’t work because there’s no monitor and your session can’t talk to an xserver.
It looks like this is a known problem with the matplot library. See here https://github.com/matplotlib/matplotlib/issues/7115/#issuecomment-378288788
Apparently it’s as simple as setting an environment variable to change the matplot backend before you run your python script:
export MPLBACKEND=agg
Obviously you could set this in python via
os.environ["MPLBACKEND"] = "agg"
| Python error only when I run script on Linux cluster: _tkinter.TclError: no display name and no $DISPLAY environment variable |
1,498,447,047,000 |
I have a Debian 10 system with a desktop environment installed and running. When I open a terminal and try to run any GUI application , such as gedit, from the command line, it fails to open with the following messages:
# gedit
Unable to init server: Could not connect: Connection refused
(gedit:3575): Gtk-WARNING **: 12:26:48.311: cannot open display:
This happens with any user, not just root.
I have tried running the following based on suggestions to no avail:
export DISPLAY=:0
export DISPLAY=:1
export DISPLAY=:2
export DISPLAY=:3
export DISPLAY=:4
export DISPLAY=:5
export DISPLAY=:6
xhost +
Anyone have any idea whats wrong here?
EDIT:
If I run export DISPLAY=:0 as a normal user, then the normal user can run GUI programs from the command line; however, whenever I try the same as root, it fails with the messages:
No protocol specified
Unable to init server: Could not connect: Connection refused
No protocol specified
Unable to init server: Could not connect: Connection refused
No protocol specified
Unable to init server: Could not connect: Connection refused
(gedit:3609): Gtk-WARNING **: 12:33:16.307: cannot open display: :0
|
The display belongs to the user. So, if you want to allow another user to draw on it (think of it as a printer) you have to grant permissions.
There are many ways to do that, but the simplest is probably to open the graphic terminal and run:
$ xhost +
That will allow connections to the server from other users.
Then, from the other user you can run:
$ export DISPLAY=:0
It could be another display, such as :1 ...
If you want to avoid those two steps, you can ssh into the other user, with the -X flag (that forwards the display):
$ ssh -X -l other_user localhost
| Debian 10 cannot open display: |
1,498,447,047,000 |
Upgraded lubuntu 17.04 to 17.10 on an EeePC 900a. Appears to work fine except that the left side of the display is junk. The full screen looks fine before linux is booted, the Asus EeePC splash.
The system has 2 GB RAM, a 32 GB SSD and a wireless USB mouse. The EeePC 900a link's specification incorrectly refers to the model as 900 in one spot, and the system does not have a webcam.
Per the instructions in a popup during the upgrade attempt, which said there was not enough space on /boot, I changed COMPRESS from gzip to xz in /etc/initramfs-tools/initramfs.conf.
I am able to ssh into the system.
Note that even though the left side of the display is garbled, the mouse pointer is clear on the entire screen.
Booting the same system on 17.04 lubuntu from thumb drive works fine.
|
Exact same problem and no solution so far. There's a workaround: if you suspend the machine and resume it, the display works fine again.
You can suspend the computer using the power menu on the login screen (top right corner icon).
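The same workaround can be triggered from a terminal instead of the power menu. A sketch, assuming systemd (which lubuntu 17.10 uses); the actual suspend line is left commented so the snippet is safe to paste:

```shell
# Suspend via systemd; on resume the display should be redrawn correctly.
SUSPEND_CMD="systemctl suspend"
echo "Workaround command: $SUSPEND_CMD"
# $SUSPEND_CMD   # uncomment to actually suspend the machine
```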
| Upon upgrade of lubuntu 17.10 from 17.04 display messed up on an eeepc 900a |
1,498,447,047,000 |
I have been struggling for the past couple of days attempting to hook up my 1920x1080 external monitor to my 3200x1800 laptop.
When I run xrandr, it outputs:
Screen 0: minimum 320 x 200, current 5120 x 1800, maximum 8192 x 8192
eDP-1 connected 3200x1800+1920+0 (normal left inverted right x axis y axis) 294mm x 165mm
3200x1800 59.98*+ 47.99
2048x1536 60.00
1920x1440 60.00
1856x1392 60.01
1792x1344 60.01
1920x1200 59.95
1920x1080 59.93
1600x1200 60.00
1680x1050 59.95 59.88
1600x1024 60.17
1400x1050 59.98
1280x1024 60.02
1440x900 59.89
1280x960 60.00
1360x768 59.80 59.96
1152x864 60.00
1024x768 60.04 60.00
960x720 60.00
928x696 60.05
896x672 60.01
960x600 60.00
960x540 59.99
800x600 60.00 60.32 56.25
840x525 60.01 59.88
800x512 60.17
700x525 59.98
640x512 60.02
720x450 59.89
640x480 60.00 59.94
680x384 59.80 59.96
576x432 60.06
512x384 60.00
400x300 60.32 56.34
320x240 60.05
DP-1 connected primary 1920x1080+0+720 (normal left inverted right x axis y axis) 527mm x 296mm
1920x1080 60.00 + 50.00 59.94
1920x1080i 60.00* 50.00 59.94
1600x1200 60.00
1600x900 60.00
1280x1024 75.02 60.02
1152x864 75.00
1280x720 60.00 50.00 59.94
1024x768 75.03 60.00
800x600 75.00 60.32
720x576 50.00
720x480 60.00 59.94
640x480 75.00 60.00 59.94
720x400 70.08
HDMI-1 disconnected (normal left inverted right x axis y axis)
DP-2 disconnected (normal left inverted right x axis y axis)
HDMI-2 disconnected (normal left inverted right x axis y axis)
So, I figured if I ran xrandr --output DP-1 --mode 1920x1080, then the display would show on the external monitor... I was wrong: the monitor claimed to have no signal. I followed this comment which allowed the monitor to detect the HDMI signal, but I could only use a resolution lower than 1024x768. I played around a bit more, and the monitor detected 1920x1080i as well, but the borders around the screen were cut off.
I did some research, learned about something called overscan, and used xrandr --output DP-1 --set underscan on, but that caused the following output:
X Error of failed request: BadName (named color or font does not exist)
Major opcode of failed request: 140 (RANDR)
Minor opcode of failed request: 11 (RRQueryOutputProperty)
Serial number of failed request: 38
Current serial number in output stream: 38
I also tried to add a new mode via xrandr and cvt and also tried changing the display settings via the settings panel in Ubuntu. There does not seem to be a problem with the monitor because it works fine when I boot Windows 10.
Is there anything else I could try?
Machine: Dell XPS 13 9350 (no hardware changes)
OS: Ubuntu 16.04 LTS
External Monitor: Dell S2415H
|
A year later, I have somehow managed to fix the problem, though I don't know exactly how. It is important to note that my monitor does not have any settings for disabling overscan or anything related.
Graphics Drivers
I thought I needed to update my graphics drivers, so I ran the following commands:
sudo apt-get update
sudo apt-get install xserver-xorg-core xserver-xorg-video-intel
Then, I decided to reboot the machine with:
shutdown -r now
But, when I tried to login, the screen froze.
Recovery Mode
I went into recovery mode from my bootloader, went to tty1 (by pressing ctrl + alt + f1), logged in, and ran the following commands:
sudo apt-get purge xorg lightdm
sudo apt-get autoremove
sudo apt-get install xorg lightdm
So, if I understand these commands correctly, I essentially removed all existing configurations of xorg and lightdm from my machine and reinstalled the packages. During the installation process, I decided not to use lightdm as my display manager but rather gdm3.
I then rebooted the machine (not in recovery), and plugging in my monitor worked as expected - no cutoff display borders. I am not quite sure what it was exactly that caused this behavior, but I wanted to document my steps to fixing this problem. It could just be as simple as changing the default display manager from lightdm to gdm3.
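For reference, the cvt + xrandr route mentioned in the question normally looks like the commands built below. This is a sketch, not part of the fix that worked: the modeline is what `cvt 1920 1080 60` prints, DP-1 is the output name from the question's xrandr listing, and the commands are echoed rather than executed since they only make sense with a running X server:

```shell
# Build the xrandr commands from a cvt-style modeline.
OUT=DP-1
MODELINE='"1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync'
# The mode name is the quoted first field of the modeline.
NAME=$(printf '%s' "$MODELINE" | cut -d'"' -f2)
NEWMODE="xrandr --newmode $MODELINE"
ADDMODE="xrandr --addmode $OUT \"$NAME\""
SETMODE="xrandr --output $OUT --mode \"$NAME\""
printf '%s\n' "$NEWMODE" "$ADDMODE" "$SETMODE"
```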
| xrandr: display borders are cutoff |
1,498,447,047,000 |
I'm having an issue with my Ubuntu Server 16.04 installation. I have it running on a Zotac Z-Box CI23 Nano. It installed fine, but on its first boot all I had was a blank screen. I edited /etc/default/grub and set GRUB_CMDLINE_LINUX_DEFAULT="nomodeset"
This let me see the startup, but the text is all garbled white blocks:
I first thought it might be a bad cable, so I switched cables with no change. I changed monitors with no effect. I switched to a TV using HDMI with no effect.
Any ideas would be appreciated.
|
I finally got this figured out. At first I thought it was bad hardware so I RMA'd the machine. The same exact thing was happening on the replacement. So I spent a few hours fiddling with grub settings. This is what ended up working for this machine:
in /etc/default/grub:
Comment out GRUB_CMDLINE_LINUX_DEFAULT="splash quiet" entirely.
Uncomment GRUB_TERMINAL=console
Uncomment GRUB_GFXMODE=640x480. I set it to GRUB_GFXMODE=1280x800 because that's my monitor's native resolution.
Save it, then run sudo update-grub and reboot; the display then shows as expected. Only the combination of those three changes seems to work for me, but YMMV.
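The three edits can also be applied non-interactively with sed. A sketch, shown here on a throwaway sample file so it is safe to run; on a real system you would point sed at /etc/default/grub (with sudo, after backing it up) and then run sudo update-grub:

```shell
# Sample of the relevant /etc/default/grub lines before the fix.
cat > grub.sample <<'EOF'
GRUB_CMDLINE_LINUX_DEFAULT="splash quiet"
#GRUB_TERMINAL=console
#GRUB_GFXMODE=640x480
EOF
# 1. Comment out GRUB_CMDLINE_LINUX_DEFAULT entirely.
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT=/#&/' grub.sample
# 2. Uncomment GRUB_TERMINAL=console.
sed -i 's/^#\(GRUB_TERMINAL=console\)/\1/' grub.sample
# 3. Uncomment GRUB_GFXMODE and set the monitor's resolution.
sed -i 's/^#GRUB_GFXMODE=.*/GRUB_GFXMODE=1280x800/' grub.sample
cat grub.sample
```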
| Garbled text on startup |