| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,355,581,038,000 |
Why does Linux's /proc/meminfo show "MemTotal: 7038920 kB" (where kB most likely means kibibytes) on a PC with 8 GB of RAM, even though 8 GB is 7,812,500 KiB?
|
BIOS may reserve some RAM which the OS cannot use.
The iGPU does reserve a decent chunk of RAM.
PCI Express devices may ask the BIOS to reserve some RAM for them (I'm not totally sure about that but I've heard something like that).
sudo dmesg | grep -i reserv will show you a lot.
Here I have more than 1GB of RAM reserved for various things:
Memory: 65703592K/67015300K available (10240K kernel code, 1319K rwdata, 2148K rodata, 1268K init, 1400K bss, 1311448K reserved, 0K cma-reserved)
So while I have 64 GiB, i.e. 64*1024*1024 = 67,108,864 kB of RAM, only 65,782,988 kB of it is available.
Memory (RAM/VRAM) is counted in powers of two on Linux, as on most other OSes: here 1 kB = 1024 bytes, 1 MB = 1024 kB, and so on.
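The arithmetic behind the question can be checked directly; a quick sketch using the figures from the question:

```python
# 8 GB of "marketing" RAM is 8 * 10^9 bytes; /proc/meminfo counts in KiB (1024 bytes).
total_kib = 8 * 10**9 // 1024
print(total_kib)  # → 7812500, the figure from the question
```

The gap between this number and MemTotal is exactly the firmware/iGPU/kernel reservations described above.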
| Memory total shown on Linux on 8GB memory PC is only 7038920 kB |
1,355,581,038,000 |
vmalloc(size) allocates size bytes of memory that are virtually contiguous, but whose physical mapping need not be contiguous. Does that mean the virtually allocated size bytes of memory may actually lie in different page frames of physical memory?
|
Yes, it means just exactly that.
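A toy illustration of what that means (the frame numbers here are made up): consecutive virtual addresses stay contiguous, but crossing a page boundary can land in a completely unrelated physical frame.

```python
PAGE = 4096
page_table = {0: 17, 1: 3, 2: 91, 3: 44}  # virtual page number -> physical frame

def virt_to_phys(vaddr: int) -> int:
    # Split the virtual address into (page number, offset within page).
    vpn, offset = divmod(vaddr, PAGE)
    return page_table[vpn] * PAGE + offset

# Last byte of virtual page 0 and first byte of virtual page 1:
print(virt_to_phys(4095), virt_to_phys(4096))  # → 73727 12288
```

The two addresses are adjacent virtually yet far apart physically, which is exactly the situation vmalloc() creates.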
| Does vmalloc() allocate virtually contiguous memory that maps to different physical pages? |
1,355,581,038,000 |
The excerpt below is from the OS text by Galvin et al.
When we use a paging scheme, we have no external fragmentation: any free frame can be allocated to a process that needs it. However, we may have some internal fragmentation. Notice that frames are allocated as units. If the memory requirements of a process do not happen to coincide with page boundaries, the last frame allocated may not be completely full. For example, if page size is 2,048 bytes, a process of 72,766 bytes will need 35 pages plus 1,086 bytes. It will be allocated 36 frames, resulting in internal fragmentation of 2,048 − 1,086 = 962 bytes. In the worst case, a process would need n pages plus 1 byte. It would be allocated n + 1 frames, resulting in internal fragmentation of almost an entire frame.
The above screenshot is from "High-Performance Computer Architecture" by Georgia Tech... Here the instructor says that the process extends only as far as shown by the grey brace on the right, and that if our system allocates, say, 2 pages to the process, then the dashed portion is the internal fragmentation.
The problem I am having is as follows. I drew the situation as shown above. The virtual address space of the process is shown in green; on the left, I show the virtual address bits. Now, in computers, page sizes are usually powers of 2. So however long the page offset is, if the page size is smaller than the virtual address space, it will divide the virtual address space of the process evenly. And if it is divided evenly, the last portion of the virtual address space will hold the stack section; so how can there be internal fragmentation in the last part of the last page, as shown in the pictures above?
Suppose if we use a page size of 4 MB then :
The picture might be something like this, I guess. Note that the portions shown in blue are, I believe, internal fragmentation. The huge gap between heap and stack is not allocated frames in main memory, so we need not bother about it... But I guess it depends on the sizes of the stack and of the code, data, and heap portions, and on whether they happen to be page-aligned, whether there is internal fragmentation. I feel we can't simply say that the only internal fragmentation is the unoccupied tail of the last frame. Moreover, how is the Galvin text calculating the size of the process?
|
The answer is that you set yourself up exactly in the situation that does not create internal fragmentation, as defined by the text, which says,
If the memory requirements of a process do not happen to coincide with page boundaries
But above, you set things up so that they did coincide. So... no fragmentation. Except that, as you noticed, the unused heap and stack do occupy space, and those spaces ought to be considered "fragmented" (however harmlessly). This is why you don't want to have huge pages.
However, I believe that memory is not allocated that way. Pages are not just memory chunks; they have properties that can be set. So, for example, you might put the code in a bunch of pages and then inform the OS that those pages are read-only (and/or can be shared with other processes). You can't do that with the heap. Conversely, you might request that heap pages not be executable, so that several security exploits will not work.
To enjoy these goodies, you need to allocate separate pages to the various types of process memory.
So, it would be extremely unlikely for the code, static data and heap sections to be exactly aligned to whatever the page size is; and there, internal fragmentation would occur.
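The arithmetic in the Galvin excerpt generalizes to a one-liner; a sketch reproducing the text's example:

```python
import math

def internal_fragmentation(process_bytes: int, page_bytes: int) -> int:
    """Bytes wasted in the last frame when frames are allocated as whole units."""
    frames = math.ceil(process_bytes / page_bytes)
    return frames * page_bytes - process_bytes

print(internal_fragmentation(72766, 2048))  # → 962, as in the Galvin excerpt
```

For the worst case the text mentions, internal_fragmentation(n * page + 1, page) gives page - 1, almost an entire frame.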
| Understanding how internal fragmentation occurs in systems using only paging with huge page size |
1,355,581,038,000 |
If this question depends on the linux distribution, please answer it in a "general way" (i.e. the most common implementation on linux distributions).
In the page table of a process we can find the physical address where the page we are looking for is mapped in main memory, or a pointer to disk if the page isn't present in main memory and has to be fetched from disk. But my question is: if the page we are looking for is placed in the swap area, what will we find in the page table of that process? Will we find a pointer to disk (pointing to the page in the swap area)? Or will we find a physical address, with that "physical address" really being a virtual address over mainMemory + swapArea treated as one unified memory? (I.e., with 16 GB of main memory + 2 GB of swap, the page table could say the page we are looking for is at address X, where X corresponds to the 17 GB mark, meaning it lives in swap because the address is > 16 GB.) Remember that we are supposing 16 GB main memory + 2 GB swap memory.
|
The page table entry for a page which is swapped out contains bits to indicate that fact (at least one; the details depend on the architecture), and a two-part pointer to the information describing the swapped page. Each swap device or file has a corresponding swap_info structure, and each of those has a map which links a page table entry to a location in the swap device or file.
See How does the kernel address swapped memory pages on swap partition\file? and the “Swap Management” chapter of Mel Gorman’s Understanding the Linux Virtual Memory Manager for details.
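As a toy sketch of the idea (the bit layout here is made up, not the real Linux PTE encoding): when the present bit is clear, the remaining bits of the entry are reinterpreted as a (swap type, swap offset) pair rather than a physical frame number.

```python
PRESENT = 1 << 0

def make_swap_entry(swap_type: int, swap_offset: int) -> int:
    # Present bit left clear; device index and offset packed into the rest.
    return (swap_offset << 8) | (swap_type << 1)

def decode(pte: int):
    if pte & PRESENT:
        return ("in RAM", pte >> 12)                 # physical frame number
    return ("swapped", (pte >> 1) & 0x7F, pte >> 8)  # (swap type, offset)

print(decode(make_swap_entry(0, 42)))  # → ('swapped', 0, 42)
```

So the entry is neither a plain disk pointer nor an address in a unified 18 GB space: it is a tagged value whose interpretation depends on the present bit.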
| Page table content when the physical page we are looking for is in swap area |
1,382,943,583,000 |
Some might say this doesn't work, but it does: this website does what I want. Can you do this with any common tool like ffmpeg? Or maybe there is a Python script somewhere? I couldn't find anything on the net.
|
With waon:
waon -i inputfile -o outputfile.mid
| Is it possible to convert audio to midi with the shell? |
1,382,943,583,000 |
How do I print and connect jack-audio and midi ports from the command line, similar to aconnect -io or aconnect 20:0 132:1 for inputs and outputs of ALSA MIDI?
|
jack_lsp [options] [filter string]
is able to print all jack ports (Audio and MIDI).
From the help-text:
List active Jack ports, and optionally display extra information.
Optionally filter ports which match ALL strings provided after any options.
Display options:
-s, --server <name> Connect to the jack server named <name>
-A, --aliases List aliases for each port
-c, --connections List connections to/from each port
-l, --latency Display per-port latency in frames at each port
-L, --latency Display total latency in frames at each port
-p, --properties Display port properties. Output may include:
input|output, can-monitor, physical, terminal
-t, --type Display port type
-h, --help Display this help message
--version Output version information and exit
For more information see http://jackaudio.org/
To connect the ports from the command line, you can use jack_connect.
With jack_lsp you could get output like this, showing all current jack ports:
system:capture_1
system:capture_2
system:playback_1
system:playback_2
system:midi_capture_1
system:midi_playback_1
amsynth:L out
amsynth:R out
amsynth:midi_in
system:midi_playback_2
system:midi_capture_2
As an example, you could connect system:midi_capture_1 with amsynth:midi_in by running: jack_connect system:midi_capture_1 amsynth:midi_in
To see which ports are connected you could use jack_lsp -c and get an output similar to this:
system:capture_1
system:capture_2
system:playback_1
amsynth:L out
system:playback_2
amsynth:R out
system:midi_capture_1
amsynth:midi_in
system:midi_playback_1
amsynth:L out
system:playback_1
amsynth:R out
system:playback_2
amsynth:midi_in
system:midi_capture_1
system:midi_playback_2
system:midi_capture_2
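The jack_lsp -c layout (ports flush left, their connections indented beneath them) is easy to post-process; a sketch run on a saved sample of that output (the sample strings are taken from the listing above):

```python
# Pair each port with its connections from jack_lsp -c style text.
sample = """system:capture_1
   amsynth:midi_in
system:playback_1
"""

pairs = []
port = None
for line in sample.splitlines():
    if line and not line[0].isspace():
        port = line            # a new port name
    elif line.strip():
        pairs.append((port, line.strip()))  # a connection of that port

print(pairs)  # → [('system:capture_1', 'amsynth:midi_in')]
```

This assumes connections are indented with whitespace, as in the jack_lsp -c output shown above.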
| Print and connect Jack Audio and MIDI ports from the command line |
1,382,943,583,000 |
I have a physical MIDI keyboard that also features some control keys, such as "play" and "stop". When pressed, they send the MIDI codes 115 and 116 respectively over the MIDI bus. Is it possible to hook up these commands to the usual media controls (play and pause) of Linux, so that when I press "play", the playback starts?
Is it furthermore possible to hook up other MIDI keys (e.g., up/down) to their respective keyboard counterparts (e.g., arrow up/down)?
|
In the comments, dirkt suggested writing a custom program. So I wrote a short working proof-of-concept script in Python that reads input from a MIDI controller and then simulates the required key press. I tested it on Ubuntu 20.04. Unfortunately, it requires superuser privileges to run; otherwise /dev/uinput cannot be opened for writing.
import mido
from evdev import uinput, ecodes as e

def press_playpause():
    """Simulate pressing (and releasing) the "play" key."""
    with uinput.UInput() as ui:
        ui.write(e.EV_KEY, e.KEY_PLAY, 1)  # key down
        ui.write(e.EV_KEY, e.KEY_PLAY, 0)  # key up, so the key is not left "stuck"
        ui.syn()

def clear_event_queue(inport):
    """Recent events are stacked up in the event queue
    and read when opening the port. Avoid processing these
    by clearing the queue.
    """
    while inport.receive(block=False) is not None:
        pass

device_name = mido.get_input_names()[0]  # you may change this line
print("Device name:", device_name)

MIDI_CODE_PLAY = 115
MIDI_VALUE_ON = 127

with mido.open_input(name=device_name) as inport:
    clear_event_queue(inport)
    print("Waiting for MIDI events...")
    for msg in inport:
        print(msg)
        if (hasattr(msg, "value")
                and msg.value == MIDI_VALUE_ON
                and hasattr(msg, "control")
                and msg.control == MIDI_CODE_PLAY):
            press_playpause()
Requirements:
evdev
mido
python-rtmidi
python-uinput
| Use MIDI signals for media control |
1,382,943,583,000 |
I need a simple way to connect the MIDI keyboard to PulseAudio and leave it active. (I'm not worried about low latency.)
So far, I've looked at Ted's Linux MIDI Guide and followed all of it, but I reverted to a normal-latency kernel when the low-latency one caused trouble with my input devices. Following Ted's instructions, I can run /usr/bin/audio start and then the vmpk script, which is nice, but then I can't use Pulse (for watching tutorials on YouTube).
Is it best in the long run to use Jack audio for everything, even on a normal 250 Hz kernel?
|
For beginners who don't need to fuss with studio-grade settings...
Here is an executable file, pulsepiano, adapted from Ted's Linux Midi Guide to use Pulse instead of Jack.
So far the only thing I can't get the script to do is hook up the MIDI-out from the keyboard, but that might be another topic.
You have to install fluidsynth, vmpk, and get the soundfont: FluidR3_GM.sf2. The trailing ampersand runs the command in the background. The aconnect info also adapted from Ted's guide.
If you have problems,
use: kill -9 [PID of vmpk|fluidsynth|qsynth]
or: killall fluidsynth, killall vmpk, and so on.
Hope it isn't too much info. Without opening each app manually, this is about as beginner as it gets for midi.
#!/bin/bash
fluidsynth --server \
--no-shell \
--audio-driver=pulseaudio \
--gain=1.0 \
--reverb=0.42 \
--chorus=0.42 \
/usr/share/sounds/sf2/FluidR3_GM.sf2 &>/tmp/fluidsynth.out &
sleep 2
vmpk &
sleep 2
vmpkport=$(aconnect -i |grep "client.*VMPK Output" | cut -d ' ' -f 2)0
# FLUID Synth is a destination (writable) port, so list it with -o
synthport=$(aconnect -o |grep "FLUID Synth" | cut -d ' ' -f 2)0
echo "vmpk on ${vmpkport} & synth on ${synthport}"
aconnect ${vmpkport} ${synthport}
| Simple way to connect midi keyboard to pulseaudio without using Jack |
1,382,943,583,000 |
I have connected a MIDI device to my UART RX / serial port /dev/ttyAMA0 using some electronics as described here.
I have properly configured the right baud setting (31250 baud, etc.).
It works: I can open the serial port, read some data, and I see the data coming when I play notes on the MIDI keyboard.
How do I redirect this serial port into Linux's MIDI system (ALSA, rtMidi, or something else)?
Indeed, I would like this MIDI input to be handled by ALSA, instead of managing the raw data myself.
|
There is a driver (snd-serial-u16550) that replaces the standard driver for the 16550 UART chip on IBM-compatible PCs (documentation), but this does not work on other architectures.
To connect an existing /dev/tty* device with ALSA, try a daemon such as ttyMIDI.
| See a serial port as a MIDI IN device |
1,382,943,583,000 |
I seem to be having issues with my Alsa sequencer. I am using Parabola (Arch variant) and I don't use Pulseaudio, I use Alsa directly. I am trying to play a game via Wine that has MIDI audio. I have fluidsynth installed and it works - I can play a midi file and it sounds fine. However, if I start the fluidsynth server and run aplaymidi -l, I get the following error:
$ aplaymidi -l
ALSA lib seq_hw.c:466:(snd_seq_hw_open) open /dev/snd/seq failed: No such file or directory
I have no /dev/snd/seq file, which seems like it is something that should be there, relating to the Alsa sequencer. Does anyone have any idea why that file might not be present and what solutions I can try?
Edit:
To answer the question in the comments, here is the output of /proc/config.gz for the section dealing with the sequencer:
$ zgrep -A 5 -B 5 SEQUENCER /proc/config.gz
# CONFIG_SND_CTL_VALIDATION is not set
# CONFIG_SND_JACK_INJECTION_DEBUG is not set
CONFIG_SND_VMASTER=y
CONFIG_SND_DMA_SGBUF=y
CONFIG_SND_CTL_LED=m
CONFIG_SND_SEQUENCER=m
CONFIG_SND_SEQ_DUMMY=m
CONFIG_SND_SEQUENCER_OSS=m
CONFIG_SND_SEQ_HRTIMER_DEFAULT=y
CONFIG_SND_SEQ_MIDI_EVENT=m
CONFIG_SND_SEQ_MIDI=m
CONFIG_SND_SEQ_MIDI_EMUL=m
CONFIG_SND_SEQ_VIRMIDI=m
So it appears the Alsa sequencer was compiled as a module and I probably just need to load that module.
|
If the /dev/snd/seq special file does not exist it is most probably because your system does not get the appropriate driver loaded.
The appropriate driver is part of any linux distribution and is built at kernel make time depending on the CONFIG_SND_SEQUENCER config option.
Say yes to Sequencer Support (location: Device Drivers / Sound card support / Advanced Linux Sound Architecture) to build the driver into the kernel (ALSA used to recommend that). Rebuild your kernel and that's it: the driver will be loaded automatically (and the special files created) at boot time.
BTW, I warmly recommend selecting "Use HR-timer as default sequencer timer" as well.
Note that this driver can also be built as a module (by saying M, which is what your config shows). If that is your choice, then you should not forget to explicitly modprobe snd-seq before trying to make use of the /dev/snd/seq special file.
| Alsa sequencer issue - no file /dev/snd/seq |
1,382,943,583,000 |
I have a MIDI keyboard "Impulse", and a raspi3. I want to connect the keyboard to the raspi, and having sounds without touching anything.
So I use fluidsynth with jack as audio driver. Fluidsynth is launched by systemd service. There is no problem with it.
So I made a script where I "jack_connect" the ports and "aconnect" the MIDI port of my keyboard to the FluidSynth synthesizer, like this:
impulseport=$(aconnect -i|grep -i "IMPULSE" | cut -d ' ' -f 2)0
synthport=$(aconnect -o |grep -i "FLUID" | cut -d ' ' -f 2)0
# some verifications of existence and exit if one port is missing
aconnect ${impulseport} ${synthport}
The thing is, it does not really work as I wanted: the udev rule that executes this script fires before the "snd-usb-audio" interface driver is loaded. As a consequence, the variable $impulseport is empty (actually just "0", because of the concatenation at the end).
Here is my udev rule :
ACTION=="add", ENV{DEVTYPE}=="usb_device", ATTRS{idVendor}=="1235", ATTRS{idProduct}=="001a", RUN+="/bin/su -c /home/pi/piano_connect - pi"
ACTION=="remove", ENV{DEVTYPE}=="usb_device", ATTRS{idVendor}=="1235", ATTRS{idProduct}=="001a", RUN+="/usr/bin/aconnect -x"
In the syslog :
1 systemd[410]: Started Sound Service.
2 fluidsynth[423]: fluidsynth: Jack sample rate mismatch, adjusting. (synth.sample-rate=44100, jackd=48000)
3 kernel: [162.772916] usb 1-1.2: new full-speed USB device number 4 using dwc_otg
4 kernel: [162.905473] usb 1-1.2: New USB device found, idVendor=1235, idProduct=001a, bcdDevice= 0.00
5 kernel: [162.905491] usb 1-1.2: New USB device strings: Mfr=1, Product=2, SerialNumber=0
6 kernel: [162.905500] usb 1-1.2: Product: Impulse
7 kernel: [162.905509] usb 1-1.2: Manufacturer: Focusrite A.E. Ltd
8 mtp-probe: checking bus 1, device 4: "/sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2"
9 mtp-probe: bus: 1, device: 4 was not an MTP device
10 systemd[1]: Started Session c2 of user pi.
11 pi: [Piano] Jackd : connecting system to fluidsynth ports
12 pi: [Piano] Error: Port Impulse unknown, exiting
13 systemd-udevd[473]: Process '/bin/su -c /home/pi/piano_connect - pi' failed with exit code 1.
14 systemd[1]: session-c2.scope: Succeeded.15
15 kernel: [165.876060] usbcore: registered new interface driver snd-usb-audio
16 mtp-probe: checking bus 1, device 4: "/sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2"
17 mtp-probe: bus: 1, device: 4 was not an MTP device
The problem is at line 12: $impulseport is empty! So the script exits (line 13). And you can see at line 15 that the driver is loaded AFTERWARDS. So in my script I tried a dirty workaround: a while loop with a sleep that checks whether the directory /sys/bus/usb/drivers/snd-usb-audio exists, but it hangs the system and the module never loads. Do you guys have any idea about a way out?
NB: If I unplug the USB MIDI keyboard and plug it in again, it works well, but I'd like it to work correctly on the first plug after boot... or, even better, to work after boot while the keyboard is already plugged in. :D
|
OK, after a bit of research, I found a way out.
First, run udevadm monitor before plugging the device in, then plug it. Here is the output:
KERNEL[126.555200] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2 (usb)
KERNEL[126.555888] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0 (usb)
KERNEL[126.556508] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.1 (usb)
KERNEL[126.557829] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.2 (usb)
KERNEL[126.558188] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.3 (usb)
KERNEL[126.558548] bind /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2 (usb)
KERNEL[129.399848] add /module/snd_seq_dummy (module)
UDEV [129.406959] add /module/snd_seq_dummy (module)
UDEV [129.493966] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2 (usb)
UDEV [129.501782] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.2 (usb)
UDEV [129.505609] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.3 (usb)
KERNEL[129.508606] add /module/snd_rawmidi (module)
UDEV [129.510396] add /module/snd_rawmidi (module)
KERNEL[129.513459] add /module/snd_usbmidi_lib (module)
UDEV [129.515605] add /module/snd_usbmidi_lib (module)
KERNEL[129.516631] add /module/snd_hwdep (module)
UDEV [129.518617] add /module/snd_hwdep (module)
KERNEL[129.535977] add /module/snd_usb_audio (module)
UDEV [129.537881] add /module/snd_usb_audio (module)
KERNEL[129.538452] bind /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.1 (usb)
KERNEL[129.538547] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1 (sound)
KERNEL[129.539135] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1/midiC1D0 (sound)
KERNEL[129.539260] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1/seq-midi-1-0 (snd_seq)
KERNEL[129.539985] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1/controlC1 (sound)
KERNEL[129.541172] bind /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0 (usb)
UDEV [129.541345] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.1 (usb)
KERNEL[129.541428] add /bus/usb/drivers/snd-usb-audio (drivers)
UDEV [129.542776] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0 (usb)
UDEV [129.542917] add /bus/usb/drivers/snd-usb-audio (drivers)
KERNEL[129.549489] add /module/snd_seq_midi_event (module)
UDEV [129.551648] add /module/snd_seq_midi_event (module)
KERNEL[129.552118] add /module/snd_seq_midi (module)
KERNEL[129.552213] bind /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1/seq-midi-1-0 (snd_seq)
KERNEL[129.552288] add /bus/snd_seq/drivers/snd_seq_midi (drivers)
UDEV [129.554430] add /module/snd_seq_midi (module)
UDEV [129.554572] add /bus/snd_seq/drivers/snd_seq_midi (drivers)
UDEV [129.561871] bind /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2 (usb)
UDEV [129.564936] bind /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.1 (usb)
UDEV [129.566125] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1 (sound)
UDEV [129.568261] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1/seq-midi-1-0 (snd_seq)
UDEV [129.570243] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1/midiC1D0 (sound)
KERNEL[129.573057] change /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1 (sound)
UDEV [129.607147] add /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1/controlC1 (sound)
UDEV [129.609881] bind /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0 (usb)
UDEV [129.611644] bind /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1/seq-midi-1-0 (snd_seq)
UDEV [129.618931] change /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1 (sound)
After noticing the last line UDEV [129.611644] bind /devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1/seq-midi-1-0 (snd_seq), I plugged the USB MIDI keyboard, and did :
udevadm info -a /sys/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2\:1.0/sound/card1/seq-midi-1-0/ > /tmp/udev
Among the output, the first lines include:
looking at device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1/seq-midi-1-0':
KERNEL=="seq-midi-1-0"
SUBSYSTEM=="snd_seq"
DRIVER=="snd_seq_midi"
looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0/sound/card1':
KERNELS=="card1"
SUBSYSTEMS=="sound"
DRIVERS==""
ATTRS{number}=="1"
ATTRS{id}=="Impulse"
looking at parent device '/devices/platform/soc/3f980000.usb/usb1/1-1/1-1.2/1-1.2:1.0':
KERNELS=="1-1.2:1.0"
SUBSYSTEMS=="usb"
DRIVERS=="snd-usb-audio"
ATTRS{bInterfaceProtocol}=="00"
ATTRS{bInterfaceSubClass}=="01"
ATTRS{interface}=="Novation Impulse"
ATTRS{bInterfaceNumber}=="00"
ATTRS{supports_autosuspend}=="1"
ATTRS{bAlternateSetting}==" 0"
ATTRS{authorized}=="1"
ATTRS{bNumEndpoints}=="00"
ATTRS{bInterfaceClass}=="01"
The interesting fields are SUBSYSTEM and DRIVERS. So I changed my udev rule to:
ACTION=="bind", SUBSYSTEM=="snd_seq", DRIVERS=="snd-usb-audio", RUN+="/bin/su -c /home/pi/piano_connect - pi"
ACTION=="remove", ENV{DEVTYPE}=="usb_device", ATTRS{idVendor}=="1235", ATTRS{idProduct}=="001a", RUN+="/usr/bin/aconnect -x"
Here, I changed the ACTION to "bind" because of the first command (udevadm monitor), and matched on SUBSYSTEM and DRIVERS instead of the product IDs. And it works like a charm!
Hope it helped. :)
| How to fire a udev rule after the kernel has registered a driver? |
1,382,943,583,000 |
I have created a MIDI file using Anvil Studio in Windows. When playing the file with the MIDI editor Rosegarden in Linux (openSUSE Tumbleweed), the MIDI file sounds different.
Now, I know that MIDI files don't contain any music themselves and that it depends on the device which sound is played.
From what I have read so far it looks like Anvil Studio is using the Microsoft GS Wavetable Software Synthesizer and that this is what makes the MIDI file sound the way it does.
Is there a way to make the MIDI file sound the same in Linux? E.g. by specifying a soundfont or using a certain Software Synthesizer?
|
Get the gm.dls file out of \windows\system32\drivers\;
convert it into a .sf2 sound font with a tool like Viena or Awave;
configure the Linux synthesizer (probably TiMidity++ or FluidSynth) to use that file.
| How do I make MIDI files sound the same in Linux as in Windows? |
1,382,943,583,000 |
I am having trouble getting a MIDI controller (piano keyboard) to work on a Raspberry Pi. It works on my Linux laptop, and another MIDI keyboard does work on the Pi. It is listed under lsusb, so I know the vendor/model IDs, but not under amidi -l or aconnect -i. The Pi is also running a system that is a few years old. So I am guessing that udev does not know this USB device yet. I already found out that udev uses some internal database with a lot of USB devices, but I could not find a how-to on adding a new USB device to the udev database.
I only see a lot of tutorials on how to add a udev rule, but that is, I guess, something else. I need to tell the system that this vendor/model ID is a MIDI controller.
How does this work?
|
udevd is just responsible for creating symlinks in /dev, running additional programs on creation or removal of devices etc. If you can't see the device in ALSA, no matter what you do with udevadm, you won't be able to get it recognized that way.
Hardware recognition by the kernel is baked into the corresponding modules. For USB in particular, there are patterns that encode the vendor and device id, and other things. You can find out what patterns a particular module will trigger on using modinfo.
So in your case, the RaspPi very likely doesn't have an up-to-date module for your piano keyboard - either the module already exists, but doesn't contain your piano keyboard identifiers, or maybe even the module isn't present.
So upgrade the kernel on the RaspPi to the newest version. If that doesn't solve the problem, identify the module that's reacting to your keyboard on your laptop (for that you can use udevadm, or just lsmod). Then have a look at what modinfo says for the corresponding module on your RaspPi.
| How to use udevadm to fix unrecognized usb device |
1,382,943,583,000 |
I am trying to use ALSA for MIDI purposes in C.
My problem is, snd_rawmidi_open() sort of "crashes" (waits forever, like an infinite loop) when given valid arguments:
#include <stdio.h>
#include <stdlib.h>
#include <alsa/asoundlib.h>

int main(int argc, char** argv)
{
    snd_rawmidi_t *handle_in = 0;
    int err;

    fprintf(stderr, "TEST 1\n");
    err = snd_rawmidi_open(&handle_in, NULL, "hw:1,0,0", 0);
    fprintf(stderr, "TEST 2\n");
    if (err) {
        fprintf(stderr, "snd_rawmidi_open failed: %d\n", err);
    }
    fprintf(stderr, "TEST 3\n");
    exit(0);
}
"hw:1,0,0" is a MIDI keyboard. When I use an invalid value like "foo", it gives an error. With a valid one, the program displays "TEST 1" and pauses.
Any idea?
Many thanks!
|
By default, snd_rawmidi_open waits until the requested port is available.
If you do not want this, add the SND_RAWMIDI_NONBLOCK flag (and reset it afterwards with snd_rawmidi_nonblock() if you want the read/write calls to be blocking).
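The blocking behaviour isn't specific to ALSA; it mirrors a general Unix pattern. A self-contained illustration with a FIFO (the path is made up for the demo): open() on a FIFO likewise waits for the other side unless O_NONBLOCK is set, much like SND_RAWMIDI_NONBLOCK.

```python
import os

path = "/tmp/demo_fifo"  # stand-in path for the demo
if os.path.exists(path):
    os.remove(path)
os.mkfifo(path)

# Without O_NONBLOCK this open() would hang until a writer appeared --
# the same "waits forever" the question observes with snd_rawmidi_open().
fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
print("opened without blocking")

os.close(fd)
os.remove(path)
```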
| snd_rawmidi_open() waits forever - no error message |
1,502,555,900,000 |
I've got a partition P1 (which contains my Linux OS) on a drive A.
I've just gotten a completely new drive B (that is larger than partition P1 AND the entire drive A).
I'd like to copy across the partition from drive A to drive B, and possible resize it later on.
Can this be done with dd? I could easily create a new partition table on drive B and just cp the files across, but that seems like it might be slightly slower due to the filesystem overhead.
Output of `parted --list`:
It would be the partition 4 that I want to copy to another drive.
Model: ATA Samsung SSD 850 (scsi)
Disk /dev/sdb: 250GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 17.4kB 1049kB 1031kB BIOS boot partition bios_grub
2 1049kB 1074MB 1073MB fat32 EFI System boot, esp
3 1075MB 183GB 181GB ext4 Linux filesystem
4 183GB 250GB 67.5GB ext4 Basic data partition
Please ignore any reference to the boot drive / functionality (which I'll worry about later on) - keeps this question concise.
|
Yes, that's what dd is for. Assuming:
sxb is the drive to copy from
sxc is the drive to copy to
sxb4 is the fourth partition on the second drive that you want to copy from
sxc1 is the partition you've created to be of equal size to sxb4
do:
parted /dev/sxc
GNU Parted 3.2
Using /dev/sxc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel GPT
Warning: The existing disk label on /dev/sxc will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? Yes
(parted) mkpart primary 0.0 67.5GB
(parted) quit
dd if=/dev/sxb4 of=/dev/sxc1 bs=16M
As that will duplicate the UUID of the partition, in order to change the UUID of the new partition (make sure the partition is not mounted) run the following:
e2fsck -f /dev/sxc1
tune2fs /dev/sxc1 -U random
If sxb is an old drive and you expect it to have read errors, use ddrescue instead.
Note: as dd is jokingly known as "disk destroyer", and creating a partition table is dangerous, I deliberately used fake device names (sx*) in the commands above, so that some random idiot on the Internet can't see this question and copy-paste the codez without understanding what they do...
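dd's job here is just a block-for-block copy; the idea can be rehearsed safely on ordinary files first (the file names below are made up for the demo):

```python
import os

BS = 16 * 1024 * 1024  # the bs=16M block size from the answer

# Stand-ins for /dev/sxb4 (source) and /dev/sxc1 (target):
with open("/tmp/src.img", "wb") as f:
    f.write(os.urandom(4 * 1024 * 1024))

with open("/tmp/src.img", "rb") as src, open("/tmp/dst.img", "wb") as dst:
    while block := src.read(BS):
        dst.write(block)

same = open("/tmp/src.img", "rb").read() == open("/tmp/dst.img", "rb").read()
print(same)  # → True
```

An identical byte-for-byte copy is also why the filesystem UUID is duplicated and must be regenerated with tune2fs, as noted above.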
| How can I move a single partition to another empty drive? |
1,502,555,900,000 |
I somehow messed up my partitions when I was trying to clean reinstall Linux Mint. Now whenever I type sudo fdisk -l, it would always give me warnings:
$ sudo fdisk -l
[sudo] password for sneknotsnake:
Disk /dev/sda: 465,78 GiB, 500107862016 bytes, 976773168 sectors
Disk model: ST500DM009-2DM14
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xdb92a920
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 104447 102400 50M 7 HPFS/NTFS/exFAT
/dev/sda2 104448 72919039 72814592 34,7G 7 HPFS/NTFS/exFAT
/dev/sda3 72921086 598581903 525660818 250,7G f W95 Ext'd (LBA)
/dev/sda4 598581904 976773119 378191216 180,3G 7 HPFS/NTFS/exFAT
/dev/sda5 72921088 219478015 146556928 69,9G 83 Linux
/dev/sda6 219480032 598581903 379101872 180,8G 7 HPFS/NTFS/exFAT
Partition 3 does not start on physical sector boundary.
Partition table entries are not in disk order.
As far as I understand the problem, it's because I'm using the "newer" HDD format with 4096-byte physical sectors instead of the old 512, and my 3rd partition is not perfectly aligned. I'm not really sure, but I think it's because 72921086 % 8 equals 6 instead of 0 like the other partitions (72921086 is the start sector of /dev/sda3).
If that really is the case, then how do I realign my 3rd partition? Note that it's a container partition (IDK what it's called) for my 5th and 6th partitions. If I'm not mistaken, I only need to move the start sector by 6 so that it's perfectly aligned.
|
It's a non-issue.
Your sda3 is an extended partition that holds the logical partitions sda5 and sda6. The only non-aligned number points to the first extended boot record (EBR). This record takes 512 bytes, one logical sector. Under no circumstances can this span two physical sectors. There is no alignment problem here.
The alignment matters for partitions that hold filesystems or other structures. You can call sda5 and sda6 structures inside sda3. The point is they are "misaligned" with respect to the beginning of sda3 (you don't see this misalignment directly) and this perfectly compensates for the misalignment of sda3 itself (the misalignment that bothers you); so they are aligned with respect to the beginning of the disk (and therefore fdisk raises no warning about them), this is what matters. In your case all partitions that need to be aligned are aligned.
If you insist on "fixing" the "problem", you should remove partitions 6, 5 and 3 (in this exact order) and re-create 3, 5 and 6 (in this exact order), so the new partition table is identical to the old, except the starting sector for sda3 is 72921080 instead of 72921086 (and consequently the number of sectors is 525660824 instead of 525660818). The end of the preceding partition (sda2) is further to the left, so there's space to do this.
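As a sanity check, the proposed new start sector and sector count can be derived with a bit of shell arithmetic (a sketch; the variable names are mine, the numbers come from the fdisk output in the question):

```shell
# Round the misaligned start sector down to the nearest 8-sector
# (4096-byte) boundary, keeping the end sector of sda3 unchanged.
old_start=72921086
old_sectors=525660818
new_start=$(( (old_start / 8) * 8 ))                    # 72921080
new_sectors=$(( old_sectors + old_start - new_start ))  # 525660824
echo "new start:   $new_start"
echo "new sectors: $new_sectors"
```

Since the end sector stays put, the sector count grows by exactly the 6 sectors the start moved left.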
This can be done without destroying the filesystems. The partitions holding the filesystems will stay at their old places and they will keep their old sizes. No resizing nor moving of any filesystem will be required.
The procedure is safe, unless you manage to destroy the filesystem(s) with some over-zealous tool. AFAIK fdisk is not over-zealous (although it will probably warn you about signatures of existing filesystems, do not destroy the signatures).
There's a remote possibility something uses the unpartitioned space between the partitions 2 and 3. By moving the beginning of sda3 you may destroy some data. It would be uncommon (and in fact suspicious) if anything used this space though.
In practice the "fix" will improve nothing. The safest thing is to do nothing.
| Extended partition not aligned to physical sectors. All other partitions aligned. Is this an issue? How can I fix this? |
1,502,555,900,000 |
I am trying to resize an encrypted physical volume so that I have free space to make a dual boot system.
So far, I've successfully decrypted the PV, resized the filesystem and the LVs but when I try to resize the PV I run into issues relating to the sector extents of my LVs.
root@ubuntu:/home/ubuntu# pvresize --setphysicalvolumesize 174G /dev/mapper/cryptdisk
/dev/mapper/cryptdisk: cannot resize to 44543 extents as later ones are allocated.
0 physical volume(s) resized / 1 physical volume(s) not resized
On the physical volume I have two logical volumes: root (first) and swap (second). I've resized (shrunk) the root volume to create free space, but the extents of the free space lay between the root and swap volumes. Because the extents of the swap volume are located at the end of the physical volume, I cannot shrink the physical volume using pvresize.
How can I change the extents of the swap volume so that the free space is located after it?
I've had a look at pvmove, but I don't think it's what I need - in this case I need something like lvmove (but it doesn't exist).
root@ubuntu:/home/ubuntu# pvdisplay --maps /dev/mapper/cryptdisk
--- Physical volume ---
PV Name /dev/mapper/cryptdisk
VG Name elementary-vg
PV Size 222.59 GiB / not usable 0
Allocatable yes
PE Size 4.00 MiB
Total PE 56983
Free PE 12546
Allocated PE 44437
PV UUID DkBRl8-3gAq-Ewzv-7kjB-AHZI-dhqC-69gaZo
--- Physical Segments ---
Physical extent 0 to 40354:
Logical volume /dev/elementary-vg/root
Logical extents 0 to 40354
Physical extent 40355 to 52898:
FREE
Physical extent 52899 to 56980:
Logical volume /dev/elementary-vg/swap_1
Logical extents 0 to 4081
Physical extent 56981 to 56982:
FREE
|
pvmove is the right tool:
pvmove /dev/mapper/cryptdisk:52899:56980 /dev/mapper/cryptdisk:40355
will move the extents.
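Whether the swap extents actually fit into the free region can be checked with shell arithmetic first (a sketch using the extent ranges from the pvdisplay output in the question):

```shell
# swap_1 currently occupies physical extents 52899..56980;
# the free region starts at extent 40355 and ends at 52898.
swap_first=52899; swap_last=56980
free_first=40355; free_last=52898
swap_extents=$(( swap_last - swap_first + 1 ))
free_extents=$(( free_last - free_first + 1 ))
echo "swap needs $swap_extents extents, free region holds $free_extents"
```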
However since we’re talking about swap I’d just remove the LV and recreate it...
| How to change extents of logical volume on LVM physical volume |
1,502,555,900,000 |
I have one issue.
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.8G 5.8G 3.5G 63% /
udev 11M 0 11M 0% /dev
tmpfs 1.3G 9.5M 1.3G 1% /run
tmpfs 3.2G 74M 3.1G 3% /dev/shm
tmpfs 5.3M 4.1k 5.3M 1% /run/lock
tmpfs 3.2G 0 3.2G 0% /sys/fs/cgroup
/dev/sda6 66G 2.8G 60G 5% /home
tmpfs 628M 4.1k 628M 1% /run/user/117
tmpfs 628M 37k 628M 1% /run/user/1000
/dev/sr0 3.6G 3.6G 0 100% /media/cdrom0
Above is my list of free space and usage on my HDD.
I need more space for web development in /var/www and I want to somehow change the size of the /home partition /dev/sda1 and make a new partition only for /var/www
Is that possible?
|
As @jordanm pointed out in a comment, it's much safer to symlink into a new location in an existing partition than to try to resize partitions while keeping their current content.
In your situation, this would be achieved by something like:
cd /var
sudo mv -i www /home/
sudo ln -s /home/www .
(as a side note: it's usually a good habit to include the -i flag to mv in order to get a warning message if you're about to overwrite something, especially if you're issuing the mv with sudo)
| Change size of /home partition and move /var/www on new partition |
1,502,555,900,000 |
I made a separate partition for /home, but during installation process I forgot to mount it and hence no entry was made in fstab.
I had everything in partition under the root ( well not the swap and efi system partition). I realised what I did, very late and by that time I had already installed packages and wrote data in the home directory.
Now what I want to know is “is there any way possible to move my home directory to a separate partition with out losing any data?”
I was thinking of doing something like mounting the root directory in /mnt and then mounting a new partition (for home) in /mnt/home from a live USB and then generating the fstab.
But I am like 79% sure that this will wipe out my home directory.
SPEC: Arch Linux x86_64 latest kernel (5.0.4)
|
Because you already have a home partition, we should be able to do this without a live OS.
mount the new home on /mnt
move files from old-home (/home), to new home (/mnt). (/home should now be empty).
remount the new home on /home: either bind-mount it (sudo mkdir -p /home && sudo mount --bind /mnt /home; you can also use --move in place of --bind), or unmount it from /mnt and mount it on /home directly.
It is now as you want, but the mount is not persistent.
edit /etc/fstab (There may be tools to help you with this, I can't remember).
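A persistent mount is then a single line in /etc/fstab - something like the sketch below (the UUID is a placeholder; take the real one from lsblk -f):

```
# hypothetical /etc/fstab entry for the new home partition
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults  0  2
```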
| Add (already created) partition for /home after OS installation |
1,502,555,900,000 |
Background: I am running Manjaro Linux in a 750 GB HDD Laptop with 30 GB root. I had a Win installation but I've removed it now.
Now I want to move my root to a larger space, say 60GB, somewhere else on my HDD. What is the most efficient way to do that? I have separate /home and /boot partitions.
Note: I've searched the internet a lot, but the results are mostly about resizing the partition, or about LVM, or for server people who want to keep their server running, or about moving to a whole new drive. All my operations are on one drive and I don't have enough space before or after / to expand.
My fdisk -l /dev/sda output
Disk model: ST750LM022 HN-M7
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: XXXX
Device Start End Sectors Size Type
/dev/sda6 1109680128 1416878079 307197952 146.5G Microsoft basic data
/dev/sda9 1465137152 1465147391 10240 5M BIOS boot
/dev/sda10 732700672 733749247 1048576 512M EFI System
/dev/sda11 733749248 764999679 31250432 14.9G Linux swap
/dev/sda12 764999680 836302847 71303168 34G Linux filesystem
/dev/sda13 836302848 1109680127 273377280 130.4G Linux filesystem
Partition table entries are not in disk order.
|
I don't know about the most efficient way, but a way that is easy for the end-user is something like this:
Have a Linux system which can boot from an external medium (e.g. CD, USB). – Many installers have a "Try Linux" mode. They are fine.
Start gparted as root and have a tool with a nice GUI. You can literally cut and paste your partition.
Please be aware: resizing a partition is generally safe. However, moving a partition with the source and destination overlapping is dangerous. If the process is interrupted, e.g. due to power loss, your data will be lost.
| Moving the root partition to another partition |
1,502,555,900,000 |
My aging computer will need a replacement. That will happen in a couple of months but, in the mean time, I would like to learn techniques related to improving performance and resilience of data.
I also have an empty 2 TB usb external HDD, which is like 3-4 times the size of my current laptop's HDD. So.... I would like to do two things:
use the external HDD to mirror the data from my current laptop (assume /home is in a separate partition which is what I would like to experiment with.... I could setup a partition on the external HDD to hold a copy before I start the experiment) and use it along the laptop's HDD to improve performance when both HDDs are connected. Should be able to recover from disconnecting the external HDD and be able to sync up when I reconnect the external HDD.
when I get my new computer, be able to connect the external HDD and mirror the data into the new computer and be able to use it to improve performance while the external HDD is connected and be able to continue working if the external HDD is disconnected and sync up if I connect it again.
We could assume that once I start using the new computer, the external HDD won't be connected to the old computer, if that makes the scenario a little simpler to handle.
What would be possible ways to achieve it? If you need details about the current setup of the /home partition or about the drives involved, let me know, though I think that it could be laid down in the air.
As a tip, partitions in my laptop's HDD are set up using LVM (of which I know some things but want to take advantage of this experiment to learn more).
|
Unfortunately, this problem can be solved literally in millions of ways.
KISS approach would be:
format the new drive as a completely new device (you don't need to bother with LVM, really)
use some "manual" syncing mechanism like cp -a, tar|tar or rsync, to sync the old /home to new drive manually, or something like lsyncd to sync in near-real-time while external drive is connected
once the new computer arrives, ensure the uid:gid of your user is the same as on the old computer and cp -a/rsync/tar|tar from the external drive back into the home.
All the user-level application data will get transferred this way, but don't forget to also back up a list of installed application packages so you can recreate the original application "loadout".
More complex solution is to go at it at block level:
dmraid/mdraid (depending who you ask - userland control program is mdadm)
lvm
Because you already have a LVM setup, I believe mdadm RAID is inapplicable to your situation. But dmraid would be a simple "no crap" setup.
Most complex setup would be to mess with LVM, where you can go about it two ways (I think - if you count LVM snapshots): mirrors and snapshots. In my experience both LVM ways are sub-optimal for you. The saner one would probably be an LVM mirror. Unfortunately, due to my dislike of LVM, I will refrain from any advice in that matter.
Keep in mind that the KISS options listed have the added benefit that they will defragment the files during transfer (not much of an issue on SSDs - but still effective). Files will be recreated on the target volume in a linear fashion, getting defragmented in the process.
On the other hand, you might lose some space like "holes" in sparse files (but if I am not mistaken, modern rsync should have options even for that). If you know, that you are using lot of sparse files, you can use du --apparent-size to calculate worst maximum size for the volume in question.
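The gap between allocated and apparent size is easy to demonstrate on a throwaway sparse file (a sketch assuming GNU coreutils truncate and du):

```shell
# Create a 10 MiB file that is one big hole: apparent size 10 MiB,
# but (on filesystems with sparse-file support) almost no blocks allocated.
tmpdir=$(mktemp -d)
truncate -s 10M "$tmpdir/sparse"
apparent=$(du --apparent-size -k "$tmpdir/sparse" | cut -f1)
actual=$(du -k "$tmpdir/sparse" | cut -f1)
echo "apparent: ${apparent}K, allocated: ${actual}K"
rm -r "$tmpdir"
```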
Unfortunately, these days it's very hard to say - just like with "virtual memory" - how much space will be effectively consumed by your files on the new volume (the size actually used depends on too great a number of factors).
Sadly, block level transfers like LVM mirror, are suboptimal as "new" volume will inherit "bad state", if there was any, from the mirrored volume, so in this case I would be against it.
It's always safer and saner to recreate filesystem, than to drag it around.
If you are feeling paranoid, you should ensure /home is not accessed during the transfer process (you can even go into single-user mode) and if you are really mad, you can calculate sha256sum hashes of all the files (beware, this can take days, and the files cannot be written to!) and compare them after the transfer.
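The checksum-and-compare idea can be rehearsed on scratch files (a sketch; paths and file names are made up for the demo):

```shell
# Hash a "source" file, copy it preserving attributes, hash the copy,
# and compare - the same pattern scales to whole trees with find + sha256sum.
src=$(mktemp -d); dst=$(mktemp -d)
echo "important data" > "$src/file"
cp -a "$src/file" "$dst/file"
src_sum=$(sha256sum "$src/file" | cut -d' ' -f1)
dst_sum=$(sha256sum "$dst/file" | cut -d' ' -f1)
[ "$src_sum" = "$dst_sum" ] && result=ok || result=MISMATCH
echo "checksums: $result"
rm -rf "$src" "$dst"
```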
As always, archlinux wiki has pretty good introductory articles:
generic system backup
generic system transfer
I would suggest against experimenting with redundancy on your own crucial data; use VMs for such experimentation - it is much simpler, and you can break and learn much more.
| Move around data between boxes taking advantage of an external HDD |
1,502,555,900,000 |
TL;DR
I have a dd image disk.dd which has multiple partitions. The end goal is to reduce the size of this dd image file.
After deleting and recreating a higher numbered partition with a start sector offset lower than it was before, (i.e expanding the partition to the left) I have a partition which has a filesystem in it and whose primary superblock is somewhere inside this partition, and I know the sector at which this primary superblock resides.
How can I e2fsck this filesystem so that it moves to the beginning of the partition ?
So that afterwards I can shrink this filesystem with resize2fs and then shrink this partition from the right (i.e. recreate this partition with a lower end sector offset)
Then I'll repeat this process with the partitions after that until the last partition, effectively shrinking all partitions and hence reducing the size of dd image
Please do not suggest gparted. I'm looking for a command line solution
Also, I know this would've been easier with LVM. But this is a legacy system
Long version
I have a dd image disk.dd that I took using the following
dd if=/dev/sda of=/path/to/disk.dd
of a system which has the following layout
Disk /dev/loop15: 465.78 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x54093dd5
Device Boot Start End Sectors Size Id Type
/dev/loop15p1 * 2048 81922047 81920000 39.1G 83 Linux
/dev/loop15p2 81922048 143362047 61440000 29.3G 82 Linux swap / Solaris
/dev/loop15p3 143362048 163842047 20480000 9.8G 83 Linux
/dev/loop15p4 163842048 976773167 812931120 387.7G 5 Extended
/dev/loop15p5 163844096 976773119 812929024 387.7G 83 Linux
Now, on a different system, I'm accessing disk.dd through a loop device using
losetup --find --partscan disk.dd
I resized all of the ext4 filesystems with
resize2fs -M /dev/loopNpartX
resize2fs /dev/loopNpartX FSsize
i.e the partitions p1, p3 and p5
With dumpe2fs, I can see the logical block size of each filesystem, which is 4096 bytes for all the ext4 filesystems, which in my case, as shown above, are hosted on 3 partitions
Now, if I'm reading this correctly (correct me if I'm wrong here):
The primary superblock of a filesystem is "usually expected" to be located at block 0 of the partition
So, I can dump superblock information with
dumpe2fs -h -o superblock=0 -o blocksize=4096 /dev/loopNpartX
Now it's time to shrink partitions in order to reduce the size of disk.dd file
I got the block count for each file system again using dumpe2fs
fdisk works on physical block size OR sectors of device which in my case is 512 bytes
So, in order to find how many sectors should be enough to accommodate the filesystem, I used the following formula
Required Sectors = ( ( Block Count + 100 ) * Logical Block Size ) / Physical Block Size
with 100 acting as a buffer just in case I'm missing something about the organization of the filesystem, which should be enough
I did this for every filesystem
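In shell arithmetic the formula looks like this (a sketch; the block count of 1,000,000 is a made-up example, not one of the question's filesystems):

```shell
# Required Sectors = ((Block Count + 100) * Logical Block Size) / Physical Block Size
block_count=1000000   # hypothetical filesystem size in 4096-byte blocks
logical=4096          # filesystem logical block size
physical=512          # disk sector size
required=$(( (block_count + 100) * logical / physical ))
echo "required sectors: $required"
```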
Now
With lsblk -f, I get the UUIDs of existing filesystems
With fdisk -l, I get which partition to keep the boot flag on
Now to shrink partitions, I would delete and recreate them using fdisk
-- First partition
start sector offset = 2048
last sector offset = 2048 + "Required Sectors" for this filesystem
-- Second partition
Second partition on existing disk is swap, so I'll not shrink it, just move it left
start sector offset = "last sector offset" of first partition + 1
last sector offset = "start sector offset" + total sectors as on the existing partition
I then change its type to Swap
And then with tune2fs -U change the UUID back to what it was on the dd image
-- Third partition
start sector offset = "last sector offset" of second partition + 1
last sector offset = "start sector offset" + "Required Sectors" for this filesystem
Here is where I'm stuck
After expanding third partition to the left, this partition has a filesystem whose starting sector I know (i.e sector having the primary superblock)
But I don't know how to e2fsck this filesystem to correct it on the partition so that the filesystem is moved left to the beginning of the partition
|
It's not possible with fsck. In a filesystem, everything has offsets and if you change the start sector, all of these offsets change. fsck simply has no facility to re-write all offsets for everything (superblocks, journals, directories, file segments, etc.). And even if you could do that, it would only work if the new start sector aligns with internal filesystem structures.
So this is not done.
Instead, you'd have to shift all data to the left with dd (essentially what gparted does). Only by shifting the filesystem entirely, would the offsets within it remain intact.
In principle the dd command could work like this. It reads and writes to the same device, at different offsets. This can only work for shifting to the left, so seek (write to) must be smaller than skip (read from). All units in 512b sectors (if you specify bs=1M, your partitions must be MiB aligned and all units in MiB instead)
dd if=/dev/sdx of=/dev/sdx \
seek=newpartitionstart \
skip=oldpartitionstart \
count=filesystemsize
However, this is very dangerous. Use it at your own risk. Do take the time to backup your data first.
Shifting to the right would be more complicated. You'd have to work backwards, otherwise you overwrite data that has yet to be read, and corrupt everything in the process.
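A minimal pre-flight check for the shift direction might look like this (the sector numbers are hypothetical, loosely modelled on the partition table above):

```shell
# A plain forward dd copy is only safe when writing behind the read
# position, i.e. when the new (seek) offset is below the old (skip) one.
oldstart=165844096   # hypothetical current partition start (skip)
newstart=163844096   # hypothetical target start (seek)
if [ "$newstart" -lt "$oldstart" ]; then
  echo "left shift of $(( oldstart - newstart )) sectors: safe direction"
else
  echo "right shift: would overwrite unread data, do not use plain dd"
fi
```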
The only tool I know that does it (more or less) without shifting data is blocks --lvmify, which achieves it by converting the existing filesystem partition to LVM. With LVM, you can logically expand to the right while it's physically stored on the left. Without LVM, you could also set up a linear device mapping manually, but then you are stuck with a non-standard solution.
The most sensible approach to this type of problem (if you don't want to use gparted) would be to backup all data, then make new partitions and filesystems in any layout you like, and then restore your data.
If this dd image is your approach to a backup solution, consider backing up files instead. Disk images can be hard to handle, especially if you want to transform them afterwards.
If your main goal is reduce the storage requirement of the image file, what you could do is fstrim (for loop mounted filesystem - losing all free space), or blkdiscard (for loop swap partition - losing all data).
Provided the filesystem that stores the image supports sparse files and hole punching, it would make the dd image use less storage space w/o changing any layout, as any free space within the image would also be freed for the backing filesystem.
Similarly, this is dangerous, if you discard the wrong parts of the image file, the image file is irrecoverably damaged. The simple act of creating a loop device for an image file, and mounting it, already modifies/damages the image file.
If the source disk is SSD, and it's already using fstrim regularly, and reads trimmed areas as binary ZERO, you can create an already sparse dd image in the first place using dd conv=sparse if=/dev/ssd of=ssd.img. This way any binary zero area would not take up space in the ssd.img file. Note that conv=sparse can lead to corrupt results when used in the other direction when restoring to a non-zero target drive.
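The effect of conv=sparse can be demonstrated safely on a throwaway file instead of a real disk (a sketch, assuming GNU dd and a filesystem with sparse-file support):

```shell
# Copy a file full of zeros with conv=sparse: zero blocks are seeked
# over instead of written, so the copy allocates (almost) no blocks.
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/full" bs=1M count=1 status=none
dd if="$tmp/full" of="$tmp/sparse" conv=sparse bs=4k status=none
full_alloc=$(du -k "$tmp/full" | cut -f1)
sparse_alloc=$(du -k "$tmp/sparse" | cut -f1)
echo "full: ${full_alloc}K, sparse copy: ${sparse_alloc}K"
rm -r "$tmp"
```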
| Move filesystem to the left after expanding partition to the left |
1,502,555,900,000 |
I have this setup of Ubuntu 18:
two identical disks. In one (sda) there is / and /boot/efi. In the other disk (sdb) there is /home.
The free space in sda is enough to hold what is used in sdb.
I would like to move /home to sda. It could be the same partition sda1, or it could be a new partition, say sda3.
Here is what gparted shows me:
Is it possible to make this moving?
I don't know if what I'm saying makes sense, but my plan is:
resize sda1 to say, 200GB
format the empty space in sda to ext4 (creating sda3)
copy the content of sdb1 to sda3
tell linux that /home now is in sda3 (this part I have no idea how to do)
|
If you use sudo or su instead of logging in directly, you will still lock the home directory and you will be unable to umount /home. If you cannot login directly, you should probably use a live CD.
Another option however would be:
create a user safemove, with a home directory /tmp.
add safemove to the sudoers file with visudo
log in with safemove at a text console (typically Ctrl+Alt+F1). This will prevent the creation of the complete graphical desktop under /tmp.
make sure everybody else is logged off.
as safemove, sudo -s
mkdir /mnt/home
umount /home
mount /dev/sdb1 /mnt/home
copy /mnt/home/* to /home if you do not want a separate partition, including ownership and permission flags.
remove /home from /etc/fstab
take a deep breath and reboot.
You should now have everything in /dev/sda1. Check this first before removing the user safemove.
If you are going to mess with /home, it is always good to have, at least temporarily, a userid that does not have its homedir under /home and that can sudo and/or su.
If you want to
| Is it possible to move /home mount point from one disk to the disk where / mount point is? |
1,502,555,900,000 |
I am having the famous warning "low disk space" on my Linux Mint in var/cache/apt/archive directory, and I want a permanent solution ( aka not apt-get autoclean because the mentioned directory is not enough for all the packages so cleaning it doesn't solve the problem)
What are the possible solutions to this problem?
Also is there a way to solve the problem without using partitioning or copying data to live disks like making a directory in another partition where apt can use it in the future installs?
I tried to create another directory that has enough space needed and used the command ln var/cache/apt/archive /home/new/directory but didn't work
|
The easiest way is to move the directory to another partition and then create a symlink from the old location to the new.
You seem to have lots of space free on /home, so use that:
apt-get clean
This step is optional, but highly recommended. It will delete all of the downloaded .deb files currently in /var/cache/apt/archives. If you don't want to delete them, that's fine but the third step (mv) will take a while - however long it takes to copy and delete the archives from one partition to another.
Make a convenient directory for the .deb archives:
mkdir -p /home/var/cache/apt/
You can put it anywhere you like under /home (or any other partition with lots of free space available) e.g. /home/apt will do, but it's useful and convenient to keep the same directory structure.
mv /var/cache/apt/archives /home/var/cache/apt
ln -s /home/var/cache/apt/archives/ /var/cache/apt/
That's it, done.
The next time you use apt, apt-get, etc it will follow the symlink from /var/cache/apt/archives to /home/var/cache/apt/archives - all of the downloaded files will be stored there.
BTW, if you're still short of space on /, you might want to do the same thing with /usr/share/doc - mv and symlink it to, e.g., /home/usr/share/doc.
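The move-and-symlink pattern itself can be rehearsed on scratch directories first (the paths here are temporary stand-ins, not the real apt directories):

```shell
# Move a directory to a new location and leave a symlink at the old
# path, so anything using the old path transparently follows it.
old=$(mktemp -d)   # stands in for /var/cache/apt
new=$(mktemp -d)   # stands in for /home/var/cache/apt
mkdir "$old/archives"
touch "$old/archives/example.deb"
mv "$old/archives" "$new/"
ln -s "$new/archives" "$old/archives"
target=$(readlink "$old/archives")
reachable=no
[ -f "$old/archives/example.deb" ] && reachable=yes
echo "symlink -> $target (old path reachable: $reachable)"
rm -rf "$old" "$new"
```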
| Low disk space “apt/archives” |
1,502,555,900,000 |
Good day,
I have a 120GB SSD and a 1TB HDD on my system, and on my SSD I have Windows installed. Afterwards, I added Linux Mint as dual boot. Before installing, I shrank space from the HDD and created /, /home, swap etc. on the HDD. But when I launch Windows vs. Mint, I really feel the difference, so I thought it would be cool to move my Mint to the SSD. I have a few questions about it.
Will it increase my boot and general speed if I somehow move / to SSD or will I have to move all files to there?
Would it really be a good idea to shrink a 120GB SSD?
If the 2nd question's answer is yes, would it make difference to do this operation (moving Mint to SSD) as dual boot rather than normal installation?
(following this guide http://blog.oaktreepeak.com/2012/03/move_your_linux_installation_t.html)
OS: Windows 10 64bit, Linux Mint 18.1 64bit
|
Ok, since you might get confused by the comments, I've decided to write an answer. What I'm suggesting is not a simple procedure, though, and if you're not experienced enough, you will end up with an unbootable Linux, or even worse - broken partitions and lost data.
I would not suggest you follow it unless you're experienced enough and know how to recover from boot failures later, i.e. booting from USB, mounting, chrooting, etc...
These steps are not a copy/paste howto, so if you have doubts or question on any of these steps, do not start with this.
You can create one new partition (5GB for example) on your SSD and move some parts of your linux there.
Then format it with Ext4 or whatever FS you prefer.
Copy all folders except "/home", "/var", "/media", "/run", "/opt", "/boot", "/mnt", "/proc", "/dev", "/sys".
Actually you should be copying "/lib*" folders, "/bin", "/sbin" folders, "/usr", "/etc" folders and some more probably.
Then create "/sys", "/dev", "/proc" empty folders on the SSD.
You should update the ROOT in your bootloader config and fstab.
Here you should find a way to get the rest of the folders mounted, but since they are on a single partition on HDD, it's not that easy.
you can mount them in /storage folder for example and make symlinks to the root fs.
or mount them in /storage folder and then bind mount them to their root fs folders (mount -o bind)
in both cases you should later update fstab to do the mounting.
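For the bind-mount variant, the fstab entries might look like the sketch below (the UUID and the /storage sub-paths are placeholders, not taken from the question):

```
# mount the HDD partition on /storage, then bind its subfolders back
# into the root filesystem (hypothetical /etc/fstab entries)
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /storage  ext4  defaults  0  2
/storage/home   /home   none  bind  0  0
/storage/var    /var    none  bind  0  0
```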
Note: there are probably many other ways achieve what you want.
On my linux, I have everything on the SSD and a /storage folder (an HDD-mounted partition) to hold my /home/user/[some sub folders], /var/cache and some other data-heavy folders, with symlinks to the root fs.
| Dual Boot System, Moving Linux Mint to SSD |
1,502,555,900,000 |
I am using Ubuntu 22.04 LTS in a windows dual boot setup. This is the state of the partitions at the moment. (Windows Screenshot)
On my Ubuntu, I have the following
df -H
Filesystem Size Used Avail Use% Mounted on
tmpfs 3.4G 3.0M 3.4G 1% /run
efivarfs 263k 138k 120k 54% /sys/firmware/efi/efivars
/dev/nvme0n1p6 51G 42G 6.2G 88% /
tmpfs 17G 1.1M 17G 1% /dev/shm
tmpfs 5.3M 4.1k 5.3M 1% /run/lock
/dev/nvme0n1p7 160G 57G 96G 38% /home
/dev/nvme0n1p1 101M 35M 67M 34% /boot/efi
tmpfs 3.4G 177k 3.4G 1% /run/user/1000
Partition 6 (/) and partition 8 (/home) are the ones I am using for Ubuntu and want to expand them to un-allocated spaces.
How can I safely resize my partition 6 to take up available space on the left?
|
There are multiple things you need to understand:
a partition contains a file-system, usually but not necessarily with the same size as the partition
partitions can be expanded (or reduced) at their end only, because the file-system always has to start at the beginning of the partition. If you change the start point of the partition, the filesystem cannot be found anymore.
a file-system can be expanded into an increased partition size very easily, and on current filesystems even in live mode
you must never decrease the size of a partition without decreasing the filesystem size first, or you will lose data.
That being said, you can move your p6 (/) to the “left”, i.e. move not only the boundaries of the partition but also all its data, including the filesystem. This means the whole partition will be copied, no matter how much data it contains (this may be inaccurate if you are using modern partitioning tools on SSDs, because SSDs, unlike hard drives, have another abstraction layer, but the behaviour is the same, only faster).
After you moved your partition to the left, you can increase its size to the right. Depending on your partitioning tool, it will automatically increase the size of the file-system as well.
Long story short: If you want to increase a partition with a filesystem to the “left”, you have to move it so you can extend it to the “right” :-).
Because Windows can't handle Linux filesystems and you cannot move a filesystem while it is in use by a running Linux system (you can only extend it to the right), I recommend you use an Ubuntu live session and a tool like GNOME Disks or gparted.
| Ubuntu resize partition in in backward direction |
1,502,555,900,000 |
This question relates to a Windows/Linux dual boot system on a DOS-partitioned SSD (i.e. none with GPT/UEFI). Originally, the computer only had Windows 10 on an HDD. Then I managed to transfer this system to the SSD, resize the partitions and install Xubuntu 20.04 alongside Windows 10, and it all worked fine.
There always was an EFI partition on the drive. I don't know what it is good for since this is not an UEFI device. But I did not change this partition. - There is no swap partition on this system.
I wanted to create more space for my Linux system partition. To be more flexible in case the requirements change later, I moved the Windows home partition between the two Linux partitions. The partition layout looks like this now:
NAME FSTYPE PARTTYPE PARTFLAGS LABEL
sda
├─sda1 ntfs 0x7 0x80 System-reserviert
├─sda2 ntfs 0x7 SSD-Windows-Sys
├─sda3 vfat 0xef SSD-EFI
├─sda4 0x5
├─sda6 ext4 0x83 SSD-Linux-Sys
├─sda5 ntfs 0x7 SSD-Windows-Home
└─sda7 ext4 0x83 SSD-Linux-Home
(sda-numbers not strictly ascending, the partitions are shown in SSD-storage order sda6-sda5-sda7.)
I did not alter any of the partitions 1 to 3. I succeeded in maintaining the original UUIDs and LABELs using gparted to move and resize the ntfs partition SSD-Windows-Home. There is no equivalent to tune2fs -U <UUID> for vfat and ntfs and I did not want to exchange the serial numbers by fiddling with dd as proposed in this discussion; therefore I did it by resizing and moving the ntfs partition SSD-Windows-Home.
For the Linux partitions, I used gparted to create an new ext4 partition at the end of the SSD, rsync to copy the contents from the old to the new partition, then I deleted the old one and resized the other partitions to fill the gaps. Finally I applied tune2fs to achieve the shown state, especially in maintaining the same LABELs and UUIDs for all partitions as they were before all this.
During the partition change work, I encountered a warning that boot problems might show up after my changes. I did not care, since the LABELs and the UUIDs for each partition remained the same. But this was a keen assumption, as I had to notice, when I tried to reboot:
The boot process stopped in grub rescue> rather than in the GRUB2 menu asking me which operating system to boot.
I succeeded to boot the computer by issuing these commands:
grub rescue> set prefix=(hd0,6)/boot/grub
grub rescue> set root=(hd0,6)/
grub rescue> insmod linux
grub rescue> insmod normal
grub rescue> normal
Then the GRUB2 menu was shown and I could select between Linux and Windows 10. Both operating systems worked as before all my partition changes (of course I had to shut down the computer in between and go through grub rescue> again).
I was advised to run the following command after booting into the Linux system in order to permanently recover from the grub rescue problem:
$ LC_ALL=C sudo grub-install --target=/boot/grub/i386-pc /dev/sda
grub-install: error: /usr/lib/grub/i386-pc/modinfo.sh doesn't exist. Please specify --target or --directory.
$
There is a file /boot/grub/i386-pc/modinfo.sh (and there are two more of them in /boot/grub/x86_64-efi and in /usr/lib/grub/x86_64-efi).
Therefore I have tried
$ LC_ALL=C sudo grub-install --target=/boot/grub/i386-pc /dev/sda6
grub-install: error: /usr/lib/grub/boot/grub/i386-pc/modinfo.sh doesn't exist. Please specify --target or --directory.
$
It searches in the wrong directory. Therefore I have added --directory=/boot/grub/i386-pc:
$ LC_ALL=C sudo grub-install --directory=/boot/grub/i386-pc /dev/sda6
Installing for i386-pc platform.
grub-install: error: cannot open `/boot/grub/i386-pc/moddep.lst': No such file or directory.
$ ls -l /boot/grub/i386-pc/moddep.lst
-rw-r--r-- 1 root root 5416 2019-12-10 10:34 /boot/grub/i386-pc/moddep.lst
$
As you can see from the ls command, this error message is definitely wrong: /boot/grub/i386-pc/moddep.lst exists and root can access it! Now I'm at my wits' end.
Since the computer can now be started with the grub rescue commands (rather than from a live stick and chroot), it shouldn't be that difficult to apply exactly the information I entered permanently, so that I don't have to enter it on every boot.
How do you do this correctly?
|
The presence of /usr/lib/grub/x86_64-efi and /usr/lib/grub/x86_64-efi-signed indicates that Xubuntu was installed as a UEFI-booting system, which suggests that your system is in fact UEFI-capable.
The fact that your sda disk includes a Windows installation and is partitioned in MBR style indicates your Windows must boot using the classic BIOS style. To achieve the ability to easily choose the OS to boot from the GRUB menu, you would want Xubuntu to boot in legacy BIOS style too. But it appears the packages required for legacy-style GRUB are not currently installed in your system.
You said grub-efi is not installed on your system: such a package does not exist on Debian/Ubuntu. The UEFI equivalent of grub-pc is named grub-efi-amd64. The name grub-pc is legacy from when GRUB used to be an x86/BIOS-only bootloader; now it supports many other architectures and firmware types.
The commands suggested in the comments had some typos: the --target option of grub-install does not take a pathname, but a GRUB platform identifier: in your case, i386-pc.
First, install the legacy BIOS support packages for GRUB:
sudo apt install grub-pc grub-pc-bin
Then, install GRUB into the Master Boot Record of the system:
sudo grub-install --target=i386-pc /dev/sda
Check the contents of /etc/default/grub. If there is a line
GRUB_DISABLE_OS_PROBER=true
you may want to change it to false, make sure the os-prober package is installed, and then run sudo update-grub to create a GRUB configuration that includes both your OSs.
Then mount sda3 and inspect its contents:
sudo mount /dev/sda3 /mnt
sudo ls -l /mnt
If it includes an EFI directory, rename it to something else (in case your system firmware has uncontrollable preference towards UEFI booting):
sudo mv /mnt/EFI /mnt/NO-EFI
sudo umount /mnt
Now it's time to reboot. After verifying that you can now successfully boot both your operating systems, you can remove the UEFI boot support packages with:
sudo apt purge grub-efi-amd64 grub-efi-amd64-bin grub-efi-amd64-signed efibootmgr shim-signed shim-helpers-amd64-signed shim-signed-common
| How do I permanently recover from grub rescue? |
1,502,555,900,000 |
This is what my partitions look like. The first one is the root (/) partition that I need to extend, the second one is the /home partition and the third is the unallocated space that I want to add to the root partition. How can I do that?
|
It’s a good idea to take backups of any data on your computer that you cannot afford to lose before doing this. It’s not dangerous but it is possible to mess up and lose data.
You need to boot your system into a live environment that includes GParted. An Ubuntu installation ISO is ideal, or there is also a GParted Live ISO available. Once booted, run GParted.
Once you have done that you need to move partition 2 all the way to the right and click apply. This will take some time so be patient. That step moves the unallocated space between partitions 1 and 2.
Next you will be able to expand partition 1 and click apply, making use of all the unallocated space. This should be quite quick.
That’s it! Good luck
Link to similar question: https://askubuntu.com/questions/126153/how-to-resize-partitions
| How can I extend the root/filesystem to take unallocated space? |
1,502,555,900,000 |
I installed Linux Mint in a dual boot alongside Windows 11. Now I am trying to get rid of Windows 11 completely, but I haven't been able to combine the two partitions that I used for personal data storage (one of them is empty, so there is no need to keep any files).
This is what my partitions look like on lsblk:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0 0 238,5G 0 disk
├─nvme0n1o1
│ 259:1 0 260M 0 part /boot/efi
├─nvme0n1p5
│ 259:2 0 27,9G 0 part /
├─nvme0n1p6
│ 259:3 0 5,7G 0 part
├─nvme0n1p7
│ 259:4 0 65G 0 part /media/user/
262f
└─nvme0n1p2
259:5 0 139,6G 0 part
I would like to combine 'nvme0n1p2' and 'nvme0n1p7'. I don't know if that is possible, as I haven't been able to do it in GParted.
|
You can merge two partitions only if they are adjacent. If they are not adjacent, you can always use LVM to initialize each partition as a Physical Volume, add the two Physical Volumes to a Volume Group, and create a Logical Volume from the Volume Group; then format the Logical Volume as usual.
The commands would be more or less this:
pvcreate /dev/nvme0n1p7
pvcreate /dev/nvme0n1p2
vgcreate myvg /dev/nvme0n1p7
vgextend myvg /dev/nvme0n1p2
lvcreate -L 204G -n mylv myvg
mkfs -t xfs /dev/myvg/mylv
Note that this will wipe the content of both partitions, so if they contain any data you want to keep, backup the data onto another device before proceeding.
| How to combine two partitions? |
1,502,555,900,000 |
I am trying to expand the size of my root directory as I am running low on space. I have tried resizing it from a Live USB and it won't let me.
The text in red is the mounting point (according to the partition manager) when booting from the drive. /dev/sdc5 mounts to /boot/efi and /dev/sdc6 mounts to /
fdisk -l /dev/sdc yields:
Disk /dev/sdc: 29.3 GiB, 31406948352 bytes, 61341696 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4e13a3a7
Device Boot Start End Sectors Size Id Type
/dev/sdc1 2048 15628287 15626240 7.5G c W95 FAT32 (LBA)
/dev/sdc2 37490686 61339647 23848962 11.4G 5 Extended
/dev/sdc5 * 37490688 38539263 1048576 512M ef EFI (FAT-12/16/32)
/dev/sdc6 38541312 61339647 22798336 10.9G 83 Linux
|
You don't have any unallocated space after the extended partition into which you can resize it. You can either reinstall or if you want to take the arguably more fun approach, perform the following gymnastics:
Create two primary partitions using the unallocated space: one will be your new ESP, which you can make 256MB (or even 128MB), and the other will be your new rootfs. Create them in that order.
At this point you'll have two ESPs, so remove the original ESP to avoid any conflicts. Note that deleting the partition will not wipe out the data; the partition can be revived with the information I had you add to your post.
Format the new partitions accordingly and copy your files over.
Check the /etc/fstab in the new rootfs, and update it if needed.
Boot from the USB drive to ensure everything is working.
Delete the extended partition. You'll now have unallocated space after your rootfs partition.
Resize the rootfs partition into the unallocated space. You'll need to boot from another system to do this.
Resize the rootfs to grow into the now larger partition, using resize2fs.
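The last step can be rehearsed risk-free on a scratch image file first, since resize2fs (and truncate, standing in here for the partition grow) happily operate on regular files. Sizes and filenames in this sketch are arbitrary:

```shell
# Rehearsal on a scratch image file -- nothing here touches a real disk.
export PATH="$PATH:/sbin:/usr/sbin"
truncate -s 64M fs.img                # stand-in for the rootfs partition
mke2fs -q -F -b 4096 -t ext4 fs.img   # -F: allow a plain file as target
truncate -s 128M fs.img               # simulates growing the partition
e2fsck -pf fs.img > /dev/null         # resize2fs wants a clean filesystem
resize2fs fs.img                      # grow the fs to fill the new space
dumpe2fs -h fs.img 2>/dev/null | grep '^Block count'
```

On the real system the same `resize2fs /dev/sdc6` call grows the filesystem into whatever size the partition has become.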
| Resize extended partition containing /boot/EFI and root |
1,487,617,670,000 |
I was just introduced to multipathing in our production environment and had never heard of the concept prior. After some digging I think I'm starting to get a handle on how the concept works in theory but I'm having some trouble extrapolating that to what I'm seeing on the box I'm working on.
From multipath -ll I get output like:
mpath0 (36000d3100088060000000000000000b9) dm-0 COMPELNT,Compellent Vol
size=60G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
|- 0:0:0:0 sda 8:0 active ready running
|- 0:0:1:0 sdd 8:48 active ready running
|- 1:0:0:0 sdi 8:128 active ready running
`- 1:0:1:0 sdl 8:176 active ready running
From fdisk -l I know that those are all 60GB disks, with the same partition setup:
Disk /dev/sda: 64.4 GB, 64424509440 bytes
255 heads, 63 sectors/track, 7832 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sda1 * 1 13 104391 83 Linux
/dev/sda2 14 7832 62806117+ 8e Linux LVM
What is confusing to me though is how the partitions are actually mounted on the server:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
30G 26G 3.8G 87% /
/dev/mapper/mpath0p1 99M 49M 46M 52% /boot
tmpfs 16G 232M 16G 2% /dev/shm
/dev/mapper/mpath2p1 493G 226G 242G 49% /u02
Just considering /boot for now: It is mounted to mpath0p1, I can see that much. But how does this correspond to the physical disk/LVM behind the multipath?
|
Your multipath'ed device is just an abstraction of multiple paths to one disk. So the corresponding relationship you are asking about is that the mpathN device is the same as the underlying device at the far end of whatever fabric you have.
As you saw, you can view the partition table on the mpath device and its constituent members and see the same layout.
Some folks see a similarity between the concepts of multipath and RAID1. They are not related, but I've found it a useful comparison. The underlying devices of a multipath device are not duplicate copies as in RAID1. They are just parallel attachments to the same, typically remote, disk/LUN.
Regarding your question about how the partitions are mounted, they are mounted as they could be without multipath (assuming devices aren't hardcoded in fstab and lvm.conf). So you have mpath0p1 mounted at /boot. In your case -- if these devices were not managed by multipathd -- this is the same as mounting /dev/sda1 at /boot (and in your example, sdi1, sdd1, or sdl1 could be substituted for sda1). The difference is that if your fiber (or whatever) connection that presents sda1 is disconnected, your disk will still be accessible, using the multipath driver, via sdd, sdi and sdl.
In this case, you have the first partition of the remote disk mpath0 mounted at /boot, the first partition of disk mpath2 at /u02. The second partition in your fdisk output of sda is marked as an LVM physical partition. Presumably this contains the volume group VolGroup00 and in turn the logical volume LogVol00, which is mounted at /
| Understanding multipath and mountpoints |
1,487,617,670,000 |
SERVER:~ # pvs
Found duplicate PV Wb0b2UTCKtpUtSki0k2NnIB24qNj4CEP: using /dev/mapper/36005076304ffc2500000000000004903 not /dev/mapper/36005076304ffc2990000000000004903
PV VG Fmt Attr PSize PFree
/dev/mapper/36005076304ffc2500000000000004903 application lvm2 a-- 50.00g 35.00g
/dev/sda4 system lvm2 a-- 133.24g 100.39g
SERVER:~ #
OS is a SLES 11 SP3.
Question: Could this be a problem? If yes, how to solve the duplicate PV message? :) The disks are coming from SAN/multipath.
|
In my personal experience, "duplicate PV" is usually caused by the system having multipath access to a particular SAN LUN but LVM hasn't been configured to filter out the block devices for the individual paths. The device mapper name even looks like a WWNN/WWPN (although I don't have enough experience with SLES to know if that could be something else). Not sure why a PV would itself be served out of a DM device, though.
In RHEL I would look in /dev/disk/by-path and see if these are the same LUN's.
Could this be a problem?
If you're supposed to be on a multipath setup it could be an issue. For example if the underlying device is supposed to be /dev/mapper/mpathf but LVM found /dev/sdf first and decided to activate that, then your access to storage isn't as redundant as you were spec'd out to be. For example if the path /dev/sdf goes down the VG and all its LV's could become inaccessible.
If yes, how to solve the duplicate PV message?
With LVM, each PV has an "LVM header" that tells you the UUID of this PV, the name of the VG it's in, and the UUID's of all the other PV's in the same VG (which is how it can tell if there's a missing PV). All this error means is that it found another PV out there that had the same UUID.
So there isn't really a single cause for this so it's hard to propose a solution with the information you've provided.
It sounds like your lvm.conf just needs its filter set up to ignore the individual paths (as stated earlier) but you'd have to do more research to confirm that since that's pretty much a WAG (wild-ass guess).
For an example of an lvm filter:
filter = [ "r/block/", "r/disk/", "r/sd.*/", "a/.*/" ]
The above filter skips ("removes") any device with the words "block" or "disk" in the name. It also removes any device that begins with "sd" (such as sdf, sdg, etc, etc) and finally "allows" all other devices (".*").
You probably don't want to go that far though (since you use /dev/sda4 for the root VG). I would just remove the specific block devices that are for the individual paths.
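For example, if the individual path devices behind your multipath maps happened to be /dev/sdb through /dev/sde (hypothetical names — check the multipath -ll output for the real ones on your system), a filter that rejects just those while keeping /dev/sda and the dm devices visible could look like:

```
filter = [ "r|^/dev/sd[b-e][0-9]*$|", "a|.*|" ]
```

The first entry rejects the per-path sd devices and their partitions; the final "a|.*|" accepts everything else, so /dev/sda4 and the /dev/mapper multipath devices still get scanned.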
But again, verify. It could be a million other things too (SAN Admin cloned a LUN and presented it to your system for some reason, unlikely random collision between UUID's, cosmic rays, bad luck, etc).
UPDATE:
I should also mention that any time you update /etc/lvm/lvm.conf (RHEL path) you should rebuild any initramfs you have. It looks like you're using these as storage outside the root VG (which is best practice) but any time you modify that file you should make sure the kernel sees the same file at boot as it does thereafter just so you get consistent results.
| "Found duplicate PV" |
1,487,617,670,000 |
I am setting up multipaths for Veritas backup on SAN storage. I noticed that lsblk shows duplicate disks, which is quite confusing.
For example, both sdc and sdd represent the same disk, and similarly, sde and sdf represent the same device.
sdc 8:32 0 50G 0 disk
├─sdc1 8:33 0 50G 0 part
└─san69 253:10 0 50G 0 mpath
└─san69p1 253:11 0 50G 0 part
sdd 8:48 0 50G 0 disk
├─sdd1 8:49 0 50G 0 part
└─san69 253:10 0 50G 0 mpath
└─san69p1 253:11 0 50G 0 part
sde 8:64 0 69G 0 disk
├─sde1 8:65 0 69G 0 part
└─mpathb 253:12 0 69G 0 mpath
└─mpathb1 253:13 0 69G 0 part /mnt
sdf 8:80 0 69G 0 disk
├─sdf1 8:81 0 69G 0 part
└─mpathb 253:12 0 69G 0 mpath
└─mpathb1 253:13 0 69G 0 part /mnt
multipath -ll output is as follow
mpathb (360050763808106804800000000000001) dm-12 IBM,2145
size=69G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 11:0:1:1 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
`- 11:0:3:1 sdf 8:80 active ready running
san69 (360050763808106804800000000000000) dm-10 IBM,2145
size=50G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=enabled
| `- 11:0:3:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
`- 11:0:1:0 sdd 8:48 active ready running
|
sdc, sdd, sde and sdf are paths to a storage device, known as a LUN; for a single LUN you can have many of them (usually 2 or 4, sometimes 8, depending on the SAN configuration).(*)
mpathb and san69 are "logical" devices; you build LVM, or in your case a partition, on top of them.(*)
mpathb's data can be accessed through either sde or sdf.
The multipath driver takes care of both load balancing and failover.
From your data, we can tell your IBM device is connected to the SAN through only two ports (see the illustration in Switched Fabric on Wikipedia).
You should have /mnt mounted over /dev/mapper/mpathb1; that's all you need to worry about.
Should a failure occur, you would see:
mpathb (360050763808106804800000000000001) dm-12 IBM,2145
size=69G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| `- 11:0:1:1 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=fail
`- 11:0:3:1 sdf 8:80 unknown
san69 (360050763808106804800000000000000) dm-10 IBM,2145
size=50G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=fail
| `- 11:0:3:0 sdc 8:32 unknown
`-+- policy='round-robin 0' prio=10 status=enabled
`- 11:0:1:0 sdd 8:48 active ready running
(I no longer have a multipath device at hand, so I wrote the status output above from memory.)
(*) People often use LVM, that is, create a physical volume and a volume group on top, to allow easily increasing capacity.
Also, if there is only one partition on the disk, people often use the whole disk instead (same reason as above: it is easier to increase the size).
| Multipath Duplicate Drives |
1,487,617,670,000 |
I have a multipath device I'm interested in:
[root@xxx dm-7]# multipath -ll mpathf
mpathf (3600601609f013300227e5b5b3790e411) dm-7 DGC,VRAID
size=3.0T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=130 status=active
| |- 7:0:1:5 sdl 8:176 active ready running
| `- 8:0:1:5 sdx 65:112 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
|- 7:0:0:5 sdf 8:80 active ready running
`- 8:0:0:5 sdr 65:16 active ready running
So it looks like the block devices backing this path are sdf, sdr, sdl, and sdx. Just taking sdf as an example, I've set its I/O scheduler to noop:
[root@xxx dm-7]# cat /sys/block/sdf/queue/scheduler
[noop] anticipatory deadline cfq
The mpathf device maps to /dev/dm-7 for the actual block device. I've just noticed that this has an I/O scheduler as well:
[root@xxx dm-7]# cat /sys/block/dm-7/queue/scheduler
noop anticipatory deadline [cfq]
Question: which one takes precedence? The scheduler on the multipath device or on the device it ends up relaying the I/O through?
I'm of course assuming that IOPs aren't scheduled twice (once for the mpath device and another for the individual block device the I/O is redirected into).
|
Short Answer:
Device mapper in kernel versions after 2.6.31 (released September 9th 2009) includes support for "request-based" dm targets. So far the only request-based dm target is dm-multipath.
For the targets that remain BIO-based (i.e. everything except multipath), scheduler selection is still present but irrelevant, as the DM target hands off the I/O prior to that point.
For request-based targets the scheduler selection supersedes what is set on the individual block device as multipathd will be communicating the requests directly to the underlying SCSI device (/dev/sg4, /dev/sg5, etc).
Additional information:
User application I/O is referred to as BIO (block I/O). BIO is sent to the scheduler/elevator for request merging/ordering and then is sent as a "request" to the lower level devices.
Historically, dm-multipath has been solely at the BIO level. This created a problem where traffic from separate BIO's would be merged by the block device (sdb, sdf, etc) resulting in some request queues being shorter/less used than other possible paths. BIO dm-multipath was also unable to have visibility on things like retry events or the like, as it was hidden by the block device (/dev/sda, /dev/sdb, etc).
The sysfs object for multipath block device prior to the change (RHEL 5):
[root@xxxsat01 dm-1]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.10 (Tikanga)
[root@xxxsat01 ~]# uname -r
2.6.18-371.8.1.el5
[root@xxxsat01 dm-1]# cat dev
253:1
[root@xxxsat01 dm-1]# ll
total 0
-r--r--r-- 1 root root 4096 Jan 29 13:54 dev
drwxr-xr-x 2 root root 0 Apr 29 2014 holders
-r--r--r-- 1 root root 4096 Jan 29 13:54 range
-r--r--r-- 1 root root 4096 Jan 29 13:54 removable
-r--r--r-- 1 root root 4096 Jan 29 13:54 size
drwxr-xr-x 2 root root 0 Jan 25 06:25 slaves
-r--r--r-- 1 root root 4096 Jan 29 13:54 stat
lrwxrwxrwx 1 root root 0 Jan 29 13:54 subsystem -> ../../block
--w------- 1 root root 4096 Jan 29 13:54 uevent
Post-Change (RHEL 6):
[root@xxxlin01 dm-1]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.5 (Santiago)
[root@xxxlin01 ~]# uname -r
2.6.32-431.3.1.el6.x86_64
[root@xxxlin01 dm-1]# cat dev
253:1
[root@xxxlin01 dm-1]# ll
total 0
-r--r--r-- 1 root root 4096 Jan 29 13:58 alignment_offset
lrwxrwxrwx 1 root root 0 Jan 29 13:58 bdi -> ../../bdi/253:1
-r--r--r-- 1 root root 4096 Jan 29 13:58 capability
-r--r--r-- 1 root root 4096 Jan 29 13:58 dev
-r--r--r-- 1 root root 4096 Jan 29 13:58 discard_alignment
drwxr-xr-x 2 root root 0 Jan 29 13:58 dm
-r--r--r-- 1 root root 4096 Jan 29 13:58 ext_range
drwxr-xr-x 2 root root 0 Jan 29 13:58 holders
-r--r--r-- 1 root root 4096 Jan 29 13:58 inflight
drwxr-xr-x 2 root root 0 Jan 29 13:58 power
drwxr-xr-x 2 root root 0 Jan 29 13:58 queue
-r--r--r-- 1 root root 4096 Jan 29 13:58 range
-r--r--r-- 1 root root 4096 Jan 29 13:58 removable
-r--r--r-- 1 root root 4096 Jan 29 13:58 ro
-r--r--r-- 1 root root 4096 Jan 29 13:58 size
drwxr-xr-x 2 root root 0 Jan 29 13:58 slaves
-r--r--r-- 1 root root 4096 Jan 29 13:58 stat
lrwxrwxrwx 1 root root 0 Jan 29 13:58 subsystem -> ../../../../class/block
drwxr-xr-x 2 root root 0 Jan 29 13:58 trace
-rw-r--r-- 1 root root 4096 Jan 29 13:58 uevent
Since the kernel is unaware of what individual targets do it presents the same sysfs attributes regardless of what type of device mapper device it is. Since the request is then relayed to the block-level devices, the device mapper's scheduler is never invoked and so this setting is essentially a noop with other dm targets.
Further Reading:
Why /sys/block/dm-0/queue/scheduler exist on my Linux system?
Ottawa Linux Symposium 2007 Presentation on Request-based Device Mapper
| Does dm-multipath schedule I/O? |
1,487,617,670,000 |
How do I configure multipath in a testing VM (the purpose is purely academic)?
I made a new logical volume and modified multipath.conf as follows:
defaults {
udev_dir /dev
user_friendly_names yes
}
blacklist {
}
blacklist_exceptions {
device {
vendor "VMware,"
product "VMware Virtual S"
}
}
and multipath -v3 says:
Apr 22 03:22:24 | sdb: rev = 1.0
Apr 22 03:22:24 | sdb: h:b:t:l = 2:0:1:0
Apr 22 03:22:24 | (null): (VMware,:VMware Virtual S) vendor/product whitelisted
Apr 22 03:22:24 | sdb: serial =
Apr 22 03:22:24 | sdb: get_state
Apr 22 03:22:24 | sdb: path checker = directio (config file default)
Apr 22 03:22:24 | sdb: checker timeout = 180000 ms (sysfs setting)
Apr 22 03:22:24 | sdb: state = running
Apr 22 03:22:24 | directio: starting new request
Apr 22 03:22:24 | directio: io finished 4096/0
Apr 22 03:22:24 | sdb: state = 3
Apr 22 03:22:24 | sdb: getuid = /lib/udev/scsi_id --whitelisted --device=/dev/%n (config file default)
Apr 22 03:22:24 | /lib/udev/scsi_id exitted with 1
Apr 22 03:22:24 | error calling out /lib/udev/scsi_id --whitelisted --device=/dev/sdb
Apr 22 03:22:24 | sdb: state = running
Apr 22 03:22:24 | /lib/udev/scsi_id exitted with 1
Apr 22 03:22:24 | error calling out /lib/udev/scsi_id --whitelisted --device=/dev/sdb
Apr 22 03:22:24 | sdb: detect_prio = 1 (config file default)
Apr 22 03:22:24 | sdb: prio = const (config file default)
Apr 22 03:22:24 | sdb: const prio = 1
Apr 22 03:22:24 | dm-0: device node name blacklisted
Apr 22 03:22:24 | dm-1: device node name blacklisted
Apr 22 03:22:24 | dm-2: device node name blacklisted
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/prod/rev dev_st
2:0:0:0 sda 8:0 1 undef ready VMware,,VMware Virtual S running
2:0:1:0 sdb 8:16 1 undef ready VMware,,VMware Virtual S running
[root@localhost ~]#
I want to configure multipath for the logical volume on /dev/sdb.
My blacklist is empty; why does it say that dm-0/1/2 are blacklisted?
Also, when I run lib/udev/scsi_id --whitelisted --device=/dev manually, I get no errors. No output or changes either, though...
|
Try this:
multipathd -k
show config
On my system it seems that an empty blacklist is ignored; the effective blacklist contains, in addition to vendor-blacklisted devices, these devnode patterns:
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
devnode "^dcssblk[0-9]*"
It matches "dm-".
You could try to add the "dm-1, dm-2, ..." devnodes to the blacklist exceptions. I never tried it, and I don't know the impact of putting an exception on a multipath dm device, for instance.
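If you want to experiment with that suggestion, the exception stanza would look roughly like this (an untested sketch; note that dm-N names are not stable across reboots, which is one reason this approach is fragile):

```
blacklist_exceptions {
    devnode "^dm-[0-9]*"
    device {
        vendor "VMware,"
        product "VMware Virtual S"
    }
}
```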
| Multipath to a logical volume in a staging VM |
1,487,617,670,000 |
My goal is to set up, for testing purposes, multipath LVM over iSCSI.
I have set up two Debian iSCSI servers, both working.
I have set up multipath on a Debian client, and I can create directories etc. on the ext4 filesystem.
But if server1 goes down, the filesystem hangs. Why?
This is my multipath.conf
defaults {
udev_dir /dev
polling_interval 5
path_grouping_policy multibus
path_checker directio
prio const
rr_min_io 100
rr_weight priorities
failback immediate
no_path_retry fail
}
blacklist {
devnode "^(ram|sda|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z][[0-9]*]"
devnode "^vd[a-z]"
devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
multipaths {
multipath {
wwid 149455400000000009d1b03a0217052c8b19b0fa6e5bfe7bd
alias iscsi_storage
}
}
|
The answer is: not possible
Dual-primary DRBD, iSCSI, and multipath: Don’t Do That!
“Dual-primary” iSCSI targets for multipath: does not work. iSCSI is a
stateful protocol, there is more to it than just reads and
writes. To run multipath (or multi-connections per session) against
distinct targets on separate nodes you’d need to have cluster aware
iSCSI targets which coordinate with each other in some fashion. To my
knowledge, this does not exist (not for Linux, anyways).
| Linux and iscsi multipath |
1,487,617,670,000 |
I had a RHEL machine booting from local disk.
Later I removed the local disk, booted from a SAN disk, and installed RHEL on it.
Now I have read about a grub.conf file:
root (hd0,0)
kernel /boot/vmlinuz-2.6.18.2-34-default root=/dev/hda1 vga=0x317 showopts
initrd /boot/initrd-2.6.18.2-34-default
GRUB stage 1 boots from the MBR, and when it goes to stage 2 it uses these entry parameters.
What is the simplest way to set and choose which OS the machine will boot from?
|
When grub goes to stage 2, it will present the kernel selection menu.
The best way to configure this is to use either the SAN disk or the local disk as your MBR (master boot record) then update the /boot/grub/grub.conf file to include both stanza entries from the local HDD and the SAN disk.
Then use the default=0 entry to set the default OS kernel to load.
default=0
timeout=5
splashimage=(hd0,0)/grub/splash.xpm.gz
title Local disk
root (hd0,0)
kernel /boot/vmlinuz-2.6.18.2-34-default root=/dev/hda1 vga=0x317 showopts
initrd /boot/initrd-2.6.18.2-34-default
### SAN stanza entry ###
title SAN disk
root (hd1,0)
kernel /boot/vmlinuz-2.6.18.2-34-default root=/dev/sda1 vga=0x317 showopts
initrd /boot/initrd-2.6.18.2-34-default
The second entry will be default=1.
Note
It may be better to boot from the SAN disk as you will be using the UEFI to load the fibre storage.
Manually editing grub
If you are unsure the local device number then you can go to the command-line entry when presented with the grub menu at boot time:
If you have the hiddenmenu setting in your grub.conf then it will say something similar to:
Booting from Red Hat 2.6.18.2-34... in 3 seconds ....
Press Esc to get to the menu:
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, 'a' to modify the kernel arguments
before booting, or 'c' for a command-line.
At this point you can enter c and enter various root settings to get the correct disk setting:
grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0x83
References
Using-grub-to-overcome-boot-problems
| RHEL 5 Booting from Local Hardisk & SAN Harddisk via grub |
1,487,617,670,000 |
I found a script to create alias for multipath for Nimble volumes and I modified it to work in freenas
the content of it is
#!/bin/bash
# This script will scan the system for attached Nimble Volumes
# and output a list of multipath alias statements for the linux
# /etc/multipath.conf. This will allow for the volume to be
# referenced by the volume name in place of the normal mpathX
#
# To use the script, just run it. If Nimble volumes are present
# it will output the confiugration data to standard out
# Just copy and paste that output in to /etc/multipath.conf
# Take care when adding these lines to make sure another alias
# is not present or if there are other multipath statements
# Start by checking to see if we have any Nimble volumes connected
ls -l /dev/disk/by-path | grep freenas > /dev/null
if [ $? -eq 0 ]
then
#Build list of Nimble devices
DEV_LIST=$(ls -l /dev/disk/by-path | grep freenas | awk '{print $NF}' | sed 's/..\/..\///')
# Output the first line of the config
echo "multipaths {"
# For each device found we determine the name and the mpathid
for i in $DEV_LIST
do
SUBSTRING=$(ls -l /dev/disk/by-path | grep $i | awk -F: '{print $4}')
# This uses pattern matching to find the name of the volume
OFFSET=$(echo $SUBSTRING | awk --re-interval 'match($0, /\-[v][a-z0-9]{16}/) { print RSTART-1 }')
NIMBLEVOL=${SUBSTRING::$OFFSET}
# Here we collect the MPATHID
MPATHID=$(multipath -ll /dev/$i | grep FreeBSD | awk '{print $2}' | sed -e 's\(\\g' | sed -e 's\)\\g')
# Enable for debug
#echo "Volume name for $device is $nimblevol with multipath ID is $mpathid"
# Putting it all together with proper formatting using printf
MULTIPATH=$(printf "multipath {\n \t\twwid \t\t%s \n \t\talias\t\t %s\n \t}" $MPATHID $NIMBLEVOL)
MATCH='multipaths {'
echo "$MULTIPATH"
done
# End the configuration section
echo "}"
else
# If no Nimble devices found, exit with message
echo "No Nimble Devices Found, have you met leeloo?"
exit 1
fi
exit 0
When I run it, I get
multipaths {
multipath {
wwid 36589cfc000000e9f2e24f431339ec7b0
alias
}
multipath {
wwid 36589cfc00000026c07d6caed9e43aa22
alias
}
multipath {
wwid 36589cfc000000f051b0d5718e0b46b2f
alias
}
multipath {
wwid 36589cfc000000af38b5be525e3cf1cb4
alias
}
multipath {
wwid 36589cfc000000824684e211c61d58fc5
alias
}
multipath {
wwid 36589cfc000000f0f579280a94ef72125
alias
}
multipath {
wwid 36589cfc000000e9f2e24f431339ec7b0
alias
}
multipath {
wwid 36589cfc00000026c07d6caed9e43aa22
alias
}
multipath {
wwid 36589cfc000000f051b0d5718e0b46b2f
alias
}
multipath {
wwid 36589cfc000000af38b5be525e3cf1cb4
alias
}
multipath {
wwid 36589cfc000000824684e211c61d58fc5
alias
}
multipath {
wwid 36589cfc000000f0f579280a94ef72125
alias
}
}
without any alias
Do you have any suggestions?
|
Comment the lines starting with OFFSET= and NIMBLEVOL= and insert
NIMBLEVOL=$(echo $SUBSTRING | sed -e 's/^\(.*\)-lun.*/\1/')
right below the commented lines.
...
#OFFSET=$(echo $SUBSTRING | awk --re-interval 'match($0, /\-[v][a-z0-9]{16}/) { print RSTART-1 }')
#NIMBLEVOL=${SUBSTRING::$OFFSET}
NIMBLEVOL=$(echo $SUBSTRING | sed -e 's/^\(.*\)-lun.*/\1/')
...
Not sure if that will really create a valid configuration, assuming you want to have data1, data2, etc. as aliases.
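To sanity-check the replacement expression before wiring it into the script, you can feed it a sample by-path substring. The sample name below is hypothetical (the "name-lun-N" shape is assumed from typical iSCSI by-path entries):

```shell
# Hypothetical by-path substring for a volume named "data1"
SUBSTRING="data1-lun-0"
# Strip everything from "-lun" onward, keeping only the volume name
NIMBLEVOL=$(echo "$SUBSTRING" | sed -e 's/^\(.*\)-lun.*/\1/')
echo "$NIMBLEVOL"   # prints: data1
```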
| Script to Create multipath.conf Alias |
1,487,617,670,000 |
I have Debian 9 with successfully configured iSCSI and multipath:
# multipath -ll /dev/mapper/mpathb
mpathb (222c60001556480c6) dm-2 Promise,Vess R2600xi
size=10T features='1 retain_attached_hw_handler' hwhandler='0' wp=rw
|-+- policy='service-time 0' prio=1 status=active
| `- 12:0:0:0 sdc 8:32 active ready running
`-+- policy='service-time 0' prio=1 status=enabled
`- 13:0:0:0 sdd 8:48 active ready running
/dev/mapper/mpathb is a part of LVM group vg-one-100:
# pvs
PV VG Fmt Attr PSize PFree
/dev/dm-2 vg-one-100 lvm2 a-- 10,00t 3,77t
# vgs
VG #PV #LV #SN Attr VSize VFree
vg-one-100 1 17 0 wz--n- 10,00t 3,77t
vg-one-100 group contains several volumes:
# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv-one-0-1 vg-one-100 -wi-a----- 20,00g
lv-one-1-0 vg-one-100 -wi-a----- 2,41g
lv-one-10-0 vg-one-100 -wi------- 20,00g
lv-one-11-0 vg-one-100 -wi------- 30,00g
lv-one-12-0 vg-one-100 -wi------- 2,41g
lv-one-13-0 vg-one-100 -wi------- 2,41g
lv-one-14-0 vg-one-100 -wi------- 2,41g
lv-one-15-0 vg-one-100 -wi------- 2,41g
lv-one-16-0 vg-one-100 -wi------- 2,41g
lv-one-17-0 vg-one-100 -wi------- 30,00g
lv-one-18-0 vg-one-100 -wi------- 30,00g
lv-one-23-0 vg-one-100 -wi------- 20,00g
lv-one-31-0 vg-one-100 -wi------- 20,00g
lv-one-8-0 vg-one-100 -wi------- 30,00g
lv-one-9-0 vg-one-100 -wi------- 20,00g
lvm_images vg-one-100 -wi-a----- 5,00t
lvm_system vg-one-100 -wi-a----- 1,00t
My lvm.conf includes the next filters:
# grep filter /etc/lvm/lvm.conf | grep -vE '^.*#'
filter = ["a|/dev/dm-*|", "r|.*|"]
global_filter = ["a|/dev/dm-*|", "r|.*|"]
lvmetad is disabled:
# grep use_lvmetad /etc/lvm/lvm.conf | grep -vE '^.*#'
use_lvmetad = 0
If lvmetad is disabled, then lvm2-activation-generator will be used.
In my case lvm2-activation-generator generated all the needed unit files, and they were executed during boot:
# ls -1 /var/run/systemd/generator/lvm2-activation*
/var/run/systemd/generator/lvm2-activation-early.service
/var/run/systemd/generator/lvm2-activation-net.service
/var/run/systemd/generator/lvm2-activation.service
# systemctl status lvm2-activation-early.service
● lvm2-activation-early.service - Activation of LVM2 logical volumes
Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
Active: inactive (dead) since Thu 2019-03-28 17:20:48 MSK; 3 weeks 4 days ago
Docs: man:lvm2-activation-generator(8)
Main PID: 897 (code=exited, status=0/SUCCESS)
systemd[1]: Starting Activation of LVM2 logical volumes...
systemd[1]: Started Activation of LVM2 logical volumes.
root@virt1:~# systemctl status lvm2-activation-net.service
● lvm2-activation-net.service - Activation of LVM2 logical volumes
Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
Active: inactive (dead) since Thu 2019-03-28 17:21:24 MSK; 3 weeks 4 days ago
Docs: man:lvm2-activation-generator(8)
Main PID: 1537 (code=exited, status=0/SUCCESS)
systemd[1]: Starting Activation of LVM2 logical volumes...
lvm[1537]: 4 logical volume(s) in volume group "vg-one-100" now active
systemd[1]: Started Activation of LVM2 logical volumes.
root@virt1:~# systemctl status lvm2-activation.service
● lvm2-activation.service - Activation of LVM2 logical volumes
Loaded: loaded (/etc/lvm/lvm.conf; generated; vendor preset: enabled)
Active: inactive (dead) since Thu 2019-03-28 17:20:48 MSK; 3 weeks 4 days ago
Docs: man:lvm2-activation-generator(8)
Main PID: 900 (code=exited, status=0/SUCCESS)
systemd[1]: Starting Activation of LVM2 logical volumes...
systemd[1]: Started Activation of LVM2 logical volumes.
The problem is: I can't automatically activate all LVM volumes during boot, because lvm2-activation-net.service activates the volumes as soon as the disks have been attached (logged in) over iSCSI, i.e. on the raw path devices instead of the multipath device (journalctl fragment):
. . .
kernel: sd 11:0:0:0: [sdc] 21474836480 512-byte logical blocks: (11.0 TB/10.0 TiB)
kernel: sd 10:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
kernel: sd 11:0:0:0: [sdc] Write Protect is off
kernel: sd 11:0:0:0: [sdc] Mode Sense: 97 00 10 08
kernel: sd 11:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
kernel: sd 10:0:0:0: [sdb] Attached SCSI disk
kernel: sd 11:0:0:0: [sdc] Attached SCSI disk
iscsiadm[1765]: Logging in to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.0.151,3260] (multiple)
iscsiadm[1765]: Logging in to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.1.151,3260] (multiple)
iscsiadm[1765]: Login to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.0.151,3260] successful.
iscsiadm[1765]: Login to [iface: default, target: iqn.2012-07.com.promise:alias.tgt0000.2000000155588d75, portal: 172.16.1.151,3260] successful.
systemd[1]: Started Login to default iSCSI targets.
systemd[1]: Starting Activation of LVM2 logical volumes...
systemd[1]: Starting Activation of LVM2 logical volumes...
multipathd[884]: sdb: add path (uevent)
systemd[1]: Started Activation of LVM2 logical volumes.
systemd[1]: Started Activation of LVM2 logical volumes.
systemd[1]: Reached target Remote File Systems (Pre).
systemd[1]: Mounting /var/lib/one/datastores/101...
systemd[1]: Mounting /var/lib/one/datastores/100...
multipathd[884]: mpathb: load table [0 21474836480 multipath 1 retain_attached_hw_handler 0 1 1 service-time 0 1 1 8:16 1]
multipathd[884]: mpathb: event checker started
multipathd[884]: sdb [8:16]: path added to devmap mpathb
multipathd[884]: sdc: add path (uevent)
multipathd[884]: mpathb: load table [0 21474836480 multipath 1 retain_attached_hw_handler 0 2 1 service-time 0 1 1 8:16 1 service-time 0 1 1 8:32 1]
. . .
The ordering dependencies for starting lvm2-activation-net.service are correct:
# grep After /var/run/systemd/generator/lvm2-activation-net.service
After=lvm2-activation.service iscsi.service fcoe.service
How can I properly activate all logical volumes during the boot?
|
Since you seem to have a single physical volume, I really wonder how partial activation can happen in your case. It should be all or nothing. But here are a couple of issues to take care of anyway:
You need persistent multipath device names. I'm not sure where mpathb comes from, but I recommend against enabling user_friendly_names in /etc/multipath.conf for clarity. Either configure the alias manually or use the WWID as provided by your storage.
The LVM filters are regular expressions, not shell globs, so you need to change the syntax to something like
filter = ["a|^/dev/mapper/222c60001556480c6$|", "r|.|"]
(global_filter is optional for proper functionality, but it may make a difference for bootup times.)
You have to delay activation until the multipath devices of all your physical volumes appear. One possibility is adding
Requires = dev-mapper-222c60001556480c6.device
After = dev-mapper-222c60001556480c6.device
to /etc/systemd/system/lvm2-activation-net.service.d/wait_for_storage.conf. Another is creating a dedicated activation service.
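As a sketch (the drop-in path and the device unit name are taken from the example above; a temporary directory stands in for /etc/systemd/system here so the snippet can run unprivileged — for the real file, write under /etc as root and run systemctl daemon-reload afterwards):

```shell
# Illustrative only: a temp dir stands in for
# /etc/systemd/system/lvm2-activation-net.service.d/
unit_dir=$(mktemp -d)/lvm2-activation-net.service.d
mkdir -p "$unit_dir"
cat > "$unit_dir/wait_for_storage.conf" <<'EOF'
[Unit]
Requires=dev-mapper-222c60001556480c6.device
After=dev-mapper-222c60001556480c6.device
EOF
# After writing the real file: systemctl daemon-reload
cat "$unit_dir/wait_for_storage.conf"
```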
iSCSI storage devices (and their multipath devices) can take a long time to appear. You might need to create /etc/systemd/system/dev-mapper-222c60001556480c6.device containing
[Unit]
JobTimeoutSec=3min
to make sure systemd does not time out too quickly waiting for it. Use symbolic links to a common file if you've got several such devices.
Even if the above does not immediately fix your problem, it will make debugging more tractable. Good luck!
| Proper way to activate LVM partition on multipath during the boot |
1,487,617,670,000 |
I have two multipath devices configured
mpathb (36005076300808b3e9000000000000007) dm-1 IBM,2145
size=16T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 1:0:1:1 sde 8:64 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 1:0:0:1 sdc 8:32 active ready running
mpatha (36005076300808b3e9000000000000006) dm-0 IBM,2145
size=16T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| `- 1:0:0:0 sdb 8:16 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
`- 1:0:1:0 sdd 8:48 active ready running
For each of them I created a PV/VG/LV
$ sudo pvs
PV VG Fmt Attr PSize PFree
/dev/mapper/mpatha vg0 lvm2 a-- <16.00t 0
/dev/mapper/mpathb vg1 lvm2 a-- <16.00t 0
After rebooting, my VG/LV is not activated.
$ sudo systemctl status lvm2-pvscan@254:0.service
● lvm2-pvscan@254:0.service - LVM event activation on device 254:0
Loaded: loaded (/lib/systemd/system/[email protected]; static)
Active: failed (Result: exit-code) since Mon 2022-04-11 21:58:53 MSK; 14min ago
Docs: man:pvscan(8)
Process: 803 ExecStart=/sbin/lvm pvscan --cache --activate ay 254:0 (code=exited, status=5)
Main PID: 803 (code=exited, status=5)
CPU: 10ms
Apr 11 21:58:53 cephnode-1 systemd[1]: Starting LVM event activation on device 254:0...
Apr 11 21:58:53 cephnode-1 lvm[803]: pvscan[803] PV /dev/mapper/mpatha is duplicate for PVID un8VgmPbM5dheccMCCmmMzr4UGcO3Gau on 254:0 and 8:16.
Apr 11 21:58:53 cephnode-1 lvm[803]: pvscan[803] PV /dev/mapper/mpatha failed to create online file.
Apr 11 21:58:53 cephnode-1 systemd[1]: lvm2-pvscan@254:0.service: Main process exited, code=exited, status=5/NOTINSTALLED
Apr 11 21:58:53 cephnode-1 systemd[1]: lvm2-pvscan@254:0.service: Failed with result 'exit-code'.
Apr 11 21:58:53 cephnode-1 systemd[1]: Failed to start LVM event activation on device 254:0.
/etc/lvm/lvm.conf:
filter = [ "a|/dev/mapper/mpath.*|", "r|.*|" ]
What do I have to do to make the VG/LV activation work when I boot up the system?
Thanks in advance.
|
The error is because LVM had opened the disk device for access first (before the multipath subsystem had a chance to do it), and still holds a handle for accessing the PV of vg0 as just /dev/sdb (= major:minor device 8:16, as indicated by the error message) as opposed to /dev/mapper/mpatha (= major:minor device 254:0).
When a multipath device is opened, the multipath subsystem will attempt to get an exclusive lock for its component /dev/sd* devices, to prevent this from happening. But it won't be able to do so if LVM got to the /dev/sdb disk first and already has the disk device open when multipathing is started.
On Debian 11, if one of your LVM volume groups contains your root filesystem, you should make sure the multipath-tools-boot package is also installed; if your root filesystem is not on a multipathed disk, you should not install this package.
If you have not already done so after activating multipathing, you should update your initramfs file (with sudo update-initramfs -u), so your /etc/lvm/lvm.conf filter will also apply within initramfs.
To get rid of the error, you would have to deactivate and re-activate your volume group(s) now that multipathing is running, so LVM will start using the multipathed device instead of one of the individual path devices (/dev/sd*). But if your root filesystem is located on a multipathed disk, you cannot unmount it and will have to reboot.
| LVM VG/LV is not activated at system startup |
1,487,617,670,000 |
On CentOS 7 trying to create physical volume using mpath device. But it throwing this error message.
How to resolve this issues?
pvcreate /dev/mapper/mpathb1
Device /dev/mapper/mpathb1 excluded by a filter.
# grep 'filter =' /etc/lvm/lvm.conf |grep -v "#"
filter = [ "a|^/dev/sda[1-9]$|", "a|/dev/mapper/.*|" ]
fdisk -l /dev/mapper/mpathb
Disk /dev/mapper/mpathb: 1099.5 GB, 1099511627776 bytes, 2147483648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 8192 bytes / 33553920 bytes
Disk label type: dos
Disk identifier: 0x00086488
Device Boot Start End Blocks Id System
/dev/mapper/mpathb1 65535 2147483647 1073709056+ 5 Extended
Partition 1 does not start on physical sector boundary.
Update 1:
I re-created the partition and updated the LVM filter. Still the same error.
# sgdisk -p /dev/mapper/mpathb
Disk /dev/mapper/mpathb: 2147483648 sectors, 1024.0 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): 4AEDF958-9100-48BF-817E-01200483FA3A
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 2147483614
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 2147483614 1024.0 GiB 8E00 Linux LVM
# grep 'filter =' /etc/lvm/lvm.conf |grep -v "#"
filter = [ "a|^/dev/sda[1-9]$|","a|^/dev/sdd[1-9]$|","a|^/dev/dm*|", "a|^/dev/mapper/*|", "r|^/dev/*|" ]
disk locally attached
# lsscsi
[0:0:0:0] disk DGC VRAID 4201 /dev/sda
[0:0:0:1] disk DGC VRAID 4201 /dev/sdb
[0:0:1:0] disk DGC VRAID 4201 /dev/sdc
[0:0:1:1] disk DGC VRAID 4201 /dev/sdd
# multipath -l
mpathb (36006016072b0460093c3485ba71944fa) dm-4 DGC ,VRAID
size=1.0T features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 0:0:0:1 sdb 8:16 active undef unknown
`-+- policy='service-time 0' prio=0 status=enabled
`- 0:0:1:1 sdd 8:48 active undef unknown
mpatha (36006016072b046008c5c305bcc0c5bf1) dm-0 DGC ,VRAID
size=100G features='2 queue_if_no_path retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=0 status=active
| `- 0:0:0:0 sda 8:0 active undef unknown
`-+- policy='service-time 0' prio=0 status=enabled
`- 0:0:1:0 sdc 8:32 active undef unknown
|
This post gave me the information to resolve my issue.
https://serverfault.com/questions/87710/debian-lenny-san-lvm-fail
I used partition (slice) 1 to create the PV.
# pvcreate /dev/mapper/mpathb1
Physical volume "/dev/mapper/mpathb1" successfully created.
FYI:
I used this script to create the disk partition:
# cat a.sh
#!/bin/bash
# Re-create a GPT on the given device with one partition of type "Linux LVM".
sgdisk -og "$1"                 # wipe and write a fresh, empty GPT
partprobe "$1"
ENDSECTOR=$(sgdisk -E "$1")     # last usable sector on the disk
sgdisk -n 1:2048:"$ENDSECTOR" -c 1:"Linux LVM" -t 1:8e00 "$1"
partprobe "$1"                  # make the kernel re-read the partition table
sgdisk -p "$1"                  # print the result
| Device /dev/mapper/mpathb1 excluded by a filter |
1,450,910,572,000 |
I use Linux Mint 13 MATE, and I'm trying to set up notifications when I plug/unplug devices.
First of all, I found the udev-notify package, but unfortunately it almost doesn't work for me: it works for a very short time (1-2 minutes), and then, if I connect/disconnect any device, it crashes:
Traceback (most recent call last):
File "./udev-notify.py", line 319, in <module>
notification.show()
glib.GError: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name :1.1061 was not provided by any .service files
I haven't found any solution, so I had to remove it (I also filed a bug report).
Surprisingly, I haven't found any similar utilities so far. I then tried to write udev rules that match all devices. I added a new file /etc/udev/rules.d/notify.rules:
ACTION=="add", RUN+="/bin/bash /home/dimon/tmp/device_plug.sh"
ACTION=="remove", RUN+="/bin/bash /home/dimon/tmp/device_unplug.sh"
And two scripts:
device_plug.sh:
#!/bin/bash
export DISPLAY=":0"
notify-send "device plugged"
/usr/bin/play -q /path/to/plug_sound.wav &
device_unplug.sh:
#!/bin/bash
export DISPLAY=":0"
notify-send "device unplugged"
/usr/bin/play -q /path/to/unplug_sound.wav &
It works, but very crudely. My questions are:
How do I get the actual title of the attached device, the same as I can see in lsusb output? Currently, I just get notifications like "plugged" and "unplugged", and I can't find how to retrieve the name of the device in my udev rule (if I can, I'd pass it to my script as a parameter).
Currently, far too many notifications are triggered. Say, when I attach my USB stick, I get about 15 notifications! But if I run lsusb, the attached USB stick is displayed as just a single device. So it seems I should add some further filter arguments to the rule, but I can't figure out which.
Probably there's some better solution for device plug/unplug notifications? Please suggest if you know something.
|
Well, after many hours of googling and asking on forums, I got it working (it seems). Anyone who wants to get nice visual and/or audio notification when some USB device is plugged/unplugged can install my script, see installation details below.
First of all, answers on my own questions.
1. How to get the actual title of the attached device, the same as shown in lsusb output?
There's no such titles in the kernel (in common case). There is a database file with titles for many pairs vendor_id:product_id, it's usually /usr/share/hwdata/usb.ids file. This database can be updated by /usr/sbin/update-usbids.sh. Thanks to guys from linux.org.ru for that info.
I don't know if there is some special tool for getting device title by pair vendor_id:product_id, so I had to hack a bit with lsusb and grep: for example, lsusb | grep '0458:003a'
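A hedged sketch of pulling the ID pair and title out of such a line (the sample line below is made up for illustration, not live lsusb output):

```shell
# Hypothetical lsusb-style line:
line='Bus 001 Device 004: ID 0458:003a KYE Systems Corp. (Mouse Systems) NetScroll+ Mini Traveler'
# Extract the vendor:product pair after "ID ":
id=$(printf '%s\n' "$line" | sed -n 's/.*ID \([0-9a-f]\{4\}:[0-9a-f]\{4\}\).*/\1/p')
# Everything after the ID is the human-readable title from usb.ids:
title=${line#*"$id" }
echo "$id"     # 0458:003a
echo "$title"
```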
2. Currently, too many notifications are triggered. Say, when I attach my USB stick, I get about 15 notifications!
I must admit I haven't figured out how to write a rule for this, but I found another way to filter it.
udev allows some substitutions in RUN+="...": for example, we can get the bus number and device number with $attr{busnum} and $attr{devnum} respectively. First, my script stores the list of attached devices in a special file, so that if the script gets a new "plug" event and this device's busnum and devnum are already stored in the file, no notification is generated. Second, the substitutions $attr{busnum} and $attr{devnum} are usually only available for one of the devices in the "series" of events. Between them, these two mechanisms sort out the duplicate notifications.
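The dedup idea can be sketched as follows (the file path and function name are illustrative, not the actual script):

```shell
seen=$(mktemp)    # stands in for the script's list of attached devices
notify_once() {   # $1 = busnum, $2 = devnum
    key="$1:$2"
    if grep -qxF "$key" "$seen"; then
        echo "duplicate $key ignored"      # already recorded: suppress
    else
        echo "$key" >> "$seen"             # record the new device
        echo "notify for $key"
    fi
}
out1=$(notify_once 1 7)   # first event in the series -> notification
out2=$(notify_once 1 7)   # repeated event -> suppressed
echo "$out1"
echo "$out2"
rm "$seen"
```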
Current project page: my-udev-notify.
It looks like this:
Installation details.
Tested on Linux Mint 13, I believe it should work on Ubuntu and other Ubuntu's derivatives, and I hope it will work on any *nix system with udev.
Go to project page, get sources from there and put them somewhere. There's just one main script in it: my-udev-notify.sh, but archive also contains sounds for plug/unplug notifications, plus some more info, see readme.txt for details.
Create the file /etc/udev/rules.d/my-udev-notify.rules with the following contents (don't forget to change the path to the real location where you unpacked my-udev-notify.sh!):
ACTION=="add", RUN+="/bin/bash /path/to/my-udev-notify.sh -a add -p '%p' -b '$attr{busnum}' -d '$attr{devnum}'"
ACTION=="remove", RUN+="/bin/bash /path/to/my-udev-notify.sh -a remove -p '%p' -b '$attr{busnum}' -d '$attr{devnum}'"
After this, it should work for newly attached devices. That is, if you unplug
some device, you won't get a notification; but when you plug it back in, you will.
(For me it works without restarting udev; if it doesn't for you, try
rebooting.)
To make it work for all devices, just reboot your system. NOTE that there might
be many notifications during the first boot (see known issues in the readme.txt). On the second
boot, there will be no notifications (unless you plugged in a new device while the
system was off).
You can customize it (turn on/off visual and sound notifications, or change sounds), check readme.txt in the archive for details.
| Call notify-send from an udev rule |
1,450,910,572,000 |
Let's say I use notify-send with this long messages:
notify-send 'd: title, up/down: zoom, w: win_to_img, </>: rotate, *: orig, Enter/0: blah blah blah'
But it truncates the message, showing only a part of it with no option to view the full message:
With Fedora 21 I was able to view the full message (pop up at bottom with scrollbar), but not with Fedora 24.
Version of notify-send is libnotify-0.7.6-8.fc24.i686.
Is there anyway to display full messages in Fedora 24 ?
|
notify-send works like this:
notify-send [OPTION...] <SUMMARY> [BODY]
Now, as you only have one (quoted) string, that's being used for the SUMMARY and the BODY is empty. Just use blank or whatever for the SUMMARY and the BODY will display the whole message (but only when you hover over the pop-up with your mouse)1:
notify-send ' ' 'd: title,up/down: zoom,w: win_to_img,</>: rotate,*: orig,Enter/0: blah blah blah'
or if you prefer gdbus:
gdbus call --session --dest org.freedesktop.Notifications --object-path \
/org/freedesktop/Notifications --method org.freedesktop.Notifications.Notify \
my_app_name 42 '' "" 'd: title, up/down: zoom, w: win_to_img, </>: rotate, \
*: orig, Enter/0: your very long message should now span over multiple lines \
and stuf blah blah blah blah whatever...' '[]' '{}' 20
1: this is on gnome 3, other DEs might actually display the whole message without the need to hover over it
| notify-send - How to display full message when message is longer than one line? |
1,450,910,572,000 |
I am running linux mint and use the notify-send command for various purposes, and of course also receive notifications from regular applications e.g. discord or MS Teams
When using Cinnamon DE, they look pretty normal, and I can even add icons to my custom notify-send calls to make it clear what is going on
However, I recently started using XMonad WM, and I'm finding that not only are the regular application notifications ugly, but my custom ones which have nice icons in them also follow that same ugly style (please excuse blurry screenshot):
For example, the above notification should contain an icon as per this command:
notify-send --hint=int:transient:1 'Connecting to VPN... Check 2FA Device.' -i myicon
Where is this configured?
|
Systems that do not use a desktop environment usually require installing a separate notification daemon to handle notifications. It appears that you already have the dunst notification daemon installed.
To configure its appearance, you can edit ~/.config/dunst/dunstrc. If it is not available, you can create a copy from /etc/dunst/dunstrc.
You can modify many different settings such as width, height, font, background and foreground, etc.
Once you have modified the config file, you will have to restart dunst by killing the process (pkill dunst) and starting dunst again as a background process (dunst & disown $!). Generating a new notification will also usually start the dunst daemon but it is recommended to explicitly start dunst in case there are multiple notification daemons.
See man 5 dunst for details on the configuration file.
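As a hedged illustration (key names are from dunst's documented options; available settings and their defaults vary between dunst versions), a minimal ~/.config/dunst/dunstrc fragment might look like:

```ini
[global]
    font = Monospace 10
    frame_color = "#8caaee"
    frame_width = 2

[urgency_low]
    background = "#222222"
    foreground = "#888888"
    timeout = 5
```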
| How to customise the appearance of notify-send? |
1,450,910,572,000 |
I am using Linux Mint 17.
I want to be informed every 50 min, at every hour for small break.
Here is cron job:
nazar@desktop ~ $ crontab -l
DISPLAY=:0.0
XAUTHORITY=/home/matrix/.Xauthority
00 13 * * * /home/nazar/Documents/scripts/lunch_break_job.sh # JOB_ID_2
50 * * * * /home/nazar/Documents/scripts/pc_break.sh # JOB_ID_1
* * * * * /home/nazar/Documents/scripts/cron_job_test.sh # JOB_ID
Here is script for /home/nazar/Documents/scripts/cron_job_test.sh:
#!/bin/bash
export DISPLAY=0.0
export XAUTHORITY=/home/matrix/.Xauthority
if [ -r "$HOME/.dbus/Xdbus" ]; then
. "$HOME/.dbus/Xdbus"
fi
/usr/bin/notify-send -i "hello"
This snippet:
if [ -r "$HOME/.dbus/Xdbus" ]; then
. "$HOME/.dbus/Xdbus"
fi
Checks DBUS_SESSION_BUS_ADDRESS and uses it.
According to this answer I executed script, and now my Dbus is saved to $HOME/.dbus/Xdbus:
nazar@desktop ~ $ cat $HOME/.dbus/Xdbus
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-flm7sXd0I4,guid=df48c9c8d751d2785c5b31875661ebae
export DBUS_SESSION_BUS_ADDRESS
Everything should work, but the notification still doesn't appear, and I can't figure out what's missing.
From terminal it works fine:
How to solve this issue?
SOLUTION:
Now my crontab looks as follows:
DISPLAY=":0.0"
XAUTHORITY="/home/nazar/.Xauthority"
XDG_RUNTIME_DIR="/run/user/1000"
00 13 * * * /home/nazar/Documents/scripts/lunch_break_job.sh # JOB_ID_2
50 * * * * /home/nazar/Documents/scripts/pc_break.sh # JOB_ID_1
# * * * * * /home/nazar/Documents/scripts/cron_job_test.sh # JOB_ID
and cron_job_test.sh looks now:
#!/bin/bash
/usr/bin/notify-send -i /home/nazar/Pictures/icons/Mail.png "hello" "It is just cron test message"
pc_break.sh:
#!/bin/bash
/usr/bin/notify-send -i /home/nazar/Pictures/icons/download_manager.png "Break:" "Make a break for 10 min"
lunch_break_job.sh:
#!/bin/bash
/usr/bin/notify-send -i /home/nazar/Pictures/icons/Apple.png "Lunch: " "Please, take a lunch!"
|
You need to set XDG_RUNTIME_DIR as well. Change your crontab to this:
DISPLAY=":0.0"
XAUTHORITY="/home/nazar/.Xauthority"
XDG_RUNTIME_DIR="/run/user/1001"
00 13 * * * /home/nazar/Documents/scripts/lunch_break_job.sh # JOB_ID_2
50 * * * * /home/nazar/Documents/scripts/pc_break.sh # JOB_ID_1
* * * * * /home/nazar/Documents/scripts/cron_job_test.sh # JOB_ID
Make sure you change nazar to whatever your username is and 1001 to your actual UID. You can get your UID by running id -u.
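A tiny sketch of deriving that value (the /run/user/<uid> layout is the systemd default):

```shell
# Build the runtime dir path from the current user's UID:
XDG_RUNTIME_DIR="/run/user/$(id -u)"
echo "$XDG_RUNTIME_DIR"   # e.g. /run/user/1001
```

The number printed is exactly what goes into the crontab's XDG_RUNTIME_DIR line.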
And all you need in your script is:
#!/bin/bash
/usr/bin/notify-send "hello"
I just tested this on Arch running Cinnamon and it worked fine.
The variables are being set in the crontab, no need to export anything from the script. There's also no point in doing so, the script is being called by cron, it wouldn't export the values you need anyway.
| Notify-send doesn't work at Cinnamon |
1,450,910,572,000 |
I am using Ubuntu 16.04 LTS. There is an alias in my .bashrc which uses notify-send:
alias alert='notify-send --urgency=low -i "$([ $? = 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e '\''s/^\s*[0-9]\+\s*//;s/[;&|]\s*alert$//'\'')"'
I can append alert to other commands as somecommand; alert or somecommand && alert and get a pop-up notification after somecommand finishes (successfully). It reminds me that a command I ran in a Terminal window which is now minimised or in a different workspace has finished executing.
But I want a similar alert when the command waits for input from the user instead of completing (e.g. a Yes/No prompt). How can I do that?
Analogous solution using notify-send would be great, but other relatively simple alternative would also be fine.
In case there's a confusion, I'm not planning to create an automated reply to the prompt. I just want it to remind me of forgotten (minimised/in different workspace) Terminal windows while running commands with lengthy output which may ask for user-input (e.g. apt update && apt upgrade).
|
Monitoring the dialogue of a program and send an alert
You can monitor the activity of
a fifo or
an xterm log file, now with an interactive mode
and let it start a zenity info message, when there is input from the monitored program. If you wish, you can also install espeak and let it send an audio message.
1. Start a zenity info message, when there is input from a monitored program using a fifo.
The following shellscript can monitor the output dialogue from a program and send an alert.
assuming a graphical desktop environment
start a wrapper shellscript in a terminal window, which is used like a 'console' for wrapper
starting the program to be monitored in an xterm window
running the dialogue in the xterm window (that is where you write your input)
using a fifo to get access to the output of the program to be monitored, /dev/stdout and dev/stderr.
running a while loop
testing if the fifo has been modified and in that case
starting a zenity info message window.
You are expected to close the zenity window (can work with 'Enter') to get back to the xterm window, where you write your input.
#!/bin/bash
if [ $# -eq 0 ]
then
echo "'$0' is a wrapper, that sends a notification, when the wrapped program
has written to standard input and standard error and may be waiting for input.
---
Usage: $0 <program name> [parameters]
Example: $0 .program"
exit
fi
message="'${1##*/} $2 ...' has written something, maybe asks for input"
tmpdir=$(mktemp -d)
tmpfifo=$(mktemp --tmpdir=$tmpdir)
rm "$tmpfifo"
mkfifo "$tmpfifo"
#ls -l "$tmpdir"
cnt1=$(stat --printf "%Y" "$tmpfifo")
sleep 1
xterm -title "${1##*/} $2 ..." -fa default -fs 11 -bg '#403600' \
-e bash -c "$* 2>&1 | tee /dev/stderr 2>&1 > $tmpfifo" 2> /dev/null & pid=$!
#< "$tmpfifo" espeak &
< "$tmpfifo" cat &
cont=true
while $cont
do
tmpstr=$(ps -Af |sed "s/grep $pid//"|grep "$pid")
# echo "$tmpstr"
if [ "$tmpstr" != "" ]
then
cnt0=$cnt1
cnt1=$(stat --printf "%Y" "$tmpfifo")
if [ "$cnt1" != "$cnt0" ]
then
# zenity --notification --text="$message" 2> /dev/null
# espeak "$message" &
zenity --info --title="${0##*/} ${1##*/} $2 ..." \
--text="$message" --width=500 2> /dev/null
fi
sleep 1
else
sleep .2
# echo "process $pid has finished"
cont=false
fi
done
# clean up
rm -r "$tmpdir"
You may wish to run espeak near zenity to get an audio message too. In that case you can remove the # character in the beginning of that line. (There may be a lot of text from the program, so it is usually a bad idea to redirect the fifo to espeak. It is better to redirect the fifo to cat and have it printed in the 'console'.)
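The modification check at the heart of the loop compares stat mtimes; here is a standalone sketch of just that check (a temp file stands in for the fifo):

```shell
f=$(mktemp)
touch -d '1 hour ago' "$f"          # pretend nothing has been written recently
cnt0=$(stat --printf "%Y" "$f")     # mtime in seconds since the epoch
touch "$f"                          # simulate new output arriving
cnt1=$(stat --printf "%Y" "$f")
if [ "$cnt1" != "$cnt0" ]; then
    msg="modified -> pop up zenity"
else
    msg="unchanged"
fi
echo "$msg"
rm "$f"
```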
Demo
You can test some command lines with cp -i and mv -i and you can test with the following little shellscript program,
#!/bin/bash
while true
do
read -p "Waiting for input. 'Stop' to Quit " string
if [ "${string:0:4}" == "Stop" ]
then
printf "$string. Gotcha\n"
break
elif [ "$string" != "" ]
then
printf "$string\n"
printf "Working for 10 seconds ...\n"
sleep 10
else
sleep 3
fi
done
Help text:
$ ./wrapper
'./wrapper' is a wrapper, that sends a notification, when the wrapped program
has written to standard input and standard error and may be waiting for input.
---
Usage: ./wrapper <program name> [parameters]
Example: ./wrapper .program
Monitoring program:
$ ./wrapper ./program
zenity info message window:
Dialogue in the xterm window:
Waiting for input. 'Stop' to Quit Hello
Hello
Working for 10 seconds ...
Waiting for input. 'Stop' to Quit World
World
Working for 10 seconds ...
Waiting for input. 'Stop' to Quit Goodbye
Goodbye
Working for 10 seconds ...
Waiting for input. 'Stop' to Quit Stop
'Console' output in the original terminal window after finishing:
$ ./wrapper ./program
Waiting for input. 'Stop' to Quit Hello
Working for 10 seconds ...
Waiting for input. 'Stop' to Quit World
Working for 10 seconds ...
Waiting for input. 'Stop' to Quit Goodbye
Working for 10 seconds ...
Waiting for input. 'Stop' to Quit Stop. Gotcha
Monitoring cp -ip:
$ LANG=C /path/wrapper cp -ip ubuntustudio-18.04-dvd-amd64.iso ubuntu-18.04.1-desktop-amd64.iso /tmp
zenity info message window:
Dialogue in xterm:
cp: overwrite '/tmp/ubuntustudio-18.04-dvd-amd64.iso'? y
cp: overwrite '/tmp/ubuntu-18.04.1-desktop-amd64.iso'? n
Monitoring sudo parted /dev/sdc:
$ LANG=C ./wrapper sudo parted /dev/sdc
Dialogue in xterm:
[sudo] password for sudodus:
GNU Parted 3.2
Using /dev/sdc
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: SanDisk Extreme (scsi)
Disk /dev/sdc: 16,0GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
3 2097kB 258MB 256MB primary fat32 boot
4 258MB 1366MB 1108MB primary
2 1366MB 12,4GB 11,0GB extended lba
5 1367MB 6736MB 5369MB logical ext2
6 6737MB 12,4GB 5615MB logical ext4
1 12,4GB 16,0GB 3662MB primary ntfs
(parted) q
2. Start a zenity info message, when something is written to an xterm window (from the monitored program or from the user).
The following shellscript can monitor the dialogue with a program and send an alert.
assuming a graphical desktop environment
start a wrapper shellscript in a terminal window, which is used like a 'console' for wrapper
starting the program to be monitored in an xterm window
running the dialogue in the xterm window (that is where you write your input)
using a log file of xterm to get access to the output from and input to the program to be monitored
running a while loop
testing if the log file has been modified and in that case
starting a zenity info message window.
short delays are allowed during typing the input (8 seconds; you can edit the script file to change the delay time).
You are expected to close the zenity window (can work with 'Enter') to get back to the xterm window, where you write your input.
Now there is an interactive mode, where you use the xterm window just as you use any terminal window. Close the xterm window to stop monitoring.
#!/bin/bash
# date editor comment
# 2018-12-31 sudodus version 1.0
version=1.0
name="${0##*/}"
if [ "$1" == "-h" ] || [ "$1" == "--help" ]
then
echo "'$name' is a wrapper, that sends a notification, when the wrapped program
has written to standard input and standard error and may be waiting for input.
---
Usage: $name [program name] [parameters]
Examples: $name # to run program(s) interactively in an xterm window
$name program
$name -h # to get help (this text)
$name -v # show version"
exit
elif [ "$1" == "-v" ]
then
echo "$name version $version"
exit
fi
tstart=$(date '+%s')
echo "----- start $name at $(date '+%F %T') ----------------------------"
tmpstr="${1##*/}"
xtermlog=$(mktemp -u)
if [ $# -eq 0 ]
then
mess_zenity="Check, if the monitored program asks for input"
mess_espeak="${mess_zenity/program/, Program,}"
xterm -title "monitored by ${0##*/}" -fa default -fs 11 -bg '#2c2b2a' \
-l -lf "$xtermlog" -sb -rightbar 2> /dev/null & pid=$!
else
mess_espeak="Check if '${tmpstr^} ${2##*/} ${3##*/} ...' asks for input"
mess_zenity="Check if '$tmpstr $2 $3 ...' asks for input"
xterm -title "${1##*/} $2 $3 ..." -fa default -fs 11 -bg '#2c2b2a' \
-l -lf "$xtermlog" -e "$@" 2> /dev/null & pid=$!
fi
sleep 0.5
sync
cnt1=$(stat --printf "%Y" "$xtermlog")
tail -f "$xtermlog" & ptail=$!
cont=true
while $cont
do
sleep 1
cnt0=$cnt1
tmpstr=$(ps -Af |sed "s/grep $pid//"|grep "$pid")
# echo "$tmpstr"
if [ "$tmpstr" != "" ]
then
cnt1=$(stat --printf "%Y" "$xtermlog")
if [ $cnt1 -gt $((cnt0 + 8)) ]
then
# zenity --notification --text="$message" 2> /dev/null
espeak "$mess_espeak" &
zenity --info --title="${0##*/} ${1##*/} $2 ..." \
--text="$mess_zenity" --width=500 2> /dev/null
touch "$xtermlog"
cnt1=$(stat --printf "%Y" "$xtermlog")
fi
sleep 1
else
sleep .2
# echo "process $pid has finished"
cont=false
fi
done
# clean up
tmpstr="$(tail -n1 "$xtermlog" | sed 's/.*exit.*/exit/')"
if [ "$tmpstr" != "exit" ]
then
echo ""
fi
rm -r "$xtermlog"
kill $ptail
tend=$(date '+%s')
tuse=$((tend-tstart))
echo "------- end $name at $(date '+%F %T') --- used $tuse seconds"
Save this bash code to a file and give it [for example] the name vialog, make it executable and maybe move it to a directory in your PATH.
$ vialog
----- start vialog at 2018-12-31 14:37:41 ----------------------------
You work in the xterm window and the dialogue is echoed to the starting window too.
sudodus@bionic64 /media/multimed-2/test/test0/pomsky-wrap $ ./program
Waiting for input. 'Stop' to Quit Hello World
Hello World
Working for 10 seconds ...
Waiting for input. 'Stop' to Quit I am writing ...
I am writing ...
Working for 10 seconds ...
Waiting for input. 'Stop' to Quit Stop
Stop. Gotcha
sudodus@bionic64 /media/multimed-2/test/test0/pomsky-wrap $ scrot -sb
sudodus@bionic64 /media/multimed-2/test/test0/pomsky-wrap $ exit
exit
------- end vialog at 2018-12-31 14:39:02 --- used 81 seconds
| Send a notification or alert when bash asks for input from user |
1,450,910,572,000 |
I have written this script to display desktop notifications with "notify-send" when the volume-up button is pressed.
When the button is pressed:
notify-send "Current volume $(pamixer --get-volume)"
The problems is that the notifications get stacked e.g.
Is there a way to prevent the notifications from stacking and just display the newest notification?
|
The notification api has a means to specify the id of a current notification that should be updated instead of creating a new popup, but notify-send does not provide for this. If you are willing to use a small amount of Python, you can retrieve the id of a notification when you make it, and then try to update that id later. Put the following Python 3 code in a file in a directory that is in your PATH, say mynotify-send and do chmod +x mynotify-send:
#!/usr/bin/python3
import argparse
import gi
gi.require_version('Notify', '0.7')
from gi.repository import Notify
def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument('-m', '--message', default="body")
    parser.add_argument('-i', '--id', type=int)
    return parser.parse_args()
def run(myid, message):
    Notify.init("mynote")
    obj = Notify.Notification.new("my summary", message)
    obj.set_timeout(60*1000)
    if myid:
        # reuse the existing popup instead of creating a new one
        obj.set_property('id', myid)
    obj.show()
    # print the (possibly new) id so the caller can reuse it
    print(obj.get_property('id'))
def main():
    options = parse_args()
    run(options.id, options.message)
main()
You must also install the Python GObject bindings (the package is called python-gobject or python3-gi, depending on the distribution). When you run
mynotify-send -m 'message 1'
it should popup the notification, but also print an id on stdout. Often this is just a small number counting the number of notifications, eg 6. You can then change the message in the existing popup by adding this id:
mynotify-send --id 6 -m 'message 2'
You can do this as long as the popup exists. After the popup goes away the next message will get a new id, eg 7, which the program prints, and you will have to use this in later messages. So basically in a shell script you would just remember the output from the program and reuse it each time.
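In a shell script that caching could look like the following sketch; mynotify-send is the helper above, and NOTIFY_CMD, IDFILE and notify_replace are illustrative names, not part of any API:

```shell
#!/bin/bash
# cache the id printed by the notifier and reuse it on later calls;
# NOTIFY_CMD and IDFILE are illustrative assumptions, not fixed names
NOTIFY_CMD=${NOTIFY_CMD:-mynotify-send}
IDFILE=${IDFILE:-/tmp/mynotify.id}

notify_replace() {
    # $1 = message; reuse the cached id when one exists
    if [ -s "$IDFILE" ]; then
        "$NOTIFY_CMD" --id "$(cat "$IDFILE")" -m "$1" > "$IDFILE"
    else
        "$NOTIFY_CMD" -m "$1" > "$IDFILE"
    fi
}
```

Calling notify_replace 'message 1' repeatedly should then keep updating the same popup while it is visible, because the id printed by the helper is cached in the file and passed back on the next call.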
| Prevent "notify-send" from stacking |
1,450,910,572,000 |
Here are my notifications, and I would like each notification to replace the one before it.
By the way, I am using dunst for my notifications.
|
You should use dunstify instead of notify-send, because dunstify allows you to use a notification ID and replace older notifications with newer ones. Here is a link to info about dunstify, and a link to an example of creating a volume level indicator.
| How to make dunst show repeated notifications of the same program as one single notification |
1,450,910,572,000 |
I am on Linux Mint 18.1, MATE.
I am using the notify-send command to visualize the name of keys (such as <enter>) while sending them to the current window via a Python script. For about two weeks, notify-send has shown weird behavior. I know the basic syntax in bash is notify-send [OPTIONS] <summary> [body].
Basic problem
When executing notify-send -t 0 '<enter>' 'text body', everything looks fine:
However, when trying to print the key name in the message body with notify-send -t 0 'Summary' '<enter>', I get:
The same happens with notify-send -t 0 'Summary' '<', notify-send -t 0 'Summary' '>' or notify-send -t 0 'Summary' \<
Any ideas why the body text is printed blank if it contains < or > ?
Workaround (fails)
I have tried to use a python module istead:
from gi.repository import Notify
Notify.init("App Name")
Notify.Notification.new("Summary","<enter>").show()
But the result is the same as in picture 2 above.
Additional info:
When trying zenity --info --title='Summary' --text='<enter>' in bash, I get an error message:
(zenity:4952): Gtk-WARNING **: Failed to set text '<enter>' from markup due to error parsing markup: Error on line 1 char 24: Element 'markup' was closed, but the currently open element is 'enter'
And instead of the text <enter>, the opening info dialog has the surprising text: All updates are complete.
|
The notification spec says that the body can include simple markup, so any tags inside "<...>" will be removed and interpreted if possible. For example, "<b>hello</b>" will show the word in bold.
You can use the standard HTML entity mechanism and show a < with &lt; and
a > with &gt;, giving, for example,
notify-send 'Summary' '&lt;enter&gt;'
If you prefer you can just use a multiline summary, eg:
notify-send 'Summary
<enter>'
| libotify / notify-send: body text is not printed if it contains '<' or '>' |
1,450,910,572,000 |
Is it possible to generate a brief notification every time mpv starts a playback? Maybe through notify-send?
|
mpv can run lua user scripts, some of which are listed here. One of these, notify, will generate a sophisticated notify-send. It has a few dependencies, and I wasn't able to get it to work in my setup, but the following greatly simplified code worked for me. Place this file in
~/.config/mpv/scripts/mynotify.lua (create the directory if needed), and run mpv as usual. You should see a notification when the artist or title changes.
-- based on https://github.com/rohieb/mpv-notify
-- https://unix.stackexchange.com/a/455198/119298
lastcommand = nil
function string.shellescape(str)
return "'"..string.gsub(str, "'", "'\"'\"'").."'"
end
function do_notify(a,b)
local command = ("notify-send -a mpv -- %s %s"):format(a:shellescape(),
b:shellescape())
if command ~= lastcommand then
os.execute(command)
lastcommand = command
end
end
function notify_current_track()
data = mp.get_property_native("metadata")
if data then
local artist = (data["ARTIST"] or data["artist"] or " ")
local title = (data["TITLE"] or data["title"] or " ")
if artist..title~=" " then
do_notify(artist, title)
return
end
end
local data = mp.get_property("path")
if data then
local file = data:gsub("^.-([^/]+)$","%1")
file = file:gsub("%....$","") -- delete 3 char suffix
local dir = data:gsub("^.-([^/]+)/[^/]*$","%1")
do_notify(dir, file)
end
end
mp.register_event("file-loaded", notify_current_track)
This updated version waits for events that are sent when a new file is ready to be played. It tries to find the metadata and extract the artist and title from it. If this is empty, it then gets the current filename ("path") and splits out the last part after / to get a filename, from which it removes any trailing 3 character suffix. It tries to find the last directory part of the filename, and uses these 2 items in the notification. If your directories are structured with say, artist/albumname/tracktitle.aac, you might like to change this with a more appropriate pattern match and extraction. See the lua section on patterns.
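The quote trick used by shellescape above (close the single-quoted string, insert a double-quoted ', reopen it) works the same way directly in shell; a sketch, with an illustrative function name:

```shell
# wrap the argument in single quotes, replacing each embedded ' with '"'"'
shellescape() {
    printf "'%s'" "$(printf '%s' "$1" | sed "s/'/'\"'\"'/g")"
}

escaped=$(shellescape "it's here")
printf '%s\n' "$escaped"   # -> 'it'"'"'s here'
```

The escaped string round-trips: passing it back through the shell yields the original text unchanged.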
| Add notification to mpv through notify.send? |
1,450,910,572,000 |
According to Bash: Special Parameters:
($!) Expands to the process ID of the job most recently placed into the background, whether executed as an asynchronous command or using the bg builtin
I can utilize this as follows:
$ leafpad &
[2] 3962
$ kill $!
This works and kills the most recent process (eg. leafpad) but for notify-send it seems not working:
$ notify-send Hello &
[2] 4052
$ kill $!
bash: kill: (4052) - No such process
And I have to use killall notify-osd in order to kill it.
So, I want to know why kill $! doesn't work for notify-send? And what is the proper way to kill such a process?
Note: I know that I can specify the time-out, but this is a different issue.
|
notify-send doesn't run for any length of time: It starts, connects to notify-osd, delivers the notification message to be displayed, and terminates.
By the time you run the kill command, notify-send has already terminated on its own. The notification you're seeing is served by notify-osd.
| Why can't I use `kill $!` with parameter expansion in Bash, when the most recent process is "notify-send"? |
1,450,910,572,000 |
I am trying to get libnotify (notify-send) to pop-up a notification once a certain character is found while I tail a log file.
Without grep it works fine ...
Here is my code:
tail -f /var/log/mylogfile | grep ">" | while read line; do notify-send "CURRENT LOGIN" "$line" -t 3000; done
When I include grep it passes nothing to notify-send. The code above I modified from https://ubuntuforums.org/showthread.php?t=1411620
Also, how can I change the font size?
|
This page explains grep and output buffering; in short, you want to use the --line-buffered flag:
tail -f /var/log/mylogfile | grep --line-buffered ">" | while read line; do notify-send "CURRENT LOGIN" "$line" -t 3000; done
About the font, this AskUbuntu question mentions it's not officially possible, but describes a tool notifyosdconfig that allows some modifications.
| libnotify with bash and grep |
1,450,910,572,000 |
I've been trying to write a shell script that will interface with cmus and then notify me of the track info using notify-send. Right now it is not working, mainly because xargs does not seem to pass 2 arguments to notify-send. It only sends one and I cannot figure out why. I've done everything I can think of with sed to get the right output but it doesn't work. Also, if I use notify-send with two arguments, it works, so I don't think it's a problem with notify-send.
The output of cmus-remote -Q is:
status paused
file /home/dennis/music/Coheed And Cambria/GOODAP~1/05 Crossing the Frame.mp3
duration 207
position 120
tag artist Coheed & Cambria
tag album Good Apollo I'm Burning Star IV Volume One: From Fear Through the Eyes of Madness
tag title Crossing the Frame
tag date 2005
tag genre Rock
tag tracknumber 5
tag albumartist Coheed & Cambria
set aaa_mode all
set continue true
set play_library true
set play_sorted false
set replaygain disabled
set replaygain_limit true
set replaygain_preamp 6.000000
set repeat false
set repeat_current false
set shuffle true
set softvol false
set vol_left 100
set vol_right 100
My code is terrible. I'm just starting to learn shell scripting so sorry about that.
#!/bin/sh
#
# notify of song playing
info="$(cmus-remote -Q)"
title="`echo "$info" | grep 'tag title' | sed "s/'//g" | sed 's/tag title \(.*\)/'\''\1'\''/g'`"
artist="`echo "$info" | grep 'tag artist' | sed "s/'//g" | sed 's/tag artist \(.*\)/ '\''\1/g'`"
album="`echo "$info" | grep 'tag album ' | sed "s/'//g" | sed 's/tag album \(.*\)/ \1'\''/g'`"
stupid="${title}${artist}$album"
echo "$stupid" | xargs notify-send
|
xargs is working as intended; each line is taken as a parameter. If you want multiple parameters, separate them with newlines.
{ echo "$title"; echo "$artist"; echo "$album"; } | xargs notify-send
That said, you're doing far too much work for something quite simple:
title="$(echo "$info" | sed -n 's/^tag title //p')"
artist="$(echo "$info" | sed -n 's/^tag artist //p')"
album="$(echo "$info" | sed -n 's/^tag album //p')"
notify-send "$title" "$artist" "$album"
(Also note one other gotcha: notify-osd sends the messages it's passed through Pango, so you need to escape anything that might be mistaken for Pango markup. This means <, >, and & in practice, much as with HTML and XML. The above doesn't try to handle this.)
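A hedged sketch of such escaping with sed (pango_escape is an illustrative name; the order matters — & must be replaced first, or it would mangle the entities produced for < and >):

```shell
# escape the three characters Pango markup cares about: &, <, >
pango_escape() {
    printf '%s' "$1" | sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g'
}

pango_escape 'Tom & Jerry <live>'   # -> Tom &amp; Jerry &lt;live&gt;
```

It could then be used as, e.g., notify-send "$title" "$(pango_escape "$artist")".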
| why isn't xargs parsing my input correctly? |
1,450,910,572,000 |
Suppose I have a command like:
foo() {
echo a
echo b >&2
echo c
echo d >&2
}
I use the following command to process stdout in terminal but send error via notify-send
foo 1> >(cat) 2> >(ifne notify-send "Error")
The issue i am having is, I want to view the stderr (in this case b d) as notify-send body.
I have tried:
foo 1> >(cat) 2> >(ifne notify-send "Error" "$(cat)")
foo 1> >(cat) 2> >(ifne notify-send "Error" "$(</dev/stdin)")
Nothing is working. What can be the solution here?
|
With your "$(cat)" attempt you're almost there, but you need this cat to read from ifne, not alongside it.
In case of ifne notify-send "Error" "$(cat)", cat reads from the same stream ifne does, but not simultaneously. The shell handling this part of code can run ifne only after cat exits (because only then it knows what $(cat) should expand to, i.e. what arguments ifne should get). After cat exits, the stream is already depleted and ifne sees its input as empty.
This is a way to make a similarly used cat read from ifne:
foo 2> >(ifne sh -c 'exec notify-send "Error" "$(cat)"')
(I'm not sure what the purpose of your 1> >(cat) was. I skipped it.)
Here ifne relays its input to the stdin of whatever it (conditionally) runs. It's sh, but everything this sh runs shares its stdin. Effectively cat reads from ifne. And similarly to your try, exec notify-send can be executed only after cat exits; so even if notify-send tried to read from its stdin, cat would consume everything first.
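To see what the wrapped command actually receives, the same pattern can be tried with echo standing in for notify-send (a quick sketch):

```shell
# echo gets one argument holding both lines; only the trailing newline
# of the stream is stripped by the command substitution
out=$(printf 'b\nd\n' | sh -c 'exec echo "Error:" "$(cat)"')
printf '%s\n' "$out"
```

This prints Error: b and d on two lines: $(cat) collects both stderr lines into a single argument with the embedded newline preserved.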
This method may fail if there is too much data passing through cat. Argument list cannot be arbitrarily long. And because cat will exit only after foo exits, the method works for foo that ever exits and generates not too many messages to its stderr.
Using xargs instead of $(cat) may be a good idea for long-running foo that occasionally generates a line of error. This is an example of such foo:
foo() {
echo a
echo b >&2
sleep 10
echo c
echo d >&2
sleep 20
}
The above solution is not necessarily good in case of this foo (try it). With xargs it's different. foo may even run indefinitely and you will be notified of errors (one line at a time) immediately. If your xargs supports --no-run-if-empty (-r) then you don't need ifne. This is an example command with xargs:
foo 2> >(xargs -r -I{} notify-send "Error" {})
(Note this xargs still interprets quotes and backslashes.)
| notify-send with stderr in notification body if there is stderr |
1,450,910,572,000 |
I was making a battery notifier script for my raspberry pi3. The script is executing when I do
/usr/bin/python /home/pi/Documents/shutdown.py
and showing the popup notifications. However the service is not executing it or not showing the notification. I can see the python process if I do sudo systemctl status battery-notifier.service.
battery-notifier.service
[Unit]
Description=Battery Notifier
[Service]
Type=simple
WorkingDirectory=/home/pi/Documents
ExecStart=/usr/bin/python /home/pi/Documents/shutdown.py
[Install]
WantedBy=multi-user.target
shutdown.py
import raspiupshat
import statistics
import subprocess
from time import sleep
raspiupshat.init()
while(True):
voltagesList = []
sleep(0.5)
currentVoltage = raspiupshat.getv()
voltagesList.append(currentVoltage)
medianVoltage = statistics.median(voltagesList)
if(medianVoltage>4):
subprocess.Popen(["notify-send","Battery Full!"])
This is the status of the service when I do sudo systemctl status battery-notifier.service:
● battery-notifier.service - Battery Notifier
Loaded: loaded (/lib/systemd/system/battery-notifier.service; enabled)
Active: active (running) since Sat 2017-07-15 04:05:18 UTC; 48min ago
Main PID: 28384 (python)
CGroup: /system.slice/battery-notifier.service
└─28384 /usr/bin/python /home/pi/Documents/shutdown.py
Jul 15 04:05:18 raspberrypi systemd[1]: Started Battery Notifier.
|
Edited to add: Your program tries to access the graphical desktop of a user (pi); however, it is itself executed by systemd. That is, it will run as root by default and not as the user pi. Moreover, there might not even be a graphical desktop yet at the time this is started.
Therefore, you could go several paths:
Start the program not via systemd, but when the logged in user starts the
graphical desktop. Therefore, you need to put the program start into the
file ~/.xinitrc or ~/.xprofile. See the Arch Wiki on Autostart
Add a line User=pi to your systemd service. This is the solution you mentioned in the comment to your original post
systemd can be configured to run in user mode (as opposed to system mode) when the user logs in. You could let your service be started that way. See the Arch Wiki on systemd/User
Original answer:
See this SO question. There is an env= parameter in the subprocess.Popen(.) function which can be used.
Alternatively, you could use the feature of sh that if you assign a variable immediately before a call, that variable is set (only) for that call:
myvar=world
# cannot use echo $myvar, because $myvar would be substituted before echo is launched
myvar=hello declare -x myvar
echo $myvar
gives
declare -x myvar="hello"
world
Link to corresponding chapter in POSIX spec
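The same one-command scoping is perhaps easier to see with an external command instead of a builtin (a small sketch; TMP_DEMO_VAR is just an illustrative name):

```shell
# the variable exists only inside the single sh call it is prefixed to
out1=$(TMP_DEMO_VAR=hello sh -c 'echo "$TMP_DEMO_VAR"')
out2=${TMP_DEMO_VAR:-unset}
echo "$out1 $out2"   # -> hello unset
```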
So for you:
subprocess.call("DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-goOEk5dZcK,guid=9c1f14175e6be0992b16e5155969b46c notify-send 'Battery full!'",shell=True)
The call you mentioned in the comment to your question does not work because it leads to the following process
subprocess.Popen('export DBUS_SESSION_BUS_ADDRESS=...', shell=True)
Launch a shell
Set the environment variable DBUS_SESSION_BUS_ADDRESS=... for this shell
and all sub-shells (that's what export does).
What it does NOT do: set the variable for all shells there are on the machine!
Close the shell. Now the value of DBUS_SESSION_BUS_ADDRESS is lost again.
subprocess.Popen(['notify-send',...], shell=True)
Launch a shell
The export call of the previous shell has no power here because these two shells are completely unrelated
Launch notify-send as in your original question
So the setting of the environment variable has no effect here.
| systemd service not executing the python script |
1,450,910,572,000 |
On my system, notify-send requires 3 environment variables to run, which are kept in a file that is generated automatically on logon:
/home/anmol/.env_vars:
DBUS_SESSION_BUS_ADDRESS=unix:abstract=/tmp/dbus-PwezoBTpF3
export DBUS_SESSION_BUS_ADDRESS
XAUTHORITY=/home/anmol/.Xauthority
export XAUTHORITY
DISPLAY=:0
export DISPLAY
And, in the crontab buffer, I have entered this:
PATH=/home/anmol/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
* * * * * /home/anmol/display-notif.sh
where display-notif.sh contains:
#!/usr/bin/env bash
. /home/anmol/.env_vars
notify-send 'hello'
Although I am able to run notify-send from non-sudo cron (crontab -e) through this setup, I am unable to do so from sudo cron (sudo crontab -e).
I also tried checking if there are any errors being generated:
* * * * * /home/anmol/display-notif.sh 2>/home/anmol/log
But that log file is empty.
How do I make it work from sudo cron?
I am using Ubuntu 16.04.
|
It is working after replacing
* * * * * /home/anmol/display-notif.sh
with
* * * * * sudo -u anmol /home/anmol/display-notif.sh
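An alternative sketch, on systems whose cron reads /etc/cron.d: those crontab files take an extra user field, so the job can run directly as the user without sudo (path and schedule taken from the question; the file name is illustrative):

```
# /etc/cron.d/display-notif — note the user field before the command
* * * * * anmol /home/anmol/display-notif.sh
```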
| notify-send from root cron |
1,450,910,572,000 |
I am using Plasma-desktop notifications for all kinds of things, often from scripts using kdialog or notify-send. My Plasma desktop uses a dark background with a light foreground (text). Until a few months ago, all was well, but after a system update (in May?) my notifications kept their dark background but started using dark text as well for the notification body. Hence, all I can read is the title.
I looked at my current colour scheme in ~/.local/share/color-schemes/*.colors, setting all text (Foreground*=255,0,0) to red, but the notification fonts did not change. This document provides some specifications and mentions that ForegroundInactive should be used for the body text. However, this may be in a section of the colour scheme that is currently lacking. Also, the document is about three years old.
I'm using KDE Plasma v.5.25.5 on Gentoo Linux.
How can I change the colour of the (body) text in my notifications?
|
Luckas' answer pointed me in the direction of a workaround (rather than a solution):
In /usr/share/plasma/plasmoids/org.kde.plasma.notifications/contents/ui/NotificationItem.qml
@@ -230,7 +230,7 @@ ColumnLayout {
// HACK RichText does not allow to specify link color and since LineEdit
// does not support StyledText, we have to inject some CSS to force the color,
// cf. QTBUG-81463 and to some extent QTBUG-80354
- text: "<style>a { color: " + PlasmaCore.Theme.linkColor + "; }</style>" + notificationItem.body
+ text: "<style>a { color: " + PlasmaCore.Theme.linkColor + "; } p { color: " + PlasmaCore.Theme.textColor + "; }</style><p>" + notificationItem.body + "</p>"
// Cannot do text !== "" because RichText adds some HTML tags even when empty
visible: notificationItem.body !== ""
While this does not solve my issue properly, at least I can read my notifications again. Since it took me several hours to reach such a trivial goal (not helped by the fact I had to log out and into Plasma after every change - you'll have to do the same after this edit), I'm sufficiently happy/annoyed to stop here.
However, this workaround may give some insight into the actual problem - I'm still very interested to hear any further suggestions.
| How can I change the text colour in a Plasma-desktop notification? |
1,450,910,572,000 |
There is both zenity and notify-send. (On Fedora/GNOME at least, both seem to be pre-installed.)
So considering I want to show notifications, what are the differences between these two?
Is one of them installed in more distros (by default)? Is the process of showing notifications any different? Is one maybe more compatible with some desktop environments? Is it even available on all desktop environments?
I also noticed the documentation on notify-send is pretty sparse. In my Fedora installation, it does not even have a man page…
|
Functionality comparison
zenity --notification is equivalent to notify-send for the most simple cases.
For example, these two commands are equivalent:
$ zenity --notification --text=Title
$ notify-send Title
and so are these:
$ notify-send Title 'Long text message'
$ zenity --notification --text='Title\nLong text message'
As you can see, the syntax for notify-send is shorter and simpler, because it is a specialised tool, while --notification is just one of the many commands available in zenity.
Differences are:
notify-send has an expire-time option, which however, according to the current man page, is ignored by both Ubuntu's Osd and the Gnome shell.
zenity has a --listen option which can change the appearance of the notification without closing and reopening it: message displayed, visibility and icon can all be changed by sending appropriate strings to zenity's standard input.
setting an icon requires just --icon for notify-send while it requires --listen and icon: error command on standard input for zenity.
category and urgency are set with dedicated options in zenity, while they require a --hint option.
Note that you need to explicitly kill the zenity process when using --listen, because it keeps listening on standard input even after it is closed (this is a bug, I suppose). This is not a problem for complex procedures where the notification changes dynamically, but it makes things unnecessarily complex for simple cases.
Also note that the --hint option of zenity is briefly mentioned by zenity --help-notification, but it is not mentioned in the man page.
Both category and urgency are documented in the Desktop Notifications Specification.
Alternatives
dialog and the old whiptail are possible alternatives, but I don't see any advantage in using them for notifications unless you cannot install notify-send or zenity or you are already using them for other purposes, since both have much more functionality than just notifications.
| What are the differences/(dis)advantages of zenity vs notify-send? |
1,450,910,572,000 |
As a continuation of this question (How can I send a notification with polkit 0.106?), I've discovered that I have to execute notify-send as the user to whom I want to send the notification.
But with my current config I can't do this, because polkit executes the script as the polkitd user, and I can't do su $user without knowing the user's password.
For this reason, I need to create a new polkit action, to allow executing notify-send as another user from polkitd.
My polkit rule is this:
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.consolekit.system.stop" ||
action.id == "org.freedesktop.login1.power-off" ||
action.id == "org.freedesktop.login1.power-off-multiple-sessions" ||
action.id == "org.xfce.session.xfsm-shutdown-helper")
{
try{
polkit.spawn(["/usr/bin/pendrive-reminder/check_pendrive.sh", subject.user]);
return polkit.Result.YES;
}catch(error){
polkit.spawn(["/usr/bin/pendrive-reminder/send_notify.sh", subject.user]);
return polkit.Result.NO;
}
}
});
This polkit rule must lock the shutdown option in the shutdown menu and show a notification with notify-send, via the send_notify.sh script, which executes this:
#!/bin/bash
export DISPLAY=":0"
user=$1
pkexec --user $user notify-send "Pendrive Reminder" "Shutdown lock enabled. Disconnect pendrive to enable shutdown" -u critical
exit 0
I tried to add this polkit policy file:
<policyconfig>
<action id="org.freedesktop.notify-send">
<description>Launch notify-send command</description>
<defaults>
<allow_any>yes</allow_any>
<allow_inactive>yes</allow_inactive>
<allow_active>yes</allow_active>
</defaults>
<annotate key="org.freedesktop.policykit.exec.path">/usr/bin/notify-send</annotate>
<annotate key="org.freedesktop.policykit.exec.allow_gui">true</annotate>
</action>
</policyconfig>
I put this file in /usr/share/polkit-1/actions/org.freedesktop.policykit.notify-send.policy
But, after putting the policy file in /usr/share/polkit-1/rules.d/ and pressing the shutdown button, the shutdown menu took a long time to be shown, and the notification didn't appear. The shutdown option is locked correctly.
How can I get that polkit can call notify-send from my script?
|
After doing a few test, I got this results:
polkitd is a nologin user
If I execute this command, to run my script as the polkitd user, it shows an error:
sudo su polkitd -s /bin/bash -c aux_scripts/send_notify.sh almu
Error executing command as another user: Not authorized
This incident has been reported.
So, I think that the polkitd user is a limited account, which can't execute commands as another user.
As a conclusion, I determine that this action isn't possible without modifying system internals. I can't allow this in my application, so I can't launch commands as another user from polkit.
| How to allow running notify-send as another user with pkexec? |
1,450,910,572,000 |
I am developing an application to help you not forget your pendrive.
This app must lock shutdown while a pendrive is connected to the machine. That way, if the user tries to shut down the system while a pendrive is connected, the system shows a notification alerting them that the pendrive must be disconnected to unlock shutdown.
To detect the shutdown event, I set a polkit rule which calls a script to check if any pendrive is connected to the system.
If any pendrive is connected, the polkit rule calls notify-send through the script send_notify.sh, which executes this command:
notify-send "Pendrive-Reminder" "Extract Pendrive to enable shutdown" -t 5000
The polkit rule is this:
polkit.addRule(function(action, subject) {
if (action.id == "org.freedesktop.consolekit.system.stop" ||
action.id == "org.freedesktop.login1.power-off" ||
action.id == "org.freedesktop.login1.power-off-multiple-sessions" ||
action.id == "org.xfce.session.xfsm-shutdown-helper")
{
try{
polkit.spawn(["/usr/bin/pendrive-reminder/check_pendrive.sh", subject.user]);
return polkit.Result.YES;
}catch(error){
polkit.spawn(["/usr/bin/pendrive-reminder/send_notify.sh", subject.user]);
return polkit.Result.NO;
}
}
}
But after putting this polkit rule in place and pressing the shutdown button, my user doesn't receive any notification.
I debugged the rule and checked that the second script is executed, but notify-send doesn't show the notification to my user.
How can I solve this?
UPDATE:
I tried to modify the script as this:
#!/bin/bash
user=$1
XAUTHORITY="/home/$user/.Xauthority"
DISPLAY=$( who | grep -m1 $user.*\( | awk '{print $5}' | sed 's/[(|)]//g')
notify-send "Extract Pendrive to enable shutdown" -t 5000
exit 0
The user is passed as a parameter by polkit
But the problem continues
UPDATE: I've just seen this bug https://bugs.launchpad.net/ubuntu/+source/libnotify/+bug/160598 which doesn't allow sending notifications as root.
Later I'll test the workaround by changing the user.
UPDATE 2: After changing the code to this, the problem continues:
#!/bin/bash
user=$1
export XAUTHORITY="/home/$user/.Xauthority"
export DISPLAY=$(cat "/tmp/display.$user")
su $user -c 'notify-send "Pendrive Reminder" "Shutdown lock enabled. Disconnect pendrive to enable shutdown" -u critical'
|
Finally, I created a dbus client, launched as the user, which receives the signal from the system bus and shows the notification to the user.
The dbus client code is in https://github.com/AlmuHS/Pendrive_Reminder/blob/work-in-progress/dbus-client/client.py
In the send-notify.sh script, I only added
dbus-send --system /org/preminder/mensaje org.preminder.App
Executing the dbus client as the user, the notification is shown correctly.
Now I'm trying to get the client launched automatically when the user connects a pendrive.
Continue in How to launch a dbus client from a script?
| How can I send a notification with polkit 0.106? |
1,450,910,572,000 |
I want to display a notification for user via notify-send when power button is pressed.
The power button has a special script attached which allows shutdown only after a specified amount of time has passed.
In that time when user cannot shutdown the device I want to display a notification which says something like "Please wait...".
The problem is that when I put a notify-send command into the script, it spams the entire screen with that notification, because after the button is pressed the script is executed about 50 times. It is related to the power button hardware (I think).
|
Inside your script, create a marker file on disk (touch alreadyPressed). Each subsequent call should check for the existence of this file and leave the script. Once the script has notified the user, delete the file. If that is too early, then delete it on startup instead.
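A minimal POSIX-shell sketch of that lock-file debounce idea (the lock path, cooldown, and the debounced helper name are placeholders, not part of the original answer):

```shell
# Run a command at most once per cooldown window, using a marker file.
# LOCK and COOLDOWN are placeholders; adjust to taste.
LOCK="${LOCK:-/tmp/power_button.lock}"
COOLDOWN="${COOLDOWN:-30}"

debounced() {
    [ -e "$LOCK" ] && return 0      # already notified recently, leave quietly
    touch "$LOCK"
    "$@"                            # e.g. notify-send "Please wait..."
    ( sleep "$COOLDOWN"; rm -f "$LOCK" ) &   # release the lock later
}
```

The power-button script would then call `debounced notify-send "Please wait..."`; the 50-odd repeated invocations within the cooldown all hit the marker file and exit early.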
| Elegant way for notification when power button pressed |
1,450,910,572,000 |
I'm trying to send a notification through a notify-send
notify-send 'System' 'Dist files is already' $(du -h /var/cache/distfiles/ | tr -d '/var/cache/distfiles/')
But the spacer that I need cannot be used, for some reason unknown to me; it outputs:
Invalid number of options.
But If I'll remove the spacer in there like this:
notify-send 'System' 'Dist files is already'$(du -h /var/cache/distfiles/ | tr -d '/var/cache/distfiles/')
It works perfectly. Please explain to me why that is so; I'm too dumb.
|
The command tr doesn't work as you intended:
-d, --delete
delete characters in SET1, do not translate
Meaning that tr removes single characters from the SET1, for example:
$ echo foobar | tr -d fb
ooar
Now let's see man notify-send:
SYNOPSIS
notify-send [OPTIONS] {summary} [body]
So you have to pass 2 arguments (besides the OPTIONS). For example:
$ notify-send 'System' 'foo' 'bar'
Invalid number of options.
$ notify-send 'System' 'foo'
<notification appears>
Let's see the output of du -h /boot 2>/dev/null:
4,0K /boot/efi
3,4M /boot/grub/x86_64-efi
2,3M /boot/grub/fonts
8,0M /boot/grub
146M /boot
There are 2 strings in each row! So your command results in (using /boot as an example dir):
notify-send 'System' 'Dist files is already' 4,0K /boot
if the output is only 1 line; but as you can see, it can be many lines, and therefore a bunch of arguments.
When you remove the space, the resulting string is read as one, hence it looks like the correct 2 arguments.
So change your command to:
notify-send 'System' "Dist files is already $(du -h /var/cache/distfiles/)"
only if you are sure that output is just one line. Here an example with /root folder
$ notify-send 'System' "Dist files is already $(du -h /root 2> /dev/null)"
or
$ notify-send 'System' "Dist files is already $(du -h /root | awk '{ print $1 }')"
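For completeness, the substitution the question was reaching for (removing the path as a whole string, rather than deleting its characters as tr -d does) can be done with sed; dir is a placeholder directory here:

```shell
# tr -d deletes single characters from the set; to strip the whole
# path string from du's output, substitute it away instead:
dir="${dir:-/tmp}"                 # placeholder directory
du -h "$dir" 2>/dev/null | sed "s|$dir||"
```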
| Problem with a spacer |
1,450,910,572,000 |
I want to use notify-send to show a notification every n minutes to tell me the vpn status.
The command would be running in a popup terminal I have already set up, hidden from view until I need to stop the while loop, then I'd just do ctrl-c.
I wrote the command below, but it errors out infinitely: Invalid number of options.
while true
do
notify-send $(timeout 5s nordvpn status | rg -i "status|country|uptime" ; nordvpn settings | rg -i "kill switch|auto-connect")
done
timeout 5s is there because sometimes nordvpn is not responsive, and I need to kill the command after 5 seconds.
Thank you for assisting.
|
The command takes one or two arguments:
SYNOPSIS
notify-send [OPTIONS] {summary} [body]
So, you'll simply wrap the result of your command in double quotes:
while :; do
notify-send "$(command)"
done
| How do I notify-send a long command? |
1,627,423,012,000 |
I tried to use the following command:
gtk-launch nvim file.txt
But it gave me this error:
gtk-launch: error launching application: Unable to find terminal required for application
How am I supposed to set the terminal required for application? I already set my $TERM and $TERMINAL environmental variables:
export TERM="kitty"
export TERMINAL="kitty"
|
As of 18-09-2023, the list of terminals hardcoded in glib (gio/gdesktopappinfo.c) is:
static const struct {
const char *exec;
const char *exec_arg;
} known_terminals[] = {
{ "xdg-terminal-exec", NULL },
{ "kgx", "-e" },
{ "gnome-terminal", "--" },
{ "mate-terminal", "-x" },
{ "xfce4-terminal", "-x" },
{ "tilix", "-e" },
{ "konsole", "-e" },
{ "nxterm", "-e" },
{ "color-xterm", "-e" },
{ "rxvt", "-e" },
{ "dtterm", "-e" },
{ "xterm", "-e" }
};
Solution 1
The first program on the list is a script which allows us to select an arbitrary terminal. So, a solution would be to install this script into /usr/bin/xdg-terminal-exec and configuring it. (On arch there is an aur package xdg-terminal-exec-git. I use Arch, btw)
To configure it, we need to create a file listing the terminals, in the priority order we want. The file should be $HOME/.config/xdg-terminals.list. For example:
echo kitty.desktop >> $HOME/.config/xdg-terminals.list
Then, we need to add the .desktop files to the data hierarchy of the script, the folder $HOME/.local/share/xdg-terminals/. For example:
ln -s /usr/share/applications/kitty.desktop $HOME/.local/share/xdg-terminals/
Now we can check if the terminal we wanted is opened by the command:
xdg-terminal-exec
with the previous example, this should open kitty.
Now gtk-launch, exo-open, xdg-open, thunar, and all programs that use glib2 to open .desktop files marked with Terminal=true should open them in the desired terminal.
Solution 2
A simpler solution would be to simply link the desired terminal into /usr/bin/xdg-terminal-exec. For example:
ln -s /usr/bin/kitty /usr/bin/xdg-terminal-exec
Note: This will only work if your terminal does not take a flag to receive commands to execute. If it does, you can use this script instead of the symbolic link:
#!/usr/bin/sh
exec $TERM [needed-flag] "$@"
| Error With gtk-launch: Unable to find terminal required for application |
1,627,423,012,000 |
I use neovim in tmux in gnome-terminal on Fedora 25. Here I found out, that I do not have true color support because terminal is not linked to some libvte of correct version. Since many nvim color schemes need true color support (and also I want this from a general perspective) I'd like to activate it!
However, the posted site only refers to the PPA (which, as I imagine, is an Ubuntu repo). So my question: how do I activate true colors in gnome-terminal on Fedora 25?
|
Those instructions do not actually provide the correct test for the version of libvte used on Fedora, since our gnome-terminal-server is in /usr/libexec. Instead, I'd suggest
$ rpm -qR gnome-terminal|grep vte
libvte-2.91.so.0()(64bit)
vte291(x86-64) >= 0.46.0
Here, we see that 0.46.0 is greater than the 0.36 your tutorial says is required, so this is not your problem. In fact, check this out:
$ echo $COLORTERM
truecolor
TrueColor is already enabled out of the box on Fedora 25 Workstation.
$COLORTERM is also truecolor inside of tmux. In fact, this blog post has a simple test script with which I verified that TrueColor is in fact working both outside and inside tmux with no further configuration.
So, this is down to neovim configuration. To make it work in current versions, you need set termguicolors in your ~/.config/nvim/init.vim. (In versions before May 2016, set the environment variable NVIM_TUI_ENABLE_TRUE_COLOR to 1.) This is documented in the neovim log of "breaking changes".
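For reference, a quick way to eyeball true-color support in any terminal (similar in spirit to the test script mentioned above, though this particular one-liner is my own sketch) is to print a 24-bit gradient; a smooth ramp means truecolor works, visible banding means a 256-color fallback:

```shell
# print a horizontal 24-bit colour ramp using SGR 48;2;r;g;b (background)
awk 'BEGIN{
    for (i = 0; i < 77; i++) {
        r = 255 - i * 3;
        g = (i * 6) % 256;
        b = i * 3;
        printf "\033[48;2;%d;%d;%dm ", r, g, b;
    }
    printf "\033[0m\n";
}'
```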
| Enable true color for neovim in Fedora 25 |
1,627,423,012,000 |
I have a script with which I toggle between a dark and a light colorscheme. The terminal emulator I use (termite) rereads its configuration when receiving the signal USR1, so when toggling my colorscheme I send USR1 to all termite instances so that the colorscheme is updated immediately.
Does there exist some possibility to convince neovim to reread its configuration (from outside of neovim)?
I couldn't really find a list of what neovim makes of unix signals. It also doesn't need to be a signal, as far as I understand neovim has some concepts of "server" and "frontend", so I guess something like connecting to each server and issuing to reload the configuration would also work.
|
One can remotely control vim via the remote feature. For neovim I found the neovim-remote which makes it easy to send a command to an already running nvim process. The following snippet iterates through every nvim process (discovered by neovim-remote) and sends a command to source the config file:
for path in $(nvr --nostart --serverlist)
do
nvr --nostart --servername $path -cc 'so ~/.config/nvim/init.vim'
done
This assumes that the config is present in ~/.config/nvim/init.vim. If your config is in a different file, it should be replaced there. If there are different nvims with different configurations loaded (e.g. via the -u <configfile> flag, which loads a different config file), this script will ignore that and command each instance to load the same config. I would think nvr -c 'so $MYVIMRC' should work, but it didn't for me.
The first --nostart is probably superfluous, but it shouldn't hurt.
| Signal neovim to reread its configuration |
1,627,423,012,000 |
The terminal I'm using is termite, and it is very good, but I'm having a problem with vim colorschemes, and with some colors in the terminal itself too, when using tmux. It happens as follows:
As you can see in the screenshot, when I open nvim or vim with tmux (the case on the left), the colorscheme gets altered; this happens with all colorschemes I tested (about 15 or 20). I observed that when I execute the command TERM=xterm-256color tmux right after I enter termite, the nvim/vim colorschemes (I symlinked my .vimrc, so it is the same as init.vim) work like a charm! I'm using i3-gaps, so I decided to set the binding $mod+Return to open termite that way:
bindsym $mod+Return exec termite -e 'TERM=xterm-256color tmux'
It should have worked, but when I press the keys, the screen only blinks and nothing happens.
Things i've tried include:
Set $TERM to various values, both in termite, in ~/.tmux.conf.local and in my .vimrc:
xterm-256color, screen-256color, termite-256color;
Tested the same thing with other terminal emulators, like xfce-terminal and gnome-terminal, in both it works normally, and the colors get displayed correctly;
Set in my .vimrc:
if &term == "screen"
set t_Co=256
endif
Starting tmux with tmux -2;
Aliasing in my .zshrc:
tmux="tmux -2";
tmux="TERM=xterm-256color tmux" (screen blinks and nothing happens);
termite="termite --exec "TERM=xterm-256color tmux";
termite="termite -e "TERM=xterm-256color tmux".
Edit: The $TERM inside and outside tmux continues to be the same as the terminal's. It outputs the same thing when I echo $TERM inside and outside tmux. This happened with termite; I have yet to test it with other terminals.
|
So, if anyone is still interested in finding a fix, or still has the same problem: I was able to get it working nicely without doing much.
I had forgotten about this matter and stopped using tmux for a long time, until I decided to ask about it on a Linux group on Telegram and a cool lad helped me with the issue. We came up with this, in the sxhkd config file:
# Open alacritty with tmux
super + shift + Return
alacritty -e $SHELL -i -c tmux &
I changed the terminal to Alacritty, a GPU-based terminal that is faster and more easily configurable out of the box, but I don't think that has anything to do with it, to be fair. Honestly, I don't remember having had the issue with the colors on Alacritty because, at the time, I didn't use it, as it was in beta or something...
You can either add that to a key-binding in your wm's rc, or create an alias for it in your shell rc.
Also, as stated before:
you need to set the $TERM variable to be the same in your .rc and in
.tmux.conf files. To know if something is wrong, I recommend using
the :checkhealth command inside neovim.
Colors now behave the same in and out of tmux :)
The file for the sxhkd configuration is on my GitHub, line 06.
I know that necroposting isn't good, but I had to do it, as this problem had haunted me for several months and it was very hard to find a solution: even though I searched the web on a variety of forums and FAQs and read the docs of the software in question, I couldn't find anything relevant.
| Terminal colors look wrong when using tmux with termite |
1,627,423,012,000 |
I want to set nvim(Neovim) as my default editor, I have tried to edit my .bashrc and add this two line:
export EDITOR=nvim
export VISUAL=nvim
and then
$ source .bashrc
but it didn't work. Proof:
$ sudo visudo
visudo: no editor found (editor path = /usr/bin/vi)
How can I set that correctly?
|
sudo by default sanitizes your environment: variables you set for your user account won't be visible in the process started by sudo. You can run sudo with the -E (--preserve-env) flag:
sudo -E visudo
You can add VISUAL and EDITOR to the list of environment variables that sudo preserves by default by editing /etc/sudoers and adding:
Defaults env_keep += "VISUAL EDITOR"
Or you can set EDITOR and VISUAL in root's .bashrc file.
| Can't change Arch default editor |
1,627,423,012,000 |
Background:
I'm just setting up an install of Arcolinux. I have a keyboard shortcut SUPER+Enter to launch alacritty.
What I want is to have alacritty automatically create and start in a new tmux session if there are none that haven't been attached to.
Or attach to an existing tmux session if nothing is attached to it.
I got this working by adding the following into alacritty.yml:
shell:
program: /usr/bin/bash
args:
- -l
- -c
- "tmux ls | grep -v attached && tmux attach || tmux"
This all works exactly as I would like, apart from one thing: the colors in nvim are messed up using this method (darker, so that visual mode has the same highlight color as the background, which is annoying).
I found several related issues and have tried solutions from there:
e.g
I have the following in my tmux config (as well as some other variants including a 2 liner):
set -ag terminal-overrides ",xterm-256color:RGB"
I've made sure the TERM variable is set to xterm-256color and I also tried setting background to "dark" in nvim.
None of this seems to help when I launch nvim from a tmux session connected to using the above configuration in alacritty.
However if I remove the alactritty configuration and run the command to connect to tmux manually everything works fine and nvim looks as expected:
/usr/bin/bash -l -c "tmux ls | grep -v attached && tmux attach || tmux"
Any ideas why I'm getting different results when launching from the alacritty config?
FYI, part of the reason I am doing it this way is that I have no idea where the binding for SUPER+Enter is set; I can't find it in any config files or settings (is there a better way to chase it down?), so thoughts on that might be useful too.
|
I hope this helps someone:
I solved this but I'm still a little confused.
When I ran :checkhealth from nvim it was confirmed that there was an issue with the TERM variable. It reported that TERM was set to tmux-256color and gave this warning:
WARNING Neither Tc nor RGB capability set. True colors are disabled.
|'termguicolors'| won't work properly.
The confusing part is that you can set a term in alacritty but the docs say:
# This value is used to set the `$TERM` environment variable for
# each instance of Alacritty. If it is not present, alacritty will
# check the local terminfo database and use `alacritty` if it is
# available, otherwise `xterm-256color` is used.
and when I ran echo $TERM it returned xterm-256color so I left it, however it turns out the solution was to explicitly set TERM in alacritty config to xterm-256color:
env:
TERM: "xterm-256color"
this also needs to be set in your tmux config (but I had that already):
set -ag terminal-overrides ",xterm-256color:RGB"
| Colors different when running nvim through tmux using alacritty config |
1,627,423,012,000 |
How can I search for a string in vim while ignoring any whitespace (or particularly line breaks) between any characters?
And ideally thereby make a function that can search ignoring all whitespace.
I found a way to ignore whitespace in regex. Not really elegant: just insert \s* between every character. So making this a function would really be needed.
https://stackoverflow.com/questions/4590298/how-to-ignore-whitespace-in-a-regular-expression-subject-string
I also have found a way to search around line breaks in vim:
vim search around line breaks
However, the last link gives a solution only if the line break occurs where a space was, whereas I want to ignore all whitespace.
So when I search helloworld here:
blabla blalba hell
oworld bla bla h
elloworl bla bla
It should match it twice, despite the line breaks.
I thought I basically need to alter the function from the vim link a bit which changes the search to:
h\n?e\n?l\n?l\n?o\n? \n?w\n?o\n?r\n?l\n?d
Or:
h\s_?e\s_?l\s_?l\s_?o\s_? \s_?w\s_?o\s_?r\s_?l\s_?d
But I have no idea how to alter the function for that.
|
The Search for visually selected text topic on the Vim Tips Wiki has a mapping that searches for the current visual selection, irrespective the amount and type of whitespace in between the words. You can use it like the built-in * mapping, to search (ignoring spaces) for the current selection. Very handy!
However, you want even more indifference to whitespace, allowing line breaks (and other white space?) at any position in the text. That's possible as well. You can adapt the current search pattern (stored in register /, accessible from Vimscript via @/) with this command:
:let @/ = join(split(@/, '\zs'), '\_s*')
The split() first cuts the current (literal) search into a List of individual characters (so it won't work properly with existing regular expression stuff like \+ or \|!), then join()s it back together with \_s* (matching any amount of whitespace), and assigns it back to the search register.
You can either build a mapping from that (:nnoremap <Leader>/ :let ...<CR>), or incorporate this into the visual mode mapping mentioned above.
| How to search in vim ignoring all whitespace (and making it a function) |
1,627,423,012,000 |
I added Debian's experimental distribution, which should contain neovim 0.5.0-1, to my sources list and ran an update.
deb http://deb.debian.org/debian experimental main contrib non-free
When I run apt-cache policy neovim I can see only 0.4.4-1
Installed: (none)
Candidate: 0.4.4-1
Version table:
0.4.4-1 500
500 http://deb.debian.org/debian bullseye/main amd64 Packages
If I run apt-cache policy nano I can see the latest version from the experimental.
apt-cache policy nano
nano:
Installed: 5.4-2
Candidate: 5.4-2
Version table:
5.8-1 1
1 http://deb.debian.org/debian experimental/main amd64 Packages
*** 5.4-2 500
500 http://deb.debian.org/debian bullseye/main amd64 Packages
100 /var/lib/dpkg/status
|
If you scroll down to the bottom of the neovim package page for experimental, you can see that it’s not available on amd64. That’s because it fails to build there (the build timed out).
A useful tool to figure this out locally is rmadison, in the devscripts package:
$ rmadison neovim
...
neovim | 0.5.0-1 | experimental | source, i386, ppc64el
neovim | 0.5.0-1 | experimental-debug | source
You could add the i386 architecture with dpkg --add-architecture i386, and install that version of the package; or wait for the build to be fixed.
| Why I cannot see and install the latest version of NeoVim 0.5? |
1,627,423,012,000 |
In neovim 0.4.3-3 in normal mode this command :
:put=range(1,4)
will put a numbered list from 1 to 4,
but when I want to put numbers only on blank lines, like this:
:g/^$/norm :put=range(1,14)
it is not working as expected: the empty lines are only highlighted, but the put is not executed. Why?
|
The :normal command only executes complete commands and your :put Ex command is missing an "Enter" at the end to actually execute it.
From :help :normal:
{commands} should be a complete command. If {commands} does not finish a command, the last one will be aborted as if <Esc> or <C-C> was typed. A : command must be completed as well.
You can fix that by adding an extra "Enter" character at the end of your command, which you can enter with:
Ctrl+V, Enter
It will display as a ^M in Vim:
:g/^$/norm :put=range(1,14)^M
(There are ways to avoid having to enter a literal "Enter" in your command. For instance, the :execute command is often used for that.)
But in this case there's a much simpler solution, which is to drop the :normal altogether and just have :g run :put directly!
:g/^$/put=range(1,14)
The :g command will run an Ex command for each line it matches and :put is an Ex command, so you can just cut the middle man here.
Note that what this command does is append 14 new numbered lines after each blank line in your buffer. Not sure if that's actually what you intended with it or not.
| nvim norm command |
1,627,423,012,000 |
How can I start nvim with a bindsym from i3? If I just type
bindsym $mod+F1 exec nvim
nvim doesn't show, since it just runs in the background without a terminal. So how can I invoke nvim with a keybind in i3?
|
Since you tagged your question as Manjaro and rxvt, I'm supposing that the terminal emulator you're using is rxvt-unicode. In this case, you can use:
bindsym $mod+F1 exec --no-startup-id urxvt -e nvim
Even if you use a different terminal, most emulators have a similar option to execute a command on startup.
| Start nvim with a bindsym in i3 |
1,627,423,012,000 |
I have just finished installing Fedora 36 and was in the process of installing my usual software. I personally prefer nvim over vim, but I got used to typing vim; so, I just use an alias alias vim='nvim' in .bashrc.
Using vim directly uses nvim and uses the init.vim; however, using sudo vim doesn't seem to use any of the mappings I wrote. (I linked init.vim with .vimrc with ln -s .config/nvim/init.vim .vimrc). I read that sudo uses another file other than .bashrc, but I don't want to create aliases everywhere.
Found some answers recommending using sudo update-alternatives --config vim and choosing nvim from a "list", but I don't get any output when running the command; it just gives me a new prompt. How do I make sudo update-alternatives --config vim return the "list"? Or is there a better way to do it other than update-alternatives?
|
As far as I can see, sudo changes your user id, so all configurations you made specifically for your normal user do not apply when you call something with sudo (meaning if you have a .vimrc, it will not apply when calling vim with sudo). In addition, as far as I know, sudo cannot read aliases you created for bash anyway, meaning if you type alias l=ls; sudo l, sudo would not know what the command l means.
update-alternatives is a useful tool that creates and handles links to binaries, so if you have, for instance, different versions of vim on your system, you can use update-alternatives to change which version is called when you call vim. An alternative can be configured with the --config flag you tried, but only after it has been created with the --install flag.
Here is a tutorial going through how to do that for python: link.
However, I am not sure how this is going to help you with your problem; it could be that by creating an alternative you allow sudo to find the program you want.
I am not sure if this is advisable (probably not), but a simple and dirty solution to your problem would be to just rename the binary, or create a link to the nvim binary with the name vim. This way every environment will use this binary when calling it. But learning and using update-alternatives is the cleaner and more robust way, I think.
Personally, I go the lazy route and type out the correct binary names when using sudo. I have an alias v=vim, but when using sudo I am probably doing something serious, so I may as well be precise in what I want; still, I can understand if you want to use your aliases everywhere.
| sudo vim and vim open different editors |
1,627,423,012,000 |
Each time I open NeoVim, I have to type :so .vimrc (it is located inside of my $HOME folder) for it to load the file.
I've tried googling my way out of it, but I can't seem to get any results on what I want to do. The results are stuff like "auto source vimrc when saved", "reload vimrc without restart" etc. and none of these worked, either.
Is it possible to automatically source that file upon startup instead of manually typing it each time I open nvim?
|
The NeoVim editor uses ~/.config/nvim/init.vim.
See the vimrc-intro section in the NeoVim manual.
You could also set VIMINIT to the Ex command so ~/.vimrc to force the sourcing of the ~/.vimrc file, as described in the $MYVIMRC section.
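Nvim's own documentation (:help nvim-from-vim) suggests a third option: a tiny ~/.config/nvim/init.vim that re-uses the classic ~/.vim setup and ~/.vimrc. A sketch that writes it (standard path assumed; note it overwrites any existing init.vim):

```shell
# create an init.vim that loads the legacy Vim configuration
nvim_dir="${XDG_CONFIG_HOME:-$HOME/.config}/nvim"
mkdir -p "$nvim_dir"
printf '%s\n' \
    'set runtimepath^=~/.vim runtimepath+=~/.vim/after' \
    'let &packpath = &runtimepath' \
    'source ~/.vimrc' > "$nvim_dir/init.vim"
```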
| How can I set NeoVim to automatically source .vimrc? |
1,627,423,012,000 |
I'm trying to run the command:
nvim "./some-file" '+/Text (with/slash)'
But I get the following error:
Error detected while processing command line:
E486: Pattern not found: Text (with
And the command line arguments after running :exe '!tr "\0" " " </proc/' . getpid() . '/cmdline' gives me:
nvim /tmp/.tmpxn2hIQ +/GitHub (bookit/issues)
But I don't really know how it's processing it.
Am I missing something about shell expansion?
I can run the following with the expected result of '+/Text (with/slash)':
echo '+/Text (with/slash)'
|
It is not a shell issue; it is that nvim doesn't understand this pattern as a valid search pattern. I'm afraid the only solution would be to use a backslash:
nvim "./some-file" '+/Text (with\/slash)'
By the way my nvim gives me the different error:
$ cat afile
a
Text (with/slash)
c
$ nvim -u NONE afile '+/Text (with/slash)'
Error detected while processing command line:
E492: Not an editor command: /Text (with/slash)
Press ENTER or type command to continue
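To confirm that quoting is not the culprit, you can check what the single-quoted argument expands to: the shell delivers the slash intact as one argv entry, so the escaping has to happen for Vim's :/ command, not for the shell. A quick check:

```shell
# single quotes pass the string through unchanged (one argv entry)
printf '%s\n' '+/Text (with/slash)'      # -> +/Text (with/slash)
# the form Vim accepts, with the inner slash escaped for :/
printf '%s\n' '+/Text (with\/slash)'     # -> +/Text (with\/slash)
```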
| Neovim open file with search pattern not escaped? |
1,627,423,012,000 |
I want to create a simple script to launch nvim (not gvim) in separate terminal window (I'm using urxvt term). Currently I have:
#!/usr/bin/env bash
exec urxvt -hold -e "vim"
It seems legit and works, but the problem is that the vim theme is not loaded when opening the terminal (probably because .bashrc is not read, or some other weird issue with base16-shell).
Plugins do load, though, which means that nvim's init file is loaded. I tried to do something like
exec urxvt -hold -e "source <absolute_path>/.bashrc; vim"
to force base16-shell to load the terminal theme (which might be a dependency of vim's) but it still doesn't work.
I feel that I'm missing something but I can't get it right. How to get this script working?
|
Ok, so I solved the problem. It turned out that the -e flag runs a command without actually launching an interactive bash shell, which means that bash doesn't read .bashrc on startup.
As base16-shell initializes the terminal theme (which is indeed a dependency of vim's base16 theme) by running a script from .bashrc, nvim ends up launched with default colors (the dependency is not loaded, so base16-vim can't initialize properly).
So the solution is to launch an interactive bash shell explicitly, so that it reads .bashrc and loads the base16 theme, and only after that launch nvim (which is aliased to vim in my case).
Here is the whole script:
#!/usr/bin/env bash
# -hold urxvt option is not needed as vim stays running
# -i bash option to run interactively
exec urxvt -e bash -i -c "vim"
| Exec vim in new urxvt window preserving theme |
1,627,423,012,000 |
I'm using yamlfix with ale in vim.
I followed these steps because I wanted to remove the automatic "---" adding on top of the file every time I save my work (and some others default configurations).
For some reason, the file is correctly fixed, but my configuration is skipped..
So I decided to try with CLI in order to test my config.
yamlfix exits without error, fixes my file, but is completely skipping my configuration..
The configuration is in ~/pyproject.toml:
# pyproject.toml
[tool.yamlfix]
explicit_start = false
The command is
yamlfix -c ~/pyproject.toml file.yaml
Am I missing something? Do I need something more?
|
When running yamlfix directly from the command line (or via a shell spawned from Vim, and not via e.g. maison), your TOML configuration file should contain no section headers:
$ cat myconfig.toml
explicit_start = false
You may then run yamlfix from the command line like so:
$ cat test.yml
---
test: some test data
$ yamlfix -c myconfig.toml test.yml
[+] YamlFix: Fixing files
[+] Fixed test.yml
[+] Checked 1 files: 1 fixed, 0 left unchanged
$ cat test.yml
test: some test data
This is described in the documentation.
You may also use an environment variable to trigger the same behaviour without needing a separate configuration file. This may be what you may want to do if removing the YAML document start marker is the only setting you want to change from the default:
$ cat test.yml
---
test: some test data
$ YAMLFIX_EXPLICIT_START=false yamlfix test.yml
[+] YamlFix: Fixing files
[+] Fixed test.yml
[+] Checked 1 files: 1 fixed, 0 left unchanged
$ cat test.yml
test: some test data
| yamlfix not using configuration + (neo)vim usage |
1,438,790,520,000 |
I'm trying to use OpenConnect to connect to my company's Cisco VPN (AnyConnect)
The connection seems to work just fine, what I'm not understanding is how to set up routing. I'm doing this from the command line.
I use the default VPN script to connect like this:
openconnect -u MyUserName --script path_to_vpnc_script myvpngateway.example.com
I type in my password, and I'm connected fine, but my default route has changed to force all traffic down the VPN link, whereas I just want company traffic down the VPN link.
Are there some variables that I need to be putting into the vpnc-script? It's not very clear how this is done.
|
Use the following bash wrapper script to call the vpnc-script. In the wrapper script, the routes to be sent over the VPN connection are specified via the ROUTES variable.
#!/bin/bash
#
# Routes that we want to be used by the VPN link
ROUTES="162.73.0.0/16"
# Helpers to create dotted-quad netmask strings.
MASKS[1]="128.0.0.0"
MASKS[2]="192.0.0.0"
MASKS[3]="224.0.0.0"
MASKS[4]="240.0.0.0"
MASKS[5]="248.0.0.0"
MASKS[6]="252.0.0.0"
MASKS[7]="254.0.0.0"
MASKS[8]="255.0.0.0"
MASKS[9]="255.128.0.0"
MASKS[10]="255.192.0.0"
MASKS[11]="255.224.0.0"
MASKS[12]="255.240.0.0"
MASKS[13]="255.248.0.0"
MASKS[14]="255.252.0.0"
MASKS[15]="255.254.0.0"
MASKS[16]="255.255.0.0"
MASKS[17]="255.255.128.0"
MASKS[18]="255.255.192.0"
MASKS[19]="255.255.224.0"
MASKS[20]="255.255.240.0"
MASKS[21]="255.255.248.0"
MASKS[22]="255.255.252.0"
MASKS[23]="255.255.254.0"
MASKS[24]="255.255.255.0"
MASKS[25]="255.255.255.128"
MASKS[26]="255.255.255.192"
MASKS[27]="255.255.255.224"
MASKS[28]="255.255.255.240"
MASKS[29]="255.255.255.248"
MASKS[30]="255.255.255.252"
MASKS[31]="255.255.255.254"
export CISCO_SPLIT_INC=0
# Create environment variables that vpnc-script uses to configure network
function addroute()
{
local ROUTE="$1"
export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_ADDR=${ROUTE%%/*}
export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_MASKLEN=${ROUTE##*/}
export CISCO_SPLIT_INC_${CISCO_SPLIT_INC}_MASK=${MASKS[${ROUTE##*/}]}
export CISCO_SPLIT_INC=$((${CISCO_SPLIT_INC}+1))
}
# Old function for generating NetworkManager 0.8 GConf keys
function translateroute ()
{
local IPADDR="${1%%/*}"
local MASKLEN="${1##*/}"
local OCTET1="$(echo $IPADDR | cut -f1 -d.)"
local OCTET2="$(echo $IPADDR | cut -f2 -d.)"
local OCTET3="$(echo $IPADDR | cut -f3 -d.)"
local OCTET4="$(echo $IPADDR | cut -f4 -d.)"
local NUMADDR=$(($OCTET1*16777216 + $OCTET2*65536 + $OCTET3*256 + $OCTET4))
local NUMADDR=$(($OCTET4*16777216 + $OCTET3*65536 + $OCTET2*256 + $OCTET1))
if [ "$ROUTESKEY" = "" ]; then
ROUTESKEY="$NUMADDR,$MASKLEN,0,0"
else
ROUTESKEY="$ROUTESKEY,$NUMADDR,$MASKLEN,0,0"
fi
}
if [ "$reason" = "make-nm-config" ]; then
echo "Put the following into the [ipv4] section in your NetworkManager config:"
echo "method=auto"
COUNT=1
for r in $ROUTES; do
echo "routes${COUNT}=${r%%/*};${r##*/};0.0.0.0;0;"
COUNT=$(($COUNT+1))
done
exit 0
fi
for r in $ROUTES; do
addroute $r
done
exec /etc/openconnect/vpnc-script
Then connect as follows:
openconnect -u myusername --script wrapper-script -b vpngateway.example.com
| OpenConnect: Setting default routes |
1,438,790,520,000 |
I have ssh and openconnect installed but when I proceed to start or stop the ssh service, I get the following error:
Failed to start ssh.service: Unit ssh.service not found.
Also, when I try sudo apt-get install ssh I get the following:
sudo apt-get install ssh
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
ncurses-term openssh-server openssh-sftp-server ssh-import-id
Suggested packages:
ssh-askpass rssh molly-guard monkeysphere
The following NEW packages will be installed:
ncurses-term openssh-server openssh-sftp-server ssh ssh-import-id
0 upgraded, 5 newly installed, 0 to remove and 193 not upgraded.
Need to get 640 kB of archives.
After this operation, 5.237 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Which I find confusing. If I do which ssh, I get:
/usr/bin/ssh
How can the binary be there if apt-get thinks the package is not installed?
Also, when calling ssh <valid-IP-address>, I get the following error:
ssh: connect to host port 22: No route to host
But if I use openconnect and connect to a VPN, ssh work without problems.
What am I missing? I'm running Ubuntu 16.04.
|
The ssh binary, the SSH client, is provided by the openssh-client package, which is installed on your system.
The ssh service runs the SSH server, provided by the openssh-server package, which isn’t installed on your system.
The ssh package is a meta-package which installs both the client and the server.
| ssh installed but I get the error: Failed to start ssh.service: Unit ssh.service not found |
1,438,790,520,000 |
I want to connect to a VPN using openconnect from a CentOS 7 terminal. I only have one terminal because I am in an SSH session. I connect like this:
openconnect -u username us.myprovider.net
I need to run the VPN in the background and then do other things in the foreground. Currently, I start the VPN, press Ctrl + Z and then run bg to send it to the background. But this seems to close the VPN connection. How can I keep it running in the background?
|
According to the Openconnect documentation, the option you would want to try would be:
-b,--background
Continue in background after startup
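Openconnect also has a --pid-file option (per its manual), which pairs naturally with -b so the backgrounded process can be stopped cleanly later. A minimal sketch of the start/stop pattern; sleep stands in for openconnect here, since the daemon itself is not assumed to be installed, and the pid-file path is made up:

```shell
# stand-in demo: 'sleep 30' plays the role of the backgrounded openconnect.
# Real usage might look like:
#   openconnect -b --pid-file=/var/run/openconnect.pid -u username us.myprovider.net
#   kill -INT "$(cat /var/run/openconnect.pid)"   # SIGINT logs the session off cleanly
sleep 30 &
echo $! > /tmp/demo.pid
kill -TERM "$(cat /tmp/demo.pid)"       # stop the backgrounded process by recorded PID
wait "$(cat /tmp/demo.pid)" 2>/dev/null # reap it
echo "stopped"
rm -f /tmp/demo.pid
```

The pid-file makes the later shutdown independent of the shell that started the connection, which is exactly what you want in an SSH session.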
| How can I use openconnect in the background |
1,438,790,520,000 |
I have been having some problems with openconnect in my Arch Linux (Antergos to be precise) and I have no idea what's causing it. (Although I'm used to linux and all, I'm very new to VPNs and openconnect.)
I'm trying to connect to my University's VPN via 2 methods. I start by following the instructions, which simply say to create a Cisco AnyConnect Compatible VPN and input the name and gateway.
When I do it this way and try to connect via the network-manager applet it doesn't work. When I flip the VPN switch it simply flips right back immediately and that's it. No error messages or anything.
The second approach I'm trying is via command line. So I try this
$ sudo openconnect -u myusername my.gateway.edu
(I'm replacing the actual gateway with my.gateway.edu and also the username and blurring relevant IPs from now on)
This is the log I get from that input:
POST https://my.gateway.edu/
Connected to 164.**.**.**:443
SSL negotiation with my.gateway.edu
Server certificate verify failed: signer not found
Certificate from VPN server "my.gateway.edu" failed verification.
Reason: signer not found
To trust this server in future, perhaps add this to your command line:
--servercert sha256:bb2476a96b88357fe74f28a347ba549a2af4bea8668e30a77e1a8295f466bfdc
Enter 'yes' to accept, 'no' to abort; anything else to view: yes
Connected to HTTPS on my.gateway.edu
Got HTTP response: HTTP/1.1 401 Unauthorized
Error generating GSSAPI response:
gss_init_sec_context(): Unspecified GSS failure. Minor code may provide more information
gss_init_sec_context(): SPNEGO cannot find mechanisms to negotiate
Server 'my.gateway.edu' requested Basic authentication which is disabled by default
GET https://my.gateway.edu/
Connected to 164.**.**.**:443
SSL negotiation with my.gateway.edu
Server certificate verify failed: signer not found
Connected to HTTPS on my.gateway.edu
Got HTTP response: HTTP/1.1 401 Unauthorized
No more authentication methods to try
GET https://my.gateway.edu/
Please enter your username.
POST https://my.gateway.edu/auth
Please enter your password.
Password:
POST https://my.gateway.edu/auth
Got CONNECT response: HTTP/1.1 200 CONNECTED
CSTP connected. DPD 90, Keepalive 32400
Connected as 169.**.***.**, using SSL
DTLS handshake failed: Resource temporarily unavailable, try again.
Failed to open tun device: No such device
Set up tun device failed
Unknown error; exiting.
I have asked the University's IT support but they also don't know what's happening (I think they're not very familiar with Arch Linux). I have tried some other things such as using the flag --script /etc/vpnc/vpnc-script but the result is the same.
EDIT
I have recently come across this website via the IT people that says that I have to create a tunnel device before connecting. Even after doing that the results of sudo openconnect -u myusername my.gateway.edu --interface tun1 are still the same.
|
After I created a tunnel device using this link the CLI approach worked, even though the GUI approach still failed. I also installed networkmanager-vpnc through pacman, but I don't think that is related to anything.
I also found out through the IT people that adding the --http-auth=Basic flag gets rid of some of the errors. It's worth noting that even after all this I still get a DTLS handshake failed message, even though I can connect to the servers I need to.
| Openconnect won't connect in Arch Linux |
1,438,790,520,000 |
I tried using openconnect today for the first time to connect to my organization's VPN. However, once connected, it runs in the foreground of the terminal and the only way I could see to close the connection was to use CTRL-C. Is this an acceptable way to close the openconnect session cleanly? If not, what is the preferred method?
|
Yes, Ctrl-C (i.e. SIGINT) cleanly shuts it down, according to https://www.infradead.org/openconnect/manual.html#heading5.
Personally I run openconnect in a terminal and use Ctrl-C to shut it down; some people might prefer to use NetworkManager, systemd-networkd, etc. to manage openconnect connections.
| How to shut down openconnect cleanly? |
1,438,790,520,000 |
I recently upgraded to fedora 25. Since then my VPN connection via openconnect (Cisco AnyConnect Compatible VPN) ceased to work.
When I now try to define a new equivalent VPN connection, I get the message
Error: unable to load VPN connection editor
This appears under both Wayland and X. I have
OpenConnect version v7.07; and I have NetworkManager-openconnect-1.2.4-1.fc25.x86_64.
Can you think of ways of getting the editor to work again? Or
Can you point to a way to manually define such a connection, circumventing gnome?
|
You need to install:
NetworkManager-openconnect-gnome
| openconnect VPN ceased to work after Fedora upgrade |
1,438,790,520,000 |
Currently I am using the following command for executing authentication request to obtain the server certificate (FINGERPRINT) and OpenConnect-Cookie:
openconnect --authenticate --user=<username> "VPN host"
Here I always have to enter my password at a prompt that appears later.
Is there an option available to pass-over the password to OpenConnect already in the upper command?
For example, by extending the command like...
openconnect --authenticate --user=<username> password=<password> "VPN host"
... ?
The challenge is:
The user RuiFRibeiro had the idea just to echo the password within the command. Unfortunately this does not work in our case, because the server provides one more user prompt before reaching the second prompt (= password prompt).
It will happen like that:
First user prompt: the server says
"Please choose if you want to tunnel all traffic or only specific traffic. Type in Tunnel all or Tunnel company."
Second user prompt: the server says
"Please enter your password."
As you can see, a simple echo would give the wrong answer to the wrong question. :-)
For a possible expect-script the real (exact) server request before inserting text is like followed:
First prompt: GROUP: [tunnel MyCompany|tunnel all]:, answer-insertion should be tunnel MyCompany
Second prompt: Password:, answer-insertion should be 123456789
|
Usually, VPN software does not accept the user's password as a command-line argument, because it is considered a security risk.
A possible solution is feeding the password via a pipe as in:
echo -e "Tunnel all\nYourPassword" | openconnect --authenticate --user=<username> "VPN host"
If we are talking about you being interested in this method to write a script:
be sure to understand the security implications of having your password in a file, and restrict the read rights of that file only to the user running the openconnect command.
PS Replace YourPassword with your real password
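Since printf is more portable than echo -e, the two prompts (group first, then password) can also be answered this way; the group name and password below are just the question's example values:

```shell
# each line answers one server prompt, in order:
# 1) GROUP: [tunnel MyCompany|tunnel all]:   2) Password:
printf '%s\n' 'tunnel MyCompany' '123456789'
# piped into openconnect it would look like (sketch, not verified against your server):
#   printf '%s\n' 'tunnel MyCompany' '123456789' | \
#     openconnect --authenticate --user=<username> "VPN host"
```

Each printf argument becomes one line on stdin, so the answers arrive in the same order the server asks its questions.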
| OpenConnect: Passing-over user password when executing authentication request? |
1,438,790,520,000 |
I've been working at my company for over a year, and have never had this particular issue with my VPN. Unfortunately, I don't know much about networking so I'm a little confused at what's happening. Here's the behavior on a Fedora 25 workstation totally fresh install.
run sudo openconnect --juniper somevpn.com
cat /etc/resolv.conf immediately after the connection is made shows all the various nameservers I can connect to at work.
Trying to actually navigate to any of the sites on the local network fails. And regardless of that, if I check /etc/resolv.conf again just a few seconds after the connection is made, I see that I'm back on my local nameservers, although the VPN process is still running.
So is there some blacklist I'm not aware of? What is going in and rewriting my resolv.conf? I've got the VPN connected on other devices, so I know my credentials are fine, and I'm positive I'm below the maximum number of allowed connections.
|
systemd-resolved usually handles rewriting /etc/resolv.conf based on the network you're connecting to:
When connecting to a network, it updates /run/systemd/resolve/resolv.conf. On some systems /etc/resolv.conf is a symlink to that file; if that is the case, systemd-resolved will change /etc/resolv.conf accordingly. If the symlink is not present, systemd-resolved will not change /etc/resolv.conf.
The same kind of functionality applies to /usr/lib/systemd/resolv.conf: once again, if /etc/resolv.conf is symlinked to it, systemd-resolved will manage /etc/resolv.conf.
Additionally, if this is wrecking your DNS, you can still manually change /etc/resolv.conf after connecting.
You can also run systemctl stop systemd-resolved && systemctl disable systemd-resolved: the disable command prevents it from running at system boot, and stop shuts down the current instance. You will also need to remove the symlink that exists at /etc/resolv.conf.
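To see whether systemd-resolved owns your resolv.conf, check whether it is a symlink. On a real system the check is just readlink /etc/resolv.conf; the sketch below exercises the same logic on a throwaway directory so it can be run anywhere:

```shell
# demo on a temp path; substitute /etc/resolv.conf on a real system
tmpdir=$(mktemp -d)
ln -s /run/systemd/resolve/resolv.conf "$tmpdir/resolv.conf"
target=$(readlink "$tmpdir/resolv.conf")
if [ -n "$target" ]; then
    echo "symlink -> $target (systemd-resolved manages this file)"
else
    echo "plain file (systemd-resolved will leave it alone)"
fi
rm -rf "$tmpdir"
```

If the real file turns out to be a symlink and you want static DNS, replace it with a regular file after stopping systemd-resolved.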
| Namservers reverted to normal shortly after connecting VPN using Openconnect |
1,438,790,520,000 |
I want to access my personal network drive at my university via VPN from home. In the past I have been using NetworkManager for this what worked completely fine. However, recently I moved to ConnMan and I don't know very well how to set it up there.
Thanks to GAD3R I figured out there is a graphical input mask available to set up a VPN-connection in ConnMan's CMST interface.
The previous (successfully working) VPN configuration from NetworkManager looked like that:
[openconnect]
Description=My Company
Host=vpngw2-out.net.provider.com
CACert=(null)
Protocol=anyconnect
Proxy=
CSDEnable=1
CSDWrapper=/home/user/.cisco/csd-wrapper.sh
UserCertificate=(null)
PrivateKey=(null)
FSID=0
StokenSource=disabled
StokenString=
However, this successfully working VPN config from NetworkManager was using a so called CSD-wrapper from Cisco.
The challenge in ConnMan now is: When creating the necessary VPN provisioning file which variant of OpenConnect do I have to select to match the upper specifications? When creating the new provisioning file via ConnMan-CMST there are several OpenConnect-options available:
Provider OpenConnect
OpenConnect.ServerCert
OpenConnect.CACert
OpenConnect.ClientCert
OpenConnect.MTU
OpenConnect.Cookie
OpenConnect.VPNHost
Which one do I have to choose to match the previous configuration of the NetworkManager config? Do I have to mention something special to include the CSD-Wrapper file in ConnMan?
|
Thanks to a comment from GAD3R and the Connman developer mailing list, a friend figured out how to set up the VPN connection. Although a small issue remains, we got it mostly working.
1. Initial situation
The following packages have to be installed on your client machine from where you want to access the host server:
connman
connman-vpn
cmst
openconnect
Furthermore, the script csd-wrapper.sh has been run in your home directory on the client and has created the directory ~/.cisco with several authentication files for your machine.
2. Generating the necessary VPN authentication information by engaging OpenConnect
In a second step you have to execute the OpenConnect authentication request to obtain the server certificate (FINGERPRINT) and a COOKIE that Connman will use to connect to the VPN. OpenConnect prints both values in the terminal. Generate them by running
$ sudo openconnect --csd-wrapper=/home/user/.cisco/csd-wrapper.sh --authenticate --user <username> <hostname>
Afterwards this command will display four variables: POST, COOKIE, HOST and FINGERPRINT. Hereby the fingerprint (starting with sha256:...) acts as a server certificate while the COOKIE is what it sounds like.
3. Creating the VPN provisioning file for Connman
In contrast to NetworkManager the Connman is using so called VPN provisioning files for each VPN connection from where it takes the information on how to connect to the VPN host. Therefore in a third step the previously generated authentication data has to be pasted into this VPN provisioning file that Connman will utilize to connect to the server. To do so we create the file /var/lib/connman-vpn/<connection-name>.config based on the following structure:
[global]
Name = VPN name, for example "My Company VPN" (without quotes)
[provider_openconnect]
Type = OpenConnect
Name = VPN Provider name, for example "My Company Cisco VPN" (without quotes)
Host = <VPN host IP address>
Domain = <VPN host domain>
OpenConnect.ServerCert = <paste the output of FINGERPRINT from the previous openconnect command>
OpenConnect.Cookie = <paste the output of COOKIE from the previous openconnect command>
Afterwards save and close the file.
4. Reboot your machine and check VPN connection
Reboot your system and you will find the newly created VPN connection listed in the VPN tab of the Connman System Tray (CMST) GUI. Select it, click on "connect", and after a few seconds the VPN connection to your VPN host will be established. Now you can easily access the VPN host from the file manager of your choice.
5. Eyesore: Generated cookie is only valid for a few hours
After a few hours your previously working VPN connection will stop working. When checking /var/log/syslog, the connection attempt complains about failed verification of the server certificate:
Aug 24 00:14:51 <hostname> connmand[444]: ipconfig state 2 ipconfig method 1
Aug 24 00:14:51 <hostname> connmand[444]: vpn0 {create} index 23 type 65534 <NONE>
Aug 24 00:14:51 <hostname> connmand[444]: vpn0 {update} flags 4240 <DOWN>
Aug 24 00:14:51 <hostname> connmand[444]: vpn0 {newlink} index 23 address 00:00:00:00:00:00 mtu 1500
Aug 24 00:14:51 <hostname> connmand[444]: vpn0 {newlink} index 23 operstate 2 <DOWN>
Aug 24 00:14:51 <hostname> connman-vpnd[365]: vpn0 {create} index 23 type 65534 <NONE>
Aug 24 00:14:51 <hostname> connman-vpnd[365]: vpn0 {update} flags 4240 <DOWN>
Aug 24 00:14:51 <hostname> connman-vpnd[365]: vpn0 {newlink} index 23 operstate 2 <DOWN>
Aug 24 00:14:51 <hostname> connmand[444]: ipconfig state 2 ipconfig method 1
Aug 24 00:14:51 <hostname> openconnect[4476]: Connected to <VPN server IP>:443
Aug 24 00:14:51 <hostname> openconnect[4476]: SSL negotiation with <VPN server IP>
Aug 24 00:14:51 <hostname> openconnect[4476]: Server certificate verify failed: signer not found
Aug 24 00:14:51 <hostname> openconnect[4476]: Connected to HTTPS on <VPN server IP>
Aug 24 00:14:51 <hostname> openconnect[4476]: Got inappropriate HTTP CONNECT response: HTTP/1.1 401 Unauthorized
Aug 24 00:14:51 <hostname> connmand[444]: vpn0 {dellink} index 23 operstate 2 <DOWN>
Aug 24 00:14:51 <hostname> connmand[444]: (null) {remove} index 23
Aug 24 00:14:51 <hostname> connman-vpnd[365]: vpn0 {dellink} index 23 operstate 2 <DOWN>
Aug 24 00:14:51 <hostname> connman-vpnd[365]: vpn0 {remove} index 23
Aug 24 00:14:51 <hostname> connmand[444]: ipconfig state 7 ipconfig method 1
Aug 24 00:14:51 <hostname> connmand[444]: ipconfig state 6 ipconfig method 1
Hereby the initial authentication COOKIE has changed, so the previously generated cookie is no longer valid. Therefore you have to repeat the above procedure every few hours: create a new COOKIE and paste it into your VPN provisioning file (/var/lib/connman-vpn/<yourvpnname>.config), overwriting the old one. Afterwards restart Connman and your VPN will work again for the next few hours.
Important:
It seems that NetworkManager can trigger the recreation of the COOKIE by itself, while Connman needs to be fed the new cookie via its VPN provisioning file. Probably Connman is missing some kind of interface to launch the openconnect command itself.
6. Workaround to make recreation of the new cookie a bit more comfortable
You can use a bash-script to generate the new cookie and overwrite the old one. Just copy the following text into a *.sh-file, make it executable and run it. The new cookie will be placed into /var/lib/connman-vpn/vpnname.config at the right position automatically. Afterwards restart Connman and the VPN will work fine again.
#!/bin/bash
sed -i "s/^OpenConnect.Cookie =.*$/$( echo '<YOUR-VPN-PASSWORD>' | openconnect --csd-wrapper=/home/user/.cisco/csd-wrapper.sh --authenticate --user=<USERNAME> --authgroup="<YOURGROUP>" --passwd-on-stdin <VPN-HOST-DOMAIN> | grep 'COOKIE=' | sed "s/COOKIE='//; s/'//g; s/^/OpenConnect.Cookie = /")/" <EXTERNAL-FILENAME>
This script will:
Start OpenConnect and execute the OpenConnect authentication request to obtain the server certificate (FINGERPRINT) and a COOKIE
Insert your username into the user prompt
Insert your password into the user prompt
Insert your desired group into the user prompt
Generate a new cookie
Overwrite the old cookie in /var/lib/connman-vpn/vpnname.config with the new cookie
Afterwards you can reconnect to your VPN-host without any problems. Thanks to this script it is more comfortable and way faster to recreate new cookies when necessary.
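The one-liner above is dense, so here is the core sed replacement illustrated on a throwaway config file (the file path, contents and cookie values below are made up for the demo; in the real script the new value is parsed from the openconnect --authenticate output):

```shell
# demo config standing in for /var/lib/connman-vpn/vpnname.config
CONF=/tmp/demo-vpn.config
printf '%s\n' '[provider_openconnect]' 'OpenConnect.Cookie = OLDCOOKIE' > "$CONF"
# in the real script, NEW would come from the openconnect --authenticate output
NEW='OpenConnect.Cookie = NEWCOOKIE'
sed -i "s|^OpenConnect.Cookie =.*$|$NEW|" "$CONF"   # replace the whole cookie line
grep '^OpenConnect.Cookie' "$CONF"
```

Using | as the sed delimiter avoids clashing with any / characters that a real cookie value may contain.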
| ConnMan: How to set up OpenConnect VPN with CSD-Wrapper correctly? |
1,438,790,520,000 |
On a lightweight Debian machine I am using ConnMan instead of NetworkManager. For this I installed ConnMan based on the following packages:
connman
connman-vpn
cmst
Ethernet, wifi, virtual bridges etc. are working completely fine.
However, it seems to be impossible to graphically add a VPN-connection (openconnect) via connman's cmst-GUI.
Do I have to create config-files for every VPN via text editor by hand?
In NetworkManager this was a pretty easy task within the GUI. First, install the packages
network-manager
network-manager-gnome
network-manager-openconnect-gnome
Afterwards it was possible to set up the VPN inside the GUI.
In ConnMan this intention seems to be a bit different. So the final question now is:
How to set up a Cisco AnyConnect compatible VPN-connection (OpenConnect) for use with ConnMan?
|
To configure your VPN file you can use connman_dmenu:
# apt install suckless-tools
$ git clone https://github.com/march-linux/connman_dmenu.git
$ cd connman_dmenu
# ./connman_dmenu
You will be able to connect/disconnect to the configured VPN from the cmst GUI.
Edit :
cmst (connman-ui) has a VPN Editor; it can be enabled from Preferences by checking Advanced Control. The VPN Editor tab will then appear at the bottom of the GUI.
| Debian 9: Is there any GUI to add OpenConnect VPN-connection in ConnMan? |
1,438,790,520,000 |
Update: Solved thanks to comment by @NotTheDr01ds
Original question
(Details of the machines I'm using at end)
I connect to a my Uni's VPN using:
sudo /sbin/modprobe tun && sudo openconnect gucsasa1.cent.gla.ac.uk
I get this output:
POST https://gucsasa1.cent.gla.ac.uk/
Got CONNECT response: HTTP/1.1 200 OK
CSTP connected. DPD 30, Keepalive 20
Connected as 172.20.183.165, using SSL, with DTLS in progress
Established DTLS connection (using GnuTLS). Ciphersuite (DTLS1.2)-(ECDHE-RSA)-(AES-256-GCM).
Unknown DTLS packet type 13, len 16
Then I run the following command to connect via ssh:
ssh -X mymachine
Once connected, my .bashrc on the server tries to automatically launch tmux
(version 2.6):
# Launch tmux
if command -v tmux>/dev/null; then
[[ ! $TERM =~ screen ]] && [ -z $TMUX ] && tmux new-session -A -s main
fi
But it instantly crashes leaving my terminal display looking like this (here I
typed ls to show the problem, but it happens with all stdout):
I also can't see anything I'm typing into the terminal - i.e. it doesn't update
the display until I hit enter on the command (having typed it 'blind').
When using the -X and -Y flags with ssh, I have no problems with GUI
programs. This is specific to stdout rendering in the terminal in tmux.
After googling, I found that typing reset brought back a 'normal' experience,
but also killed the tmux server. Here is the result of typing ls:
As soon as I try to launch tmux, it crashes again and I'm back to the original
problem.
I don't have this problem when I connect to other servers (running Debian 10,
and Ubuntu 20.04, bash and tmux 2.8) using the same client machine and same
terminal.
Does anyone have any ideas of how I can troubleshoot this issue? I've been
googling all day without success.
Client machine
OS: lubuntu 20.04
terminal: st
shell: bash
In tmux session: echo $TERM: st-256color
Outside tmux session: echo $TERM: screen-256color
Server machine
OS: Ubuntu 18.04
terminal: gnome-terminal
shell: bash
Outside tmux session: echo $TERM: st-256color
|
I wasn't able to reproduce this myself using a similar configuration, but it sounds like (and was confirmed in the comments) that there may be a mismatch between the terminfo entry and $TERM (st-256color) on the host.
There are a few things I'd try:
First, see if the same issue happens with a different terminal. In this case, gnome-terminal worked correctly.
Experiment with different TERM setting, such as tmux-256color, screen-256color, or xterm-256color.
(What worked in this case) export TERM=xterm-256color on the client before connecting to the host. TERM=xterm-256color ssh -X mymachine should also work.
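A quick way to test this hypothesis is to check whether the system's terminfo database knows the terminal at all, and to override TERM for a single command only. infocmp is part of ncurses and may not be installed everywhere, hence the guard:

```shell
# does this system's terminfo database know the st entry?
if command -v infocmp >/dev/null 2>&1 && infocmp st-256color >/dev/null 2>&1; then
    echo "st-256color: known"
else
    echo "st-256color: missing, fall back to a common entry"
fi
# override TERM for one command only; stand-in for: TERM=xterm-256color ssh -X mymachine
TERM=xterm-256color sh -c 'printf "TERM=%s\n" "$TERM"'
```

Prefixing the assignment to a single command avoids changing TERM for the rest of the session, so the local terminal keeps working as before.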
| In tmux on remote machine, each new line in terminal is indented to the end of the previously displayed line [closed] |
1,438,790,520,000 |
I frequently use the OpenConnect plugin for NetworkManager to access VPN resources. However, each time I activate the VPN profile it asks me if I would like to accept the certificate. Is there a way to pull the remote VPN certificate so I can persistently put it in my trusted certificate store?
I am using nm-applet to do all of this from a nice tray icon, but the command-line equivalent also requests that I accept the certificate.
tl;dr: How can I retrieve the server certificate?
|
As openconnect/anyconnect are ssl based, you might try openssl:
: | openssl s_client -connect example.com:443 -prexit 2>/dev/null | sed -n -e '/BEGIN\ CERTIFICATE/,/END\ CERTIFICATE/ p'
The first certificate returned is the server's; the last is the CA certificate. You can then copy them to /etc/ssl/certs/ (or the equivalent location on Arch).
(credits for the command go to: https://stackoverflow.com/questions/7885785/using-openssl-to-get-the-certificate-from-a-server#comment19766982_7886248 )
| How to pull VPN Certificate to put in Cert Store? |
1,438,790,520,000 |
Take a look at all openconnect versions & this.
What is the correct way to install the OpenConnect server (ocserv-0.12.3) package on CentOS 7?
I tried these commands :
sudo yum -y install epel-release
sudo yum repolist enabled
sudo yum info ocserv
But it shows me version 0.12.2, not version 0.12.3!
Now how can I install version 0.12.3?
|
As version 0.12.3 is an EPEL candidate, you can install it by downloading the package (RPM) and installing it, or by compiling it from source. However, I recommend installing the package already available in EPEL and waiting for the update rather than hurrying. If you do want 0.12.3 now, you can use:
wget https://kojipkgs.fedoraproject.org//packages/ocserv/0.12.3/1.el7/x86_64/ocserv-0.12.3-1.el7.x86_64.rpm
yum localinstall ocserv-0.12.3-1.el7.x86_64.rpm
Of course you can install it directly via rpm, but this may cause warning messages later when you use yum again:
rpm -i https://kojipkgs.fedoraproject.org//packages/ocserv/0.12.3/1.el7/x86_64/ocserv-0.12.3-1.el7.x86_64.rpm
| What is the correct way to install openconnect(ocserv-0.12.3-1.el7) package on CentOs 7 |
1,438,790,520,000 |
The following bash script is working completely fine:
#!/bin/bash
echo '!PaSsWoRd!' | openconnect --csd-wrapper=/home/user/.cisco/csd-wrapper.sh --authenticate --user=abcde123 --authgroup="tunnel My Company" --passwd-on-stdin vpn.mycompany.com
However, I want to replace the previous input parameters with variables like that:
#!/bin/bash
WRAPPER=/home/user/.cisco/csd-wrapper.sh
USER=abcde123
PASSWD=!PaSsWoRd!
AUTHGROUP=tunnel My Company
DOMAIN=vpn.mycompany.com
echo '$PASSWD' | openconnect --csd-wrapper=$WRAPPER --authenticate --user=$USER --authgroup="$AUTHGROUP" --passwd-on-stdin $DOMAIN
Unfortunately this attempt no longer works. I think I have to add some quoting or similar. Do you know what is wrong with the bash script above?
|
The lack of quotes around the assignment of a value containing ! is the problem here. The shell tries to perform history expansion on the ! character before assigning the value to the variable. Quoting the value as '..' prevents the shell from interpreting its contents as special and keeps it as-is.
The assignments should have been written with quotes (note that AUTHGROUP also needs them, since its value contains spaces):
PASSWD='!PaSsWoRd!'
AUTHGROUP='tunnel My Company'
and pass the variable with quoted expansion
echo "$PASSWD" | openconnect --csd-wrapper="$WRAPPER" --authenticate --user="$USER" --authgroup="$AUTHGROUP" --passwd-on-stdin "$DOMAIN"
Or turn off history expansion in the script temporarily by including the line set +H at the top of the script, and set -H at the end to re-enable it. This is not recommended; it is much better to use the properly quoted approach above.
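The quoting rules are easy to demonstrate. Note also that the question's echo '$PASSWD' uses single quotes, which sends the literal text $PASSWD rather than the password; a short sketch of all three cases:

```shell
# quoted assignments keep special characters and embedded spaces intact
PASSWD='!PaSsWoRd!'
AUTHGROUP='tunnel My Company'
printf '%s\n' '$PASSWD'      # single quotes: prints the literal string $PASSWD
printf '%s\n' "$PASSWD"      # double quotes: prints the actual value
printf '%s\n' "$AUTHGROUP"   # spaces survive as one word
```

Run in an interactive bash, the unquoted !PaSsWoRd! would additionally trigger history expansion, which is the failure mode described above.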
| Bash-Script: How to insert Variables into Bash-Script? |
1,438,790,520,000 |
I config OpenVPN service on CentOS 7 and Clients could connect to the server with no problem.
The problem occurs when the OpenVPN server itself connects to another VPN (an OpenConnect VPN): at that point the clients lose internet access while the server still has it.
I added a forwarding rule between OpenVPN and OpenConnect in iptables:
-A INPUT -s tun0 -o tun1 -j ACCEPT
and vice versa.
What is the reason?
|
Regarding derober's comment: it is correct. What is needed is a NAT masquerade rule for the OpenVPN client range, going out through the OpenConnect interface:
-A POSTROUTING -s OpenVpnRange -o OpenConnectNICName -j MASQUERADE
1,438,790,520,000 |
In OS Windows I used openconnect-gui. After I changed the OS on Debian, I didn't find a version for Debian and I don't understand how to build openconnect-gui for Debian.
|
From the application sources you're linking:
Supported Platforms
Microsoft Windows 7 and newer
macOS 10.11 and newer
This isn't made readily available for Linux.
There is no need to compile a GUI for openconnect, it's already available as a NetworkManager plugin called network-manager-openconnect-gnome at Debian. Please note that this is a plugin for NetworkManager, not a stand-alone program with a command directly usable. As such you're supposed to be using NetworkManager, its own applet GUI nm-applet provided by network-manager-gnome and add there a new VPN configuration of type openconnect (and subtype Cisco or Juniper/Pulse etc.)
If for some reason you really want to access the sources as shipped by Debian, you should first read https://wiki.debian.org/Packaging/SourcePackage . The package description above has a link to the source information. Providing full instructions on how to build it, with all caveats, is out of scope.
| How to build openconnect-gui? [closed] |
1,438,790,520,000 |
I have a VPN server based on CentOS 8 with the OpenConnect server package. I need to allow VPN clients to use their local internet connection for browsing instead of the server-side one. Currently all VPN clients use the server-side internet for browsing.
ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 2c:27:d7:19:03:4a brd ff:ff:ff:ff:ff:ff
inet 200.200.200.3/24 brd 200.200.200.255 scope global dynamic noprefixroute eno1
valid_lft 84701sec preferred_lft 84701sec
inet6 fe80::c53b:410a:9d0f:cc5b/64 scope link noprefixroute
valid_lft forever preferred_lft forever
6: vpns0: <POINTOPOINT,UP,LOWER_UP> mtu 1434 qdisc fq_codel state UNKNOWN group default qlen 500
link/none
inet 10.10.10.1 peer 10.10.10.76/32 scope global vpns0
valid_lft forever preferred_lft forever
inet6 fe80::8da5:409d:a886:5bfb/64 scope link stable-privacy
valid_lft forever preferred_lft forever
ip route
default via 200.200.200.1 dev eno1 proto dhcp metric 100
10.10.10.76 dev vpns0 proto kernel scope link src 10.10.10.1
200.200.200.0/24 dev eno1 proto kernel scope link src 200.200.200.3 metric 100
firewall-cmd --list-all
public (active)
target: default
icmp-block-inversion: no
interfaces: eno1
sources:
services: cockpit dhcpv6-client http https ipsec ssh
ports: 500/udp 4500/udp 443/tcp 443/udp 80/tcp
protocols:
forward: no
masquerade: yes
forward-ports:
source-ports:
icmp-blocks:
rich rules:
rule protocol value="ah" accept
rule protocol value="esp" accept
rule family="ipv4" source address="10.10.10.0/24” masquerade
netstat -rn
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 200.200.200.1 0.0.0.0 UG 0 0 0 eno1
10.10.10.76 0.0.0.0 255.255.255.255 UH 0 0 0 vpns0
200.200.200.0 0.0.0.0 255.255.255.0 U 0 0 0 eno1
|
It was solved by disabling the default route and adding local routes in the ocserv.conf file:
route = xx.xx.xx.0/xx
route = 10.10.10.0/255.255.255.0
route = 192.168.0.0/255.255.0.0
route = fef4:db8:1000:1001::/64
#route = default
| How can i route specific traffic through VPN Client |
1,387,949,048,000 |
I am installing hadoop on my Ubuntu system. When I start it, it reports that port 9000 is busy.
I used:
netstat -nlp|grep 9000
to see if such a port exists and I got this:
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN
But how can I get the PID of the process which is holding it?
|
Your existing command doesn't work because Linux requires you to either be root or the owner of the process to get the information you desire.
On modern systems, ss is the appropriate tool to use to get this information:
$ sudo ss -lptn 'sport = :80'
State Local Address:Port Peer Address:Port
LISTEN 127.0.0.1:80 *:* users:(("nginx",pid=125004,fd=12))
LISTEN ::1:80 :::* users:(("nginx",pid=125004,fd=11))
You can also use the same invocation you're currently using, but you must first elevate with sudo:
$ sudo netstat -nlp | grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 125004/nginx
You can also use lsof:
$ sudo lsof -n -i :80 | grep LISTEN
nginx 125004 nginx 3u IPv4 6645 0t0 TCP 0.0.0.0:80 (LISTEN)
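Without any of these tools, the same information can be dug out of /proc directly: /proc/net/tcp lists sockets with hex-encoded ports plus a socket inode, which root can match against the socket:[inode] links under /proc/<pid>/fd. A sketch of the port-to-hex step and the lookup (column positions per the Linux proc(5) format):

```shell
port=9000
port_hex=$(printf '%04X' "$port")
echo "looking for a local address ending in :$port_hex"   # 9000 -> 2328
# in /proc/net/tcp, field 2 is local_address (ip:port in hex), field 10 is the inode
awk -v p=":$port_hex" '$2 ~ p"$" {print "socket inode:", $10}' /proc/net/tcp
# as root, the owning process is the one holding a matching socket:[inode]
# link somewhere under /proc/<pid>/fd/
```

This is essentially what ss, netstat and lsof do for you, which is why they also need elevated privileges to attribute other users' sockets.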
| Finding the PID of the process using a specific port? |
1,387,949,048,000 |
I have a process I can't kill with kill -9 <pid>. What's the problem in such a case, especially since I am the owner of that process. I thought nothing could evade that kill option.
|
kill -9 (SIGKILL) always works, provided you have the permission to kill the process. Basically either the process must be started by you and not be setuid or setgid, or you must be root. There is one exception: even root cannot send a fatal signal to PID 1 (the init process).
However kill -9 is not guaranteed to work immediately. All signals, including SIGKILL, are delivered asynchronously: the kernel may take its time to deliver them. Usually, delivering a signal takes at most a few microseconds, just the time it takes for the target to get a time slice. However, if the target has blocked the signal, the signal will be queued until the target unblocks it.
Normally, processes cannot block SIGKILL. But kernel code can, and processes execute kernel code when they call system calls. Kernel code blocks all signals when interrupting the system call would result in a badly formed data structure somewhere in the kernel, or more generally in some kernel invariant being violated. So if (due to a bug or misdesign) a system call blocks indefinitely, there may effectively be no way to kill the process. (But the process will be killed if it ever completes the system call.)
A process blocked in a system call is in uninterruptible sleep. The ps or top command will (on most unices) show it in state D (originally for “disk”, I think).
A classical case of long uninterruptible sleep is processes accessing files over NFS when the server is not responding; modern implementations tend not to impose uninterruptible sleep (e.g. under Linux, since kernel 2.6.25, SIGKILL does interrupt processes blocked on an NFS access).
If a process remains in uninterruptible sleep for a long time, you can get information about what it's doing by attaching a debugger to it, by running a diagnostic tool such as strace or dtrace (or similar tools, depending on your unix flavor), or with other diagnostic mechanisms such as /proc/PID/syscall under Linux. See Can't kill wget process with `kill -9` for more discussion of how to investigate a process in uninterruptible sleep.
You may sometimes see entries marked Z (or H under Linux, I don't know what the distinction is) in the ps or top output. These are technically not processes, they are zombie processes, which are nothing more than an entry in the process table, kept around so that the parent process can be notified of the death of its child. They will go away when the parent process pays attention (or dies).
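The difference between a catchable signal and SIGKILL is easy to demonstrate with a throwaway child (sleep stands in for a real program; the durations are arbitrary). Note that an ignored disposition survives exec, so the sleep itself inherits the trap:

```shell
# Child that ignores SIGTERM, then becomes sleep (SIG_IGN survives exec).
sh -c 'trap "" TERM; exec sleep 30' &
pid=$!
sleep 1

kill "$pid"                     # SIGTERM: ignored by the child
sleep 1
if kill -0 "$pid" 2>/dev/null; then after_term=alive; else after_term=dead; fi

kill -9 "$pid"                  # SIGKILL: cannot be caught or ignored
wait "$pid" 2>/dev/null || true # reap it so the PID really disappears
if kill -0 "$pid" 2>/dev/null; then after_kill=alive; else after_kill=dead; fi

echo "after SIGTERM: $after_term, after SIGKILL: $after_kill"
```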
| What if 'kill -9' does not work? |
1,387,949,048,000 |
Sometimes I want to start a process and forget about it. If I start it from the command line, like this:
redshift
I can't close the terminal, or it will kill the process. Can I run a command in such a way that I can close the terminal without killing the process?
|
One of the following 2 should work:
$ nohup redshift &
or
$ redshift &
$ disown
See the following for a bit more information on how this works:
man nohup
help disown
Difference between nohup, disown and & (be sure to read the comments too)
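What nohup does can be verified directly: it starts the command with SIGHUP ignored, so the hangup signal a closing terminal would deliver no longer kills it (sleep stands in for redshift here; durations are arbitrary):

```shell
nohup sleep 30 >/dev/null 2>&1 &
pid=$!
sleep 1

kill -HUP "$pid"                # the signal a closing terminal would send
sleep 1
if kill -0 "$pid" 2>/dev/null; then survived=yes; else survived=no; fi
echo "survived SIGHUP: $survived"

kill "$pid"                     # clean up the test process
wait "$pid" 2>/dev/null || true
```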
| How can I run a command which will survive terminal close? |
1,387,949,048,000 |
I want to have a shell script like this:
my-app &
echo $my-app-pid
But I do not know how the get the pid of the just executed command.
I know I can just use the jobs -p my-app command to grep the pid. But if I want to execute the shell multiple times, this method will not work. Because the jobspec is ambiguous.
|
The PID of the last executed command is in the $! shell variable:
my-app &
echo $!
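Note that `$my-app-pid` from the question isn't a valid variable name anyway (hyphens aren't allowed in shell variable names), so store `$!` in a plain variable right after launching. A small sketch, with sleep standing in for my-app:

```shell
sleep 2 &                 # stands in for: my-app &
my_app_pid=$!             # $! is the PID of the last background job
echo "started with PID $my_app_pid"

wait "$my_app_pid"        # the saved PID can be used later
status=$?
echo "exit status: $status"
```

Saving the PID immediately also sidesteps the ambiguous-jobspec problem when the script launches several copies.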
| How to get the pid of the last executed command in shell script? |
1,387,949,048,000 |
I have started a wget on remote machine in background using &. Suddenly it stops downloading. I want to terminate its process, then re-run the command. How can I terminate it?
I haven't closed its shell window, but as you know, a background job doesn't stop with Ctrl+C or Ctrl+Z.
|
There are many ways to go about this.
Method #1 - ps
You can use the ps command to find the process ID for this process and then use the PID to kill the process.
Example
$ ps -eaf | grep [w]get
saml 1713 1709 0 Dec10 pts/0 00:00:00 wget ...
$ kill 1713
Method #2 - pgrep
You can also find the process ID using pgrep.
Example
$ pgrep wget
1234
$ kill 1234
Method #3 - pkill
If you're sure it's the only wget you've run, you can use the command pkill to kill the job by name.
Example
$ pkill wget
Method #4 - jobs
If you're in the same shell from which you ran the job that's now backgrounded, you can check whether it's still running with the jobs command, and also kill it by its job number.
Example
My fake job, sleep.
$ sleep 100 &
[1] 4542
Find its job number. NOTE: the number 4542 from the earlier output is the process ID, not the job number.
$ jobs
[1]+ Running sleep 100 &
$ kill %1
[1]+ Terminated sleep 100
Method #5 - fg
You can bring a backgrounded job back to the foreground using the fg command.
Example
Fake job, sleep.
$ sleep 100 &
[1] 4650
Get the job's number.
$ jobs
[1]+ Running sleep 100 &
Bring job #1 back to the foreground, and then use Ctrl+C.
$ fg 1
sleep 100
^C
$
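Methods #2 and #3 can be tried safely with a throwaway process (sleep stands in for the stuck wget; `-x` demands an exact name match so unrelated processes aren't caught by a loose pattern):

```shell
sleep 300 &               # stands in for the stuck wget
target=$!

found=$(pgrep -x sleep)   # list PIDs whose command name is exactly "sleep"
echo "matched: $found"

# Kill just our own PID here rather than pkill, in case other
# sleep processes happen to be running on the machine.
kill "$target"
wait "$target" 2>/dev/null || true
```

With a name as common as wget this is usually fine, but before running pkill for real, run pgrep first and check that the list contains only the process you mean.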
| How to terminate a background process? |