date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,530,474,646,000 |
I want to find out what gnome-disks is doing, how to do the same on the command line, and how to undo whatever gnome-disks does. (It cannot undo everything it does itself.)
I have already experimented a little and found out the following: the USB memory thumb drive ("stick") I played with has at least 3 "state levels" to toggle; 2 of them can be switched with gnome-disks's buttons "eject" (on and off) and "power off" (only off).
From highest level to lowest, I discovered:
eject
the eject button in gnome-disks
drive does not disappear, neither from gnome-disks nor elsewhere
command line: eject /dev/sdb
cannot be undone with gnome-disks
undo with: eject --trayclose /dev/sdb
kernel messages (journalctl -k)
eject
sdb: detected capacity change from 30253056 to 0
uneject
sd 4:0:0:0: [sdb] 30253056 512-byte logical blocks: (15.5 GB/14.4 GiB)
sdb: detected capacity change from 0 to 30253056
sdb: [partition details of my drive]
(un)bind
did not find an equivalent in gnome-disks
command line: echo 3-6 > /sys/bus/usb/drivers/usb/unbind
device disappears from gnome-disks entirely
no kernel messages
lsusb -t still sees the device, but does not show class ("Mass Storage") or driver ("usb-storage") any more
/sys/bus/usb/drivers/usb/3-6 directory gone
undo with echo 3-6 > /sys/bus/usb/drivers/usb/bind
this provokes kernel messages
usb-storage 3-6:1.0: USB Mass Storage device detected
scsi host4: usb-storage 3-6:1.0
scsi 4:0:0:0: Direct-Access TOSHIBA TransMemory PMAP PQ: 0 ANSI: 6
sd 4:0:0:0: Attached scsi generic sg2 type 0
sd 4:0:0:0: [sdb] 30253056 512-byte logical blocks: (15.5 GB/14.4 GiB)
sd 4:0:0:0: [sdb] Write Protect is off
sd 4:0:0:0: [sdb] Mode Sense: 45 00 00 00
sd 4:0:0:0: [sdb] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
sdb: [partition details of my drive...]
sd 4:0:0:0: [sdb] Attached SCSI removable disk
power off
the power off button in gnome-disks
device disappears on everything, like physically unplugged
indistinguishable from real unplugging
kernel message:
usb 3-6: USB disconnect, device number 10
How to power off via the command line?
How to power back on without real re-plugging?
For completeness: re-plugging the stick assigns a new device number (11); bus and port stay the same (3-6), and these kernel messages are shown:
usb 3-6: new high-speed USB device number 11 using xhci_hcd
usb 3-6: New USB device found, idVendor=0930, idProduct=6545, bcdDevi>
usb 3-6: New USB device strings: Mfr=1, Product=2, SerialNumber=3
usb 3-6: Product: TransMemory
usb 3-6: Manufacturer: TOSHIBA
usb 3-6: SerialNumber: C03FD5F7713EE2B1B000821E
[plus all kernel messages as quoted under (re-)bind above]
|
How to do power-off on the command line?
echo 1 > /sys/bus/usb/devices/3-2.1/port/disable
The port name is bus_nr-port.of.each.hub.in.the.chain.separated.with.dots, e.g. 3-3 or 3-2.4.6. In both examples the port is on bus 3. In the first example the device is plugged directly into port 3; in the second there is a chain of two hubs: the first hub is in port 2 of bus 3, the second hub is in port 4 of the first, and the device is in port 6 of the second hub.
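The naming can be sketched as a tiny helper (a throwaway function of my own, not a standard tool) that splits such a name into its parts:

```shell
# split a sysfs port name like "3-2.4.6" into bus number and hub-port chain
explain_port() {
  bus=${1%%-*}                                  # text before the first "-"
  chain=$(printf '%s' "${1#*-}" | tr '.' ' ')   # dots separate hub levels
  printf 'bus %s, port chain: %s\n' "$bus" "$chain"
}

explain_port 3-2.4.6   # bus 3, port chain: 2 4 6
```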
How to power back on without real re-plugging?
echo 0 > '/sys/devices/pci0000:00/0000:00:14.0/usb3/3-0:1.0/usb3-port3/disable'
or for 3-2.1 it could be
echo 0 > '/sys/devices/pci0000:00/0000:00:14.0/usb3/3-2/3-2:1.0/3-2-port1/disable'
While the port is enabled you can run pwd -P in the directory containing the disable file to see beforehand the real location of the port management interface of the corresponding hub. For a list of host controllers which establish a USB bus, ls -l /sys/bus/usb/devices/usb* is a good start.
See also how to interpret lsusb for more tips. (Or the kernel docs.)
| How to disconnect and reconnect USB devices on the command line? |
1,530,474,646,000 |
Let's say:
$ ls -l /dev/input/by-id
lrwxrwxrwx 1 root root 10 Feb 10 03:47 usb-Logitech_USB_Keyboard-event-if01 -> ../event22
lrwxrwxrwx 1 root root 10 Feb 10 03:47 usb-Logitech_USB_Keyboard-event-kbd -> ../event21
$ ls -l /dev/input/by-path/
lrwxrwxrwx 1 root root 10 Feb 10 03:47 pci-0000:00:14.0-usb-0:1.1:1.0-event-kbd -> ../event21
lrwxrwxrwx 1 root root 10 Feb 10 03:47 pci-0000:00:14.0-usb-0:1.1:1.1-event -> ../event22
I know interface number 1 (event22) above is non-functional, because bInterfaceProtocol is None for bInterfaceNumber 1:
$ sudo lsusb -v -d 046d:c31c
Bus 002 Device 005: ID 046d:c31c Logitech, Inc. Keyboard K120
Device Descriptor:
bLength 18
bDescriptorType 1
bcdUSB 1.10
bDeviceClass 0 (Defined at Interface level)
bDeviceSubClass 0
bDeviceProtocol 0
bMaxPacketSize0 8
idVendor 0x046d Logitech, Inc.
idProduct 0xc31c Keyboard K120
bcdDevice 64.00
iManufacturer 1 Logitech
iProduct 2 USB Keyboard
iSerial 0
bNumConfigurations 1
Configuration Descriptor:
bLength 9
bDescriptorType 2
wTotalLength 59
bNumInterfaces 2
bConfigurationValue 1
iConfiguration 3 U64.00_B0001
bmAttributes 0xa0
(Bus Powered)
Remote Wakeup
MaxPower 90mA
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 0
bAlternateSetting 0
bNumEndpoints 1
bInterfaceClass 3 Human Interface Device
bInterfaceSubClass 1 Boot Interface Subclass
bInterfaceProtocol 1 Keyboard
iInterface 2 USB Keyboard
HID Device Descriptor:
bLength 9
bDescriptorType 33
bcdHID 1.10
bCountryCode 0 Not supported
bNumDescriptors 1
bDescriptorType 34 Report
wDescriptorLength 65
Report Descriptors:
** UNAVAILABLE **
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x81 EP 1 IN
bmAttributes 3
Transfer Type Interrupt
Synch Type None
Usage Type Data
wMaxPacketSize 0x0008 1x 8 bytes
bInterval 10
Interface Descriptor:
bLength 9
bDescriptorType 4
bInterfaceNumber 1
bAlternateSetting 0
bNumEndpoints 1
bInterfaceClass 3 Human Interface Device
bInterfaceSubClass 0 No Subclass
bInterfaceProtocol 0 None
iInterface 2 USB Keyboard
HID Device Descriptor:
bLength 9
bDescriptorType 33
bcdHID 1.10
bCountryCode 0 Not supported
bNumDescriptors 1
bDescriptorType 34 Report
wDescriptorLength 159
Report Descriptors:
** UNAVAILABLE **
Endpoint Descriptor:
bLength 7
bDescriptorType 5
bEndpointAddress 0x82 EP 2 IN
bmAttributes 3
Transfer Type Interrupt
Synch Type None
Usage Type Data
wMaxPacketSize 0x0004 1x 4 bytes
bInterval 255
Device Status: 0x0000
(Bus Powered)
$
I don't get it, which raises two possible questions:
If the value of bInterfaceProtocol is always None, independent of the host, then what is the point of this unused interface existing?
If the value of bInterfaceProtocol is decided by the kernel, then what condition makes the kernel set it to None?
|
The kernel does not decide bInterfaceProtocol. The value is received from the connected USB device.
A variety of protocols are supported by HID devices. The bInterfaceProtocol
member of an interface descriptor only has meaning if the bInterfaceSubClass
member declares that the device supports a boot interface; otherwise it is 0.
Check the USB Device Class Definition for HID 1.11 for more information.
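In other words: per the HID class specification, the protocol field is only defined when the subclass declares a boot interface. A sketch of that decode rule as a shell helper (my own illustration, not part of lsusb):

```shell
# decode bInterfaceProtocol the way the HID spec defines it:
# it only means Keyboard/Mouse when bInterfaceSubClass is 1 (boot)
hid_protocol() {
  subclass=$1 protocol=$2
  if [ "$subclass" != 1 ]; then
    echo "None (no boot interface)"
    return
  fi
  case $protocol in
    0) echo "None" ;;
    1) echo "Keyboard" ;;
    2) echo "Mouse" ;;
    *) echo "Reserved ($protocol)" ;;
  esac
}

hid_protocol 1 1   # Keyboard  (interface 0 of the K120)
hid_protocol 0 0   # None (no boot interface)  (interface 1)
```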
| Is value of bInterfaceProtocol fixed or decided by Kernel? |
1,530,474,646,000 |
I am trying to run a usbreset program against several devices that lsusb shows with the same device identifier.
I run lsusb to list the devices, and add a | grep [identifier] to only list the devices with that identifier.
I then need to run an awk command to get the bus and device number, which is inserted into a usbreset program (https://github.com/jkulesza/usbreset) to reset all of the devices that match the id.
The command looks like this:
lsusb | grep 1234:a1b1 | while read line ; do awk -F '[^0-9]+' '{ system("sudo ./usbreset /dev/bus/usb/002/"$3) }'; done
where -F '[^0-9]+' helps remove the ":" from the end of the device number, and $3 selects the device number (the fourth column of the lsusb command output: Bus 002 Device 010: ID 1234:a1b1)
This works nicely, but the issue I have is that I have 6 devices with this id, and the awk command cuts off the first result and only prints 5.
user@localhost:~$ lsusb | grep 1234:a1b1
Bus 002 Device 015: ID 1234:a1b1
Bus 002 Device 014: ID 1234:a1b1
Bus 002 Device 013: ID 1234:a1b1
Bus 002 Device 010: ID 1234:a1b1
Bus 002 Device 009: ID 1234:a1b1
Bus 002 Device 008: ID 1234:a1b1
and:
user@localhost:~$ lsusb | grep 1234:a1b1 | while read line ; do awk -F '[^0-9]+' '{ print $3 }'; done
014
013
010
009
008
Any advice to help find out where I am going wrong with this would be great!
|
The line is disappearing because of the read line invocation, not AWK. I would go about this differently:
lsusb -d 1234:a1b1 | while read _ bus _ device _; do
sudo ./usbreset "/dev/bus/usb/${bus}/${device%:}"
done
This uses lsusb’s own ability to filter devices, then reads the bus and device identifiers into the corresponding variables, and gives the appropriate values to usbreset, stripping the trailing “:” from ${device}.
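The disappearing line can be reproduced without lsusb or usbreset at all: read consumes the first line of the pipe, and then the single awk invocation drains all the remaining lines, so the loop body only ever runs once:

```shell
# demo: `read` eats line "a"; awk then consumes and prints the rest
printf 'a\nb\nc\n' | while read line; do awk '{ print }'; done
# prints only "b" and "c"
```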
| lsusb | grep | Awk command is cutting off the first result |
1,530,474,646,000 |
From what I understand, devices connected to different controllers should show up under different USB busses. However, when I connect a keyboard to the xHCI controller, it is still listed under one of the EHCI busses. See the >>>> markers in the listings:
$ lspci | grep -i usb
>>>> 00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 04)
00:1a.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #2 (rev 04)
00:1d.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB EHCI #1 (rev 04)
$ lspci -vs 00:14.0
00:14.0 USB controller: Intel Corporation 8 Series/C220 Series Chipset Family USB xHCI (rev 04) (prog-if 30 [XHCI])
Subsystem: ASUSTeK Computer Inc. 8 Series/C220 Series Chipset Family USB xHCI
Flags: bus master, medium devsel, latency 0, IRQ 27
Memory at ef920000 (64-bit, non-prefetchable) [size=64K]
Capabilities: [70] Power Management version 2
Capabilities: [80] MSI: Enable+ Count=1/8 Maskable- 64bit+
Kernel driver in use: xhci_hcd
So I do indeed have an xHCI controller. It is a separate physical port on the motherboard.
$lsusb
Bus 002 Device 002: ID 8087:8000 Intel Corp.
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 8087:8008 Intel Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
>>>> Bus 004 Device 002: ID 174c:3074 ASMedia Technology Inc. ASM1074 SuperSpeed hub
>>>> Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 014: ID 046d:c03d Logitech, Inc. M-BT96a Pilot Optical Mouse
Bus 003 Device 015: ID 195d:2030 Itron Technology iONE
Bus 003 Device 013: ID 05e3:0608 Genesys Logic, Inc. Hub
Bus 003 Device 012: ID 0424:2228 Standard Microsystems Corp. 9-in-2 Card Reader
Bus 003 Device 011: ID 0424:2602 Standard Microsystems Corp. USB 2.0 Hub
Bus 003 Device 010: ID 0424:2512 Standard Microsystems Corp. USB 2.0 Hub
Bus 003 Device 003: ID 174c:2074 ASMedia Technology Inc. ASM1074 High-Speed hub
>>>> Bus 003 Device 016: ID 03f0:0024 Hewlett-Packard KU-0316 Keyboard
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
The "superspeed" 3.0 hub on bus 004 should be the xHCI controller. The keyboard, however, is attached to bus 003:
$lsusb -t
/: Bus 04.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/6p, 5000M
|__ Port 3: Dev 2, If 0, Class=Hub, Driver=hub/4p, 5000M
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/14p, 480M
>>>>|__ Port 1: Dev 16, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
|__ Port 3: Dev 3, If 0, Class=Hub, Driver=hub/4p, 480M
|__ Port 2: Dev 10, If 0, Class=Hub, Driver=hub/2p, 480M
|__ Port 1: Dev 11, If 0, Class=Hub, Driver=hub/4p, 480M
|__ Port 1: Dev 12, If 0, Class=Mass Storage, Driver=usb-storage, 480M
|__ Port 3: Dev 13, If 0, Class=Hub, Driver=hub/4p, 480M
|__ Port 2: Dev 15, If 0, Class=Human Interface Device, Driver=usbhid, 12M
|__ Port 2: Dev 15, If 1, Class=Human Interface Device, Driver=usbhid, 12M
|__ Port 2: Dev 15, If 2, Class=Human Interface Device, Driver=usbhid, 12M
|__ Port 4: Dev 14, If 0, Class=Human Interface Device, Driver=usbhid, 1.5M
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/2p, 480M
|__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/8p, 480M
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=ehci-pci/2p, 480M
|__ Port 1: Dev 2, If 0, Class=Hub, Driver=hub/6p, 480M
In fact, no matter how I connect devices to physical controllers, they always show up under the same bus. Does anyone have a clue what might be going on?
System
Processor: Intel(R) Core(TM) i7-4771 CPU @ 3.50GHz
OS: Debian GNU/Linux testing (buster) with ACS patch, IOMMU enabled.
Kernel: Linux 4.10.0-acs+ (x86_64)
Version: #3 SMP PREEMPT Sun Feb 26 00:03:48 CET 2017
Processor: Intel(R) Core(TM) i7-4771 CPU @ 3.50GHz : 3900.00 MHz
Board: Asus Z87-PRO
BIOS: AMI version 1707, VT-d/x enabled
|
USB 3.0 in 5 Gbit/s mode isn't compatible with USB 2.0 or earlier, so the way compatibility was implemented is to use one pin pair in the same position as for USB 2.0 for legacy devices, and two new pin pairs for "real" USB 3.0 devices, as you can see e.g. in the pinout on Wikipedia.
So your 00:14.0 xHCI controller is really two controllers in one: A USB 2.0 legacy controller for the "old" pair in each connector, which shows up as bus 3 (with 14 ports), and a "real" USB 3.0 controller for the two "new" pairs in each connector, which shows up as bus 4 (with 6 ports).
Some of your USB connectors will be marked blue on your PC, and they are connected to both controllers. If you plug in a USB 2.0 device, it will physically connect to bus 3, while if you plug in a "real" USB 3.0 device, it will physically connect to bus 4. That's why different devices plugged into the same connector can show up on one or the other bus.
Also note that the legacy controller has a lot more ports, and is also connected to some hubs. I don't know if you connected any external hubs, and how many, but there are also internal hubs on the motherboard.
So it's entirely possible all your connectors just belong to the xHCI controller, and the two other EHCI controllers that lspci shows aren't actually connected to anything (or possibly to connectors on the motherboard).
The way to find out is to connect a USB 2.0 device to each connector in turn, write down on which bus, and under which port (and port of hubs) it shows up. Then repeat the same exercise with a "real" USB 3.0 device, and you should know how the USB connectors are set up.
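One way to see this split from sysfs without plugging anything in: each bus's root hub exposes its speed, which distinguishes the 480M (USB 2.0) and 5000M (USB 3.0) halves of the xHCI controller. A small sketch; the optional directory argument exists only to make the function easy to try against a fake tree:

```shell
# print each USB bus's root-hub speed from sysfs
list_usb_buses() {
  root=${1:-/sys/bus/usb/devices}
  for d in "$root"/usb*; do
    [ -r "$d/speed" ] || continue
    printf '%s: %s Mbit/s\n' "$(basename "$d")" "$(cat "$d/speed")"
  done
}
```

Run list_usb_buses with no argument on the real system; on the hardware above it should show usb3 at 480 and usb4 at 5000 Mbit/s.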
| lsusb lists devices on different PCI controllers under the same USB hub |
1,500,581,594,000 |
Linux Mint 17 Cinnamon (Ubuntu 14.04)
HDA Intel PCH
The microphone works well (tested with VLC's capture mode), but Skype won't catch any sound. Only PulseAudio is listed in Skype's configuration as microphone option. Amplifying the input audio to the very top (>100%) manages to get some sound, but it's distorted and still almost inaudible.
Solutions tested without success:
Trying to configure it through ALSA instead of PulseAudio (many sources, this one for example). No luck tweaking the few options available.
Installing gstreamer-properties to set a different default input (source). No difference.
Uninstalling/disabling PulseAudio (many sources). No good done. This is actually a bad idea.
Adding the options snd-hda-intel model= bit to /etc/modprobe.d/alsa-base.conf (source). This is also a bad idea.
Using gnome-alsamixer (source). Couldn't even launch it.
Adding the package libasound2-plugins:i386 (source). No effect.
|
The SkypeTroubleshooting article in Ubuntu's Help Wiki solved the issue very neatly. I used the "older Ubuntu versions" instructions.
Skype has been known to mess up the mixer settings. So disable the
automatic configuration of the mixer controls in Skype: right-click
with your mouse on the Skype icon in the system tray - Options - Sound
Devices - remove the tick at: Allow Skype to automatically adjust my
mixer levels. Click Apply. Then close Skype (right-click with your
mouse on the Skype icon - Quit).
Then use Synaptic Package Manager to install pavucontrol (Pulse Audio
Volume Controller). Use that application to set up your input device.
Most built-in mics are mono. The default setting on the Input Control
is to lock the R&L channel together. By reading the mono mic as
stereo, PulseAudio cancels the input. Click on the middle button on
the upper right of the control panel to unlock the R&L channel. Move
either the left or right channel to 10 leaving the other channel about
90. You should now see the VU meter sensing sound. Now start Skype again. The test call should register your voice now.
| Fix Skype audio input in Mint 17 Cinnamon |
1,500,581,594,000 |
I've found a solution that doesn't work for me:
audio - Monitoring the microphone level with a command line tool in Linux - Super User
https://superuser.com/questions/306701/monitoring-the-microphone-level-with-a-command-line-tool-in-linux
The problem is that they are using Maximum amplitude to detect sound. However, its value is always the same in my case, no matter whether the recorded audio contains only silence or some sounds. For example:
10 sec of silence (Can be downloaded here: http://denis-aristov.ucoz.com/en/test-mic-silence.wav ):
$ arecord -f S16_LE -D hw:2,0 -d 10 /tmp/test-mic-silence.wav
$ sox -t .wav /tmp/test-mic-silence.wav -n stat
Samples read: 80000
Length (seconds): 10.000000
Scaled by: 2147483647.0
Maximum amplitude: 0.999969
Minimum amplitude: -1.000000
Midline amplitude: -0.000015
Mean norm: 0.202792
Mean amplitude: 0.009146
RMS amplitude: 0.349978
Maximum delta: 0.913849
Minimum delta: 0.000000
Mean delta: 0.001061
RMS delta: 0.005564
Rough frequency: 20
Volume adjustment: 1.000
10 sec with some sounds (Can be downloaded here: http://denis-aristov.ucoz.com/en/test-mic-sounds.wav ):
$ arecord -f S16_LE -D hw:2,0 -d 10 /tmp/test-mic-sounds.wav
$ sox -t .wav /tmp/test-mic-sounds.wav -n stat
Samples read: 80000
Length (seconds): 10.000000
Scaled by: 2147483647.0
Maximum amplitude: 0.999969
Minimum amplitude: -1.000000
Midline amplitude: -0.000015
Mean norm: 0.185012
Mean amplitude: 0.010225
RMS amplitude: 0.334286
Maximum delta: 1.999969
Minimum delta: 0.000000
Mean delta: 0.006213
RMS delta: 0.057844
Rough frequency: 220
Volume adjustment: 1.000
What is the difference? What values to use for sound detection? Or do I have to set something up because something works wrong?
I've just used another computer (a notebook with a built-in microphone). I've recorded two WMA files (with and without sounds) using Windows "Sound Recorder". Converted them to WAV files using audacity and got the following outputs. Maximum amplitudes differ this time:
With sounds:
$ sox -t .wav /tmp/mic-sounds.wav -n stat
Samples read: 581632
Length (seconds): 6.594467
Scaled by: 2147483647.0
Maximum amplitude: 0.999969
Minimum amplitude: -1.000000
Midline amplitude: -0.000015
Mean norm: 0.013987
Mean amplitude: 0.000062
RMS amplitude: 0.065573
Maximum delta: 1.999969
Minimum delta: 0.000000
Mean delta: 0.011242
RMS delta: 0.047009
Rough frequency: 5031
Volume adjustment: 1.000
Without sounds:
$ sox -t .wav /tmp/mic-silence.wav -n stat
Samples read: 372736
Length (seconds): 4.226032
Scaled by: 2147483647.0
Maximum amplitude: 0.029022
Minimum amplitude: -0.029114
Midline amplitude: -0.000046
Mean norm: 0.005082
Mean amplitude: -0.000053
RMS amplitude: 0.006480
Maximum delta: 0.030487
Minimum delta: 0.000000
Mean delta: 0.005815
RMS delta: 0.007285
Rough frequency: 7891
Volume adjustment: 34.348
May it be an indication that there are some problems with the microphone on another computer?
|
(Answer based on various comments, as this method seems to be acceptable, and comments are not guaranteed to stay.)
Look at the first recording ("10 secs of silence") in an audio editor, e.g. Audacity. You'll see a DC (very low frequency) component where the level goes from 1 at 0 s to -1 at 1 s to 0.5 at 1.5 s, and then falls to near zero towards the end. Did you plug in the mic during that time? If yes, you need to wait ca. 10 seconds until the amplitude settles, then measure. If not, you need to filter out the DC (direct current, that is, constant voltage offset) component somehow. sox has several filters you can try.
You can use the sox filters from a shell script without problems. Try e.g. highpass 100, that filters out most of it except for the initial jump.
If filtering out DC components is too much effort, you can also ignore the initial part, and use the remaining part as it is.
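If you'd rather script the comparison than eyeball it, the RMS delta row of the stat output separates the two recordings above much better than Maximum amplitude (0.005564 for silence vs. 0.057844 with sounds). A minimal filter, with a guessed threshold of 0.02 that you would want to tune for your own mic:

```shell
# classify a recording from `sox ... stat` output by its RMS delta;
# exits 0 ("sound") when the value exceeds the (guessed) threshold
is_sound() {
  awk '/^RMS +delta/ { found = 1; loud = ($3 > 0.02) } END { exit !(found && loud) }'
}
```

Note that sox writes the stat report to stderr, so pipe it with 2>&1, e.g. sox /tmp/test-mic-sounds.wav -n stat 2>&1 | is_sound && echo sound.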
| How to monitor microphone volume level? |
1,500,581,594,000 |
I'm running Crunchbang++ (Debian stretch with pulseaudio).
When I plug in my headset, the laptop's mic works but not the one of the headset -- that would be OK. But when I unplug it, I can see in "Sound preferences" the "connector" switching from "Microphone" to "Internal microphone" which is recording constant noise only. But even when I switch back manually to "Microphone" nothing gets recorded (silence).
So right now, I can not do a call on my laptop without the headset on, which is bad when I want to have everybody in the room to be able to listen.
I'd like to use the internal mic without the headset, using the headset's mic when it's plugged is not even that important.
I found lots of posts everywhere on how to keep the internal mic on when plugging in in the headset. But my situation is almost the inverse.
What could cause this strange behavior? ... and how could I fix it?
|
What could cause this behavior? Your laptop most likely has both audio-in channels mixed up:
With the headset detected, it wants to switch to the external microphone (expected behaviour), but actually uses the internal one.
When unplugged, it believes it is switching to the internal one as expected, but chooses the external input (which is floating, and some automatic gain amplifies the noise).
When you manually switch to the external microphone, the system believes there is none, so it records silence for you.
How to fix it? Switch the audio-in assignments in the device tree (I hope your system uses device tree). Create a new dtb and boot with that one.
| Internal mic works when headset is plugged in only |
1,500,581,594,000 |
I know it is possible to check whether the webcam is currently opened or not. But is there a similar way to check whether it is currently being recording from a microphone?
I was poking around a bit in /dev/snd/, had a quick look at Pulseaudio functionality, and was running up and down the web. Unfortunately I couldn't find an easy solution. A general solution that does not rely on Pulseaudio would be ideal.
|
A general solution that does not rely on Pulseaudio would be ideal.
Most if not all popular modern Linux distros use PulseAudio, which opens the ALSA kernel devices and keeps them open at all times, which means the solution must probably involve it.
Also, if PA is installed and running, applications cannot read from/write to the ALSA kernel devices directly, because PA opens them exclusively.
Here's a quick command for PulseAudio which, if it returns any output, means your input devices are being used:
pacmd list-sources | grep RUNNING
For PipeWire that will be:
pactl list sources | grep RUNNING
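If you want the names of the devices in use rather than just a yes/no, the short listing format puts the state in the last column, so a small filter works (this assumes the column layout of pactl list short sources, which is the same under PulseAudio and PipeWire's compatibility layer):

```shell
# print the names of capture devices currently being read from;
# state ("RUNNING"/"IDLE"/"SUSPENDED") is the last column
running_sources() {
  awk '$NF == "RUNNING" { print $2 }'
}
```

Usage: pactl list short sources | running_sources — any output means something is recording.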
| How to detect whether a program records from the microphone? |
1,500,581,594,000 |
I'm running Pop OS on a Lenovo T495s. My onboard microphone was working fine but after updating to 20.04 I am getting no sound from it.
When I run pavucontrol I see two input devices:
Digital Microphone - Family 17h (Models 10h-1fh) Audio Controller Digital Microphone. Shows no input.
Headphones Stereo Microphone (unplugged) - Family 17h (Models 10h-1fh) HD Audio Controller Headphones Stereo Microphone. This does show input.
I'm not quite sure what the first device is. It seems like I want to use the bottom one, but only "Digital Microphone" is available to select in the OS sound settings, or when using a microphone in a browser.
Why is the second microphone not available to use?
|
I had the same problem with Ubuntu 20.04. What I did was update the Linux kernel. Type
uname -a
to find out which version you have, and check whether you are on 5.6 or 5.8.
I was on 5.6-1020 and updated to 5.8 using the following command:
sudo apt install linux-image-5.8.0-23-generic linux-headers-5.8.0-23-generic linux-buildinfo-5.8.0-23-generic linux-modules-5.8.0-23-generic linux-modules-extra-5.8.0-23-generic
I noticed that in PulseAudio Volume Control I can now actually disable the non-working microphone under the Configuration tab, whereas before I couldn't.
| Microphone recognised in pavucontrol but not useable |
1,500,581,594,000 |
Every hardware device (incl. the internal microphone, I suppose) works under some driver.
How do I find out which specific driver controls the internal microphone on a PC running Linux?
|
lspci -v
lspci is a command on Unix-like operating systems that prints a list of detailed information about all PCI buses and devices in the system.
output:
01:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Cedar HDMI Audio [Radeon HD 5400/6300/7300 Series]
	Subsystem: XFX Pine Group Inc. Cedar HDMI Audio [Radeon HD 5400/6300/7300 Series]
	Flags: bus master, fast devsel, latency 0, IRQ 29
	Memory at f7e40000 (64-bit, non-prefetchable) [size=16K]
	Capabilities:
	Kernel driver in use: snd_hda_intel
	Kernel modules: snd_hda_intel
| How to find out the internal microphone's driver on a Linux PC? |
1,500,581,594,000 |
A couple of comments on Hacker News suggest that, on FreeBSD, you can:
use cat to send a file ( a .wav file for instance ) to the audio
speaker (/dev/dsp).
record from the mic using a similar method.
send a live stream across the network (using netcat?)
I can nearly do the first one: I do cat /dev/random > /dev/dsp and it makes white noise, but I can't work out how to record that to a file. And 2 and 3 I can't figure out. Any ideas?
https://news.ycombinator.com/item?id=28054789
https://news.ycombinator.com/item?id=28055498
|
# 1. play a file through the speaker
cat somefile.wav > /dev/dsp
# 2. record from the mic into a file
cat /dev/dsp > record_from_mic.wav
# 3. send the mic as a live stream to whoever connects on port 1234
cat /dev/dsp | nc -l 1234
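The third command only covers the sending side; the receiving end of the stream (the hostname here is a placeholder) would be the mirror image:

```
nc sender-host 1234 > /dev/dsp
```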
| Playing, recording, and streaming sound with cat and /dev/dsp |
1,500,581,594,000 |
Back when COVID started, I got myself a USB webcam (this one, to be specific). I found the fact that you have to plug it into the audio jack to be a little annoying, but otherwise it worked fine. For reference, I was running Windows 10 back then.
Recently I switched to Linux Mint (Ulyana Cinnamon, if it's any help). When I try to use my webcam the video works, but my computer's audio stops working for the duration of having the webcam plugged in. I think the computer is getting the microphone input mixed up with sound output, but I'm not sure. CONFIRMED: When I checked in Pulse Audio Volume Control, the microphone input matched the sound output. My computer has no other microphones or webcams (it's a desktop), so I would appreciate it if y'all could help me resolve this quickly.
EDIT: You might want to zoom your browser out before viewing the pictures, they are a bit large.
Debug output from terminal:
$ arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: PCH [HDA Intel PCH], device 0: ALC3234 Analog [ALC3234 Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
card 0: PCH [HDA Intel PCH], device 2: ALC3234 Alt Analog [ALC3234 Alt Analog]
Subdevices: 1/1
Subdevice #0: subdevice #0
$ lspci | grep Audio
00:1f.3 Audio device: Intel Corporation 200 Series PCH HD Audio
$ journalctl -f
-- Logs begin at Mon 2020-07-27 12:10:35 CDT. --
Aug 01 12:22:24 user-OptiPlex-7050 rtkit-daemon[980]: Supervising 4 threads of 3 processes of 1 users.
Aug 01 12:22:34 user-OptiPlex-7050 dbus-daemon[1229]: [session uid=1000 pid=1229] Activating via systemd: service name='org.gnome.Terminal' unit='gnome-terminal-server.service' requested by ':1.134' (uid=1000 pid=11104 comm="/usr/bin/gnome-terminal.real " label="unconfined")
Aug 01 12:22:34 user-OptiPlex-7050 systemd[1213]: Created slice apps.slice.
Aug 01 12:22:34 user-OptiPlex-7050 systemd[1213]: Created slice apps-org.gnome.Terminal.slice.
Aug 01 12:22:34 user-OptiPlex-7050 systemd[1213]: Starting GNOME Terminal Server...
Aug 01 12:22:34 user-OptiPlex-7050 dbus-daemon[1229]: [session uid=1000 pid=1229] Successfully activated service 'org.gnome.Terminal'
Aug 01 12:22:34 user-OptiPlex-7050 systemd[1213]: Started GNOME Terminal Server.
Aug 01 12:22:34 user-OptiPlex-7050 systemd[1213]: Started VTE child process 11115 launched by gnome-terminal-server process 11107.
Aug 01 12:22:35 user-OptiPlex-7050 pk-debconf-help[7706]: No active connections, exiting
Aug 01 12:22:35 user-OptiPlex-7050 systemd[1213]: pk-debconf-helper.service: Succeeded.
Aug 01 12:23:19 user-OptiPlex-7050 kernel: [UFW BLOCK] IN=enp0s31f6 OUT= MAC=01:00:5e:00:00:01:cc:2d:21:f0:0c:00:08:00 SRC=192.168.39.1 DST=224.0.0.1 LEN=28 TOS=0x00 PREC=0xC0 TTL=1 ID=13838 PROTO=2
Aug 01 12:23:24 user-OptiPlex-7050 kernel: [UFW BLOCK] IN=enp0s31f6 OUT= MAC=01:00:5e:00:00:fb:14:0a:c5:46:da:09:08:00 SRC=192.168.39.191 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2
Sound/Output in settings without microphone plugged in:
Sound/Output with microphone plugged in:
Sound/Input without microphone plugged in:
Sound/Input with microphone plugged in (1):
Sound/Input with microphone plugged in (2):
Pulse Audio Video Control (PAVC) Output Devices tab without microphone:
PAVC Output Devices with microphone:
PAVC Input Devices without microphone:
PAVC Input Devices with microphone:
PAVC Configuration (doesn't change if mic is plugged in or not):
Sound / Applications:
|
Well, as it turns out, it is the webcam's fault. The sound chip in the webcam isn't configured properly, such that the computer thinks it's both an input and an output.
Solution: In Pulse Audio Volume Control go to the "Configuration" tab and set the audio profile to "Analog Stereo Duplex". Then go to the output tab and set the port to match your speakers (they may or may not be listed as "unavailable"; the computer is lying.) Then, go to the input tab and ensure that the port is set to "Microphone," not "Headset Microphone."
| Plugging in webcam microphone causes sound to stop working |
1,500,581,594,000 |
I have Bluetooth headphones with a mic connected to Linux Mint. In blueman (Bluetooth Manager) I can see the audio profile selected as A2DP sink, so the output is working fine.
I know that for the microphone to work I have to select the HSP/HFP profile, but I am not able to select that option and get the error "Failed to change profile to headset_head_unit". I have tried all the hacks provided on Linux forums, such as the PulseAudio and ofono ones, but none of them worked, as I haven't figured out the root cause yet.
I have yet to find steps where I can do this via a script. I just upgraded to Mint 20, hoping the latest version would solve it.
|
I was able to get my AirPods Pro working on Linux Mint 20 by following these instructions:
https://reckoning.dev/airpods-pro-ubuntu/
I wish I understood better what the problem is, but when it comes to headphone profiles I have no experience or knowledge. All I know is that it came down to "in order for a microphone to work, you need phonesim -- but Ubuntu 20 dropped support for phonesim, so you need to install it from a third-party repository". The link I posted will walk you through installing and configuring phonesim.
One caveat that I'm still working through (and Googling which led me to your question): After I followed all of the steps I was able to select HSP/HFP and use my mic... For 5 minutes. Then it reverted to A2DP and wouldn't let me switch back. I found (through random clicking and experimenting) that if I restarted pulseaudio again (pulseaudio -k in the terminal) then I was back to HSP/HFP mode... for 5 more minutes
I'm trying to make sense of why this is happening and I'll update the answer once I've figured it out, but hopefully this link will get you started in the right direction.
| Bluetooth headphone's mic not working on Linux Mint |
1,500,581,594,000 |
I use the following script to monitor my microphone:
while true; do
printf "$(AUDIODEV=hw:2,0 rec -n stat trim 0 1 2>&1 |
awk 'BEGIN { ORS="" } /^Maximum amplitude/ { print "Max. amplitude: "$3}
/^Rough\s+frequency/ { print " Frequency: "$3}
/^Maximum\s+delta/ { print " Max. delta: "$3}')\r";
done
It records a segment which is 1 second long, extracts values of Maximum amplitude and Rough frequency from the standard sox output and prints them.
Can I save a segment to file if its volume or frequency is greater than a particular threshold? I know that I can save each segment and then analyze it, but there will be too many write operations, which I want to avoid.
|
I found a solution in the meantime. It is based on piping the output of rec to base64 so that it can be encoded as ASCII and stored in a bash variable. When it is time to analyze the segment's volume and frequency, I run base64 --decode on the variable contents. In the script below only the volume is analyzed. If it exceeds the threshold (0.6), handleExcess is called and the segment is saved. I also increased the segment length to 5 seconds.
handleExcess() {
echo "$1" | base64 --decode > /tmp/"$2".wav
}
VOLUME="";
while true; do
AUDIO_DATA="$(AUDIODEV=hw:0,0 rec -c 1 -t wav - trim 0 5 2> /dev/null | base64)";
declare $(echo "$AUDIO_DATA" | base64 --decode | sox - -n stat 2>&1 | awk 'BEGIN { ORS="" } /^Maximum amplitude/ { print "VOLUME="$3 }');
if [ $(echo "$VOLUME > 0.6" | bc) == 1 ]; then
AUDIO_DATA_TMP="$AUDIO_DATA";
handleExcess "$AUDIO_DATA_TMP" "$VOLUME""_""$(date +%s)" &
fi
done
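A variation I would sketch (not from the original answer) avoids both the bc dependency and the base64 round-trip: buffer each segment in /dev/shm, a RAM-backed tmpfs, so quiet segments never touch the disk, and let awk do the floating-point comparison. The device name hw:0,0 and the 0.6 threshold are carried over from the script above; the function names are my own:

```shell
# Floating-point threshold test in awk (exit status 0 means "loud"):
is_loud() { awk -v v="$1" -v t="$2" 'BEGIN { exit !(v > t) }'; }

# Per-segment handling: persist loud segments, discard the rest.
# The recording itself would still be done with rec, writing into tmpfs:
#   AUDIODEV=hw:0,0 rec -c 1 -t wav /dev/shm/segment.wav trim 0 5
handle_segment() {
    seg=$1 vol=$2
    if is_loud "$vol" 0.6; then
        mv "$seg" "/tmp/${vol}_$(date +%s).wav"   # loud: write to disk once
    else
        rm -f "$seg"                              # quiet: drop, no disk write
    fi
}
```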
| Monitor microphone and save filtered segments |
1,500,581,594,000 |
I have a Bluetooth headset and I'd like to pipe the microphone of the headset to the speaker (the jack port) of my laptop. And I would like to take the microphone input on my laptop and pipe it to my Bluetooth headphones.
|
For Pulseaudio, use module-loopback. First get the names of the sources and sinks:
pacmd list-sources | grep name:
pacmd list-sinks | grep name:
Then create a loopback connection with
pacmd load-module module-loopback source="..." sink="..."
substituting the names of the source and sink you want.
| Pipe audio through headphones and mic jack |
1,456,043,351,000 |
I installed Skype 4.3.0.37 on my machine (Debian 7, 64-bit, XFCE 4.10) and both incoming and outgoing audio are not working. I tried Skype's echo sound test service, but with no success.
Here my configuration details:
Sound config:
Skype sound devices config
|
It looks like you are communicating directly with the hardware with your configuration. This might cause multiplexing problems (it has for me in the past). You might consider installing the pulseaudio sound server to handle multiplexing the device to multiple pieces of software that want to use it, and the pavucontrol mixer:
$ sudo apt-get install pulseaudio pavucontrol
Once this is installed, re-start skype and manage configuration using the pavucontrol GUI tool.
| Unable to listen or use microphone in Skype |
1,456,043,351,000 |
When monitoring audio input (microphone or line-in) using Audacity, I can see the current input level.
How can I monitor the input and see the level in a text console?
|
With the arecord and sox commands you can record a 1-second sample and measure its level:
arecord -qd 1 file && sox file -n stat
Here's an example of output:
Samples read: 8000
Length (seconds): 1.000000
Scaled by: 2147483647.0
Maximum amplitude: 0.992188
Minimum amplitude: -0.992188
Midline amplitude: 0.000000
Mean norm: 0.093221
Mean amplitude: -0.015338
RMS amplitude: 0.232947
Maximum delta: 0.617188
Minimum delta: 0.000000
Mean delta: 0.001067
RMS delta: 0.009643
Rough frequency: 52
Volume adjustment: 1.008
If you're only interested in the level/maximum amplitude you can pipe the result to awk to only output the second field of the fourth line:
arecord -qd 1 /tmp/rec.wav && sox /tmp/rec.wav -n stat 2>&1 | awk 'BEGIN{FS=":"} NR==4 {print $2}'
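A small aside (my addition, not part of the original answer): matching on the label instead of a fixed line number (NR==4) is slightly more robust should sox ever reorder its stat fields. A sample line is inlined here so the extraction can be tried without recording anything:

```shell
# Extract the value after "Maximum amplitude:" regardless of its position:
printf 'Maximum amplitude:     0.992188\n' |
    awk -F: '/^Maximum amplitude/ { gsub(/ /, "", $2); print $2 }'
# → 0.992188
```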
And if you want to monitor its evolution you can put this command in a while loop:
while :; do
arecord -qd 1 /tmp/rec.wav && sox /tmp/rec.wav -n stat 2>&1 | awk 'BEGIN{FS=":"} NR==4 {print $2}'
sleep 1 # repeat every one second
done
Output:
0.992188
0.023438
0.046875
0.375000
0.523438
0.109375
0.242188
If you want the output to be in dB you can calculate it with awk:
while :; do
arecord -qd 1 /tmp/rec.wav && sox /tmp/rec.wav -n stat 2>&1 | awk 'BEGIN{FS=":"} NR==4 {db=20*log($2)/log(10); printf("%0.4f\n",db)}'
sleep 1 # repeat every one second
done
Output in dB:
-12.6467
-13.4366
-13.2010
-14.4959
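The dB conversion can be sanity-checked without a microphone: an amplitude of 0.5 should come out at roughly -6.02 dB, since 20·log10(0.5) ≈ -6.0206.

```shell
# Same formula as in the loop above, applied to a fixed amplitude of 0.5:
awk 'BEGIN { a = 0.5; printf("%0.4f\n", 20 * log(a) / log(10)) }'
# → -6.0206
```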
| Text-mode tool to view audio input level |
1,456,043,351,000 |
So, I've been playing with bash scripting on Linux and attempting to level up my terminal wizardry. I've learned about piping data from /dev/video0 for web cams, and quickly creating image files from it, in a few different ways.
Next up, I'm wondering if I can do the same with audio. It seems like it must be possible, and perhaps relatively easy; but I can't find any reference (using udev) to which device file data would be piped from.
Does such a device exist? How would I find it?
|
No. Sound in Linux is based on ALSA: there is no user-space device node for the audio stream that you can simply read from, the way /dev/video0 works for video.
You can do some piping nonetheless. If you want to dig into this, two keywords: alsa, pipeline.
Have fun!
| Is there a /dev/video0 like default for pc-microphones? |
1,456,043,351,000 |
I am trying to reconfigure the setup of my microphone with HDAjackRetask; however, when I click Apply I get the message:
tee: /sys/class/sound/hwC1D0/reconfig: No such device
Here is a screenshot:
I am running Deepin 15.1 desktop. Thanks in advance!
|
If you follow the hdajackretask source, you can see that it creates nothing more than a script in a temp folder and runs it with root privileges.
I'd suggest starting from that; errors.log may have some additional clue.
And then: does the reconfig file exist in the first place?
It is not even created if the kernel hasn't been compiled with the CONFIG_SND_HDA_RECONFIG option.
Last, if all this still hasn't told you anything, you might consider enabling tracepoints. Sending raw commands is the last possible resort, though I can't see a rationale for it here.
They can eventually be interpreted with hda-decode-verb, contained in hda-emu:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/hda-emu.git
autoreconf -i
./configure --with-kerneldir=/path/linux-2.0
make install
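As for the CONFIG_SND_HDA_RECONFIG check mentioned above, it can be done from userspace without rebuilding anything; this is a sketch that tries the two usual kernel-config locations (paths vary by distribution):

```shell
# Look for the option in /proc/config.gz or /boot/config-$(uname -r);
# print a fallback message if neither is readable or the option is absent.
{ zcat /proc/config.gz 2>/dev/null || cat "/boot/config-$(uname -r)" 2>/dev/null; } \
    | grep SND_HDA_RECONFIG \
    || echo "kernel config not found or option unset"
```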
| HDAjackRetask: tee: /sys/class/sound/hwC1D0/reconfig: No such device |
1,456,043,351,000 |
My various microphones work fine, and I can record myself with various applications, but I can't hear myself talking while recording. All I need is some application that will play the mic without a lot of fuss, just to hear my own voice.
I'm wondering if there's a CLI solution to my question, for example using mplayer or some other simple command, perhaps piping the output of the mic to the headphones? I'm using pulseaudio, but I would accept any solution, as long as I can hear myself through my headphones, aka the "built-in audio analog stereo" device.
I can test my webcam with mplayer tv://. Is there an equivalent for microphones? Something like a hypothetical mplayer mic:// obviously doesn't work. This is not necessarily an mplayer question; any software solution will be accepted, the simpler the better.
|
arecord -f cd - | aplay -
Should do it, but it may lead to quite nasty consequences (a feedback loop) if you're not using headphones.
Also read this topic: https://askubuntu.com/questions/123798/how-to-hear-my-voice-in-speakers-with-a-mic
| How to hear the mic from the CLI? |
1,286,483,993,000 |
I know that it can, in some circumstances, be difficult to move a Windows installation from one computer to another (physically move the hard drive), but how does that work on Linux? Aren't most of the driver modules loaded at bootup? So theoretically would it be that much of a hassle?
Obviously, xorg configs would change and proprietary ATI drivers and such would have to be recompiled (maybe?). Is there more to it than I'm thinking of?
Assume the two computers are from the same era, e.g. both i7s but slightly different hardware.
Update:
Thanks for the answers. This is mostly for my own curiosity. I have my Linux system up and running at work, but eventually I'd like to move to a computer that I can get dual video cards into so I can run more than two monitors. But not any time soon.
|
Moving or cloning a Linux installation is pretty easy, assuming the source and target processors are the same architecture (e.g. both x86, both x64, both arm…).
Moving
When moving, you have to take care of hardware dependencies. However most users won't encounter any difficulty other than xorg.conf (and even then modern distributions tend not to need it) and perhaps the bootloader.
If the disk configuration is different, you may need to reconfigure the bootloader and filesystem tables (/etc/fstab, /etc/crypttab if you use cryptography, /etc/mdadm.conf if you use md RAID). For the bootloader, the easiest way is to pop the disk into the new machine, boot your distribution's live CD/USB and use its bootloader reparation tool.
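On the fstab side, entries keyed by filesystem UUID travel better than device names like /dev/sda1, since the new machine may enumerate disks in a different order; blkid prints the UUIDs. A sketch with made-up UUIDs:

```
# /etc/fstab — UUID-based entries survive a change of disk enumeration
UUID=0a1b2c3d-1111-2222-3333-444455556666  /     ext4  errors=remount-ro  0  1
UUID=deadbeef-0000-1111-2222-333344445555  none  swap  sw                 0  0
```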
Note that if you're copying the data rather than physically moving the disk (for example because one or both systems dual boot with Windows), it's faster and easier to copy whole partitions (with (G)Parted or dd).
If you have an xorg.conf file to declare display-related options (e.g. in relation with a proprietary driver), it will need to be modified if the target system has a different graphics card or a different monitor setup. You should also install the proprietary driver for the target system's graphics card before moving, if applicable.
If you've declared module options or blacklists in /etc/modprobe.d, they may need to be adjusted for the target system.
Cloning
Cloning an installation involves the same hardware-related issues as moving, but there are a few more things to take care of to give the new machine a new identity.
Edit /etc/hostname to give the new machine a new name.
Search for other occurrences of the host name under /etc. Common locations are /etc/hosts (alias for 127.0.0.1) and /etc/mailname or other mail system configuration.
Regenerate the ssh host key.
Make any necessary change to the networking configuration (such as a static IP address).
Change the UUID of RAID volumes (not necessary, but recommended to avoid confusion), e.g., mdadm -U uuid.
See also a step-by-step cloning guide targeted at Ubuntu.
My current desktop computer installation was cloned from its predecessor by unplugging one of two RAID-1 mirrored disks, moving it into the new computer, creating a RAID-1 volume on the already present disk, letting the mirror resynchronize, and making the changes outlined above where applicable.
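The re-identification steps can be rehearsed on scratch copies before touching the real files; the names below are stand-ins, and on the actual clone you would edit /etc/hostname and /etc/hosts directly (and regenerate the SSH host keys, on Debian-family systems typically by removing /etc/ssh/ssh_host_* and running dpkg-reconfigure openssh-server):

```shell
# Rehearsal on copies in /tmp; "oldbox" and "newbox" are placeholder names.
old=oldbox new=newbox
printf '%s\n' "$old" > /tmp/hostname.copy
printf '127.0.0.1 localhost\n127.0.1.1 %s\n' "$old" > /tmp/hosts.copy
# Same substitution you would apply to /etc/hostname and /etc/hosts:
sed -i "s/$old/$new/g" /tmp/hostname.copy /tmp/hosts.copy
grep "$new" /tmp/hostname.copy
# → newbox
```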
| Moving Linux install to a new computer |
1,286,483,993,000 |
Does anyone know how I can copy my customizations of XFCE's settings plus its appearance to another machine?
The settings for appearance/design, panels, keyboard shortcuts and Geany are not there at all yet.
So far I have done:
copied ~/.config/{autostart,xfce4,Thunar} (not literally like that)
logged out and back in, rebooted
Resources:
https://forum.xfce.org/viewtopic.php?id=4168
https://askubuntu.com/questions/563382/copy-xfce4-configuration-files-from-one-user-to-another
https://superuser.com/questions/677151/how-can-i-migrate-my-xfce-configuration-and-settings-to-another-system
Some info, which is true for both machines:
$ pacman -Qi xfwm4 | grep Version
Version : 4.12.4-1
$ uname -r
4.10.5-1-ARCH
|
Xfce usually stores its configuration files in ~/.config/xfce4 (as well as ~/.local/share/xfce4 and ~/.config/Thunar). Copying these directories to your laptop should do the job. Keyboard shortcuts are stored in ~/.config/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml, so they should get copied as well.
It's possible that after you copy the files they are getting overwritten when you log out of the session, thus preventing the new settings from getting enabled. Perhaps you could try copying the aforementioned directories by logging in through a tty?
Note that there's a global set of configuration files in /etc/xdg/xfce4, /etc/xdg/Thunar/, /etc/xdg/menus, etc. (as well as /etc/xdg/xdg-xubuntu if you're using Xubuntu). If you're copying the configuration files between two systems having completely different base installations, you'll have to copy these files as well.
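One way to carry those directories over in a single step is a tar pipe; the rehearsal below runs against scratch directories so it is safe to try anywhere, and the commented line shows the networked form (the hostname laptop is a placeholder):

```shell
# Build a scratch "config" tree to stand in for ~/.config:
src=/tmp/xfce-src dst=/tmp/xfce-dst
mkdir -p "$src/xfce4/xfconf/xfce-perchannel-xml" "$dst"
echo '<channel/>' > "$src/xfce4/xfconf/xfce-perchannel-xml/xfce4-keyboard-shortcuts.xml"

# The real transfer over the network would be:
#   tar -C ~/.config -cf - xfce4 Thunar | ssh laptop 'tar -C ~/.config -xf -'
tar -C "$src" -cf - xfce4 | tar -C "$dst" -xf -
```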
| How to copy all my XFCE settings between two computers/machines? |
1,286,483,993,000 |
I'm trying to move my emails Maildir from an old CentOS server to a new Debian server.
rsync -avz /home/me/Maildir ssh root@ipaddress:/var/vmail/me/Maildir
I tried to copy an 8 GB account, which didn't work; tried to move another of about 20 MB, which didn't work; tried to use -avn, which didn't work either.
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1039) [sender=3.0.6]
|
Error 23 is defined as a "partial transfer" and might be caused by filesystem incompatibilities, such as different character sets or access control lists. In this case, it could be caused by files in /home that begin with a . and are thus marked hidden.
In this case you could try something like:
rsync -avz --exclude='/*/.local' /home/me/Maildir root@ipaddress:/var/vmail/me/Maildir
The verbose argument -v should actually give you some sort of list of the problems.
From official documentation:
23 - Partial transfer due to error
| rsync error: some files/attrs were not transferred |
1,286,483,993,000 |
To the best of my understanding, all Linux processes are actually files. Is it possible to copy a running process from one machine to another?
For example, copying a running Tomcat server from one machine to another without having to restart the server.
|
To the best of my understanding, all Linux processes are actually files
You shouldn't take the metaphor too literally. Linux processes can indeed be accessed through a pseudo file system for debugging, monitoring and analysis purpose but processes are more than just these files and "copying" them from a source host /proc file system to a target /proc file system is doomed.
Is it possible to copy a running process between machines?
One of the serious issues in moving a running process between hosts is how to handle the open file descriptors this process is using. If a process is reading or writing a file, this very file (or an exact clone) must be available on the target host. File descriptors related to sockets would be tricky to process, as the IP address they are bound to will likely change from one host to the other. Processes sharing memory segments with other ones would cease to do so after a migration. PID clashes might also happen: if a running process has the same PID as the incoming one, one of them will need to be changed. Parent-child relationships will be lost, and I have just scratched the surface of the potential problems.
Despite these issues, there are technical solutions providing that functionality called "Application checkpointing" like DMTCP and CRIU. This is similar to what is used with hypervisors like VMWare, VirtualBox, Oracle VM and others when they do virtual machines live migration / teleportation. With virtual machines, the job is actually "simpler" as the whole OS is moved, including the files descriptors, the file systems, the memory, the network and other devices, etc.
| Is it possible to copy a running process between machines? |
1,286,483,993,000 |
I want to migrate the configuration of an Ubuntu desktop to a new box with different hardware. What is the easiest way to do this? /etc/ contains machine and hardware specific settings so I can't just copy it blindly. A similar problem exists for installed packages.
edit: This is a move from x86 to x86-64.
|
First, if you're going to keep running 32-bit binaries, you're not actually changing the processor architecture: you'll still be running an x86 processor, even if it's also capable of doing other things. In that case, I recommend cloning your installation or simply moving the hard disk, as described in Moving linux install to a new computer.
On the other hand, if you want to have a 64-bit system (in Ubuntu terms: an amd64 architecture), you need to reinstall, because you can't install amd64 packages on an i386 system or vice versa. (This will change when Multiarch comes along).
Many customizations live in your home directory, and you can copy that to the new machine. The system settings can't be copied so easily because of the change in processor architecture.
On Ubuntu 10.10 and up, try OneConf.
OneConf is a mechanism for recording software information in Ubuntu One, and synchronizing with other computers as needed. In Maverick, the list of installed software is stored. This may eventually expand to include some application settings and application state. Other tools like Stipple can provide more advanced settings/control.
One of the main things you'll want to reproduce on the new installation is the set of installed packages. On APT-based distributions, you can use the aptitude-create-state-bundle command (part of the aptitude package) to create an archive containing the list of installed packages and their debconf configuration, and aptitude-run-state-bundle on the new machine. (Thanks to intuited for telling me about aptitude-create-state-bundle.) See also Ubuntu list explicitly installed packages and the Super User and Ask Ubuntu questions cited there, especially Telemachus's answer, on how to do this part manually.
For things you've changed in /etc, you'll need to review them. Many have to do with the specific hardware or network settings and should not be copied. Others have to do with personal preferences — but you should set personal preferences on a per-user basis whenever possible, so that the settings are saved in your home directory.
If you plan in advance, you can use etckeeper to put /etc under version control (etckeeper quickstart). You don't need to know anything about version control to use etckeeper, you only need to start learning if you want to take advantage of it to do fancy things.
| How do I migrate configuration between computers with different hardware? |
1,286,483,993,000 |
I currently run a Windows server with Active Directory. But since we're no longer using Exchange 2007, it became a fancy file server with authentication.
I would like to move the AD to a Linux server. What would be the best way to do this? And which LDAP server should I use?
Update: there won't be any Windows clients left. They'll be updated to Edubuntu.
|
Samba v3 is able to be an NT4-style domain controller. If you had an AD server running for Exchange, that is not good enough.
Samba v4 will be able to be a Windows 2003-style domain controller, but it is not done yet. Not by far.
The next question would be: do you have any Windows clients left? If so, you have a problem. Windows is not as pluggable as Linux. While it is possible to change a certain DLL file (I forgot the name) to authenticate against a generic KDC, Windows was built to work with AD and with AD alone. Anything else requires altering Windows system DLLs. That sucks.
If you do not have any Windows clients left, it becomes a lot easier. You can easily replace Windows AD with a combined Kerberos/LDAP solution. Kerberos KDC (Key Distribution Center) packages are in all distros. LDAP servers are available in a lot of different forms. The OpenLDAP server is in most distros. A GUI-based management tool for your LDAP directory is available from a lot of open-source LDAP servers, like 389 and, I think, Apache DS too.
I mentioned the FreeIPA project in this context in another thread as an integrated solution, but it is only for Linux.
So, to make a long story short: do you still have Windows clients on your network?
Edit: Apparently not. So, build yourself a KDC, grab a copy of 389 DS and you're good to go. Then you'll have to do some LDAP scripting to pull user information from the domain controller and insert it into your LDAP server. I don't think you can migrate the users' passwords though; you will probably have to reset those.
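To give an idea of what that LDAP scripting produces: each migrated account ends up as an LDIF record fed to ldapadd, roughly like the sketch below. All names, numbers and attributes here are illustrative, and, as noted, the passwords cannot be carried over, so no password attribute is set:

```ldif
dn: uid=jdoe,ou=people,dc=example,dc=org
objectClass: inetOrgPerson
objectClass: posixAccount
uid: jdoe
cn: Jane Doe
sn: Doe
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/jdoe
loginShell: /bin/bash
```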
| How would you migrate from a Windows AD to a Linux LDAP server? |
1,286,483,993,000 |
Assume I have two Debian 11 systems: system A with a custom application setup etc., and a vanilla system B. Now I would like to transfer the whole setup from A to B. I found some links where users tried to transfer the whole root tree or clone their system to another drive. The main effort in these solutions is re-installing GRUB and adjusting some crucial configuration files like fstab. Can I just exclude those directories which contain crucial configuration files, like /boot and /etc/fstab, from the copy/tar?
Or is there a tool that allows me to make a backup of system A and create a bootable USB pen drive from this backup?
|
One way is to create a blank OS and copy all the folders and files that you need.
There are a lot of tutorials for that.
Search for how to create a Linux system backup with rsync:
How To Backup Your Entire Linux System Using Rsync
Full system backup with rsync
The other way, and in my opinion the best way, to clone a whole drive, or a partition with data or an OS on it, is with dd; I prefer it myself, and find it the best tool for cloning/backing up devices and partitions.
dd will clone everything, bit for bit.
Before you start experimenting and trying out different tools, I would do a full backup/clone of the device to another one with dd, if you have the option, and check that the backup/clone works.
If your whole device is encrypted with LUKS, as an example, you can still do a whole clone and flash it to your new device; that will work too!
If you work with mounted fuse/sshfs shares you can backup/clone directly to these network folders too.
You can list all your block devices with lsblk
Example:
If your drive is /dev/sda and you want to store the backup/clone in a directory or on other storage:
dd if=/dev/sda of=/home/user/osbkp.img bs=1M status=progress
You don't need to call it name.img; it can be os123.bkp too.
Do a live clone of a running system to a target drive, without creating an image.
Source is /dev/sda and target is /dev/sdb:
dd if=/dev/sda of=/dev/sdb bs=1M status=progress
Sometimes you create the new backup/clone on your new drive but can't boot from it; then try again with dd (nothing works 100% of the time).
Clone the image to a new drive, where the target is /dev/sdb:
dd if=/home/user/osbkp.img of=/dev/sdb bs=1M status=progress
Clone a given partition:
dd if=/dev/sda1 of=/home/user/part1.img bs=1M status=progress
Explanation:
if=INPUT/SOURCE
of=OUTPUT/TARGET
bs=BLOCK SIZE USED FOR COPYING
There are different block sizes that can be used; I prefer 1 MB. You can speed the process up or slow it down with this setting; you have to find out for yourself what the best option is.
status=progress SHOWS STATUS IN REAL TIME
If you work with FAT* as storage you can split the files; take a look at these posts too:
Break up a dd image into multiple files
Creating a 80GB image with dd on a FAT32 drive
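The split-and-reassemble round trip discussed in those posts can be rehearsed on a tiny scratch file; the sizes below are stand-ins (for a real image on a FAT32 target you would split at something under the 4 GB file-size limit, e.g. split -b 2G):

```shell
# Make a small 64 KB stand-in "image", split it into 16 KB pieces,
# reassemble them, and verify the result is bit-identical:
dd if=/dev/urandom of=/tmp/osbkp.img bs=1K count=64 2>/dev/null
split -b 16K /tmp/osbkp.img /tmp/osbkp.part.
cat /tmp/osbkp.part.* > /tmp/osbkp.joined.img
cmp /tmp/osbkp.img /tmp/osbkp.joined.img && echo "images identical"
# → images identical
```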
There are a few things that you have to keep in mind:
0. dd will clone everything on the device.
Say your drive is /dev/sda and you clone it
with 5 partitions:
/dev/sda1
/dev/sda2
/dev/sda3
/dev/sda4
/dev/sda5
You will get one file from /dev/sda with all these partitions, the MBR, GPT, etc.
1. You can clone to any drive/storage.
You can clone from a hard disk to USB, or USB to hard disk, etc., and
run your cloned OS from your new device.
2. Your target device must have the same size or must be bigger.
You can't clone a bigger device to a smaller drive, or clone only the used space of a partition.
Example:
The partition to clone is 8 GB but the OS on it is only 1 GB,
so you have 7 GB of free space. If your target, where you want to clone and run your OS, is 4 GB, this is not possible! dd clones the whole device to your new drive; you can't resize it during the copy.
If you clone to a bigger device you can create a new partition with the remaining space and mount/use that on your new device/OS.
Take care if you try to merge the remaining space into your existing partition!
3. The best way is to use a live system or another Linux system:
plug in your drives and clone from source to target, or from source to a storage file.
4. Every device has a unique UUID and a label name that identify it.
If you clone drive A to B and keep both drives in one PC, and you try to boot one of them by label name or UUID, check GRUB or your boot manager: you may hit a problem or boot the wrong OS.
You can check this with blkid and other commands.
You can change that and generate a new UUID, label, etc., but be careful.
5. You don't need to format the drive where your cloned image will run; dd will destroy/delete everything and create the new MBR/GPT, format, filesystem etc. from the backed-up OS.
Create your basic clones with dd and do your stuff, but later on it is better to clone/copy only the changed files.
In GNU/Linux everything is a file.
| Migration of a working Debian 11 system to another |
1,286,483,993,000 |
Using pacemaker in a 2 nodes master/slave configuration.
In order to perform some tests, we want to switch the master role from node1 to node2, and vice-versa. For instance if the current master is node1, doing
# crm resource migrate r0 node2
does indeed move the resource to node2. Then, ideally,
# crm resource migrate r0 node1
would migrate back to node1. The problem is that migrate added a line in the configuration to perform the switch
location cli-prefer-r0 r0 role=Started inf: node2
and in order to migrate back I have first to remove that line...
Is there a better way to switch master from one node to the other?
|
I know this is a bit old, but it seems like no one answered this satisfactorily, and the requester never posted whether the problem was solved or not.
So here is an explanation.
When you perform:
# crm resource migrate r0 node2
a cli-prefer-* rule is created.
Now when you want to move r0 back to node1, you don't do:
# crm resource migrate r0 node1
but you perform:
# crm resource unmigrate r0
Using unmigrate or unmove gets rid of the cli-prefer-* rule automatically.
If you try to delete this rule manually in the cluster config, really bad things happen in the cluster, or at least bad things happened in my case.
| Pacemaker: migrate resource without adding a "prefer" line in config |
1,286,483,993,000 |
I'm running a Linux Mint 12 virtual machine in Virtualbox. I would like to move it into a dual-boot setup with my Windows 7 installation. I have lot of settings in there, and apps installed, and my research says that I cannot just copy files.
I guess I could accomplish this by having some kind of list that I could then import for apt-get to install, and then have a back-up of my ~/ files... Am I on the right track, or totally lost?
|
I think you're on the right track. If I was in your position, this is how I'd tackle it:
Do a new install of Linux Mint 13, now that it's out. Hopefully it can manage repartitioning your disk and shrinking your existing NTFS partition non-destructively, but usually it's much safer and easier to simply install to a clean disk.
Learn to use aptitude; it should let you reinstall all your apps pretty quickly. If something wasn't installed via apt-get, then it's probably sitting in /usr/local/ or /opt/.
Install virtualbox on LM13 so you can run your old LM12 install. Then just use rsync to migrate your files and directories over. If you haven't installed many services, there probably isn't that much you need to do beyond bringing over your home directory.
Sure, this seems like it could be a bit messy, but I've pretty much carried the same /home directory for 10 years through at least 3 distros of Linux. Data migration is actually pretty easy: there aren't any user settings buried in some registry or elsewhere on the filesystem. It can be a bit more work migrating services, but even then the files to migrate are limited to the /etc and /var directories.
| Migrating a Virtualbox virtual machine into a physical dual-boot system |
1,286,483,993,000 |
I have quite a few scripts that still use the apt-key adv command, and I know this command is deprecated and will soon become unusable.
Correct me if I'm wrong, but Debian 11 is the last Debian version supporting apt-key.
I also know we need to migrate to fetching the .asc file directly and put the file into the /etc/apt/trusted.gpg.d/ folder.
How do I convert from the command below to a wget of this .asc file? Where can I find the .asc files I need? Are those .asc files even provided by Linux Mint / X2Go or other repos?
The command I use for downloading keys at the moment is:
First example: apt-key adv --recv-keys --keyserver keyserver.ubuntu.com A6616109451BBBF2
Second example: apt-key adv --recv-keys --keyserver keyserver.ubuntu.com E1F958385BFE2B6E
How do I retrieve the .asc (or .gpg) files from those repos?
|
The apt-key-less equivalent to your apt-key adv command is
gpg --recv-keys --keyserver keyserver.ubuntu.com A6616109451BBBF2
gpg --export A6616109451BBBF2 | sudo tee /etc/apt/trusted.gpg.d/somenicename.gpg
This assumes that gpg is installed.
There’s no general rule regarding the availability of keyring files; if you have a download URL, you can use
sudo /usr/lib/apt/apt-helper download-file https://example.org/path/to/repokeyring.asc /etc/apt/trusted.gpg.d/repokeyring.asc
See Julian Klode’s Migrating away from apt-key post for details, and the section in the Debian 11 release notes on obsolete components:
bullseye is the final Debian release to ship apt-key. Keys should be managed by dropping files into /etc/apt/trusted.gpg.d instead, in binary format as created by gpg --export with a .gpg extension, or ASCII armored with a .asc extension.
| Migrating away from apt-key adv |
1,286,483,993,000 |
I've been experimenting with ZFS + DRBD + live migration (I want to understand it well enough to write my own automation scripts before I start playing with ganeti again, and then openstack cinder). I have the ZFS + DRBD (in dual-primary mode) working well for shared storage.
However, live migration is only partially working.
I have two hosts, with identical libvirt and drbd configurations, and even identical dedicated "volumes" pool for VM ZVOLs (both 2x1TB mirror pools - re-using some old disks from my old backup pool), and identical configurations for the VM (named "dtest")
"indra" is an AMD FX-8150 with 16GB RAM on an ASUS Sabertooth 990FX m/b
cpu flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf eagerfpu pni pclmulqdq monitor ssse3 cx16 sse4_1 sse4_2 popcnt aes xsave avx lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 nodeid_msr topoext perfctr_core perfctr_nb cpb hw_pstate vmmcall arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
"surya" is an AMD Phenom II X4 940 with 8GB RAM on an ASUS M3A79-T DELUXE m/b
cpu flags fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl nonstop_tsc extd_apicid eagerfpu pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt hw_pstate vmmcall npt lbrv svm_lock nrip_save
Both are running debian sid, with exactly the same versions of packages (incl. libvirt* 2.0.0-1:amd64 and qemu-system-x86 1:2.6+dfsg-3), and with the same liquorix kernel:
Linux indra 4.6-2.dmz.2-liquorix-amd64 #1 ZEN SMP PREEMPT Debian 4.6-3 (2016-06-19) x86_64 GNU/Linux
Linux surya 4.6-2.dmz.2-liquorix-amd64 #1 ZEN SMP PREEMPT Debian 4.6-3 (2016-06-19) x86_64 GNU/Linux
The VM itself is running debian sid, with a stock debian 4.6.0-1 kernel:
Linux dtest 4.6.0-1-amd64 #1 SMP Debian 4.6.3-1 (2016-07-04) x86_64 GNU/Linux
I can start the VM on either host, and it works perfectly.
I can migrate a VM from surya to indra with no problems whatsoever. When I try to migrate the VM from indra to surya, the migration appears to complete successfully, but the VM hangs with 100% CPU usage (for the single core allocated to it).
It makes no difference whether the VM was started on indra and then migrated to surya (where it hangs) or if it was started on surya, migrated to indra (OK so far) and then migrated back to surya (hangs).
The only thing I can do with the VM when it hangs is virsh destroy (force-shutdown) or virsh reset (force-reboot).
I've tried disabling kvm_steal_time with:
<qemu:commandline>
<qemu:arg value='-cpu'/>
<qemu:arg value='qemu64,-kvm_steal_time'/>
</qemu:commandline>
but that doesn't solve the problem.
Nothing gets logged on or from the VM itself. The only indication I get of any problem is the following message in /var/log/libvirt/qemu/dtest.log on surya.
2016-07-18T12:56:55.766929Z qemu-system-x86_64: warning: TSC frequency mismatch between VM and host, and TSC scaling unavailable
This would be due to the tsc_scale cpu feature - present on the 8150 CPU (indra), missing on the x4 940 (surya).
Anyone know what the problem is? Or how to fix it? or suggestions for debugging?
Is it even fixable, or is it a CPU bug in the several-generations-old Phenom II x4 940?
|
I found a solution.
As I suspected, the cause of the problem was the lack of tsc_scale in the feature flags of surya's CPU.
It turns out that you can migrate a VM from a host without tsc_scale to a host with it, but a VM running on a host with tsc_scale can ONLY be migrated to another host with it.
Time to submit a bug report.
I created another ZFS ZVOL-based DRBD, this time between surya and another machine on my network, my main server ganesh.
ganesh is an AMD Phenom II 1090T with 32GB RAM on an ASUS Sabertooth 990FX m/b
CPU Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf eagerfpu pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt nodeid_msr cpb hw_pstate vmmcall npt lbrv svm_lock nrip_save pausefilter
I can migrate a VM back and forth between surya and ganesh with no problems, and I can migrate a VM from surya or ganesh to indra. But I can't migrate a VM from indra to either surya or ganesh.
I can live with this for now. ganesh is due to be upgraded when the new AMD Zen CPUs are released, and surya will get ganesh's current motherboard and RAM. I'll buy a new FX-6300 or FX-8320 for it at the same time, so all machines will have tsc_scale.
I have another machine (kali) on the network with an FX-8320 CPU (which also has the tsc_scale feature). I was already planning to add this to the ZVOL+DRBD+live-migration experiments as soon as I upgrade the main zpool on ganesh (from 4x1TB RAIDZ to 2x4TB mirrored) and free up some more old disks, so I'll be able to migrate VMs back-and-forth between indra and kali, or between surya and ganesh.
The next phase in my VM experimentation plan is to write scripts to completely automate the process of setting up a VM to use DBRD on ZVOL and migrate VMs between host machines.
When I've got that working well, I'll scrap it and start working with ganeti, which already does what I'm planning to write (but more completely and better).
And finally, when I've tired of that I'll switch to openstack and use cinder for the volume management. I'm tempted to skip ganeti and go straight to openstack, but ganeti is such cool technology that I want to play with it for a while....I haven't used it for years.
| 100% CPU utilisation and hang after virsh migrate |
1,286,483,993,000 |
Moving from OS X & Textmate to Ubuntu & gedit, the one feature of Textmate I am missing is its command line tool.
With mate I was able to open a folder as a Textmate project using mate . from within the required directory. This is enormously useful as it speeds up my system navigation considerably.
Is there a way of doing the same or similar with gedit?
|
This was answered on Superuser, Gedit open current directory from terminal - Ubuntu 10.10. Looks like the answers are still relevant.
| open whole folder in gedit |
1,286,483,993,000 |
I bought new computer with Win 10 pre-installed. I installed Debian on a new partition (same disc) and everything went well. But now I added new disc and I would like to move Debian to this disk.
Is there an easy way to do it?
I tried to use dd to copy the Linux partition to the new disc, but I'm not sure how to update grub, because update-grub didn't add the new partition with Debian partition to its menu.
This is fdisk -l output:
Disk /dev/nvme0n1: 238.5 GiB, 256060514304 bytes, 500118192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0F8FCBCA-F7B2-429C-B02B-4A420C815CB7
Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 739327 737280 360M EFI System
/dev/nvme0n1p2 739328 1001471 262144 128M Microsoft reserved
---------Win 10 partition-----------
/dev/nvme0n1p3 1001472 405315583 404314112 192.8G Microsoft basic data
---------Old Debian partition-----------
/dev/nvme0n1p4 405315584 484538367 79222784 37.8G Linux filesystem
/dev/nvme0n1p5 484538368 500117503 15579136 7.4G Linux swap
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 77489E99-4F1D-4E2A-A984-6BE441B8A849
Device Start End Sectors Size Type
/dev/sda1 2048 15626239 15624192 7.5G Linux swap
---------New Debian partition-----------
/dev/sda2 15626240 488397134 472770895 225.4G Linux filesystem
/dev/nvme0n1 is old disc with Win10 and Debian I'm using now
/dev/sda is new disc, where I would like to migrate my current Debian
For now, I can safely boot into old Debian. Any advice on how to migrate it on /dev/sda?
I can format or change structure of the new disk if it is needed.
|
You can update grub following this guide.
Boot from Linux Live Boot
Determine the partition number of your main partition. sudo fdisk -l, sudo blkid or GParted can help you here. Make sure you use the correct partition number for your system!
Mount your partition:
sudo mount /dev/sdaX /mnt
If you have separate /boot, /var or /usr partitions, repeat steps 2 and 3 to mount these partitions to /mnt/boot, /mnt/var and /mnt/usr respectively.
Bind mount some other necessary stuff:
for i in /sys /proc /run /dev; do sudo mount --bind "$i" "/mnt$i"; done
chroot into your install:
sudo chroot /mnt
At this point, you're in your install, not the live session, and running as root. Update grub:
update-grub
Depending on your situation, you might have to reinstall grub:
grub-install /dev/sdX
update-grub
If Ubuntu is installed in EFI mode and the EFI partition UUID has changed, you may need to update it in /etc/fstab. Compare them:
blkid | grep -i efi
grep -i efi /etc/fstab
If everything worked without errors, then you're all set to exit and reboot.
However if you want to know more about how to move your current operating system to a new drive, there are a few different ways you can accomplish this task. I will be referencing this post as it is very much related to your question.
1. Use Clonezilla
You can create a Live Boot of Clonezilla to clone or create an image of your Linux installation and then migrate that to the new disk. The Clonezilla site has documentation on how to restore an image to a larger disk. Please make sure you read through their FAQ/Q&A first. Then you will need to install grub to your new drive.
2. Use Rsync
With this option you do not have to create a live boot. You can boot into your original Debian install and run rsync to back up your current install to the new disk. After which you will have to resize your partitions to fill the rest of the unallocated disk space. This step is best done using a live boot however.
The rsync command that should work in most cases is this:
sudo rsync -a / [/Path/to/Mounted/New/Disk] --exclude /sys --exclude /proc --exclude /dev --exclude /tmp --exclude /media --exclude /mnt --exclude /run
After that completes you will want to run mkdir sys proc dev tmp media mnt run inside of the new root directory to recreate the missing elements. You do not want to include them in the rsync command because at least one of them will contain the file system and mount points for your new disk causing a few issues as you would backup your backup in the process. Please reference the rsync documentation to learn more about the process before you complete the task. Again once completed you will have to update grub on the new drive.
Conclusion
Please reference all of the posts and links I have included before you take any action. If there is any misinformation in this post, I would really appreciate corrections. Best of Luck!
| Migrate Debian from one disc to another |
1,286,483,993,000 |
As opposed to people who want to get rid of systemd, I want to completely and safely remove sysvinit.
I've been using Debian since Debian 7.0 (Wheezy). Currently I'm using Debian 9.0 (Stretch). During one of the system upgrades there was move from sysvinit to systemd.
systemd works fine for me, but I've noticed that the system did not remove sysvinit completely.
Why?
How can I safely get rid of sysvinit?
Is it safe to remove remains of sysvinit by:
aptitude purge initscripts sysvinit sysvinit-utils
or alternatively:
apt-get remove --purge initscripts sysvinit sysvinit-utils
BTW: AFAIK there is a systemd-sysvcompat package (not installed) which probably somehow uses sysvinit. I want to avoid problems that could be caused by removing remains of sysvinit which may still somehow be necessary for Debian.
|
Why?
Even though you can remove sysvinit a lot of system packages still rely on sysvinit style scripts.
How can I safely get rid of sysvinit?
Is it safe to remove remains of sysvinit by...
This depends on what you have installed on your system, if no system components depend on them then yes it is safe, apt will tell you. Note that you can use rdepends to check what depends on a package:
apt-cache rdepends initscripts sysvinit-utils sysvinit
If there is nothing you need printed as a dependency then removing them would be safe. This won't be the case!! You can do the removal as usual with apt-get remove.
Be aware that initscripts is not sysv-specific; removing it will almost certainly destroy your system. See here
If you remove sysvinit, the /etc/rc*.d directories will still be present. If you look at the debate on this page you can see that there are many packages which still have sysvinit style scripts. Even though the old sysvinit directories are used, this is actually managed by systemd.
I'm not sure why you want to get rid of the last traces of sysvinit but I would say it's not time yet, you will likely run into trouble. To build a system with no traces of sysvinit you will probably have to build your own distro although somebody may have already done this.
| Completely remove remains of sysvinit |
1,286,483,993,000 |
I have decided to try out Linux Mint instead of the horrible Ubuntu Unity experince.
I have a separate partition for my /home folder, which I'd like to preserve of course. So I'm curious: is it possible for me to install the 64bit version of Mint without user dotfiles making trouble? More specifically, will it matter that the new OS is 64-bit? My instinct tells me - no, a config file is a config file, but I thought it would be better to check here, I simply don't want to lose all the app settings.
A more general question would be, will the user dotfiles cause any problem for Mint, regardless of bit-ness? Has this been tried by someone?
|
Generally speaking, user configuration files don't care about your architecture. I don't know of any exception in any current popular application. Of course, I can't exclude some weird program that stores binary data differently on 32-bit and 64-bit systems, but I wouldn't bet on you having any.
You can share dot files between different distributions; there's nothing specific about your distribution in them. What can be problematic sometimes is sharing dot files between different versions of a program. However, it's almost always the case that your dot files keep working as long as you're moving forward — replacing a program by a later version of that program. It's only going back that fails, and then what will happen is that the older version might completely ignore your settings, or even fail to start until you move them out of the way. There aren't many programs that have trouble that way, but there are popular programs that do have trouble, such as Firefox.
| Moving from Ubuntu 32bit: Mint 64bit or Mint 32bit? |
1,286,483,993,000 |
I'm not a system administrator, but my organization is considering replacing /bin/sh in Red Hat Enterprise Linux 6+ with a hard link to /bin/ksh. How foolhardy would this be?
The background to this question is that we're migrating a third-party application from AIX 5.3 to RHEL 6+. This application executes shell commands by invoking sh. The shell commands themselves are user-defined, and in practice have been written for the Korn shell (ksh). This works in AIX because IBM delivers sh as a hard link to ksh. Over the years, thousands of user-defined commands have been created and stored by our team.
We've found that some of these commands fail in Red Hat, because sh in Red Hat is a symbolic link to bash. When invoked as sh, bash runs in sh emulation mode. The problem is that our ksh-specific commands (e.g., print) that used to work in "fake sh" in AIX do not work in "fake sh" in Red Hat. We don't yet know the full scope of the incompatibilities.
In chapter 10 of Learning the Korn Shell (ISBN 0-596-00195-9), Bill Rosenblatt and Arnold Robbins say: "[W]e want to emphasize something about the Korn shell that doesn't apply to most other shells: you can install it as if it were the standard Bourne shell, i.e., as /bin/sh." ... "Many installations have done this with absolutely no ill effects."
How foolhardy would it be to do this in Red Hat? My concern is that system or 3rd party scripts in the Red Hat installation might depend on idiosyncracies of how sh is emulated by bash. If so, solving our immediate problem with a hard link to ksh could cause unknown breakage throughout the system.
|
System-wide, this would be very foolhardy for exactly the reason you suspect - it will cause massive breakage in startup scripts and system utilities that depend on sh-compatible behavior. As Ulrich says, a much safer alternative is to create a chroot, or simply set the default shell of all new users to /bin/ksh, though this may not do exactly what you want.
| Installing ksh as the standard shell in Redhat: Foolhardy? |
1,286,483,993,000 |
I would like to copy my "settings" from my desktop to my laptop. I am running KDE on Arch. I am not sure what to do with ~/.config, ~/.local, and ~/.kde4 since they have subdirectories with names that match my desktop hostname. If I naively copy everything, I get all sorts of errors/warning when logging in and trying to open my email/calendar/akonadi.
|
This is a really, really lame (non-)feature of KDE. ~/.config and ~/.local actually do not have anything to do with it -- they are XDG standard filesystem hierarchy things used by various independent applications, not KDE.
After you install, get out of X (so KDE is not running) and try copying just your old ~/.kde/share/config in, then restart X.
If you have a hard time stopping X because of XDM and system services, you could try doing it in a VT while KDE is still loaded, just do not go back to X from the VT -- kill it on the command line to force a re-start (or just plain halt and reboot).
| How to copy settings from one machine to another? |
1,286,483,993,000 |
I have the following procedure for replicating a Fedora workstation setup.
Boot from a Live CD, make tgz's of the filesystems.
Go to new machine, make filesystems, dump the tgz's in the proper places.
Adjust UUID's in /etc/fstab and /boot/grub/menu.lst
Run grub-install
Reboot!
The nice thing is that DHCP assigns the new machine an unique name, and users have /home mounted on the server. Graphics configuration aren't a worry either, since recent versions of Xorg are wicked smart in auto-detecting graphic adapters.
So everything works like a snap... with the exception of one small quirk:
In the first boot of the new machine, network startup fails. It turns out the machine thinks there's no such thing as an eth0, but there is an eth1 and it is the machine's onboard ethernet. So I have to go to /etc/sysconfig/network-scripts, rename ifcfg-eth0 to ifcfg-eth1, and edit the DEVICE= line in it. Then I reboot and everything works.
I believe somewhere, in some file, there is information associating eth0 with the MAC of the "Master Mold" machine's eth0. But where?
P.S.: I don't use NetworkManager.
|
On my machine it is
/etc/udev/rules.d/70-persistent-net.rules
This is a Debian squeeze machine, but it is probably similar for other Linux distributions. Mine looks like
# This file was automatically generated by the /lib/udev/write_net_rules
# program, probably run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single line.
# MAC addresses must be written in lowercase.
# Firewire device 00e081000026d042 (ohci1394)
SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:e0:81:00:00:26:d0:42", NAME="eth0"
# PCI device 0x10de:0x0057 (forcedeth)
SUBSYSTEM=="net", DRIVERS=="?*", ATTR{address}=="00:e0:81:70:18:22", NAME="eth1"
Tip: doing
/etc# grep -r eth0 * | less
will give you the answer in a couple of minutes, probably. That is what I did.
| After cloning Fedora 14 install to another machine, onboard NIC is seen as eth1 instead of eth0. Why? |
1,286,483,993,000 |
My motherboard suddenly died, but it was an old machine(running natty). So I put together a new Ubuntu system with a clean 14.04 install (same username) and mounted the old drive (with the full filesystem) as secondary device.
How can I get a full list of software installed on the old machine, from its disk mounted on the new one?
|
Mount the old drive, e.g. under /mnt/old and then do:
dpkg --root=/mnt/old --get-selections | grep -w install | cut -f1
dpkg has facilities built-in to install/list/de-install on a filesystem not based directly under /.
| How can I get a full list of software installed on a non-functioning system, from its disk mounted on a new one? |
1,286,483,993,000 |
I have an Ubuntu Server 10.04 VPS (Virtual Private Server) that hosts my website. I would like to clone this VPS and create a VirtualBox machine from it. How can I do this?
|
I think you want to look at the command VBoxManage convertfromraw or VBoxManage clonehd. I can't remember exactly how to do it, but I found these two guides: http://www.virtualbox.org/wiki/Migrate_Windows and http://www.pendrivelinux.com/boot-a-usb-flash-drive-in-virtualbox/.
Once you have the hard drive converted, you can just create a new Virtual Machine, but use your existing disk image you just created.
| VirtualBox image from running VPS |
1,286,483,993,000 |
I recently bought a new laptop and I would like to migrate to it with as little hassle as possible. I don't want to do a fresh install since I have made various tweaks to my current setup for things like automounting remote drives from my NAS, configuring networking etc. that I would prefer not to have to redo.
My current thinking is that I can just dump the contents of my hard drive to a file, then cat that file onto the new drive. The general idea will be:
On the old computer, cat the drive into a file on an external USB disk and (as root):
# cat /dev/sda > /mnt/externalUsb/sda.img
I then boot into a live system on the new computer, connect the external drive and (as root):
# cat /mnt/externalUsb/sda.img | sudo tee /dev/sda
Shut down the live session, reboot the machine and, I hope, find myself in a working system which is a perfect clone of my old machine.
Or, perhaps more realistically, something like:
Create the partitions I want on the new machine, making sure they are larger than the equivalent ones on my old machine.
On the old computer, cat the partitions into files on an external USB disk (as root):
for i in 5 6; do cat /dev/sda"$i" > /mnt/externalUsb/sda"$i".img; done
On the new machine, after making sure the numbers are the same or modifying the command accordingly:
for i in 5 6; do cat /mnt/externalUsb/sda"$i".img > /dev/sda"$i"; done
Some relevant notes:
The hardware of the old and new machines is relatively similar as I will be moving from a ThinkPad T460P to a ThinkPad P14s Gen 2.
The new machine has a 1TB hard drive but the old one is only 512G.
I am using Arch, dual booted with a Windows 10. I am not particularly bothered about keeping the Windows install.
My current machine's disk setup:
$ sudo parted -l
Model: ATA SAMSUNG MZ7LN512 (scsi)
Disk /dev/sda: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 274MB 273MB fat32 EFI system partition boot, hidden, esp
2 274MB 290MB 16.8MB Microsoft reserved partition msftres
3 290MB 86.4GB 86.1GB ntfs Basic data partition msftdata
5 86.4GB 136GB 50.0GB ext4
6 136GB 437GB 301GB ext4
9 437GB 485GB 47.3GB ntfs msftdata
8 485GB 495GB 10.5GB ext4
7 495GB 511GB 16.1GB linux-swap(v1) swap
4 511GB 512GB 1049MB ntfs Basic data partition hidden, diag
I am expecting the kernel to detect the new/different hardware the first time it boots and sort it out for me automatically. Am I missing something obvious here? Any specific problems I might encounter? The new drive is larger, so that shouldn't be a problem, right? I have an ecryptfs-encrypted directory (two of them, actually), am I right in assuming that won't be an issue? Will I need to do anything special to handle the EFI system partition perhaps?
I accepted MC68020's helpful answer, but I ended up taking a different approach: I booted a live system, created the root and /home partitions and then just copied all my files over using rsync as described in the Arch Wiki.
I managed to boot the "new" system, but it still needs some tweaks, notably for the graphics driver. This isn't an approach to be taken if you don't know your way around Linux and enjoy tinkering. Of course, if you don't enjoy it, it's unlikely you'll be using Arch.
|
The following stands here only for comfort of editing reasons. As it is not worth more than a comment, please feel free to remove it.
ext4: Starting with linux-5.10, ext4 comes with a new, lighter-weight journaling method also known as fast commit.
Benchmarks report 20-200% improvement for local filesystems and 30-75% improvement for NFS workloads.
If running some >=5.10 kernel, no doubt that you just want that.
But since fast commits are activated at filesystem creation time, then if your filesystems were created before the availability of that feature, you do need to recreate them with the fast_commit option explicitly enabled. (see man mke2fs and man ext4)
Since linux-5.10 was launched close to the eve of 2021, if your ext4 filesystems were created before that (running dumpe2fs *yourdevice* | grep created as root will tell you), chances are poor that this feature is enabled.
It might appear more immediate to cat /proc/fs/ext4/*yourdevice*/options to check whether this feature is actually activated or not, though.
nvidia: My bad! So in fact moving from NVIDIA GeForce 940MX 2GB to NVIDIA Quadro T500 4GB then? Using Nvidia proprietary drivers?
If so then just ensure your current driver version is >= to 450.102.04 (since support for T500 was added from that release)
I would anyhow run the nvidia-settings utility after cloning in order to take advantage of new features.
And BTW, 4GB? Hmmm… might find this a little overkill? Might want to reallocate to other purposes.
Wifi: (Intel Wi-Fi 6 AX210 in particular, I presume)
You should be aware that some distros have reported troubles with iwlwifi under rather recent kernel versions. See in particular Red-Hat bugzilla.
From what I understand, a patch should have been committed in 5.15 times (and almost certainly backported to LTSes) but prudence commands you crosscheck first.
Apart from this, you get the nasty troubles with wifi adaptors. They always need some firmware blob to run.
So even if your kernel includes and loads the appropriate driver, it might not be able to find the firmware: depending on your distro, the manufacturer and copyright restrictions, you may be required to install some particular package or even download the blob from the manufacturer's repos.
| Is this a safe way to migrate my data to a new computer? |
1,286,483,993,000 |
I'm a KDE user, and When I switch Linux distributions, I don't want to copy my entire home folder, since most of the configuration files there will be created automatically when I install\run programs on the new installation.
There are, however, some applications that I've put a lot of work into their configuration, and I like to hand-pick their configuration files that I want to migrate to the new installation.
Now, I'm having trouble doing that with the KDE configuration itself - I can't find my way around the .kde and .kde4. I don't want to migrate the entire folder - but I need some specific settings from there.
So, the question is - what do I need to do to migrate the following KDE settings:
File associations
Activities
That's it. I need a way to migrate those - be it copying specific files, copying parts of files, or using a tool.
Thanks in advance!
|
All the file associations are stored in
~/.local/share/applications/mimeapps.list
For the KDE acitivies have a look at these files
activitymanagerrc
plasma-desktop-appletsrc
| Migrating the KDE configuration files |
1,286,483,993,000 |
What packages do I need to install to migrate my CentOS 5.7 server to identical version of RHEL. I have RHN subscription but I don't want to create a fresh install and move files.
|
Despite the fact that both distributions are built from mostly the same sources the installed binaries are not the same.
For copyright reasons the CentOS team (just like the Unbreakable Linux team at Oracle) has to remove certain Red Hat owned material (logos etc.) and recompile.
So even if you install the necessary packages to make the system "look" like a RHEL system (package redhat-release is an obvious one), I doubt Red Hat would consider it a supported system.
It may not be the answer you're looking for but I suggest you do reinstall in order to avoid support issues at a time you need it.
You should be able to start from the kickstart file created by anaconda (/root/anaconda-ks.cfg) to quickly set up a new system identical to the existing system.
| How to migrate from CentOS to RHEL? |
1,286,483,993,000 |
On debianized linux distros there is dpkg --get-selections, dpkg --set-selections, dpkg -C to respectively list installed packages, select a list of packages for installations, and list packages that are in a partially installed or broken state. I am wondering if something like this exists for FreeBSD (ports, not packages). I can get the list from pkg_info, but is there a simple way to apply it without cut, for port in list, cd, make install?
|
/usr/ports/ports-mgmt/portmaster man page has example how to do bulk port re-install.
| Is there a port list migration command set for FreeBSD? |
1,286,483,993,000 |
My motherboard suddenly died, but it was an old machine(running natty). So I put together a new Ubuntu system with a clean 14.04 install (same username) and mounted the old drive (with the full filesystem) as secondary device.
How can I migrate mysql setup and data, and apache setup and data from the old drive to the new machine?
Any assistance will be much appreciated.
|
The Ubuntu 11.04 to 14.04 upgrade means your kernel and libraries, and therefore your binaries, will be different, including Apache and MySQL server. So, you have to install the binaries again; you can only copy text configuration files and hope that the configuration parameters still work.
For Apache, install Apache first, then:
sudo service apache2 stop
Copy all of the text files under the /etc/apache2 directory on the old drive to /etc/apache2 on the new drive, preserving relative paths and replacing all duplicates:
sudo cp -rT /path/to/old/etc/apache2 /etc/apache2
Manually set the symbolic links between mods-available and mods-enabled in the new /etc/apache2 to match those under the old /etc/apache2. If there are modules that are renamed or use different settings and Apache complains, edit your apache2.conf, as needed. Then,
sudo service apache2 start
For MySQL, install MySQL server first. @vembutech's answer is good, but he left out that if you had any custom MySQL settings, e.g. for memory, thread allocation, and so forth, you do not have them in the new MySQL install. So:
Save a copy of the new /etc/mysql/my.cnf:
sudo cp /etc/mysql/my.cnf /etc/mysql/my.cnf.orig
Copy /etc/mysql/my.cnf from the old disk to /etc/mysql/my.cnf on the new disk:
sudo cp /path/to/old/etc/mysql/my.cnf /etc/mysql/my.cnf
Missing parameters needed by the newer MySQL can be gotten from the saved copy of my.cnf.
| How to migrate MySQL and Apache data & settings, if a machine is not working but I can mount the old drive on a new one? |
1,383,839,643,000 |
I am migrating from CentOS 5.5 to 6.4 and have a custom installation that installs specific RPMs. The problem I am running into is that some RPMs from CentOS 5.5 are no longer in the 6.4 distribution, so my make fails because it can't find an RPM in the source distribution.
Is there a good way to determine what RPMs I might need from 6.4 to replace the missing RPMs that were in 5.5? Trying to figure out a good way to do this so I don't miss anything.
So far, I've tried looking at the files and information for the RPMs in 5.5 and searching for similar information in the RPMs for 6.4. This seems like a bad idea and hasn't really helped me out. I would think that there is a better way to do this?
Here is the list of CentOS 5.5 RPMs that are missing in 6.4:
SysVinit-2.86-15.el5.i386.rpm
anacron-2.3-45.el5.centos.i386.rpm
apmd-3.2.2-5.i386.rpm
aspell-0.60.3-7.1.i386.rpm
beecrypt-4.1.2-10.1.1.i386.rpm
bluez-gnome-0.5-5.fc6.i386.rpm
bluez-utils-3.7-2.2.el5.centos.i386.rpm
cadaver-0.22.3-4.el5.i386.rpm
centos-release-notes-5.5-0.i386.rpm
chkfontpath-1.10.1-1.1.i386.rpm
dhcdbd-2.2-2.el5.i386.rpm
dhcpv6-client-1.0.10-18.el5.i386.rpm
dmalloc-5.3.0-3.i386.rpm
fbset-2.1-22.i386.rpm
firstboot-tui-1.4.27.8-1.el5.centos.i386.rpm
gnupg-1.4.5-14.i386.rpm
htmlview-4.0.0-2.el5.noarch.rpm
ibmasm-3.0-9.i386.rpm
ifd-egate-0.05-15.i386.rpm
ipsec-tools-0.6.5-13.el5_3.1.i386.rpm
irda-utils-0.9.17-2.fc6.i386.rpm
kudzu-1.2.57.1.24-1.el5.centos.i386.rpm
libFS-1.0.0-3.1.i386.rpm
libgssapi-0.10-2.i386.rpm
libjpeg-6b-37.i386.rpm
libtermcap-2.0.8-46.1.i386.rpm
libvolume_id-095-14.21.el5.i386.rpm
mkinitrd-5.1.19.6-61.i386.rpm
mktemp-1.5-23.2.2.i386.rpm
nash-5.1.19.6-61.i386.rpm
nss_ldap-253-25.el5.i386.rpm
oddjob-libs-0.27-9.el5.i386.rpm
pam_ccreds-3-5.i386.rpm
pam_smb-1.1.7-7.2.1.i386.rpm
pkinit-nss-0.7.6-1.el5.i386.rpm
portmap-4.0-65.2.2.1.i386.rpm
python-elementtree-1.2.6-5.i386.rpm
python-sqlite-1.1.7-1.2.1.i386.rpm
rhpl-0.194.1-1.i386.rpm
rng-utils-2.0-1.14.1.fc6.i386.rpm
setarch-2.0-1.1.i386.rpm
slrn-0.9.8.1pl1-1.2.2.i386.rpm
specspo-13-1.el5.centos.noarch.rpm
sysklogd-1.4.1-46.el5.i386.rpm
system-config-securitylevel-tui-1.6.29.1-5.el5.i386.rpm
termcap-5.5-1.20060701.1.noarch.rpm
util-linux-2.13-0.52.el5_4.1.i386.rpm
vixie-cron-4.1-77.el5_4.1.i386.rpm
xorg-x11-filesystem-7.1-2.fc6.noarch.rpm
xorg-x11-xfs-1.0.2-4.i386.rpm
yum-updatesd-0.9-2.el5.noarch.rpm
|
Remove the version numbers and you'll typically have to go through these lists by hand; I've never seen an automatic way to do it.
My usual tactic is to take that list minus the version numbers, get the list of packages from the next version's repo, and side-by-side diff them or use meld.
RPM Tools you'll likely use in this endeavor:
repoquery
repotrack
rpm
yum
I've written up a number of posts on the site that detail the use of repoquery. Look to those for potential ways to use it. There is also a good tutorial on its usage, titled: Centos 6/RHEL using Repoquery and Yum commands.
Cleaning up the package list
You can use this command to truncate your list of packages so they don't include the version numbers:
$ sed 's/-[0-9]\+.*//' file.txt
Example
Sample file.
$ head -5 file.txt
SysVinit-2.86-15.el5.i386.rpm
anacron-2.3-45.el5.centos.i386.rpm
apmd-3.2.2-5.i386.rpm
aspell-0.60.3-7.1.i386.rpm
beecrypt-4.1.2-10.1.1.i386.rpm
Sample run.
$ sed 's/-[0-9]\+.*//' file.txt | head -5
SysVinit
anacron
apmd
aspell
beecrypt
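Once both lists are reduced to bare names and sorted, comm can report which old packages have no same-name counterpart in the new repo. A sketch with two small hypothetical lists:

```shell
# Two version-stripped, sorted package lists (contents are made up)
old=$(mktemp); new=$(mktemp)
printf 'anacron\nmkinitrd\nsysklogd\n' > "$old"
printf 'anacron\ndracut\nrsyslog\n' > "$new"

# comm needs sorted input; -23 keeps lines that appear only in the
# first file, i.e. old packages with no same-name match in the new repo
missing=$(comm -23 "$old" "$new")
echo "$missing"
```

In practice the inputs would be the sed output above and a name-only listing from repoquery, both piped through sort -u first.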
| CentOS installation rpms 5.5 vs. 6.4 |
1,383,839,643,000 |
All the posts I've found are about moving to new machines or migrating an entire OS from one HDD to another. I don't really know what the hell I'm doing, but I'm apparently the Linux expert.
We currently have a 1TB server with a RAID configuration. I'm not sure what type of RAID, yet. We will need to expand our HDD space in the next few months, and doubling the current size will buy us a few years at our current rate of data use.
The existing RAID is currently at /dev/md1 and we keep all of the files in question at /srv/Data.
What we would like to do is migrate this Data directory to a new 2TB RAID 1 setup and ideally keep the same mount point so that we don't have to change anything in our database. That is, we would like to move the existing data to this new set of HDDs but still call it /srv/Data so that we can start writing to it immediately without changing much on our software side.
Would this be as simple as mounting the new RAID device (presumably /dev/md2) as /srv/Data and copying the existing Data over?
|
Mount /dev/md2 first as something like /srv/DataNew, then run a first round of copying as root (I'd actually suggest rsync; IMHO it's better suited to this kind of job):
rsync -a --delete /srv/Data/ /srv/DataNew
Optionally you can re-run the command - the second execution should be faster (rsync skips files that are already copied and up to date) and will give you a rough duration for estimating how long you'll need to bring down the apps using the partition for the actual partition switch - see below.
Then temporarily stop and disable your applications using the /srv/Data partition (maybe even reboot to ensure no transient writes that could lead to data loss, making sure the apps don't restart at boot) and repeatedly re-run the same rsync command above as root to update the new partition with any changes that may have happened on the old partition since the previous rsync run.
It may take a few re-runs until the rsync command shows no more updates - which means the two partitions are in sync. Each such re-run should take roughly as long as the second execution mentioned above, if you opted for it.
Then unmount /srv/Data and /srv/DataNew and modify the /etc/fstab file to mount /dev/md1 under /srv/DataOld and /dev/md2 under /srv/Data.
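The resulting /etc/fstab lines might look like the following (the filesystem type and mount options are assumptions; LABEL= or UUID= identifiers work just as well as device names):

```
/dev/md1  /srv/DataOld  ext4  defaults  0  2
/dev/md2  /srv/Data     ext4  defaults  0  2
```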
Then mount /srv/Data (and /srv/DataOld if you want to run yet another sync check), or optionally reboot instead, and the system should come up with the new partition in place.
If you want, you can run another rsync check, this time with a slightly modified command for the new mount points (it should show no updates if there were no transient accesses):
rsync -a --delete /srv/DataOld/ /srv/Data
Now you can re-enable and re-start your apps which should not notice any difference (except for the extra disk space).
Finally, if you no longer need it, unmount /srv/DataOld and remove its entry from /etc/fstab - you're done.
If you haven't used rsync before, you may want to dry-run the rsync portions with some temporary/test dirs, using as source a smaller dir which doesn't typically change (to avoid uncontrolled transients, which can't be ruled out unless you stop the apps), just to get used to its operation. You can use the new partition for this once it's mounted, since it's empty:
mkdir /srv/DataNew/rsync_test
rsync -a --delete /srv/Data/some_smaller_dir /srv/DataNew/rsync_test
simulate a transient deleting a file in /srv/Data/some_smaller_dir:
touch /srv/DataNew/rsync_test/deleted_file_equivalent
ls -la /srv/DataNew/rsync_test/deleted_file_equivalent
next rsync should find and delete that file in the new dir (and maybe other transients?):
rsync -a --delete /srv/Data/some_smaller_dir /srv/DataNew/rsync_test
ls -la /srv/DataNew/rsync_test/deleted_file_equivalent
next rsync should not find the deleted_file_equivalent anymore (repeat if other uncontrolled transients show up):
rsync -a --delete /srv/Data/some_smaller_dir /srv/DataNew/rsync_test
Finally remove the test dirs:
rm -rf /srv/DataNew/rsync_test
| How do I migrate a RAID system to a larger set of HDDS? |
1,383,839,643,000 |
Is there a tool that handles generic migration of config? For example if I have httpd, postfix, MySQL and users and groups data, is there a tool that can extract the config data for each service so that I can apply it on another system.
Generally speaking is there a tool (or strategy) that handles this for all services?
|
One of the popular accepted solutions to this problem is using a configuration management system. Some examples are Puppet, Chef, and SaltStack.
These systems allow you to define exactly what a server (or in some cases an application stack) looks like. Using these tools you define a server's state, including its configuration.
Here is an example of a very basic apache configuration using Puppet with the puppetlabs/apache module:
class { 'apache': }
apache::vhost { 'first.example.com':
port => '80',
docroot => '/var/www/first',
}
This simple bit of puppet code ensures the following:
Apache is installed on the server
The webserver is running and listening on port 80
Contains a vhost with the docroot /var/www/first
You can then apply this manifest to many servers in a cluster. There are many reasons for the movement towards this type of configuration instead of manually copying configuration files: it treats your server configuration and infrastructure in a manner very similar to how you treat code.
The configs for these systems are often stored in version control. This allows you to easily view changes, rollback, etc
Your server states can be unit tested and acceptance tested
Shared modules work like code libraries - you do not need to reinvent the wheel
Your servers are provisioned in a way which is repeatable (and more reliable)
Many consider use of these systems a big part of the devops movement.
| Tool for migration of service configuration |
1,383,839,643,000 |
I've got a linux box running Fedora 19 that I want to move to CentOS 6.4. Rather than trying to do something fancy with the current disk (which has also accumulated a lot of sludge over the years), I'm going to get a new disk, put CentOS on that, and then move the to-be-preserved bits of stuff from the old disk to the new one.
I haven't done this yet, but I presume it should be semi-straightforward -- do the CentOS install on the new disk, mount the old disk on /olddisk or somesuch, and start copying. However, I'm not sure how to handle getting the machine to recognize the new empty disk as the target of the CentOS install (I suppose I can just pull the old disk during the installation), making sure it's remembered as the intended boot disk once the install has happened, and tweaking /etc/fstab (right?) to set up the old disk on the desired mount point. (Both disks are, or will be, SATA.) I could probably hack it together without losing too much hair or doing too much damage, but could anyone offer some advice that would get/keep me on the right track? Thanks!
|
I have always done it by pulling the disk I want to keep during the install. That way there is no chance of me picking the wrong disk while setting up partitions.
Move the disks around so the new OS drive is on the first port and is by default the boot drive. Leave the other disk out during the install, then put it back in as the second disk when you are done.
This is the most foolproof way. Of course if you don't have physical access to the server it can be much more difficult than just choosing the right disks during install.
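For the fstab part of the question: mounting the old disk afterwards is one added line per partition. A sketch, where the mount point, filesystem type and UUID placeholder are all examples (blkid reports the real UUID):

```
# old disk's data partition on the desired mount point
UUID=<uuid-from-blkid>  /olddisk  ext4  defaults  0  2
```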
| Migrating from one linux install to another: How to keep the second disk around? |
1,383,839,643,000 |
So my trusty Linux box died after 10 years of faithful service. The memorial will be Friday afternoon.
Anyway, I am moving to a Mac (lesser of two evils - a Win 10 machine was the other option), and I don't think I want to run Linux on it directly - I booted with a live cd of Mint 18 and the video resolution was wonky and no extra drivers to install. So OS X it is.
My issue is that I've got LOTS of settings, etc. for various apps (all cross platform - firefox, thunderbird, geany, filezilla, virtualbox, netbeans, android studio, etc) that I'd like to move over as well. I've pulled the drive my /home directory was on, so I have all of the files.
Some things are trivial - my ssh private keys and config file for connecting to various hosts still go in ~/.ssh
But most of the other apps I can't find where to put my data/settings/preferences/profile files. Where do these go on OS X? I opened terminal and looked in my home directory, there is no ~/.mozilla or ~/.filezilla etc directory.
|
In OS X, there is a hidden directory ~/Library, whose subdirectories contain that kind of user settings. This is for historical reasons: it is similar to what pre-OS X versions of Mac OS used to do.
Here is a MozillaZine KB article that has the exact paths for Firefox on Windows, Linux and Mac:
http://kb.mozillazine.org/Profile_folder_-_Firefox
| Migrate from Linux Mint to OS X [closed] |
1,383,839,643,000 |
I want to replace my system drive with a larger one and would prefer not to reinstall the main OS (OpenSUSE) or, even though seldom used, Windows. Installing Windows and Suse would take seemingly forever and I'd rather not have to rebuild my preferences and custom settings for the Suse install.
The other bootable systems are ephemeral for testing, so their loss would be insignificant. My system uses an MBR disk in BIOS boot with GRUB2 in the MBR of the first disk for the boot loader. All the GRUB stanzas use LABEL= rather than UUID=, so that won't be a problem. Likewise, my /etc/fstab uses LABEL=, so changed or duplicated UUIDs are not an issue. I believe I can use Clonezilla Live to migrate the Windows partition. I'm keeping that partition the same size and in the same place. If I understand Clonezilla correctly, the UUID of that clone will be the same, so Windows shouldn't know anything has happened, or at least not enough to care.
My problems, or worries are that:
With GRUB installed on the MBR I cannot, I think, directly clone it as the partitions are changing
The Suse install uses BTRFS and I want to make its root partition larger
I also want to increase the size of the Suse /boot partition (had a few rare events where more space was needed on /boot)
I haven't found a Live USB distro which has partclone installed and drivers for BTRFS filesystems
I only want to take one shot at it
My understanding of partclone is that the system has to be off-line for it to clone a BTRFS system. I know Clonezilla Live has to be as it's a bootable system anyway. Data loss is not an issue since all data is on other disks uninvolved in the OS migration.
I have found a tool, btrfs-clone which I am hoping will simplify the process, and increase my chance for success. I've already tested it, and it will clone the live system. Keeping the system fairly quiet in the process is, of course a good idea. Again, the UUID of the partitions is immaterial to everything except possibly Windows, which only sees its own partition anyway.
Short version of the question is what steps need to be done to the new disk, and partitions, to clone the existing system onto the new disk and have it bootable when I swap the cables?
As an extra note, there is an existing question which seems to be just about right. My concern is that using rsync on the BTRFS system will not get all the snapshots and the system won't be "as it was" on the new system. It is possible that merely using btrfs-clone in place of rsync resolves all issues. As it is my main (only) desktop system, I'd like to get it right in one shot.
|
Since I needed to get this done I had to use a process which I was reasonably sure would work, even if not the ideal I had hoped for. Until someone has a better answer, for later users, I'll answer with what I did do, successfully.
The initial problem statement included the information that BIOS boot was in use and the disk was partitioned in the DOS, or MBR, format. As such, the possible complications, if any, using UEFI boot or GPT partitioning are not considered here.
Note: almost every important process requires root privileges. This can be done either by logging in as root (not an option on some distributions), switching to the root user with sudo su, or prepending every command with sudo. The presumption here is that everything is done as root, using one of the first two options. The commands are all written as if executed from root's account, not the user's account.
Needed for this are:
The existing system in a running state
A second disk, HDD or SSD, accessible to the system (internally or by USB)
A bootable version, CD or USB, of Clonezilla Live
A bootable disc or USB drive with either a live distro or the ability to enter a recovery mode.
The last item is best if it's a version of the system being cloned, with either the same or a very close kernel. Whichever tool is chosen, the ability to mount and chroot into the new system is needed. No other tools are needed from the disc or USB drive.
1. Verify the disk names involved.
Run mount | grep '/dev/sd'. This will show all currently mounted partitions. In the case of a BTRFS file system, as given in the question, there are likely to be several different mount points listed for the partition which shows on / type btrfs. All such partitions can be ignored, as the process of cloning the root will clone them as well. If other partitions are mounted from a different physical disk, and that disk will remain after the migration, they can be skipped as well.
Also needed, and harder to find, is the partition on which Windows is installed. If, as is common, Windows was installed first, the Windows partition is very likely to be /dev/sda1, especially if the Linux system is also on a partition of /dev/sda.
Common partitions to look for are /home and /boot.
2. Create the needed partitions on the new disk
If the new disk is able to be connected by USB that makes things a bit easier.
With the disk not connected run the command lsblk
Connect the disk and run lsblk a second time
The device in the second listing but not in the first is the device to use
Caution: if you reboot at any time in the process and have more than one USB disk attached, it is quite possible for the device name to change. Check each time to verify the correct device name before proceeding, or you will probably lose data somewhere.
The tool to use is a personal choice, and depends on your preferences. Some options typically available in most systems could include parted, gparted, cfdisk, fdisk, and gdisk.
After the partitions are created, run partprobe to be sure the system "knows" about the new partitions. Some tools will notify the kernel that it needs to; others won't. Doing it yourself ensures that it gets done.
Verify that the partitions are made, and seen by the kernel with the lsblk command again.
3. Create the Linux file systems on new partitions
The BTRFS root partition can be made with the mkfs.btrfs command, and mkfs.ext4 could be used to make the /boot and /home partitions. For convenience sake, if the new disk has a swap partition, the new swap partition can be prepared with the mkswap command.
4. Clean up the BTRFS root partition to reduce the time spent on the clone operation
List the snapshots of the system, with snapper list and remove as many as possible, with snapper delete <number>. Rollback options will be quite limited as a result. If the system is being cloned it's probably in a stable state and rollbacks to prior conditions should not be needed.
Run the balance operation on the BTRFS filesystem. A full balance can be very time consuming! Limiting it to only balancing chunks which are utilized less than some percentage can accomplish a lot while spending far less time waiting. Depending on how dirty the system is, 50% might be the limit of your patience. My system is balanced often, so I can use a higher percentage (90%) and spend a tolerable amount of time. The -dusage option limits the balance operation to data chunks utilized less than the given percentage.
For my system the command was btrfs balance start -dusage=90 /
5. Create mount points and mount the new partitions for cloning
The root partition is obvious in all cases. Also needed might be the /home and /boot partitions.
6. Clone the non-BTRFS filesystems
The rsync command is a better choice than a simple cp as it can be used to preserve the ownerships and permissions of the copied files. It's also possible to have rsync restart the process if it gets interrupted.
An example would be rsync --archive -hh --hard-links --partial --info=stats1 --info=progress2 --modify-window=1 --one-file-system /boot/ /boot2/
7. Clone the root (BTRFS) partition
The btrfs-clone program was the tool I decided to trust. The documentation suggests that the best results (in terms of space used) are from the "generation" strategy, so that's the one I choose.
The command, simple as it is, to do that would be btrfs-clone --strategy generation / /mnt
Expect the operation to take a while.
8. Modify the "new" system
There may be changes you need to make to the /etc/fstab file to accommodate changes to the partitions you want mounted in the new system. The entire system does not have to remain exactly the same. Label names, UUIDs and even device names can be changed during the process of migration.
In the case of openSUSE there is another possible cause of trouble. The system setup includes a file used during GRUB configuration to hard code some kernel parameters in the grub.cfg menu. The file is /etc/default/grub and may contain a line similar to:
GRUB_CMDLINE_LINUX_DEFAULT="splash=silent resume=/dev/disk/by-label/Linux_swap quiet mitigations=auto"
Of concern is the resume=... section. This points to the swap partition used for suspend-to-disk operations. If the line is not commented out, contains resume=..., and the new disk's swap has a different label, UUID or device name than the current one, it has to be changed to reflect the new value. This will become important in a later step.
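If the label does change, the edit is mechanical. A sketch performed on a copy of the file rather than /etc/default/grub itself, with a hypothetical new label name:

```shell
f=$(mktemp)   # stand-in for /etc/default/grub
echo 'GRUB_CMDLINE_LINUX_DEFAULT="splash=silent resume=/dev/disk/by-label/Linux_swap quiet mitigations=auto"' > "$f"

# Point resume= at the new swap partition's label (GNU sed)
sed -i 's|resume=[^ "]*|resume=/dev/disk/by-label/New_swap|' "$f"
cat "$f"
```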
9. Use Clonezilla to clone the Windows partition.
I'm not an expert on Clonezilla. The approach is to use Clonezilla Live on a bootable CD. Other than the language/keyboard options, the steps I followed were:
device-device work directly from a disk or partition to a disk or partition
Expert Expert mode: choose your own options
part_to_local_part local_partition_to_local_partition
Select the source partition
Select the target partition
Options menu changes
add Reinstall grub on target hard disk
drop Automatically adjust geometry ...
drop sfdisk uses CHS ...
drop Resize filesystem ...
add No GUI ...
add Remove NTFS volume dirty flag ...
-sfck Skip checking/repairing source file system
-p choose Choose reboot/shutdown/etc when everything is finished
After that completes, the Windows partition has been cloned and GRUB may have been added to the MBR.
10. Replace the old disk with the new disk
This is the point to get physical: uninstall the old disk and install the new disk. If both disks stay installed or connected after this, the system will probably get confused. As an alternative, if both disks are to be used with the new one as the boot disk, the old, no-longer-needed partitions should be changed or reformatted to give them new UUIDs and labels.
11. Rebuild the boot process
From my experiment, the boot loader added to the MBR by Clonezilla, if any, is incomplete. The initrd of the old system may not be compatible with the new system, especially on openSUSE due to the /etc/default/grub setting above. Lastly, the grub.cfg file might have elements which are not compatible with the new system. Correcting all three is a straightforward process if the preceding steps have been accomplished successfully.
Boot the live disc and, if needed, select the recovery option. If using a live CD such as Ubuntu, it's possible to go all the way into the live system and then open a terminal. Using the terminal on a live system will also require sudo su to gain root privileges; recovery mode on most systems is in root already.
The following presumes a disk of /dev/sdx with the following partitions:
/dev/sdx1 Windows
/dev/sdx2 /boot
/dev/sdx3 /
/dev/sdx4 swap
The partition numbers, and even the device name, could be different on your system, and the following steps need to be modified to match the situation
The steps
Make a directory to work in, commonly /mnt, which may already exist
Mount the new system's root partition:
mount /dev/sdx3 /mnt
Mount the new system's boot partition:
mount /dev/sdx2 /mnt/boot
Connect the existing system's processes to the new one
for item in proc sys dev run; do
    mount --rbind /$item /mnt/$item
done
Switch into the mounted system
chroot /mnt
Mount any other partitions the new system normally mounts
mount -a
Create the new initrd
mkinitrd
Install GRUB to the MBR
grub2-install /dev/sdx
Generate new grub.cfg
grub2-mkconfig -o /boot/grub2/grub.cfg
Exit chroot and reboot
exit
reboot
Remove the disc or USB drive with the live or recovery system on it.
Check that the new disk can boot into both the Windows and Linux systems.
| Migrate multi-boot system to larger disk without reinstalling |
1,383,839,643,000 |
I have Ubuntu 18.04 LTS installed on a 1000 GB HDD /dev/sda (93% free space) on my laptop:
/dev/sda1 -> 512M - vfat - EFI System Partition
/dev/sda2 -> 732M - ext4 - Linux File System
/dev/sda3 -> 930.3G - crypto_LUKS - Linux File System (empty)
I would like to use a 120 GB SSD (not yet installed) for the OS on this laptop now. The old HDD should be simply used as an additional partition for file storage afterwards; no dual boot required. Instead of having to re-install Ubuntu again on the SSD, I am looking for a way to clone my existing system installation from HDD to the new SSD.
What would be the best way to achieve this?
|
option a)
install fresh and copy over your personal files / restore installed packages
option b)
save your disk encryption key from the old install
boot from CD/USB (I would use ubuntu install media, because there you can install all missing tools)
clone partition 1 and 2
create partition 3 anew with cryptsetup/zuluCrypt using the old key
mount both encrypted partitions
clone files from sda3 to the new disk using rsync.
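Cloning partitions 1 and 2 block-for-block is typically done with dd. The idea, demonstrated here on ordinary files; real usage would name the partition devices (e.g. if=/dev/sda1 of=/dev/sdb1, with the new SSD as the hypothetical sdb):

```shell
src=$(mktemp); dst=$(mktemp)   # stand-ins for the source and target partitions
printf 'partition contents' > "$src"

# Byte-for-byte copy; on real devices a larger bs and status=progress help
dd if="$src" of="$dst" bs=4k 2>/dev/null

cmp -s "$src" "$dst" && echo 'identical'
```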
| How to clone Ubuntu 18.04 LTS from HDD to SSD? |
1,383,839,643,000 |
I have a mini home server running Debian 8.7 that during the initial installation had a 1tb hard drive mounted to / and a 60gb ssd mounted to /home. I would now like to remove the ssd for use in another project but am at a loss as to how exactly to do so. I would like to have my home folder, which has a bit of stuff from one account in it, essentially migrated over to the 1tb drive.
My fstab currently reads.
# / was on /dev/sdb1 during installation
UUID=1159719b-3f5b-482a-99c1-4dd05e9c1cc7 / ext4 errors=remount-ro 0 1
# /home was on /dev/sda1 during installation
UUID=e39ea57f-7d07-4e53-8f2a-1571b23d06fe /home ext4 defaults 0 2
# swap was on /dev/sdb5 during installation
UUID=2ff79462-458d-429f-9b56-8bb6540ffa32 none swap sw 0 0
sda is the 60gb drive and sdb is the 1tb.
Is this easy to do, or would I be better off backing up and setting everything up again?
|
You could (change <editor> to your text editor of choice):
sudo cp -Rp /home /home-copy
sudo <editor> /etc/fstab
In the editor, change:
UUID=e39ea57f-7d07-4e53-8f2a-1571b23d06fe /home ext4 defaults
To
# UUID=e39ea57f-7d07-4e53-8f2a-1571b23d06fe /home ext4 defaults
Then:
sudo mv /home /home-old
sudo mv /home-copy /home
sudo shutdown -P now
Remove the drive and reboot.
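Before pulling the drive, it's worth confirming the copy really matches. A sketch of the check using throwaway directories in place of /home-old and /home:

```shell
a=$(mktemp -d); b=$(mktemp -d)   # stand-ins for /home-old and /home
mkdir -p "$a/user"; echo 'data' > "$a/user/notes"
cp -Rp "$a/." "$b/"

# diff -rq prints nothing and exits 0 when the trees are identical
diff -rq "$a" "$b" && echo 'copies match'
```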
| Removing hard drive mounted to /home |
1,383,839,643,000 |
I'm using Linux Mint 17.3 and have created a new smaller disk with Mint 18. Now my plan is to mount the old partition from /mountpoint/oldroot/home/ into /home on the new system. So I create all 3 users with the same user names and passwords as on the old system and then edit /etc/fstab, right?
Questions:
I need to make sure that the numerical uid and gid match, how do I do that?
Is there any other thing to watch out for?
Ideally, I'd like to avoid running chown on the old home, because I'd like to use the two systems in parallel until I'm confident the transition was successful. But I'm a bit worried that I missed something.
|
I ended up adjusting the UID and GID of the new install to those of the old installation using usermod -u <old-uid> <login> and groupmod -g <old-gid> <login>, and made sure that the home directories are named the same on the new system as on the old. To switch to the new home directories, I edited fstab to mount them in /home, then renamed the default user directories (mv /home/login /home/login_old) and made empty directories (mkdir /home/login) as the mount points used in the fstab file. After that, I immediately rebooted.
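To check whether the numeric ids match, id reports a user's uid/gid and stat reports the numeric owner of existing files. A sketch (the usermod/groupmod lines are shown commented out because they are the destructive part, and the login name is a placeholder):

```shell
# Numeric uid and gid of the current user
id -u
id -g

# Numeric owner:group of a file or directory, e.g. the old home's contents
stat -c '%u:%g' "$HOME"

# If they differ, align the new system's ids to the old ones:
# usermod -u <old-uid> login
# groupmod -g <old-gid> login
```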
It worked fine and without errors, as Mint 17.3 and 18 were sufficiently similar.
I wouldn't recommend this in general for migrating from one distro to another or if the desktop environment changes, because the old settings in a user's home directory might cause problems.
| Remounting /home from other partition on new install without copying |
1,383,839,643,000 |
I have a laptop with Linux Mint Rosa (17.3) installed on it. It's the 32-bit version, as I didn't even consider the possibility this several year-old laptop could have a 64-bit processor. As it turns out, it does, and I'm considering making the swap to 64-bit.
I have spent quite some time fine-tuning my settings, however, and I was wondering if there is anything I can do to save (i.e. migrate) at least some of my settings to the new system. At the very least, I'd like to save the layout of my panels. Is it possible to save some folders/files on a USB stick and then, after installing the new system, copy them back? Or is it apples and oranges?
|
It's certainly possible to backup & restore the settings you have made, but how to do it depends on which settings you mean.
Generally I'd divide the settings into two categories:
User Settings
These are the settings like the mentioned layout of my panels, they are specific to your user and should usually be stored in /home/<username>.
If you back up the content of /home/<username> and then restore it on the new installation, all the user settings should be restored as well. It's important, though, that the software versions on the new system are not older than the software versions on the old system; otherwise they might not be able to understand your configurations due to their format being too new.
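A tar archive is a convenient vehicle for that backup, since it preserves permissions and dotfiles. A sketch with a throwaway directory standing in for /home/<username>:

```shell
work=$(mktemp -d)
mkdir -p "$work/home/user/.config"
echo 'panel-layout=left' > "$work/home/user/.config/app.conf"

# -C switches into the parent dir so archive paths start at the username
tar -czf "$work/backup.tar.gz" -C "$work/home" user

# The listing shows dotfiles and subdirectories were captured
tar -tzf "$work/backup.tar.gz"
```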
System Settings
These settings are not specific to your user, they apply to the entire system. Different services have their settings in different locations, but most which stick to the LSB standards should be in the /etc directory.
It might result in problems if you copy the entire /etc directory to the new system, especially if the software versions on the new system are not exactly the same as on the old one.
If you need to migrate some system settings it's better to first find the precise file which they are stored in and only copy that specific file, if the version of the software using that file isn't the same on both systems I would also recommend that you check if the format of the configuration file has been changed and if necessary migrate the format.
General advise
In order to save yourself as much hassle as possible, I would recommend that on the new system you first install the exact same software versions as you had on the old one (only 64-bit), to make sure that the configuration files are still compatible. Once that's done and running you can upgrade to the latest versions. That way all of the upgrade procedures are still run on the configuration files.
| Is it possible to migrate (some) settings when changing from 32-bit to 64-bit? |
1,383,839,643,000 |
The server was running Debian Jessie and I tried to migrate it to Debian Stretch.
I want to upgrade our server to the current version of Debian.
The command sudo apt-get update give me errors:
Err http://ftp.us.debian.org wheezy/main Sources
404 Not Found [IP: 64.50.233.100 80]
Err http://ftp.us.debian.org wheezy/contrib Sources
404 Not Found [IP: 64.50.233.100 80]
Err http://ftp.us.debian.org wheezy/non-free Sources
404 Not Found [IP: 64.50.233.100 80]
Err http://ftp.us.debian.org wheezy/main amd64 Packages
404 Not Found [IP: 64.50.233.100 80]
Err http://ftp.us.debian.org wheezy/contrib amd64 Packages
404 Not Found [IP: 64.50.233.100 80]
Err http://ftp.us.debian.org wheezy/non-free amd64 Packages
404 Not Found [IP: 64.50.233.100 80]
After running apt install apt -t stretch there are different errors but still about 'Wheezy':
W: The repository 'http://ftp.us.debian.org/debian wheezy Release' does not have a Release file.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://downloads.opsview.com/opsview-core/latest/apt wheezy InRelease: The following signatures were invalid: 3814C24CF407EC2F9EB07631327C70CD0FC6984B
W: Failed to fetch http://downloads.opsview.com/opsview-core/latest/apt/dists/wheezy/InRelease The following signatures were invalid: 3814C24CF407EC2F9EB07631327C70CD0FC6984B
E: Failed to fetch http://ftp.us.debian.org/debian/dists/wheezy/contrib/source/Sources 404 Not Found [IP: 208.80.154.15 80]
The command sudo apt-get upgrade executed ok.
The command sudo apt-get dist-upgrade also return error about "wheezy":
Err http://ftp.us.debian.org/debian/ wheezy/main sysvinit amd64 2.88dsf-41+deb7u1
404 Not Found [IP: 208.80.154.15 80]
The file /etc/apt/sources.list
#deb http://ftp.nl.debian.org/debian stretch main
deb http://ftp.nl.debian.org/debian stretch main non-free contrib
deb-src http://ftp.nl.debian.org/debian stretch main non-free contrib
deb http://security.debian.org/ stretch/updates main contrib non-free
deb-src http://security.debian.org/ stretch/updates main contrib non-free
# stretch-updates, previously known as 'volatile'
deb http://ftp.nl.debian.org/debian stretch-updates main contrib non-free
deb-src http://ftp.nl.debian.org/debian stretch-updates main contrib non-free
deb http://ppa.launchpad.net/webupd8team/java/ubuntu xenial main
# deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu xenial main
Probably I should change something in config files, but I don't know what and where.
|
There’s probably another configuration file in /etc/apt/sources.list.d referencing Wheezy. You should remove those entries too.
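A quick way to locate every file still pointing at the old release, sketched here on a temporary directory standing in for /etc/apt:

```shell
aptdir=$(mktemp -d)   # stand-in for /etc/apt
mkdir "$aptdir/sources.list.d"
echo 'deb http://ftp.us.debian.org/debian wheezy main' > "$aptdir/sources.list.d/old.list"
echo 'deb http://ftp.nl.debian.org/debian stretch main' > "$aptdir/sources.list"

# -r recurses, -l prints only the names of files that still say "wheezy"
grep -rl wheezy "$aptdir"
```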
The errors are harmless, but I understand why you want to get rid of them!
| Migration to Debian Stretch ask about 'Wheezy' |
1,383,839,643,000 |
I have a Debian Wheezy i386 machine, and I have to migrate all packages to another machine with Wheezy amd64.
I tried to select all packages with dpkg --get-selections, but there are many libraries with *-i386 suffix, and I'm wondering what will happen if I try to install those packages on the other machine, because of its different arch.
Should I remove all the i386 packages from the selections list, or change their suffix to amd64?
|
Packages whose names contain i386 will in all likelihood need manual processing. There might be corresponding packages with amd64 in their name, e.g. kernel packages; those would be appropriate in this case. Others won’t have direct equivalents, e.g. ia32-libs-i386, and will have to be handled appropriately using multi-arch (if they’re still necessary).
Packages listed as :i386 (note the colon) are multiarch-capable packages and should be replaced with their corresponding :amd64 variant in most cases.
| Debian migration from i386 to amd64 arch |
1,383,839,643,000 |
I tar'd the repository folder (including the CVSROOT) and copied it to the new server, then un-tarred it. When I connect to CVS via Eclipse, I can get the sourcecode, however the server is not providing the versions, branches or history.
Is there some extra step I might have missed?
|
I ran the following commands (using user 'cvs' and repository folder 'repository') and now the branches, tags and versions are available.
chown -R cvs repository
chgrp -R cvs repository
chmod -R 775 repository
chmod -R +s repository
| Migrated CVS to new server - but where have the branches and versions gone? |
1,383,839,643,000 |
I am going on a trip (bus) and I will not be bringing my computer. However, I would like the comfort of my Linux operating system the way I like it, with my programs and files.
I was wondering if it is safe to simply take out my SSD from my main computer, put it in an anti-static bag, pad it with the clothes in my suit case, and take it with me.
Where I will be staying there is a desktop that also has Linux on it, booting from Grub, the same boot-loader I use. However, I believe I am on a more recent version of Grub than that computer.
I have heard it is not a good idea to transfer a hard drive from one computer to another with different system specs. I do not know why it was not advised, however. Maybe simply because Windows would think it was an illegal copy, I don't know. What possible problems could I have? Will dual-booting work fine just like it does on my main system? Will different Grub versions be a problem? I don't see a problem with taking this simple approach, and avoiding the hassle of trying to transfer files and programs. Does anyone know why it is not recommended?
Thank you very much for your help.
|
Modern Linux installations are generally fairly portable. Moving a bootable disk to another machine with different specs should generally work, at least to the point where you can log in to the console, and probably all the way through to a working graphical X11 session, as long as the CPU architecture remains the same.
Anything requiring special drivers is likely to work poorly, or in the worst case not at all.
Display hardware (in particular, graphics acceleration), some network cards, and some disks (high-performance RAID devices for example) might require you to install additional kernel drivers, or start the kernel with specific boot parameters. Some motherboards and bus architectures have similar issues. Ideally, there will be a fallback option to get working but slow, or otherwise below-spec performance; in the worst case, some things will not work at all (wifi, odd function keys, touchpads and other post-1995 input devices, basic VGA-level display graphics?)
If you can boot to the bare console, you can get command-line work done. If your work requires a fast high-resolution graphics display and lots of fast disk storage, you are somewhat less likely to get to the point where at least you can limp along.
There are "live CD" images for many distros which will boot on most reasonably standard contemporary hardware. Knoppix paved the way, once upon a time, but these days, most distros have a bootable USB image which allows you to work without touching the host system's hard drive at all. (Maybe put your home directory on a separate stick if you don't have one large enough to host both the OS and your personal files.)
| Is it Safe to Transfer SSD to Another Computer? |
1,383,839,643,000 |
High Level Description
So I currently have a convoluted ZFS setup and want to restructure it, reusing some of the existing hardware.
I know that the recommended way of doing something like this is to back up all data, destroy the old pools, create the new ones and restore the data; the question is how best to do this.
The Setup and Details
My current setup consists of 3×1TB and 3×4TB drives set up in the following way
Two of the 4TB drives are each formatted in one 1TB and one 3TB Partition.
media_pool is a raidz1 pool consisting of 5×1TB disks/partitions (-> 4TB available, 3.26TB used)
three_t_pool is a mirror consisting of the 2 3TB partitions (-> 3TB available, 1.93TB used)
non_redundant is a pool just consisting of one 4TB drive (1.07TB used)
Each of those pools has exactly one encrypted dataset spanning the entire pool.
media_pool/media_encrypted, three_t_pool/backups, and non_redundant/nr_encrypted.
My future setup will retire the 1TB drives and add a 12TB drive as follows:
media_and_backups a raidz1 pool consisting of the 3 4TB drives and a 4TB partition of the 12TB drive ((4-1)×4TB = 12TB available)
non_redundant_foo the remaining 8TB of the big drive.
And encrypted datasets
media_and_backups/media_encrypted, media_and_backups/backups, and non_redundant_foo/nr_encrypted.
(Although keeping the media and backup datasets separate is not a must if this would complicate things)
Now my migration process would probably look like this (With all 7 drives connected to the same machine):
Format the 12TB drive in the desired 4 and 8TB partitions
backup the data of the exiting pools/datasets to the 8TB partition
destroy the three old pools, create the media_and_backups pool and corresponding datasets (media_and_backups/media_encrypted, media_and_backups/backups), restore them from backup
[Optionally even move the nr_encrypted data to media_and_backups, destroy the filesystem on the 8TB partition create a new pool there and restore the nr_encrypted dataset to there]
But like I mentioned above, the part I'm unsure about is the backup and restore process.
One naïve method I can think of is to just use rsync: create a folder for each dataset, sync in all the data, create the pools and datasets I desire and rsync back to those.
One obvious disadvantage would be that the data would be decrypted on backup and re-encrypted on restore.
It also just feels wrong: how would I, for example, recognize in-transfer data corruption?
zfs send and zfs receive seems to be the way to go. But... how would I actually do that?
I read the answer to ZFS send/recv full snapshot but there are some key differences between my setup and theirs:
They directly create the new pool on the new server; I want to first store the backup and then later restore it on the same server.
I have three datasets from three different pools I want to store on the same pool.
Also there's the thing about the available storage: the 8TB (= 7.28TiB) should be enough to hold the 3.26+1.93+1.07=6.26TiB of used data, but of course an 8TB pool would not be able to host 4+3+4TB worth of datasets.
So I think I should zfs send the current datasets into one big file each (or somehow split those into blocks) on the 8TB pool, then when that's done, destroy the old pools, create the new ones, and zfs recv from the files to the pools.
Is this the best way of doing this, or is there a better/recommended way?
Or are there any best practices for using zfs send/zfs receive?
If I do it this way, are there any caveats to watch out for?
|
So I found out that error correction happens on receive, so creating an intermediate file isn't a good idea.
I also found out that my datasets don't have a set volsize (checked with zfs get volsize) so I only have to watch out for the total used size, as opposed to the pool size.
I also came across how to one-way mirror an entire zfs pool to another zfs pool, from which I gathered that I can use the -d (discard) option on zfs receive so that the filesystem path matches the new pool.
I also needed the --raw flag on zfs send because since my datasets are encrypted and I wanted to avoid decrypting and reencrypting them.
Finally, since I was sending multiple datasets originating from different pools all to the same destination pool, I had to watch out to only send the dataset snapshot, not the pool snapshot, otherwise I would get errors about conflicting snapshots on the pool.
In total, my transfer command looked like this:
zfs send -R --raw origin_pool/dataset@transfer_some_timestamp | pv -Wbraft | zfs receive -Fduv destination_pool
zfs send
-R Generate replication stream package to preserve all properties, snapshots, ...
--raw For encrypted datasets, send data exactly as it exists on disk. Avoid reencryption
pv -Wbraft monitor status
zfs receive
-F expand target pool
-d discard origin_pool part of filesystem name and use destination_pool instead
-u Don't mount
-v Print verbose information about the stream and the time required to perform the receive operation. (With hindsight I would probably not have needed both pv and the -v option, but oh well)
So the above was the general takeaway but for completeness below the steps I took
Prerequisite: open up a tmux session so I could easily disconnect without stopping running processes
Format the 12T drive in 4T and 8T, taking care that the 4T partition is exactly the size the other 4T drives will also have (by looking at one of the existing partition tables using fdisk -l /dev/sdX)
Create the pool non_redundant_two on the 8T partition
Creating the recursive snapshot on the first source pool zfs snap -r drei_t_pool@transfer_$(date '+%Y%m%d%H%M%S')
Transferring to the non_redundant pool: zfs send -R --raw drei_t_pool@transfer_20231210194637| pv -Wbraft | zfs receive -Fduv non_redundant_two (note that this was not ideal, see below)
Create a snapshot of the next source pool zfs snap -r media_pool@transfer_$(date '+%Y%m%d%H%M%S')
Try to send it and get an error; inspect snapshots (zfs list -t snapshot) and see that there is a very small snapshot for the pool and a huge snapshot for the dataset. Delete the pool snapshot: zfs destroy non_redundant_two@transfer_20231210194637
I also considered and tried to create a "holder" dataset and sending the entire pool-snapshot to it
zfs create non_redundant_two/media_holder
zfs send -R --raw media_pool@transfer_20231210232518 | pv -Wbraft | zfs receive -Fduv non_redundant_two/media_holder
This also seemed to work, but I cancelled and destroyed this attempt because I had a better idea:
Send just the dataset: zfs send -R --raw media_pool/media_encrypted@transfer_20231210232518 | pv -Wbraft | zfs receive -Fduv non_redundant_two
Do the same with the last pool/dataset zfs snap -r non_redundant@transfer_$(date '+%Y%m%d%H%M%S') and zfs send -R --raw non_redundant/nr_encrypted@transfer_20231211121954 | pv -Wbraft | zfs receive -Fduv non_redundant_two
Test if I can open the backed up datasets and read files (did this for all pools):
zfs load-key non_redundant_two/backups
Inspect...
Unmount and unload key zfs unmount -a, zfs unload-key non_redundant/nr_encrypted
Back up the properties of the pools before I destroy them, e.g. zpool get all drei_t_pool > zfs.get.drei_t_pool
Destroy origin snapshot, then dataset, then pool (I know this could be done with a recursive flag, but I wanted to be super sure of what I was doing) (For all source pools)
zfs destroy drei_t_pool@transfer_20231210194637, zfs destroy drei_t_pool/backups@transfer_20231210194637
zfs destroy drei_t_pool/backups
zpool destroy drei_t_pool
Create new pool I wanted to migrate to media_and_backups
Transfer datasets to there (no need for creating a snapshot again, since the data didn't change)
zfs send -R --raw non_redundant_two/backups@transfer_20231210194637 | pv -Wbraft | zfs receive -Fduv media_and_backups
zfs send -R --raw non_redundant_two/media_encrypted@transfer_20231210232518 | pv -Wbraft | zfs receive -Fduv media_and_backups
Check decrypting and mounting works as expected and destroy datasets from non_redundant_two
Destroy transfer snapshots
One additional curiosity: NFS wasn't working, so I checked zfs get sharenfs pool_name/dataset_name, which showed me the preferences I had set, but the SOURCE column showed remote. So I re-set that property, after which the SOURCE column showed local and NFS was working again.
| ZFS: Best Practices for Restructuring multiple pools with encrypted datasets at once |
1,383,839,643,000 |
I have the following scenario: I have two programs running, one in the background and one in front. The back program is doing some stuff for the front program. Once the back program has done the necessary configuration, it signals that it has finished the backup support for the first program; now the front program needs to be killed and the back program will take control of the first program.
How would I accomplish this scenario in Linux? Any direction or hint is highly appreciated.
|
It is difficult to understand the requirement as specified, so first I will try to show where additional explanation may be needed.
You have tagged this post with "Migration", so I assume these programs already exist and are known to work on some non-Linux architecture. The concepts of inter-process communication and signalling are fairly universal, but stating the architecture and OS the programs presently run on would be very helpful.
I also see the tag "multithreading" although the text only mentions two distinct processes. Does either of the programs actually multithread?
I initially considered that "front" and "back" related to a foreground and background process started by a shell (also tagged). But that is not a true relationship between the processes themselves, only their relationship with their launching mechanism.
I believe you are referring to a "front-end" program that provides a GUI for the entry of parameters, and once these are passed to the "back-end" program it proceeds autonomously. It is also possible that the front-end may need to only be suspended, until the GUI can be used again to provide feedback or results.
A key question is the communication of parameters between the two programs. The methods I am aware of include: shared memory; piped streams (as unnamed pipes, named pipes, or sockets); and shared files. Signals are only suitable for events, not for data flows. The existing mechanism has to be understood so we can proceed.
It is not possible for a program to "take control" of another. There may exist a parent-child relationship (and either your front-end or back-end could be the parent), but the relationship cannot be reversed. The function of the parent is to pre-arrange the communication between parent and child: alternatively, the parent may launch two children (siblings) that can communicate with each other.
Either parent or child can signal the other, and in fact each signal can kill the other process. It is more usual to end the relationship by closing the channels of communication between them, which gives the ending program the capability of a tidy closedown.
If you already have these programs, it would be very helpful to know exactly how they interact at present, so that the closest match of Linux features can be recommended. There is no purpose in my suggesting a model that requires major changes in the existing code, when another approach could be a far better fit.
It is almost certain that Linux can provide the necessary environment for these programs to co-operate. The issue is that we still know nothing about the existing mechanisms that would need to be emulated, or indeed whether you have access to the source of the programs, and what language they are written in.
| How to switch from one process to another process and kill the first process |
1,539,294,126,000 |
Operating System Concepts says
Consider a sequential read of a file on disk using the standard
system calls open(), read(), and write(). Each file access requires
a system call and disk access.
Alternatively, we can use the virtual memory techniques discussed so
far to treat file I/O as routine memory accesses. This approach, known
as
memory mapping a file, allows a part of the virtual address space to be logically associated with the file. As we shall see, this can
lead to significant performance increases. Memory mapping a file is
accomplished by mapping a disk block to a page (or pages) in memory.
Initial access to the file proceeds through ordinary demand paging,
resulting in a page fault. However, a page-sized portion of the file is
read from the file system into a physical page (some systems may opt to
read in more than a page-sized chunk of memory at a time). Subsequent
reads and writes to the file are handled as routine memory accesses.
Manipulating files through memory rather than incurring the overhead of
using the read() and write() system calls simplifies and speeds up file
access and usage.
Could you analyze the performance of memory mapped file?
If I am correct, memory mapping file works as following. It takes a system call to create a memory mapping.
Then when it accesses the mapped memory, page faults happen. Page faults also have overhead.
How does memory mapping a file have significant performance increases over the standard I/O system calls?
|
Memory mapping a file directly avoids copying buffers, which happens with read() and write() calls. Calls to read() and write() include a pointer to a buffer in the process's address space where the data is stored. The kernel has to copy the data to/from those locations. Using mmap() maps the file into the process's address space, so the process can address the file directly and no copies are required.
There is also no system call overhead when accessing a memory mapped file after the initial call, if the file is loaded into memory at the initial mmap(). If a page of the mapped file is not in memory, access will generate a fault and require the kernel to load the page into memory. Reading a large block with read() can be faster than mmap() in such cases, if mmap() would generate a significant number of faults to read the file. (It is possible to advise the kernel in advance with madvise() so that the kernel may load the pages before access.)
For more details, there is related question on Stack Overflow: mmap() vs. reading blocks
| How does memory mapping a file have significant performance increases over the standard I/O system calls? |
1,539,294,126,000 |
I was going through documentation regarding mmap here and tried to implement it using this video.
I have a few questions regarding its implementation.
Does mmap provide a mapping of a file and return a pointer of that location in physical memory or does it return with an address of the mapping table? And does it allocate and lock space for that file too?
Once the file is stored on that location in memory, does it stay there till munmap is called?
Is the file even moved to memory or is it just a mapping table that serves as a redirection and the file is actually in the virtual memory - (disk)?
Assuming it is moved to memory, can other processes access that space to read data if they have the address?
|
Answering things in order:
It returns a pointer to the location in virtual memory, and virtual memory address space is allocated, but the file is not locked in any way unless you explicitly lock it (also note that locking the memory is not the same as locking the region in the file). An efficient implementation of mmap() is actually only possible from a practical perspective because of paging and virtual memory (otherwise, it would require reading the whole region into memory before the call completes).
Not exactly, this ties into the next answer though, so I'll cover it there.
Kind of. What's actually happening in most cases is that mmap() is providing copy-on-write access to that file's data in the page cache. As a result, the usual cache restrictions on data lifetime apply: if the system needs space, pages can be dropped (or flushed to disk if they're dirty) from the cache and need to be faulted in again.
No, because of how virtual memory works. Each process has its own virtual address space, with its own virtual mappings. Every program that wants to communicate data will have to call mmap() on the same file (or shared memory segment), and they all have to use the MAP_SHARED flag.
It's worth noting that mmap() doesn't just work on files, you can also do other things with it such as:
Directly mapping device memory (if you have sufficient privileges). This is actually used on many embedded systems to avoid the need to write kernel mode drivers for new hardware.
Map shared memory segments.
Explicitly map huge pages.
Allocate memory that you can then call madvise(2) on which in turn lets you do useful things like prevent data from being copied to a child process on fork(2), or mark data for KSM, Linux's memory deduplication feature.
| Understanding mmap |
1,539,294,126,000 |
I'm interested in the way Linux mmaps files into the main memory (in my context its for executing, but I guess the mmap process is the same for writing and reading as well) and which size it uses.
So I know Linux uses paging with usually 4kB page size (where in the kernel can I find this size?). But what exactly does this mean for the memory allocated: assume you have a binary of a size of a few thousand bytes, let's just say 5812B, and you execute it.
What happens in the kernel: Does it allocate 2*4kB and then copy the 5812B into this space, wasting >3KB of main memory in the 2nd page?
It would be great if anyone knew the file in the kernel source where the pagesize is defined.
My 2nd question is also very simple I guess: I assumed 5812B as a filesize. Is it right, that this size is simply taken from the inode?
|
There is no direct relationship between the size of the executable and the size in memory. Here's a very quick overview of what happens when a binary is executed:
The kernel parses the file and breaks it into sections. Some sections are directly loaded into memory, in separate pages. Some sections aren't loaded at all (e.g. debugging symbols).
If the executable is dynamically linked, the kernel calls the dynamic loader, and it loads the required shared libraries and performs link edition as required.
The program starts executing its code, and usually it will request more memory to store data.
For more information about executable formats, linking, and executable loading, you can read Linkers and Loaders by John R. Levine.
In a 5kB executable, it's likely that everything is code or data that needs to be loaded into memory, except for the header. The executable code will be at least one page, perhaps two, and then there will be at least one page for the stack, probably one page or more for the heap (other data), plus memory used by shared libraries.
Under Linux, you can inspect the memory mappings for an executable with cat /proc/$pid/maps. The format is documented in the proc(5) man page; see also Understanding Linux /proc/id/maps.
| Memory size for kernel mmap operation |
1,539,294,126,000 |
I am working on a system where we lock files in memory using mmap and MAP_LOCKED and MAP_POPULATE for performance. If we do this with a file that is in tmpfs, will it use the existing tmpfs memory area or will it make a copy for the mmap?
|
Tmpfs is a file system which keeps all files in virtual memory.
tmpfs lives completely in the page cache and on swap
mmap copies file data to the disk cache when it needs the data to be in memory. With tmpfs, all the data is already in the disk cache (or swapped out). So mmapped data won't be copied: it's already in the place where it would be copied to.
| If I mmap a file from tmpfs, will it double the memory usage? |
1,539,294,126,000 |
Playing with strace, it appears to me that ld.so.cache and libc.so.6 are opened and mapped to memory for almost every process. At least those processes that I experimented with. Doesn't this mean that these processes are mapped into process memory many many many times?
Sure, these files are pretty small, but isn't that a little wasteful of memory?
The strace output shows that these are being mmap'ed with MAP_PRIVATE set, which makes it copy-on-write, but there still appears to be a new mapping for every process.
My questions:
Have I properly understood what is happening? That is, is there really a new copy of these files mapped into memory on every process that needs them (which appears to be every single one)?
Is there some type of memory-sharing going on? That is, since the mapping is copy-on-write, are lots of processes looking at the same physical memory locations?
|
Yes, every process gets its own mapping of the libraries it needs.
Yes, most of the data is shared, so every process “sees” the same physical memory (at different linear addresses), assuming the same version of each file is shared.
You can see the various mappings by looking at the maps file inside each process’ /proc/ directory; for libc you’ll see entries such as
7f1014062000-7f10141f7000 r-xp 00000000 fd:0d 1444681 /lib/x86_64-linux-gnu/libc-2.24.so
7f10141f7000-7f10143f7000 ---p 00195000 fd:0d 1444681 /lib/x86_64-linux-gnu/libc-2.24.so
7f10143f7000-7f10143fb000 r--p 00195000 fd:0d 1444681 /lib/x86_64-linux-gnu/libc-2.24.so
7f10143fb000-7f10143fd000 rw-p 00199000 fd:0d 1444681 /lib/x86_64-linux-gnu/libc-2.24.so
or
7f4d7a8ec000-7f4d7aa81000 r-xp 00000000 fd:0d 1444681 /lib/x86_64-linux-gnu/libc-2.24.so
7f4d7aa81000-7f4d7ac81000 ---p 00195000 fd:0d 1444681 /lib/x86_64-linux-gnu/libc-2.24.so
7f4d7ac81000-7f4d7ac85000 r--p 00195000 fd:0d 1444681 /lib/x86_64-linux-gnu/libc-2.24.so
7f4d7ac85000-7f4d7ac87000 rw-p 00199000 fd:0d 1444681 /lib/x86_64-linux-gnu/libc-2.24.so
The read-only, executable mapping corresponds to the shared executable code in the library; the read-only mapping gives access to the shared, read-only data in the library; and the read-write mapping is a private mapping for variables in the library. As you can see above, the linear addresses are different (thanks to address-space layout randomisation, and different load orders); the underlying physical addresses for the shared parts are the same, once they’re loaded into memory (since the mappings map the underlying files, not shared memory directly).
| ld.so.cache and libc.so.6 memory-mapped for every call? |
1,539,294,126,000 |
Varnish, a HTTP accelerator, uses a ~80MB file backed SHM log that is mlock()ed into memory. The Varnish docs recommend to store the file on tmpfs to avoid unnecessary disk access. However if the entire file is locked into memory, does the Linux kernel still write to the backing file?
I tried to monitor this using inotify and fatrace, however since this interaction presumably happens all inside the kernel, no file activity was visible to these tools. There is clearly some kind of update happening either to the file or the filesystem, as monitoring the backing file with ls showed the file time changing, and sha1sum showed the contents were changing, but does this actually involve disk access or is it all happening in memory?
Basically I'm trying to avoid having to do the tmpfs workaround, as using SHM to back SHM seems like an ugly workaround for a problem that might not even exist.
|
Varnish appears to use a plain memory-mapped file for its shared memory (instead of, e.g., POSIX shm_open). From the source:
loghead = mmap(NULL, heritage.vsl_size,
PROT_READ|PROT_WRITE,
MAP_HASSEMAPHORE | MAP_NOSYNC | MAP_SHARED,
heritage.vsl_fd, 0);
On BSD, MAP_NOSYNC requests that the kernel not write the shared data to disk unless forced (e.g., to free up memory). When it's mlocked as well, that should almost never happen. Unfortunately, Linux does not support MAP_NOSYNC.
So Linux will wind up routinely writing dirtied (changed) pages from the cache to disk. Putting the cache on a tmpfs will avoid that. So too would Varnish using POSIX or SysV shared memory (actually, POSIX shared memory is implemented on Linux with a tmpfs mounted at /dev/shm, so using the tmpfs should be fine).
| File backed, locked shared memory and disk interaction |
1,539,294,126,000 |
On a modern 64-bit x86 Linux, how is the mapping between virtual and physical pages set up, kernel side? On the user side, you can mmap in pages from the page cache, and this will map 4K pages in directly into user space - but I am interesting in how the pages are mapped in the kernel side.
Does it make use of the "whole ram identity mapping" or something else? Is that whole ram identity mapping generally using 1GB pages?
|
On a modern 64-bit x86 Linux?
Yes. It calls kmap() or kmap_atomic(), but on x86-64 these will always use the identity mapping. x86-32 has a specific definition of it, but I think x86-64 uses a generic definition in include/linux/highmem.h.
And yes, the identity mapping uses 1GB hugepages.
LWN article which mentions kmap_atomic.
I found kmap_atomic() by looking at the PIO code.[*]
Finally, when read() / write() copy data from/to the page cache:
generic_file_buffered_read -> copy_page_to_iter -> kmap_atomic() again.
[*] I looked at PIO, because I realized that when performing DMA to/from the page cache, the kernel could avoid using any mapping. The kernel could just resolve the physical address and pass it to the hardware :-). (Subject to IOMMU). Although, the kernel will need a mapping if it wants to checksum or encrypt the data first.
| How is the page cache mapped in the kernel on 64-bit x86 architectures? |
1,539,294,126,000 |
Based on my research on mmap(), I understand that mmap uses demand paging to copy in data to the kernel page cache only when the virtual memory address is touched, through page fault.
If we are reading files that are bigger than the page cache, then some stale pages in the page cache will have to be reclaimed. So my question is, will the page table be updated to map the corresponding virtual memory address to the address of the old stale page in the cache (now containing new data)? How does this happen? Is this part of the mmap() system call?
|
will the page table be updated to map the corresponding virtual memory address to the address of the old stale page in the cache (now containing new data)? How does this happen?
When mmap() is called, it creates a mapping in the process's virtual address space to the file specified. This mapping merely sets up the ability for these pages to be loaded when they are actually accessed, it doesn't load anything into memory yet. When you then access the pages, a page fault is generated, the page table entries are updated to map the virtual addresses to the physical addresses of the newly loaded pages, and you can then access the file. This happens in filemap_fault.
This is also how it works if you access a mapped page which has been evicted: the kernel handles the page fault, puts the file content back into the pages, and from the application's perspective, nothing happened.
There's nothing special about mmap() here per se -- this is how demand paging works inside the Linux kernel in general, as used for almost everything -- even regular program memory and file cache entries.
[...] map the corresponding virtual memory address [...]
Note that, when reading in with mmap(), the kernel typically will use readahead in order to load more content than just the single page you've generated a page fault on, unless there is an indication that this would be unhelpful, like MADV_RANDOM (indicated by user), or MMAP_LOTSAMISS (kernel heuristic).
| Does mmap() update the page table after every page fault? |
1,539,294,126,000 |
I have a large tar file (60GB) containing image files. I'm using mmap() on this entire file to read in these images, which are accessed randomly.
I'm using mmap() for the following reasons:
Thread safety -- I cannot seek an ifstream from multiple threads.
I can avoid extra buffering.
I get some caching (in the form of a requested page already being resident.)
The question is what happens when I've read every image in that 60GB file? Certainly not all of the images are being used at once -- they're read, displayed, and then discarded.
My mmap() call is:
mmap(0, totalSize, PROT_READ, MAP_SHARED | MAP_NORESERVE, fd, 0);
Here's the question: does the kernel see that I've mapped read-only pages backed by a file and simply purges the unused pages on memory pressure? I'm not sure if this case is recognized. Man pages indicate that MAP_NORESERVE will not require backing swap space, but there doesn't seem to be any guarantee of what happens to the pages under memory pressure. Is there any guarantee that the kernel will purge my unneeded pages before it, say, purges the filesystem cache or OOM's another process?
Thanks!
|
A read-only mmap is largely equivalent to open followed by lseek and read. If a chunk of memory that's mapped in a process is backed by a file, the copy in RAM is considered part of the disk cache, and will be freed under memory pressure, just like a disk cache entry created by reading from a file.
I haven't checked the source, but I believe MAP_NORESERVE makes no difference for read-only mappings.
| Behavior of mmap'd memory on memory pressure |
1,539,294,126,000 |
I am trying to understand what happens when a file, which has been mapped into memory by the mmap system call, is subsequently written to by other processes.
I have mmaped memory with PROT_READ protection in "process A". If I close the underlying file descriptor in process A, and another process later writes to that file (not using mmap; just a simple redirection of stdout to the file using > in the shell), is the mmaped memory in the address space of process A affected? Given that the pages are read-only, I would expect them not to change. However, process A is being terminated by SIGBUS signals as a result of invalid memory accesses (Non-existent physical address at address 0x[...]) when trying to parse the mapped memory. I am suspecting that this is stemming from writes to the backing file by other processes. Would setting MAP_PRIVATE be sufficient to completely protect this memory from other processes?
|
If I close the underlying file descriptor in process A,
closing the file descriptor doesn't change anything at all
another process later writes to that file (not using mmap; just a simple redirection of stdout to the file using > in the shell), is the mmaped memory in the address space of process A affected?
It may be. The manpage of mmap(2) says:
MAP_PRIVATE
...
It is unspecified whether changes made to the file
after the mmap() call are visible in the mapped region.
In practice, changes made by other processes seem to be reflected in the content of the mmaped region, at least for regular files.
However, process A is being terminated by SIGBUS signals as a result of invalid memory accesses (Non-existent physical address at address 0x[...]) when trying to parse the mapped memory.
I'm expecting that to happen when you truncate a mmaped file.
Would setting MAP_PRIVATE be sufficient to completely protect this memory from other processes?
No, MAP_PRIVATE only prevents modifications to the memory from being carried through to the backing file, not the reverse.
| mmap: effect of other processes writing to a file previously mapped read-only |
1,539,294,126,000 |
I want to create a number of named memory regions in my program, and mmap them somewhere so that other processes can read them. I can't guarantee that only one instance of my program will run at a time. Ideally, I'd like to put the blocks under /proc/self/<blockname> or such. Is this possible? Or is there another place I can put the mapped files? (My program will normally not run as root.)
I don't want to use /proc/self/fd or /proc/self/map_files, since that doesn't allow naming them (as far as I know).
|
No, you cannot add your structure in a meaningful way to /proc because it is generated (not a "real" filesystem). Likewise /sys on some machines. Changing the structure of /proc isn't straightforward (see for example Creating a folder under /proc and creating a entry under the folder).
Further reading:
Linux Filesystem Hierarchy: Chapter 1. Linux Filesystem Hierarchy: 1.14. /proc
mmap, munmap - map or unmap files or devices into memory
Is it possible to create a directory and file inside /proc/sys?
@mark-plotnick suggested POSIX shared memory, which does support names.
Further reading:
Posix shared memory vs mapped files (versus mmap, for example)
shm_overview - overview of POSIX shared memory
shm_open, shm_unlink - create/open or unlink POSIX shared memory objects (These are named objects)
The operation of shm_open() is analogous to that of open(2). name
specifies the shared memory object to be created or opened. For
portable use, a shared memory object should be identified by a name
of the form /somename; that is, a null-terminated string of up to
NAME_MAX (i.e., 255) characters consisting of an initial slash,
followed by one or more characters, none of which are slashes.
shm_open - open a shared memory object (REALTIME) (POSIX)
| Can I add to /proc/self? |
1,539,294,126,000 |
I learned that the default stack size for each process is limited to 8 MB, and that mmap_base is calculated from the stack size in rlimit plus a random value. The code below is the mmap_base function which calculates the mmap_base address on x86 (linux/arch/x86/mm/mmap.c).
static unsigned long mmap_base(unsigned long rnd)
{
unsigned long gap = rlimit(RLIMIT_STACK);
if (gap < MIN_GAP)
gap = MIN_GAP;
else if (gap > MAX_GAP)
gap = MAX_GAP;
return PAGE_ALIGN(TASK_SIZE - gap - rnd);
}
I am wondering: what if the program's stack grows larger than 8 MB plus the rnd value? In other words, what if the stack grows up to mmap_base?
If I allocate stack memory beyond 8 MB, does it just fail with a segmentation fault?
Or, if the kernel enlarges the stack automatically, can it move the contents of the mmap area elsewhere?
|
The process main thread stack size cannot grow larger than the set limit. The default value of this limit is 8 MB. Exceeding this limit will result in a segmentation fault and the process will be sent a SIGSEGV signal, by default killing it. The maximum size of the stack can be changed with ulimit -s before starting the program. The kernel does not move around memory areas (like the mmap area) after the program has been started, and could not do so, because there are usually pointers pointing into this area that would point to wrong addresses after the move.
However, the check for stack overflow is performed when the stack memory is accessed, so just performing a large allocation on the stack, or otherwise changing the value of the stack pointer, does not necessarily trigger a fault.
There was some talk in the summer of 2017 about the possibility of exploiting this behaviour. If an attacker can trick a program into allocating a large amount of memory, this can result in the stack pointer skipping a guard area and pointing into a valid, but different, area instead. This opens up opportunities for some clever tricks to take control of the process. See this lwn.net article for a discussion of the issue.
| program stack size |
1,539,294,126,000 |
Why is it that I cannot mmap /dev/random or /dev/urandom on Linux?
I get errno 19 which is ENODEV.
When I try the same code with /dev/zero it works.
int fd = open(path, O_RDONLY);
assert (fd > 0);
void* random = mmap(NULL, size, PROT_READ, MAP_PRIVATE | MAP_FILE, fd, 0);
int err = errno;
assert (random != MAP_FAILED);
|
You cannot mmap() /dev/random or /dev/urandom. Nor can you lseek() them, for that matter. And as a general rule, you cannot mmap() unseekable things. Pipes are another example of things you cannot mmap(), because they are not seekable.
/dev/random and /dev/urandom are fundamentally stream-based, sequential access, devices. They produce bytes on demand when you read them. Random access to these devices has no meaning. mmap() implies random access.
| mmap /dev/random |
1,539,294,126,000 |
I'm kind of confused by mmap.
Well, I know that when we malloc a large block of memory, malloc will invoke mmap, which allocates an area of memory. In this case, mmap just allocates some memory for a process.
However, I've heard that mmap is also a technique that allows us to map a file, located on the hard drive, into memory, so that we get better performance compared with normal I/O (read & write).
For me, the two things above are totally independent stories: one is about allocation of memory, the other is about a better way of reading and writing files.
But why are both of them called mmap? Is it just a coincidence or they are actually the same technique?
|
mmap provides a way to map pages of memory. In Linux (among others), those pages of memory can have different backing devices: notably, files, and nothing at all (for anonymous mappings, MAP_ANONYMOUS), or rather a swap device or file.
While the use cases are completely different, there is a common theme: allocating address space for a process, and defining how pages will be mapped there.
There are other use cases for mmap, in particular shared memory.
| mmap a file vs mmap in malloc |
1,539,294,126,000 |
I am conducting some research on Grsecurity on Hardened Gentoo, see http://en.wikibooks.org/wiki/Grsecurity. To be more specific, I am trying to find an example where subject mode x makes a difference.
As said in the wiki: subject mode x: Allows executable anonymous shared memory for this subject.
Now, the kernel rejects
mem = mmap(NULL, MAP_SIZE, PROT_WRITE|PROT_EXEC, MAP_ANONYMOUS | MAP_SHARED, -1, 0);
as well as
mem = mmap(NULL, MAP_SIZE, PROT_WRITE, MAP_ANONYMOUS | MAP_SHARED, -1, 0);
mprotect(mem, MAP_SIZE, PROT_EXEC);
or vice versa. On the other hand
mem = mmap(NULL, MAP_SIZE, PROT_READ|PROT_EXEC, MAP_ANONYMOUS | MAP_SHARED, -1, 0);
works fine.
For all of the above it does not matter whether grsec is active or not, and if it is, it does not matter whether subject mode x is set or not - the kernel simply does not allow shared memory that is (or was) writable and executable.
Therefore: what is subject mode x good for, and for what piece of code would it make a difference?
|
According to Brad Spengler the subject mode x applies to System V shared memory only, see http://forums.grsecurity.net/viewtopic.php?f=5&t=3935. On top of that PaX strikes unless MPROTECT is disabled for the binary under consideration.
| Grsecurity subject mode x |
1,539,294,126,000 |
I am reading The Linux Programming Interface.
49.9 MAP_NORESERVE and Swap Space Overcommitting
Some applications create large (usually private anonymous) mappings,
but use only a small part of the mapped region. For example, certain
types of scientific applications allocate a very large array, but
operate on only a few widely separated elements of the array (a
so-called sparse array).
If the kernel always allocated (or reserved) enough swap space for the
whole of such mappings, then a lot of swap space would potentially be
wasted. Instead, the kernel can reserve swap space for the pages of a
mapping only as they are actually required (i.e., when the application
accesses a page). This approach is called lazy swap reservation,
and has the advantage that the total virtual memory used by
applications can exceed the total size of RAM plus swap space.
To put things another way, lazy swap reservation allows swap space to
be overcommitted. This works fine, as long as all processes don’t
attempt to access the entire range of their mappings. ...
As far as I know, swap space is a chunk of space in disk, reserved for memory swapping. When those pages in memory are inactive, they are swapped into swap space in disk. It's like a second level cache for memory/ram.
Then what the hell is this lazy swap reservation mechanism?
Let me demonstrate my confusion with an example.
Some applications create large (usually private anonymous) mappings....
Ok, then assume I malloc a big array of 16384 (4096*4) bytes (create large (usually private anonymous) mappings), and operate on only a few widely separated elements of the array.
Then some inactive pages are swapped into swap space, right? Let's say 0-4095(4096B), 8192-12287(4096B) are in memory, and all the other inactive pages, 4096-8191(4096B), 12288-16383(4096B) are swapped into swap space.
Then what does it mean by saying:
Instead, the kernel can reserve swap space for the pages of a
mapping only as they are actually required (i.e., when the application
accesses a page).
Where else can these inactive pages (4096-8191 (4096B) and 12288-16383 (4096B)) reside, if not in the swap space? The text seems to indicate that there is a 3rd level cache below swap space.
memory -> swap space (disk) -> ????
|
Swap isn’t really a second-level cache for memory; it’s one of several backing stores for memory. When the kernel needs to allocate a page of physical memory, but doesn’t have enough free memory, it needs to evict another page; it can only do that if the contents of the evicted page are either discardable, or can be restored from somewhere else. That somewhere else is backing store: it can be a file on disk (e.g. for executables, or mapped files), or some area of swap.
Swap reservation comes into play when memory accounting tracks overcommitting (see table 49-4 in LPI). When overcommitting isn’t allowed, the kernel needs to determine, at allocation time, whether an allocation is possible. For private writable mappings and shared anonymous mappings, this means that it has to have enough address space, and enough room in swap (so that the kernel can guarantee that the contents of the mapped memory can be written there, thus guaranteeing that writes to the mapped memory will never cause SIGSEGV).
Lazy swap reservation is required for overcommit: it means that the kernel can allocate a swap-backed memory map without reserving the corresponding swap space. As mentioned in LPI, this allows programs to allocate much more memory than is really available, and should be requested using MAP_NORESERVE. The reservation then only happens when a page is written to, which means that writes can fail with SIGSEGV or result in the OOM killer stepping in.
This becomes significant for much larger allocations than your 16KiB example. Imagine you want a sparse 64GiB, 262,144×262,144 array, to make your program easier to write: with strict reservation, you’d need to have all that memory available; without strict reservation, you don’t, and only the pages you write to will actually be allocated.
Note that this is all Linux-specific and tightly tied to the chosen system overcommit policy (/proc/sys/vm/overcommit_memory): in modes 1 (always overcommit) and 2 (never overcommit), MAP_NORESERVE doesn’t change anything, it only has an effect in mode 0.
| What is lazy swap reservation? |
1,539,294,126,000 |
I have a very large disk drive (2TB), but not very much RAM (8GB). I'd like to be able to run some big data experiments on a large file (~200GB) that exists on my disk's filesystem. I understand that will be very expensive in terms of disk bandwidth, but I don't mind the high I/O usage.
How could I load this huge file into a C++ array, so that I could perform reads and writes to the file at locations of my choosing? Does mmap work for this purpose? What parameter options should I be using to do this? I don't want to trigger the OOM killer at any point of running my program.
I know that mmap supports file-backed and anonymous mappings but I'm not entirely sure which to use.
What about between using a private vs shared mapping?
|
It only makes sense to use a file-backed mapping to mmap a file, not an anonymous mapping. If you want to write to the mapped memory and have the changes get written back to the file, then you need to use a shared mapping. With a file-backed, shared mapping, you don't need to worry about the OOM killer, so as long as your process is 64-bit, there's no problem with just mapping the entire file into memory. (And even if you weren't 64-bit, the problem would be lack of address space, not lack of RAM, so the OOM killer still wouldn't affect you; your mmap would just fail.)
| Mmaping tremendously large files |
1,539,294,126,000 |
We're emulating a Cortex M3 cpu and would like to pass some parameters to the guest during run-time. The simplest idea seems to be to write directly to some memory area. I tried simply adding -mem-path /tmp/qemu.ram which did nothing. Adding
-object memory-backend-file,id=mem,size=128K,mem-path=/tmp/qemu.ram \
worked in that qemu opened it at least. But nothing is written to it during run-time and there seems to be no connection between the guest memory map and the file at all.
To clarify, what I expected to happen is that QEMU, instead of mallocing guest RAM, mmaps the file and uses that instead. This would enable me to seek, read and write from this file during run-time. What am I missing? Is there any other convenient way to get write access to RAM/MMIO of the guest during run-time?
|
I have successfully demonstrated this in qemu 8.0.3 (and failed in 4.2.1)
qemu-system-ppc -M ppce500,memory-backend=foo.ram -cpu e500 -m 256M,slots=2,maxmem=1g -d guest_errors,unimp -bios $PWD/test.elf -s -object memory-backend-file,size=256m,id=foo.ram,mem-path=$PWD/realmemory,share=on,prealloc=on
The key is "memory-backend=foo.ram" together with the "-object memory-backend-file" entry that cross-references the id foo.ram. Specifically (to answer the top-line question), adding "share=on" is critical to allow writes from inside the VM to be seen on the outside. Without that, reads will be seen but writes stay local.
At least on this OS/program, I had to dig a bit into the backing file to see any memory changes, the first few pages of addresses were zero.
| Mapping guest RAM to file in qemu |
1,539,294,126,000 |
I have a latency sensitive application running on an embedded system, and I'm seeing some discrepancy between writing to a ext4 partition and an ext2 partition on the same physical device. Specifically, I see intermittent delays when performing many small updates on a memory map, but only on ext4. I've tried what seem to be some of the usual tricks for improving performance (especially variations in latency) by mounting ext4 with different options and have settled on these mount options:
mount -t ext4 -o remount,rw,noatime,nodiratime,user_xattr,barrier=1,data=ordered,nodelalloc /dev/mmcblk0p6 /media/mmc/data
barrier=0 didn't seem to provide any improvement.
For the ext2 partition, the following flags are used:
/dev/mmcblk0p3 on /media/mmc/data2 type ext2 (rw,relatime,errors=continue)
Here's the test program I'm using:
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <stdint.h>
#include <fcntl.h>
#include <time.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
uint32_t getMonotonicMillis()
{
struct timespec time;
clock_gettime(CLOCK_MONOTONIC, &time);
uint32_t millis = (time.tv_nsec/1000000)+(time.tv_sec*1000);
return millis;
}
void tune(const char* name, const char* value)
{
FILE* tuneFd = fopen(name, "wb+");
fwrite(value, strlen(value), 1, tuneFd);
fclose(tuneFd);
}
void tuneForFasterWriteback()
{
tune("/proc/sys/vm/dirty_writeback_centisecs", "25");
tune("/proc/sys/vm/dirty_expire_centisecs", "200");
tune("/proc/sys/vm/dirty_background_ratio", "5");
tune("/proc/sys/vm/dirty_ratio", "40");
tune("/proc/sys/vm/swappiness", "0");
}
class MMapper
{
public:
const char* _backingPath;
int _blockSize;
int _blockCount;
bool _isSparse;
int _size;
uint8_t *_data;
int _backingFile;
uint8_t *_buffer;
MMapper(const char *backingPath, int blockSize, int blockCount, bool isSparse) :
_backingPath(backingPath),
_blockSize(blockSize),
_blockCount(blockCount),
_isSparse(isSparse),
_size(blockSize*blockCount)
{
printf("Creating MMapper for %s with block size %i, block count %i and it is%s sparse\n",
_backingPath,
_blockSize,
_blockCount,
_isSparse ? "" : " not");
_backingFile = open(_backingPath, O_CREAT | O_RDWR | O_TRUNC, 0600);
if(_isSparse)
{
ftruncate(_backingFile, _size);
}
else
{
posix_fallocate(_backingFile, 0, _size);
fsync(_backingFile);
}
_data = (uint8_t*)mmap(NULL, _size, PROT_READ | PROT_WRITE, MAP_SHARED, _backingFile, 0);
_buffer = new uint8_t[blockSize];
printf("MMapper %s created!\n", _backingPath);
}
~MMapper()
{
printf("Destroying MMapper %s\n", _backingPath);
if(_data)
{
msync(_data, _size, MS_SYNC);
munmap(_data, _size);
close(_backingFile);
_data = NULL;
delete [] _buffer;
_buffer = NULL;
}
printf("Destroyed!\n");
}
void writeBlock(int whichBlock)
{
memcpy(&_data[whichBlock*_blockSize], _buffer, _blockSize);
}
};
int main(int argc, char** argv)
{
tuneForFasterWriteback();
int timeBetweenBlocks = 40*1000;
//2^12 x 2^16 = 2^28 = 2^10*2^10*2^8 = 256MB
int blockSize = 4*1024;
int blockCount = 64*1024;
int bigBlockCount = 2*64*1024;
int iterations = 25*40*60; //25 counts simulates 1 layer for one second, 5 minutes here
uint32_t startMillis = getMonotonicMillis();
int measureIterationCount = 50;
MMapper mapper("sparse", blockSize, bigBlockCount, true);
for(int i=0; i<iterations; i++)
{
int block = rand()%blockCount;
mapper.writeBlock(block);
usleep(timeBetweenBlocks);
if(i%measureIterationCount==measureIterationCount-1)
{
uint32_t elapsedTime = getMonotonicMillis()-startMillis;
printf("%i took %u\n", i, elapsedTime);
startMillis = getMonotonicMillis();
}
}
return 0;
}
Fairly simplistic test case. I don't expect terribly accurate timing; I'm more interested in general trends. Before running the tests, I ensured that the system is in a fairly steady state with very little disk write activity occurring, by doing something like:
watch grep -e Writeback: -e Dirty: /proc/meminfo
There is very little to no disk activity. This is also verified by seeing 0 or 1 in the wait column from the output of vmstat 1. I also perform a sync immediately before running the test. Note the aggressive writeback parameters being provided to the vm subsystem as well.
When I run the test on the ext2 partition, the first one hundred batches of fifty writes yield a nice solid 2012 ms with a standard deviation of 8 ms. When I run the same test on the ext4 partition, I see an average of 2151 ms, but an abysmal standard deviation of 409 ms. My primary concern is variation in latency, so this is frustrating. The actual times for the ext4 partition test looks like this:
{2372, 3291, 2025, 2020, 2019, 2019, 2019, 2019, 2019, 2020, 2019, 2019, 2019, 2019, 2020, 2021, 2037, 2019, 2021, 2021, 2020, 2152, 2020, 2021, 2019, 2019, 2020, 2153, 2020, 2020, 2021, 2020, 2020, 2020, 2043, 2021, 2019, 2019, 2019, 2053, 2019, 2020, 2023, 2020, 2020, 2021, 2019, 2022, 2019, 2020, 2020, 2020, 2019, 2020, 2019, 2019, 2021, 2023, 2019, 2023, 2025, 3574, 2019, 3013, 2019, 2021, 2019, 3755, 2021, 2020, 2020, 2019, 2020, 2020, 2019, 2799, 2020, 2019, 2019, 2020, 2020, 2143, 2088, 2026, 2017, 2310, 2020, 2485, 4214, 2023, 2020, 2023, 3405, 2020, 2019, 2020, 2020, 2019, 2020, 3591}
Unfortunately, I don't know if ext2 is an option for the end solution, so I'm trying to understand the difference in behavior between the file systems. I would most likely have control over at least the flags being used to mount the ext4 system and tweak those.
noatime/nodiratime don't seem to make much of a dent
barrier=0/1 doesn't seem to matter
nodelalloc helps a bit, but doesn't do nearly enough to smooth out the latency variation.
The ext4 partition is only about 10% full.
Thanks for any thoughts on this issue!
|
One word: Journaling.
http://www.thegeekstuff.com/2011/05/ext2-ext3-ext4/
As you talk about embedded, I'm assuming you have some form of flash memory? Performance is very spiky on journaled ext4 on flash. Ext2 is recommended.
Here is a good article on disabling journaling and tweaking the fs for no journaling if you must use ext4: http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html
| Ext4 exhibits unexpected write latency variance vs. ext2 |
1,539,294,126,000 |
Is it possible to write to /dev/mem without using mmap?
I'm enabling pull-up resistors on a Raspberry Pi inside an LKM, and the function void *mmap (caddr_t addr, size_t len, int prot, int flags, int fd, off_t offset) doesn't exist there.
I've tried to use open (to later convert it into filp_open) but it does nothing:
#include <stdio.h>
#include <stdarg.h>
#include <stdint.h>
#include <stdlib.h>
#include <ctype.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <time.h>
#include <errno.h>
// From https://github.com/RPi-Distro/raspi-gpio/blob/master/raspi-gpio.c
#define PULL_UNSET -1
#define PULL_NONE 0
#define PULL_DOWN 1
#define PULL_UP 2
#define GPIO_BASE_OFFSET 0x00200000
#define GPPUD 37
#define GPPUDCLK0 38
#define BASE_READ 0x1000
#define BASE_SIZE (BASE_READ/sizeof(uint32_t))
uint32_t getGpioRegBase(void) {
const char *revision_file = "/proc/device-tree/system/linux,revision";
uint8_t revision[4] = { 0 };
uint32_t cpu = 0;
FILE *fd;
if ((fd = fopen(revision_file, "rb")) == NULL) {
printf("Can't open '%s'\n", revision_file);
exit(EXIT_FAILURE);
}
else {
if (fread(revision, 1, sizeof(revision), fd) == 4) cpu = (revision[2] >> 4) & 0xf;
else {
printf("Revision data too short\n");
exit(EXIT_FAILURE);
}
fclose(fd);
}
printf("CPU: %d\n", cpu);
switch (cpu) {
case 0: // BCM2835 [Pi 1 A; Pi 1 B; Pi 1 B+; Pi Zero; Pi Zero W]
//chip = &gpio_chip_2835;
return 0x20000000 + GPIO_BASE_OFFSET;
case 1: // BCM2836 [Pi 2 B]
case 2: // BCM2837 [Pi 3 B; Pi 3 B+; Pi 3 A+]
//chip = &gpio_chip_2835;
return 0x3f000000 + GPIO_BASE_OFFSET;
case 3: // BCM2711 [Pi 4 B]
//chip = &gpio_chip_2711;
return 0xfe000000 + GPIO_BASE_OFFSET;
default:
printf("Unrecognised revision code\n");
exit(1);
}
}
int writeBase(uint32_t reg_base, uint32_t offset, uint32_t data) {
int fd;
if ((fd = open("/dev/mem", O_RDWR | O_SYNC | O_CLOEXEC) ) < 0) return -1;
if (lseek(fd, reg_base+offset, SEEK_SET) == -1) return -2;
if (write(fd, (void*)&data, sizeof(uint32_t)) != sizeof(uint32_t)) return -3;
if (close(fd) == -1) return -4;
return 0;
}
int setPull(unsigned int gpio, int pull) {
int r;
int clkreg = GPPUDCLK0 + (gpio / 32);
int clkbit = 1 << (gpio % 32);
uint32_t reg_base = getGpioRegBase();
r = writeBase(reg_base, GPPUD, pull); // base[GPPUD] = pull
if (r < 0) return r;
usleep(10);
r = writeBase(reg_base, clkreg, clkbit); // base[clkreg] = clkbit
if (r < 0) return r;
usleep(10);
r = writeBase(reg_base, GPPUD, 0); // base[GPPUD] = 0
if (r < 0) return r;
usleep(10);
r = writeBase(reg_base, clkreg, 0); // base[clkreg] = 0
usleep(10);
return r;
}
int main(int argc, char *argv[]) {
int gpio, r;
if (argc!=2) {
printf("GPIO pin needed!\n");
return 1;
}
gpio = atoi(argv[1]);
printf("Enabling pull-up on GPIO%d...\n", gpio);
r = setPull(gpio, PULL_UP);
printf("Return value: %d\n", r);
if (r != 0) printf("%s\n", strerror(errno));
return r;
}
This is a fragment of raspi-gpio that does what I want:
#include <stdio.h>
#include <stdarg.h>
#include <stdint.h>
#include <stdlib.h>
#include <ctype.h>
#include <unistd.h>
#include <errno.h>
#include <string.h>
#include <fcntl.h>
#include <sys/mman.h>
#include <time.h>
// From https://github.com/RPi-Distro/raspi-gpio/blob/master/raspi-gpio.c
#define PULL_UNSET -1
#define PULL_NONE 0
#define PULL_DOWN 1
#define PULL_UP 2
#define GPIO_BASE_OFFSET 0x00200000
#define GPPUD 37
#define GPPUDCLK0 38
uint32_t getGpioRegBase(void) {
const char *revision_file = "/proc/device-tree/system/linux,revision";
uint8_t revision[4] = { 0 };
uint32_t cpu = 0;
FILE *fd;
if ((fd = fopen(revision_file, "rb")) == NULL)
{
printf("Can't open '%s'\n", revision_file);
}
else
{
if (fread(revision, 1, sizeof(revision), fd) == 4)
cpu = (revision[2] >> 4) & 0xf;
else
printf("Revision data too short\n");
fclose(fd);
}
printf("CPU: %d\n", cpu);
switch (cpu) {
case 0: // BCM2835 [Pi 1 A; Pi 1 B; Pi 1 B+; Pi Zero; Pi Zero W]
return 0x20000000 + GPIO_BASE_OFFSET;
case 1: // BCM2836 [Pi 2 B]
case 2: // BCM2837 [Pi 3 B; Pi 3 B+; Pi 3 A+]
return 0x3f000000 + GPIO_BASE_OFFSET;
case 3: // BCM2711 [Pi 4 B]
return 0xfe000000 + GPIO_BASE_OFFSET;
default:
printf("Unrecognised revision code\n");
exit(1);
}
}
volatile uint32_t *getBase(uint32_t reg_base) {
int fd;
if ((fd = open ("/dev/mem", O_RDWR | O_SYNC | O_CLOEXEC) ) < 0) return NULL;
return (uint32_t *)mmap(0, /*chip->reg_size*/ 0x1000,
PROT_READ|PROT_WRITE, MAP_SHARED,
fd, reg_base);
}
void setPull(volatile uint32_t *base, unsigned int gpio, int pull) {
int clkreg = GPPUDCLK0 + (gpio / 32);
int clkbit = 1 << (gpio % 32);
base[GPPUD] = pull;
usleep(10);
base[clkreg] = clkbit;
usleep(10);
base[GPPUD] = 0;
usleep(10);
base[clkreg] = 0;
usleep(10);
}
int main(int argc, char *argv[]) {
if (argc!=2) {
printf("GPIO pin needed!\n");
return 1;
}
uint32_t reg_base = getGpioRegBase();
volatile uint32_t *base = getBase(reg_base);
if (base == NULL || base == (uint32_t *)-1) {
printf("Base error");
return 1;
}
printf("Base: %p\n", base);
setPull(base, atoi(argv[1]), PULL_UP);
return 0;
}
And here's the LKM fragment that enables the pull-up (I need to remove the mmap part):
#include <linux/types.h> // uint_32
#include <linux/fs.h> // filp_open/filp_close
#include <linux/delay.h> // udelay
#define PULL_DOWN 1
#define PULL_UP 2
#define GPIO_BASE_OFFSET 0x00200000
#define GPPUD 37
#define GPPUDCLK0 38
static uint32_t getGpioRegBase(bool *error) {
uint8_t revision[4] = { 0 };
uint32_t cpu = 0;
struct file *fd;
ssize_t rc = 0;
if (IS_ERR(( fd = filp_open("/proc/device-tree/system/linux,revision", O_RDONLY | O_SYNC | O_CLOEXEC, 0) ))) {
*error = true;
return 0;
}
if ((rc = kernel_read(fd, revision, sizeof(revision), 0)) == 4) cpu = (revision[2] >> 4) & 0xf;
else {
*error = true;
return 0;
}
filp_close(fd, NULL);
*error = false;
switch (cpu) {
case 0: // BCM2835 [Pi 1 A; Pi 1 B; Pi 1 B+; Pi Zero; Pi Zero W]
return 0x20000000 + GPIO_BASE_OFFSET;
case 1: // BCM2836 [Pi 2 B]
case 2: // BCM2837 [Pi 3 B; Pi 3 B+; Pi 3 A+]
return 0x3f000000 + GPIO_BASE_OFFSET;
case 3: // BCM2711 [Pi 4 B]
return 0xfe000000 + GPIO_BASE_OFFSET;
default:
*error = true;
return 0;
}
}
static volatile uint32_t *getBase(uint32_t reg_base) {
struct file *fd;
volatile uint32_t *r;
if (IS_ERR(( fd = filp_open("/dev/mem", O_RDWR | O_SYNC | O_CLOEXEC, 0) ))) return NULL;
r = (uint32_t*)mmap(0, 0x1000, PROT_READ|PROT_WRITE, MAP_SHARED, fd, reg_base);
filp_close(fd, NULL); // TODO the original didn't have this
return r;
}
static void setPull(volatile uint32_t *base, uint32_t gpio, int pull) {
int clkreg = GPPUDCLK0 + (gpio / 32);
int clkbit = 1 << (gpio % 32);
base[GPPUD] = pull;
udelay(10);
base[clkreg] = clkbit;
udelay(10);
base[GPPUD] = 0;
udelay(10);
base[clkreg] = 0;
udelay(10);
}
/**
* Equivalent to 'raspi-gpio set <gpio> <pu/pd>'
* @param gpio Valid GPIO pin
* @param pull PULL_DOWN/PULL_UP
*/
static int setGpioPull(uint32_t gpio, int pull) {
bool error;
uint32_t reg_base;
volatile uint32_t *base;
reg_base = getGpioRegBase(&error);
if (error) return -1;
base = getBase(reg_base);
if (base == NULL || base == (uint32_t*)-1) return -1;
setPull(base, gpio, pull);
return 0;
}
|
/proc and device nodes in /dev are intended for user-space; the kernel doesn’t need them, and kernel modules shouldn’t use them.
Instead, to access GPIOs, you should use ioremap and the various ioread... and iowrite... functions: ioremap to get an address corresponding to the physical address you’re after, and the other functions to perform the IOs.
I don’t know off-hand how to retrieve the device-tree information you’re getting from /proc, but there should be in-kernel functions to do so.
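To make that concrete, here is a rough kernel-side sketch of both pieces: the `of_*` helpers replace the `/proc/device-tree` read, and `ioremap`/`ioread32`/`iowrite32` replace `mmap` on `/dev/mem`. This is an untested sketch that only builds against a kernel source tree; the node path, property name, `GPIO_BASE`, and the register offset are placeholder assumptions, not values verified for your board:

```c
/* Sketch only: kernel-module code; GPIO_BASE and GPFSEL0_OFFSET are placeholders. */
#include <linux/errno.h>
#include <linux/io.h>
#include <linux/of.h>

#define GPIO_BASE      0xfe200000  /* example: BCM2711 GPIO block */
#define GPFSEL0_OFFSET 0x00

static void __iomem *gpio;

static int gpio_map_example(void)
{
	/* In-kernel replacement for reading /proc/device-tree/system/linux,revision:
	 * look the property up directly in the device tree. */
	struct device_node *np = of_find_node_by_path("/system");
	u32 revision = 0;

	if (np) {
		of_property_read_u32(np, "linux,revision", &revision);
		of_node_put(np);
	}

	/* In-kernel replacement for mmap() on /dev/mem. */
	gpio = ioremap(GPIO_BASE, 0x1000);
	if (!gpio)
		return -ENOMEM;

	/* Use ioread32/iowrite32 instead of dereferencing the pointer. */
	iowrite32(ioread32(gpio + GPFSEL0_OFFSET), gpio + GPFSEL0_OFFSET);
	return 0;
}
```

Remember to `iounmap(gpio)` in your module's cleanup path.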
| Write in /dev/mem without using mmap |
1,539,294,126,000 |
I'm curious because today the only way I know how to give two different processes the same shared-memory is through a memory-mapped file, in other words, both processes open the same memory-mapped file and write/read to/from it.
That has penalties / drawbacks as the operating system needs to swap between disk and memory.
Apologies in advance if that is a silly question, but is there such a thing as a pure shared memory between processes, not backed by a file? If yes, how would the processes get a hold of it if not using a memory-mapped file or /dev/shm file?
|
Apologies in advance if that is a silly question, but is there such a thing as a pure shared memory between processes, not backed by a file?
Not a silly question!
There is, and it's the default way of getting it; (SYSV) shmget is the function you use to get these shared memory buffers. You assign a key to it (a key_t, often generated with ftok), and another process can use that key with shmget to get access to the same memory. The POSIX way is shm_open, with very similar semantics: you give your segment a name, which might look a lot like a file name but isn't backed by some hard drive, and subsequent shm_open calls on the same name (with compatible/no flags) will grant access to the same memory.
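A minimal sketch of the POSIX flavour, using Python's multiprocessing.shared_memory wrapper around shm_open (on Linux the segment lives under /dev/shm, not on disk); the segment name is chosen automatically here:

```python
from multiprocessing import shared_memory

# Create a named segment; with no name given, Python picks a unique one.
seg = shared_memory.SharedMemory(create=True, size=64)
seg.buf[:5] = b"hello"

# Any process that knows the name can attach -- no file path involved.
other = shared_memory.SharedMemory(name=seg.name)
data = bytes(other.buf[:5])

other.close()
seg.close()
seg.unlink()   # removes the name, like shm_unlink(3)
```

In C the equivalent steps are shm_open, ftruncate, and mmap with MAP_SHARED.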
| Is it possible for two processes to use the same shared-memory without resorting to a file to obtain it, be it a memory-mapped file or /dev/shm file? |
1,539,294,126,000 |
I'm not 100% certain about whether this is a U&L question or a SO question. On balance I'm posting it on U&L as it's OS related.
Background
As far as I know, Linux will load shared libraries (.so files) by memory mapping them as copy-on-write. One advantage of this is that multiple processes which share the same large library will all share the same physical RAM for much of that library's content.
This doesn't necessarily happen with Docker because processes run in their own "container" based off an "image" and each image contains its own copy of shared libraries. This is deliberate. It allows programs to ship with their own dependencies (libraries) which may be substantially different from the libraries already installed on the system.
So a program running natively on a docker host will not share the same memory for libraries as a program running inside a docker container because the program in the docker container has mapped to different copies of the libraries.
Docker layers explained
Docker images are created in layers. Each layer adds to the lower one, sometimes overwriting existing files. Not every file is changed in every layer.
Docker allows you to create new images by adding new layers to an older image. When this happens you end up with multiple images sharing the same layers. The images share identical copies of some of the same files.
Docker keeps the layers separate, at least before runtime. E.g. when pulling an image from Docker Hub, Docker fetches each image's constituent layers, and only fetches the layers it doesn't already have.
What I don't know
When creating or running a container Docker must assemble the layers into a single coherent file system. I don't know how it does this. It could:
Copy the files into one place
Create hard links into one place
Use an overlay file system
Depending on what it does, files which originated from the same layer may be identical copies, or they may be the exact same file on the file system.
This will ultimately affect what happens when the files are memory mapped by multiple processes.
What am I really trying to discover?
I want to know if running two containers from two different images will share the same RAM for a single shared library that originated in a single layer.
|
At least in some configurations, yes, containers can share memory mappings for files in the same layer in different images.
Here’s an experiment to demonstrate this. I’m using two different images, one based on the other:
$ docker history 5f35156022ae
IMAGE CREATED CREATED BY SIZE COMMENT
5f35156022ae 7 weeks ago COPY scripts/shared/ . # buildkit 1.05MB buildkit.dockerfile.v0
<missing> 7 weeks ago WORKDIR /opt/shipyard/scripts 0B buildkit.dockerfile.v0
...
$ docker history 569bf4207a08
IMAGE CREATED CREATED BY SIZE COMMENT
569bf4207a08 7 weeks ago /bin/sh -c #(nop) CMD ["sh"] 0B
ed9510deb54e 7 weeks ago /bin/sh -c #(nop) ENTRYPOINT ["/opt/shipyar… 0B
c3e0351f0dd2 7 weeks ago /bin/sh -c #(nop) WORKDIR /go/src/github.com… 0B
a476f9f2b118 7 weeks ago /bin/sh -c #(nop) ENV DAPPER_OUTPUT=/go/src… 0B
29a76c4ff3e7 7 weeks ago /bin/sh -c #(nop) ENV DAPPER_ENV=QUAY_USERN… 0B
2f4a590d61ef 7 weeks ago /bin/sh -c #(nop) ARG PROJECT 0B
5f35156022ae 7 weeks ago COPY scripts/shared/ . # buildkit 1.05MB buildkit.dockerfile.v0
<missing> 7 weeks ago WORKDIR /opt/shipyard/scripts 0B buildkit.dockerfile.v0
...
I started two containers, just using the entrypoint shell:
$ pstree -p
...
├─containerd-shim(530457)─┬─bash(530477)
│ ├─{containerd-shim}(530458)
...
├─containerd-shim(530622)─┬─entry(530643)───sh(530685)
│ ├─{containerd-shim}(530624)
...
Let’s examine the C library used by both these shells:
$ sudo grep libc-2.33 /proc/{530477,530685}/maps
/proc/530477/maps:7fc127f81000-7fc127fa7000 r--p 00000000 00:1f 3117 /usr/lib64/libc-2.33.so
/proc/530477/maps:7fc127fa7000-7fc1280f4000 r-xp 00026000 00:1f 3117 /usr/lib64/libc-2.33.so
/proc/530477/maps:7fc1280f4000-7fc128140000 r--p 00173000 00:1f 3117 /usr/lib64/libc-2.33.so
/proc/530477/maps:7fc128140000-7fc128141000 ---p 001bf000 00:1f 3117 /usr/lib64/libc-2.33.so
/proc/530477/maps:7fc128141000-7fc128144000 r--p 001bf000 00:1f 3117 /usr/lib64/libc-2.33.so
/proc/530477/maps:7fc128144000-7fc128147000 rw-p 001c2000 00:1f 3117 /usr/lib64/libc-2.33.so
/proc/530685/maps:7f6a5df94000-7f6a5dfba000 r--p 00000000 00:1f 3117 /usr/lib64/libc-2.33.so
/proc/530685/maps:7f6a5dfba000-7f6a5e107000 r-xp 00026000 00:1f 3117 /usr/lib64/libc-2.33.so
/proc/530685/maps:7f6a5e107000-7f6a5e153000 r--p 00173000 00:1f 3117 /usr/lib64/libc-2.33.so
/proc/530685/maps:7f6a5e153000-7f6a5e154000 ---p 001bf000 00:1f 3117 /usr/lib64/libc-2.33.so
/proc/530685/maps:7f6a5e154000-7f6a5e157000 r--p 001bf000 00:1f 3117 /usr/lib64/libc-2.33.so
/proc/530685/maps:7f6a5e157000-7f6a5e15a000 rw-p 001c2000 00:1f 3117 /usr/lib64/libc-2.33.so
Both mapped libraries have the same device and inode, so they’re the same file, and their mappings will be shared where possible.
| Do docker containers share RAM for files memory mapped from the same layer but a different image? |
1,539,294,126,000 |
I have a process that reads data from a hardware device using DMA transfers at a speed of ~4 * 50MB/s and at the same time the data is processed, compressed and written to a 4TB memory mapped file.
Each DMA transfer should (and do on average) take less than 20ms. However, a few times every 5 minutes the DMA transfers can take up to 300ms which is a huge issue.
We believe this might be related to the kernel flushing dirty memory-mapped pages to disk, since if we stop writing to the mapped memory the DMA transfer durations are just fine. However, we are confused as to how/why this could affect the DMA transfers and whether there is a way to avoid it.
The hardware device has some memory to buffer data but when the DMA transfers are this slow we are losing data.
Currently we’re doing testing on Arch Linux with a 4.1.10 lts kernel, but we’ve also tried Ubuntu 14.04 with mostly worse results. Hardware is a HP z820 workstation, 32GB RAM and dual Xeon E5-2637 @ 3.50Ghz (http://www8.hp.com/h20195/v2/GetPDF.aspx/c04111177.pdf).
We also tried a Windows version of our software as well that does not suffer from this specific issue but has lots of other issues.
|
Linux has some realtime options, though it is not a realtime kernel as such.
These options allow a process to demand that it is scheduled before non-realtime processes, as soon as it is ready, and to hold on to the CPU for as long as necessary.
By default processes are given scheduling policy SCHED_OTHER. You can
set this to realtime SCHED_FIFO for a given running pid with chrt -f -p prio pid,
or prefix the command with chrt -f prio when you start it. The prio
priority is independent of normal processes, and is only used when realtime processes compete for resources. ps shows these priorities as negative values (eg -21 for realtime prio 20).
ionice --class 1 -p pid can also help scheduling your process with preferential realtime io queueing.
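chrt is a wrapper around the sched_setscheduler(2) system call, so you can also request SCHED_FIFO from inside your own process. A small sketch using Python's os wrappers (the priority value 20 is arbitrary; the call normally requires root or CAP_SYS_NICE, so this falls back gracefully when unprivileged):

```python
import os

current = os.sched_getscheduler(0)        # 0 = this process

try:
    # Same effect as: chrt -f -p 20 <pid>
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(20))
    policy = os.sched_getscheduler(0)
except PermissionError:
    policy = current                      # unprivileged: policy unchanged
```

In C the call is sched_setscheduler(pid, SCHED_FIFO, &param) with param.sched_priority set.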
| mmap and slow DMA transfers |
1,539,294,126,000 |
As I understand, the 'MAP_SHARED' flag in mmap() shares any changes made by a process to the memory map immediately with other processes and eventually writes the changes back to the file. Is it possible to share the in-memory changes with other processes but not write the changes back to the file? Would it need a new type of flag? How complex would it be to implement that kind of flag (e.g. 'MAP_SHARED_NOT_WRITE_BACK')?
=======================================
Added: The usecase I have in mind: Process A mmap's the code segment of a shared library foo.so and makes changes to the code (for example, encrypt the code). I want other processes B, C, etc. created later on and using foo.so share the modified code. I, however, don't want the changes written back to foo.so file. I would prefer a scalable solution that works for multiple processes and many shared libraries.
|
tl;dr: you should use a file which only lives in RAM.
On Linux, such files are those returned by memfd_create(2), or obtained by opening a file on a tmpfs filesystem [1].
In that case the memory will be backed by the swap instead of a regular file or device -- if there's any swap configured. Beware that if the file is big, this will put pressure on your system and severely degrade its performance for zero benefit.
NB: If you're concerned about your "secrets" being inadvertently written to permanent storage, better look into what encrypted storage solutions there are for your system.
[1] shm_open(3) is implemented on Linux by simply opening a file on a tmpfs mounted on /dev/shm.
| mmap(): Is it possible to prevent writing back to file with MAP_SHARED flag? |
1,539,294,126,000 |
I'm following this answer, trying to generate some major page faults with mmap:
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
int main(int argc, char ** argv) {
int fd = open(argv[1], O_RDONLY);
struct stat stats;
fstat(fd, &stats);
posix_fadvise(fd, 0, stats.st_size, POSIX_FADV_DONTNEED);
char * map = (char *) mmap(NULL, stats.st_size, PROT_READ, MAP_SHARED, fd, 0);
if (map == MAP_FAILED) {
perror("Failed to mmap");
return 1;
}
int result = 0;
int i;
for (i = 0; i < stats.st_size; i++) {
result += map[i];
}
munmap(map, stats.st_size);
return result;
}
I tried to map a 1.6G file then read but only 1 major page fault occurred.
Major (requiring I/O) page faults: 1
Minor (reclaiming a frame) page faults: 38139
When I read data randomly by
// hopefully this won't trigger extra page faults
unsigned int idx = 0;
for (i = 0; i < stats.st_size; i++) {
result += map[idx % stats.st_size];
idx += i;
}
the page faults surged to 16415
Major (requiring I/O) page faults: 16415
Minor (reclaiming a frame) page faults: 37665
Is there something like prefetching in kernel to preload mmap data? How can I tell this by /usr/bin/time or perf?
I'm using gcc 6.5.0 and Ubuntu 18.04 with 4.15.0-54-generic.
|
Yes, the kernel does readahead by default (what you called prefetching), see https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/mm/filemap.c?h=v5.5#n2476
You can disable readahead on this memory region by calling posix_madvise() after mmap() with the POSIX_MADV_RANDOM advice.
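For example, a sketch using Python's mmap wrapper. Note it exposes madvise(2) rather than posix_madvise(3), but MADV_RANDOM disables readahead on the region the same way:

```python
import mmap
import os
import tempfile

# Write a small two-page file, map it read-only.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x" * 8192)
    path = f.name

fd = os.open(path, os.O_RDONLY)
m = mmap.mmap(fd, 0, prot=mmap.PROT_READ)

# MADV_RANDOM: "expect random access, read the minimum amount of data
# on each fault" -- i.e. no readahead for this mapping.
m.madvise(mmap.MADV_RANDOM)

total = sum(m[i] for i in range(0, 8192, 4096))  # touch both pages

m.close()
os.close(fd)
os.unlink(path)
```

In the C program from the question, the equivalent call would be posix_madvise(map, stats.st_size, POSIX_MADV_RANDOM) right after the mmap.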
| Weird major page fault number when reading sequentially / randomly in mmap region |
1,539,294,126,000 |
I have an app that uses multiple memory mapped files. If I check the major page faults numbers (with /proc/<pid>/stat), they skyrocket.
I was wondering if it's possible to monitor somehow what memory mapped files are affected by the page swap ins and outs for a process?
At least I would like to see what mmap-ed files are accessed for a process. I tried with strace, but I found no reads, because I guess no system calls are needed for the direct access to memory.
I would be happy also to know the virtual address in process space where they happen, so at least I could map them manually to the files in pmap output
|
perf trace -F maj
http://man7.org/linux/man-pages/man1/perf-trace.1.html
To connect to an existing process, use -p $PID. If you don't want to show system calls, pass --no-syscalls as well. The system call arguments won't be shown with the same level of detail as strace.
| Monitoring page cache / memory mapped files access |
1,539,294,126,000 |
I'm writing my own data store directly on top of a block device. To ensure durability I want to sync to disk. But here's the thing: I want to sync only part of it.
I'm keeping a journal for crash recovery, and write my future changes to the journal before applying them to the actual place on disk. Then I want to ensure the journal changes are written to disk, and only then make the actual changes to the rest of the disk (which I don't care about fsyncing, until I checkpoint my journal).
I could simply fsync the entire block device, but that forces a lot of things that aren't urgent to be written out.
I have thought of two options, but I'm surprised there is no partial fsync(2) call and nobody asking for it from what I've found.
mmap(2) the full block device and use msync(2) to sync part of it.
open(2) the block device twice, once with O_SYNC and use one for lazy writes and one for my journal writes.
|
There is a Linux-specific system call: sync_file_range()
(Sidenote, using block devices is not portable to FreeBSD)
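A sketch of calling it from user space via ctypes, since not every libc wrapper is exposed everywhere (the flag values are copied from the Linux headers; this is Linux-only and the offsets here are just an illustration):

```python
import ctypes
import os
import tempfile

libc = ctypes.CDLL(None, use_errno=True)
libc.sync_file_range.argtypes = [
    ctypes.c_int, ctypes.c_long, ctypes.c_long, ctypes.c_uint
]

# Flag values from the Linux headers.
SYNC_FILE_RANGE_WAIT_BEFORE = 1
SYNC_FILE_RANGE_WRITE = 2
SYNC_FILE_RANGE_WAIT_AFTER = 4

fd, path = tempfile.mkstemp()
os.write(fd, b"journal entry\n")

# Flush only bytes [0, 14) of the file -- write them out and wait for
# completion -- without forcing unrelated dirty data to disk.
rc = libc.sync_file_range(
    fd, 0, 14,
    SYNC_FILE_RANGE_WAIT_BEFORE
    | SYNC_FILE_RANGE_WRITE
    | SYNC_FILE_RANGE_WAIT_AFTER,
)

os.close(fd)
os.unlink(path)
```

For a journal this is typically called on the journal region after writing an entry, before the corresponding in-place updates are issued.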
| Partial fsyncs when writing to block device |
1,496,166,008,000 |
I'm not sure about when to use nc, netcat or ncat.
If one is the deprecated version of another?
If one is only available on one distribution?
If it is the same command but with different names?
In fact I'm a bit confused. My question comes from wanting to do a network speed test between two CentOS 7 servers. I came across several examples using nc and dd but not many using netcat or ncat.
Could someone clarify this for me please?
|
nc and netcat are two names for the same program (typically, one will be a symlink to the other). Though—for plenty of confusion—there are two different implementations of Netcat ("traditional" and "OpenBSD"), and they take different options and have different features.
Ncat is the same idea, but from the Nmap project. There is also socat, which is a similar idea. There is also /dev/tcp, an (optional) Bash feature.
However, if you're looking to do network speed tests then all of the above are the wrong answer. You're looking for iperf3 (site 1 or site 2 or code).
| What are the differences between ncat, nc and netcat? |
1,496,166,008,000 |
In a solution of an exercise I found this:
nc -z [serverip] [port]
What does it do?
On nc man page I found
-z zero-I/O mode [used for scanning]
not very explanatory... Searching on the web I found the Netcat Cheat Sheet which says:
-z: Zero-I/O mode (Don’t send any data, just emit a packet without payload)
So, why should I send a packet without anything? It's like a ping?
|
It can be more useful to think of -z option as meaning "immediately close the connection". My version of nc has this to say about port scanning:
PORT SCANNING
It may be useful to know which ports are open and running services on a target machine. The -z flag can
be used to tell nc to report open ports, rather than initiate a connection. Usually it's useful to turn on
verbose output to stderr by use this option in conjunction with -v option.
For example:
$ nc -zv host.example.com 20-30
Connection to host.example.com 22 port [tcp/ssh] succeeded!
Connection to host.example.com 25 port [tcp/smtp] succeeded!
The port range was specified to limit the search to ports 20 - 30, and is scanned by increasing order (un‐
less the -r flag is set).
You can also specify a list of ports to scan, for example:
$ nc -zv host.example.com http 20 22-23
nc: connect to host.example.com 80 (tcp) failed: Connection refused
nc: connect to host.example.com 20 (tcp) failed: Connection refused
Connection to host.example.com port [tcp/ssh] succeeded!
nc: connect to host.example.com 23 (tcp) failed: Connection refused
The ports are scanned by the order you given (unless the -r flag is set).
Alternatively, it might be useful to know which server software is running, and which versions. This in‐
formation is often contained within the greeting banners. In order to retrieve these, it is necessary to
first make a connection, and then break the connection when the banner has been retrieved. This can be
accomplished by specifying a small timeout with the -w flag, or perhaps by issuing a "QUIT" command to the
server:
$ echo "QUIT" | nc host.example.com 20-30
SSH-1.99-OpenSSH_3.6.1p2
Protocol mismatch.
220 host.example.com IMS SMTP Receiver Version 0.84 Ready
You can use tcpdump to see what nc sends with and without -z.
Without -z:
carbon# nc -v localhost 25
Connection to localhost 25 port [tcp/smtp] succeeded!
220 carbon.home ESMTP Postfix (Ubuntu)
tcpdump -i lo port 25:
15:59:07.956294 IP6 localhost.41584 > localhost.smtp: Flags [S], seq 717573315, win 65476, options [mss 65476,sackOK,TS val 4044858638 ecr 0,nop,wscale 7], length 0
15:59:07.956309 IP6 localhost.smtp > localhost.41584: Flags [S.], seq 3478976646, ack 717573316, win 65464, options [mss 65476,sackOK,TS val 4044858638 ecr 4044858638,nop,wscale 7], length 0
15:59:07.956320 IP6 localhost.41584 > localhost.smtp: Flags [.], ack 1, win 512, options [nop,nop,TS val 4044858638 ecr 4044858638], length 0
15:59:07.956536 IP6 localhost.smtp > localhost.41584: Flags [P.], seq 1:41, ack 1, win 512, options [nop,nop,TS val 4044858639 ecr 4044858638], length 40: SMTP: 220 carbon.home ESMTP Postfix (Ubuntu)
15:59:07.956548 IP6 localhost.41584 > localhost.smtp: Flags [.], ack 41, win 512, options [nop,nop,TS val 4044858639 ecr 4044858639], length 0
15:59:14.917615 IP6 localhost.41584 > localhost.smtp: Flags [F.], seq 1, ack 41, win 512, options [nop,nop,TS val 4044865599 ecr 4044858639], length 0
15:59:14.917754 IP6 localhost.smtp > localhost.41584: Flags [F.], seq 41, ack 2, win 512, options [nop,nop,TS val 4044865600 ecr 4044865599], length 0
15:59:14.917773 IP6 localhost.41584 > localhost.smtp: Flags [.], ack 42, win 512, options [nop,nop,TS val 4044865600 ecr 4044865600], length 0
With -z:
carbon# nc -zv localhost 25
Connection to localhost 25 port [tcp/smtp] succeeded!
tcpdump:
15:59:22.394593 IP6 localhost.41592 > localhost.smtp: Flags [S], seq 449578009, win 65476, options [mss 65476,sackOK,TS val 4044873076 ecr 0,nop,wscale 7], length 0
15:59:22.394605 IP6 localhost.smtp > localhost.41592: Flags [S.], seq 3916701833, ack 449578010, win 65464, options [mss 65476,sackOK,TS val 4044873076 ecr 4044873076,nop,wscale 7], length 0
15:59:22.394615 IP6 localhost.41592 > localhost.smtp: Flags [.], ack 1, win 512, options [nop,nop,TS val 4044873076 ecr 4044873076], length 0
15:59:22.394683 IP6 localhost.41592 > localhost.smtp: Flags [F.], seq 1, ack 1, win 512, options [nop,nop,TS val 4044873076 ecr 4044873076], length 0
15:59:22.394828 IP6 localhost.smtp > localhost.41592: Flags [P.], seq 1:41, ack 2, win 512, options [nop,nop,TS val 4044873077 ecr 4044873076], length 40: SMTP: 220 carbon.home ESMTP Postfix (Ubuntu)
15:59:22.394840 IP6 localhost.41592 > localhost.smtp: Flags [R], seq 449578011, win 0, length 0
You can see that the server still sent the greeting (220 carbon.home ESMTP Postfix (Ubuntu)) but nc did not print it (and presumably did not read it).
| What is `nc -z` used for? |
1,496,166,008,000 |
I know this is not a very descriptive title (suggestions are welcome), but the fact is that I've been pulling my hair over this for hours and I have no clue where the root of the problem might lie.
I wrote a simple Bash script for CLI chat between peers on a local network:
#!/usr/bin/env bash
# Usage: ./lanchat <local_ip>:<local_port> <remote_ip>:<remote_port>
# set -x
set -o errexit -o nounset -o pipefail
IFS=':' read -a socket <<< "$1"
LOCAL_IP=${socket[0]}
LOCAL_PORT=${socket[1]}
IFS=':' read -a socket <<< "$2"
REMOTE_IP=${socket[0]}
REMOTE_PORT=${socket[1]}
RECV_FIFO=".tmp.lanchat"
trap "rm '$RECV_FIFO'; kill 0" EXIT
mkfifo "$RECV_FIFO"
# EDIT: As per @Kamil Maciorowski's suggestion, removing the `-q 0` part below solves the issue.
while true; do nc -n -l -q 0 -s "$LOCAL_IP" -p "$LOCAL_PORT" > "$RECV_FIFO"; done &
TMUX_TOP="while true; do cat '$RECV_FIFO'; done"
TMUX_BOTTOM="while IFS= read -r line; do nc -n -q 0 '$REMOTE_IP' '$REMOTE_PORT' <<< \$line; done"
tmux new "$TMUX_TOP" \; split -v "$TMUX_BOTTOM"
The machine on IP 172.16.0.2 is a VPS running Debian 11, and on 172.16.0.100 is my local computer running Arch.
When I run the commands manually at the prompt on both sides, I get the desired result, which confirms that there is no issue with network communication and that the logic of the script is correct.
## VPS (Debian) side as follows; exchange IPs for local (Arch) side.
$ mkfifo .tmp.lanchat
$ while true; do nc -n -l -q 0 -s 172.16.0.2 -p 1234 > .tmp.lanchat; done &
$ tmux new "while true; do cat .tmp.lanchat; done" \; split -v "while IFS= read -r line; do nc -n -q 0 172.16.0.100 1234 <<< \$line; done"
## Test communication in both directions: all right; then CTRL-C twice to exit both tmux panels
$ kill %1; rm .tmp.lanchat
When I run both sides as a script, however, only the local side (Arch) prints messages from the server (Debian). The server prints nothing from my local computer. When I trace the execution with set -x, everything on both sides looks exactly like the commands that I enter manually, with the right values in place of variables.
Now the odd thing is that if I run the script on the Arch side and commands at the prompt (like above) on the Debian side, then everything works fine again. Furthermore, if I execute the script on the Arch side but source it on the Debian side, that too works fine.
Adding verbose output to both nc calls on the Arch side even prints Connection to 172.16.0.2 1234 port [tcp/*] succeeded!. However, adding a tee log.txt to the call to nc in listening mode on the Debian side does not capture anything:
#...
while true; do
nc -n -l -q 0 -s "$LOCAL_IP" -p "$LOCAL_PORT" | tee log.txt > "$RECV_FIFO";
done &
#...
I tried establishing the connection in all possible orders between the two peers. I even restarted both the server and my local machine to make sure that there were no orphaned or zombie instances of nc hugging the socket that had somehow evaded detection.
Now, Debian and Arch run different versions of nc. So, on the face of it, it sounds like this could be a possible explanation. But doesn't the fact that sourcing the script on Debian's side works fine rule out that possibility?
What the heck is going on, here?
|
I have tested your script in Debian 12 (localhost to localhost, separate working directories) and I confirm the problem. My nc is from netcat-traditional 1.10-47 (i.e. not from netcat-openbsd).
The problem is in -q 0 of the listening nc. From man 1 nc:
-q seconds
after EOF on stdin, wait the specified number of seconds and then quit. If seconds is negative, wait forever.
It seems the listening nc waits for an incoming connection before quitting because of -q 0, it does not wait for incoming data though. Establishing a connection and transmitting data are separate events and because of -q 0 the tool usually quits in between. It's a race; in my tests the listening nc sometimes did relay incoming data to the pipe.
The EOF that triggers the unexpected behavior happens immediately because when a shell without job control runs an asynchronous command (terminated by &, this is how you run the loop with the listening nc), it is obliged to redirect its stdin to /dev/null or to an equivalent file.
When you source the script, your interactive shell interprets it. It's probably bash with job control enabled (the default behavior for an interactive bash). If so, it runs the background loop in a separate process group, but its stdin is still connected to the terminal (in general this allows us to fg a background job and type to it). For a background job the inability to steal input from the terminal comes from SIGTTIN, EOF never happens. This way, when the script is being sourced, the listening nc does not suffer from -q 0 that is the problem when you run the script without sourcing.
Specifying -q 1 for the listening nc will help in practice (while still being racy in theory, I guess), but I think it's best to use -q -1 (wait forever) or simply omit -q (in my tests the default behavior seems to be "wait forever").
-q 0 for the connecting nc (the one inside tmux) makes sense, you do want this nc to quit immediately after sending the payload.
nc on your Arch behaved differently maybe because it's different, or maybe because the overall stress on the OS at that time affected the race.
The lesson is: in case of a nc+nc -l pair that sends data in only one direction (you use one such pair for each line), -q 0 is a useful option for the sender; but for the receiver it's unnecessary, in some circumstances even harmful.
There is more to improve, e.g.:
there is a code injection vulnerability (./lanchat <local_ip>:<local_port> <remote_ip>:<remote_port>"'; rogue command'");
there are short time windows when there is no listening nc on one end or the other;
one pair of ncs is enough to handle a whole "session".
I won't address these here, however I can give you a sketch of an alternative script:
#!/usr/bin/env bash
target="$(tmux new -dP 'tail -f /dev/null')"
uptty="$(tmux display-message -p -F '#{pane_tty}' -t "$target")"
tmux split -t "$target" -v "
rlwrap tee >(sed -u 's/^/ < /' | ts %H:%M >${uptty@Q}) \
| nc ${*@Q} > >(sed -u 's/^/> /' | ts %H:%M >${uptty@Q})
"
tmux a -t "$target"
The script does require bash (for itself and inside tmux). You run it with arguments you want to provide to nc, so e.g.
first a listening side: ./lanchat -n -l -s 192.168.11.22 -p 2345,
then a connecting side: ./lanchat 192.168.11.22 2345.
A single nc to nc connection handles all the communication in both directions. The script uses ts for timestamps (you can remove both instances of | ts %H:%M if you want) and rlwrap for line editing with readline (you can remove rlwrap if you want). sed -u is not portable; sed without -u will cause buffering issues, unless you also get rid of ts.
Tested in bash 5.2.15, tmux 3.3a.
| Odd inconsistency between executing and sourcing Bash script |
1,496,166,008,000 |
I'm wondering if there's any way to get telnet to send only a \n, not a \r\n.
For example, if one process is listening on a port like this, to print the bytes of any traffic received:
nc -l 1234 | xxd -c 1
Connecting to it from netcat with nc localhost 1234, and typing "hi[enter]":
0000000: 68 h
0000001: 69 i
0000002: 0a .
Connecting to it from telnet with telnet localhost 1234, and typing "hi[enter]"
0000000: 68 h
0000001: 69 i
0000002: 0d .
0000003: 0a .
Telnet is sending 0x0d0a instead of 0x0a for the newline. I understand that this is a CRLF as opposed to LF. It also sends the CRLF if I use ^M or ^J.
I thought I had found a solution that directly addresses this problem, by using toggle crlf, but even with this option set, Telnet is always sending the \r\n. I've also tried this on various telnet clients, so I'm guessing I'm misunderstanding what the toggling is supposed to do.
Any way to send just a \n through telnet, with enter or otherwise?
|
You can negotiate binary mode. Once in this mode you cannot leave it. Negotiation means the telnet client will send a special byte sequence to the server, which you will have to ignore if you are not implementing the protocol.
Subsequent data is sent unchanged, in line mode. Client:
$ telnet localhost 1234
Connected to localhost.
Escape character is '^]'.
^]
telnet> set binary
Negotiating binary mode with remote host.
hi
^]
telnet> quit
and server
$ nc -l 1234 | xxd -c 1
00000000: ff .
00000001: fd .
00000002: 00 .
00000003: ff .
00000004: fb .
00000005: 00 .
00000006: 68 h
00000007: 69 i
00000008: 0a .
Your telnet client may have an option to start off in binary mode, or you can put an entry in ~/.telnetrc
localhost
set binary
You can apply the binary mode independently in each direction, so you might prefer set outbinary.
| Any way to send just "\n" in Telnet? |
1,496,166,008,000 |
Can anybody tell me why I am getting bad request while executing this command
echo -e "GET http://www.yellowpages.com.eg/Mjg3NF9VUkxfMTEwX2h0dHA6Ly93d3cubG90dXMtYWlyLmNvbV8=/Lotus-Air/profile.html HTTP/1.1\n\n" | nc www.yellowpages.com 80
The same web site opens fine in the browser.
|
The headers in an HTTP request must use CRLF (Windows) line endings. (See Wikipedia or RFC 2616.) Many servers support LF (Unix) line endings as an extension, but not this one.
In addition, HTTP 1.1 requires a Host: header line, as Warren Young pointed out. (See Wikipedia or RFC 2616).
echo -e "GET http://www.yellowpages.com.eg/Mjg3NF9VUkxfMTEwX2h0dHA6Ly93d3cubG90dXMtYWlyLmNvbV8=/Lotus-Air/profile.html HTTP/1.1\r\nHost: www.yellowpages.com.eg\r\n\r\n" | nc www.yellowpages.com 80
or more legibly
sed $'s/$/\r/' <<EOF | nc www.yellowpages.com 80
GET http://www.yellowpages.com.eg/Mjg3NF9VUkxfMTEwX2h0dHA6Ly93d3cubG90dXMtYWlyLmNvbV8=/Lotus-Air/profile.html HTTP/1.1
Host: www.yellowpages.com.eg
EOF
But why not use wget or curl, which will construct a valid request without sweating and still allow you to specify custom headers if necessary?
| How do I get a URL over HTTP with netcat? |
1,496,166,008,000 |
I used following command for port scanning of my machine
nc -zv 192.168.1.1 1-100
but I want to filter only the succeeded messages from the following output. Hence I used the following command
nc -zv 192.168.1.1 1-100|grep succeeded
But it's no use; it still shows the full output
nc: connect to 192.168.1.1 port 1 (tcp) failed: Connection refused
nc: connect to 192.168.1.1 port 2 (tcp) failed: Connection refused
nc: connect to 192.168.1.1 port 3 (tcp) failed: Connection refused
nc: connect to 192.168.1.1 port 4 (tcp) failed: Connection refused
nc: connect to 192.168.1.1 port 5 (tcp) failed: Connection refused
nc: connect to 192.168.1.1 port 6 (tcp) failed: Connection refused
nc: connect to 192.168.1.1 port 7 (tcp) failed: Connection refused
nc: connect to 192.168.1.1 port 8 (tcp) failed: Connection refused
nc: connect to 192.168.1.1 port 9 (tcp) failed: Connection refused
|
Change your command to this:
nc -zv 192.168.1.1 1-100 2>&1 | grep succeeded
2>&1 causes stderr of a program to be written to the same file descriptor as stdout. nc writes to stderr by default, pipe will only get stdout hence grep will miss the data.
See section 3.5 here for more info All about redirection.
| How to filter the success message when using nc port scan |
1,496,166,008,000 |
Today I was reading the nc man page and stumbled on this command. I know that:
mkfifo /tmp/f is creating a named pipe at /tmp/f.
cat /tmp/f is printing whatever is written to that named pipe
and the output of cat /tmp/f is been piped to /bin/sh
/bin/sh is running interactively and stderr is redirected to stdout.
the output is then piped to nc which is listening on port 1234
and the output is finally redirected to the named pipe again.
When run, connecting to the remote server on that port (i.e. 1234) opens a shell prompt and the client can execute arbitrary commands. But how does it work that way?
|
Such a command takes advantage of IO redirection and sh's interactive mode, which is normally on by default when sh is attached to a TTY.
Note that cat stays open on the FIFO. That's your first clue. Normally sh directs its stdout and stderr to the TTY and automatically goes into interactive mode when attached to one. Here, however, sh is not attached to a TTY, so the -i option is added to force interactive mode. This means it will keep taking input for new commands. The output of those commands is directed to the stdin of nc, and the output of nc (which is the commands coming over the network) is redirected to the FIFO.
The FIFO is essentially being used as a named pipe to complete the ring of redirection.
You can think of it more simply as sh and nc are redirecting to each other in a loop. The rest of the command is just fluff.
| How does this command work? mkfifo /tmp/f; cat /tmp/f | /bin/sh -i 2>&1 | nc -l 1234 > /tmp/f |
1,496,166,008,000 |
When I try to run nc -l 1337 -e /bin/bash, it says:
nc: invalid option -- e
usage: nc [-46AacCDdEFhklMnOortUuvz] [-K tc] [-b boundif] [-i interval]
[-p source_port] [--apple-delegate-pid pid] [--apple-delegate-uuid uuid]
[-s source_ip_address] [-w timeout] [-X proxy_version]
[-x proxy_address[:port]] [hostname] [port[s]]
I want to run commands remotely, but instead it just remotely prints text. Why is this not working and how can I fix it?
|
You don't have to use nc -l 1337 -e /bin/bash. An alternative that behaves much the same way is nc -l 1337 | /bin/bash, which pipes everything nc receives into /bin/bash. (Note that the commands' output then appears on the listener's terminal rather than being sent back to the client.)
| Is nc (netcat) on MacOS missing the "-e" flag? |
1,496,166,008,000 |
I'm using curl to request a specific URL and getting 200 OK response:
curl -v www.youtypeitwepostit.com
* About to connect() to www.youtypeitwepostit.com port 80 (#0)
* Trying 54.197.246.21...
* Connected to www.youtypeitwepostit.com (54.197.246.21) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: www.youtypeitwepostit.com
> Accept: */*
>
< HTTP/1.1 200 OK
...
If I save the headers to a file as:
GET / HTTP/1.1
User-Agent: curl/7.29.0
Host: www.youtypeitwepostit.com
Accept: */*
and try to execute nc command (netcat):
nc www.youtypeitwepostit.com 80 < file
HTTP/1.1 505 HTTP Version Not Supported
Connection: close
Server: Cowboy
Date: Wed, 02 Nov 2016 04:08:34 GMT
Content-Length: 0
I'm getting another response. What's the difference and how can I get 200 OK using nc?
I tried with different versions of HTTP in request header, tried to type request manually to avoid wrong CRLFs, tried to exclude optional headers. The results are similar.
|
The relevant RFC, Hypertext Transfer Protocol (HTTP/1.1): Message Syntax and Routing contains the answer to your question: that each line of a HTTP request should end with CR/LF.
The grammar for the HTTP Message Format specifies that each header line should end with a Carriage Return character (0x0d in ASCII) followed by a line feed character (0x0a):
HTTP-message = start-line
*( header-field CRLF )
CRLF
[ message-body ]
This is expressed more clearly in the description of the Request Line:
A request-line begins with a method token, followed by a single space (SP), the request-target, another single space (SP), the protocol version, and ends with CRLF.
request-line = method SP request-target SP HTTP-version CRLF
Since curl is a program developed specifically for making HTTP requests, it already uses the appropriate line endings. netcat, however, is a more general-purpose program: as a Unix utility, it uses line feed characters for line endings by default, leaving it to the user to ensure that lines are terminated correctly.
You can use the unix2dos utility to convert the file containing the request headers to use Carriage Return / Line Feed endings.
If you want to type the HTTP request by hand and have a recent version of nc, you should use its -C option to use CRLF for line endings:
nc -C www.youtypeitwepostit.com 80
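If you'd rather keep redirecting from a file, you can also build the request with explicit \r\n escapes in printf, so no conversion step is needed (the host and file names follow the question; the od call just makes the CR/LF bytes visible):

```shell
# Each header line ends in an explicit CRLF; the request ends with a blank line.
printf 'GET / HTTP/1.1\r\nHost: www.youtypeitwepostit.com\r\nConnection: close\r\n\r\n' > file
od -c file | head -n 2    # shows the \r \n pairs at the line ends

# then send it with: nc www.youtypeitwepostit.com 80 < file
```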
By the way, it’s worth noting that most popular Internet protocols (e.g., SMTP) use CR/LF line endings.
Note that some web servers (e.g. Apache) are more forgiving and will accept request lines that are terminated only with a Line Feed character. The HTTP specification allows for this, as mentioned in the Message Parsing Robustness section:
Although the line terminator for the start-line and header fields is the sequence CRLF, a recipient MAY recognize a single LF as a line terminator and ignore any preceding CR.
| What's the difference between using netcat (nc) and curl for HTTP requests? |
1,496,166,008,000 |
The following variable contains, for example, these values:
echo $SERVERS
server1,server2,server3,server4,server5
and when I want to pipe them on different lines then I do the following
echo $SERVERS | tr ',' '\n'
server1
server2
server3
server4
server5
Now I want to add another pipe (echo $SERVERS | tr ',' '\n' | .....) in order to print the following expected results:
1 ……………… server1
2 ………………… server2
3 ………………… server3
4 ………………… server4
5 ………………… server5
6 ……………… server6
7 ………………… server7
8 ………………… server8
9 ………………… server9
10 ………………… server10
11 ………………… server11
12 ………………… server12
Not sure how to do it, but maybe with the nc command or similar.
Any suggestions?
|
With awk:
$ servers='server1,server2,server3,server4,server5'
$ awk -v RS=, '{print NR "........" $0}' <<<"$servers"
1........server1
2........server2
3........server3
4........server4
5........server5
or, to output the line numbers with left-padding
awk -v RS=, '{printf "%3d........%s\n",NR,$0}' <<<"$servers"
(choose the field width 3 as appropriate for the size of your server list).
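An alternative, sticking with the tr pipeline from the question, is the standard nl utility (the separator string and the field width of 3 are just illustrative):

```shell
servers='server1,server2,server3,server4,server5'
# -w3 sets the number field width, -s sets the separator after the number
echo "$servers" | tr ',' '\n' | nl -w3 -s'........'
#   1........server1
#   2........server2
#   ...
```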
| convert one line values to multiple lines with numbering order |
1,496,166,008,000 |
Consider /var/run/acpid.socket. At any point I can connect to it and disconnect from it. Compare that with nc:
$ nc -l -U ./myunixsocket.sock
Ncat: bind to ./myunixsocket.sock: Address already in use. QUITTING.
nc apparently allows only single-use sockets. The question, then, is: how do I create a socket analogous to /var/run/acpid.socket, one that can be used and reused multiple times?
|
You do it with the -k option to nc.
-k      Forces nc to stay listening for another connection after its current connection is completed. It is an error to use this option without the -l option. When used together with the -u option, the server socket is not connected and it can receive UDP datagrams from multiple hosts.
Example:
$ rm -f /tmp/socket # unlink the socket if it already exists
$ nc -vklU /tmp/socket # the server
Connection from mack received!
yes
Connection from mack received!
yes
...
It's recommended to unlink() the socket after use -- but, in fact, most programs check if it exists and remove it before calling bind() on it; if the socket path exists in the filesystem and you try to bind() to it, you will get an EADDRINUSE error even when no program is using it in any way.
One way to avoid this whole mess on Linux is to use "abstract" Unix sockets, but they don't seem to be supported by netcat.
| How to create a public unix domain socket? |
1,496,166,008,000 |
I've read the manual of nc, it tells me that nc -l means
Used to specify that nc should listen for an incoming connection rather than initiate a connection to a remote host. It is an error to use this option in conjunction with the -p, -s, or -z options. Additionally, any timeouts specified with the -w option are ignored.
As I understand it, nc -l acts like a server. For example, on the same machine, one terminal running nc -l 9000 will listen on port 9000. Another terminal running nc localhost 9000 becomes a client. So I can send messages from the second terminal to the first.
However, today I'm learning the Apache Flink. Here is the Hello World of Flink: https://ci.apache.org/projects/flink/flink-docs-stable/getting-started/tutorials/local_setup.html
$ nc -l 9000
lorem ipsum
ipsum ipsum ipsum
bye
It seems that nc -l here is sending messages, instead of just listening on the port.
I'm confused now.
|
Being the target of an incoming connection doesn't prevent netcat from sending data. Once a client has connected, it can both send data to and receive data from the client. In this case, it's sending data to the Flink client.
| How to understand the action of nc -l |