I am planning to replace two soft RAID1 disks (2 TB) with two identical 4 TB disks. The current disks are configured with soft RAID 1 running Debian 11. The disks are not root.

My plan is to replace (hot swap is supported) one 2 TB disk with a new 4 TB disk and wait for the disks to sync with mdadm (I'm not even sure how to do that but I guess I'll Google it). Once the sync is over I am planning to do the same hot swap with the remaining 2 TB disk, replacing it with the other new 4 TB disk, and wait for mdadm to finish syncing. At this point, I will still (hopefully) find myself with two LUKS disks with 2 TB partitions that I need to enlarge. This operation is a pain in the ass, but I have done it before on my laptop, though never with a RAID1 configuration.

Do you think my plan makes sense? Can you give some guidance on how to enlarge a RAID 1 LUKS partition (that is, the last step of my plan)? Is there any other smarter option?

As per request in the comments, here's the output of lsblk:

```
root@server:~$ lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda             8:0    0   3.6T  0 disk
`-sda1          8:1    0   3.6T  0 part
  `-md4         9:4    0   3.6T  0 raid1
    `-4tb     253:1    0   3.6T  0 crypt /media/4tb
sdb             8:16   0   3.6T  0 disk
`-sdb1          8:17   0   3.6T  0 part
  `-md4         9:4    0   3.6T  0 raid1
    `-4tb     253:1    0   3.6T  0 crypt /media/4tb
sdc             8:32   0 119.2G  0 disk
`-sdc1          8:33   0 119.2G  0 part
  `-md127       9:127  0 119.2G  0 raid1 /
sdd             8:48   0 119.2G  0 disk
`-sdd1          8:49   0 119.2G  0 part
  `-md127       9:127  0 119.2G  0 raid1 /
sde             8:64   0 465.8G  0 disk
`-sde1          8:65   0 465.8G  0 part
  `-md0         9:0    0 465.6G  0 raid1
sdf             8:80   0   2.7T  0 disk
`-sdf1          8:81   0   2.7T  0 part
  `-3tb       253:2    0   2.7T  0 crypt /media/3tb
sdg             8:96   1 931.5G  0 disk
`-sdg1          8:97   1 931.5G  0 part
  `-md1         9:1    0 931.4G  0 raid1
    `-vm      253:3    0 931.4G  0 crypt /media/vm
sdh             8:112  1   1.8T  0 disk
`-sdh1          8:113  1   1.8T  0 part
  `-md2         9:2    0   1.8T  0 raid1
    `-backup  253:0    0   1.8T  0 crypt /media/backup
sdi             8:128  1 931.5G  0 disk
`-sdi1          8:129  1 931.5G  0 part
  `-md1         9:1    0 931.4G  0 raid1
    `-vm      253:3    0 931.4G  0 crypt /media/vm
sdj             8:144  1   1.8T  0 disk
`-sdj1          8:145  1   1.8T  0 part
  `-md2         9:2    0   1.8T  0 raid1
    `-backup  253:0    0   1.8T  0 crypt /media/backup
```
All your /media/* mounts seem to use the disk -> part -> raid1 -> crypt layering. Note that if your existing 2 TB disks are partitioned in MBR style, you can't really do that with the new larger disks, as you'll be hitting the MBR maximum capacity limit. Fortunately, Linux software RAID does not require you to have the same type of partitioning on the individual halves of the RAID set (in fact, with non-boot disks, you also have the option of not using any partitioning scheme at all).

So, assuming that md2 is the RAID set you wish to migrate to the larger disks, and you'll want to swap sdh first:

1. Mark sdh1 as a failed RAID1 component: `mdadm --manage /dev/md2 --fail /dev/sdh1`
2. Remove it from the md2 RAID set: `mdadm --manage /dev/md2 --remove /dev/sdh1`
3. Tell the kernel that the disk will be hot-unplugged: `echo 1 > /sys/block/sdh/device/delete`. Depending on the controller and the disk model, the disk may or may not actually spin down as you do this.
4. Physically replace the sdh disk with a new one.
5. Find out the new disk device name (it may or may not be /dev/sdh; let's call it sdX).
6. If you wish to use partitioning, use the GPT partitioning type and create a single partition that covers the whole disk. Set the partition type to "Linux RAID" (GPT partition type GUID A19D880F-05FC-4D3B-A006-743F0F84911E; most GPT partitioning tools have some more user-friendly way to specify that). This partition will be bigger than the existing half of this RAID set; this is entirely fine at this point. mdadm will only use as much as needed to mirror the sdj1 partition until you give it permission to use the full capacity in step 10 below. (If you use partitioning, you may want to use some specific size instead of just using the disk in full, in order to guard against the possibility that you might need to replace the disk in the future and might not be able to source replacement disks with the exact same number of blocks. Then again, future disks are likely to be bigger than the current ones, which would make this a non-issue.)
7. Add the new disk to the RAID set. If you partitioned the disk, use `mdadm --manage /dev/md2 --add /dev/sdX1`; if you decided to use the whole disk as an unpartitioned RAID device, use `mdadm --manage /dev/md2 --add /dev/sdX`.
8. Monitor /proc/mdstat to see when the synchronization is complete.
9. Repeat steps 1-8 for the second disk, sdj.
10. When both disks have been replaced and are in sync, use `mdadm --grow /dev/md2 --size=max` to allow the md2 RAID device to fully use the increased capacity. (If you chose not to use partitioning in step 6 above, you could also use a specific size instead of --size=max here, for the same reason as in step 6.)
11. Once the md2 device has been successfully resized, use `cryptsetup resize /dev/mapper/backup` to resize the encrypted device on top of the md2 RAID set.
12. Finally, resize the filesystem on top of the encrypted device, with either `fsadm resize /dev/mapper/backup`, or by using a filesystem-specific tool (`resize2fs /dev/mapper/backup` for the ext2/ext3/ext4 filesystem types, `xfs_growfs /media/backup` for XFS, etc.).
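For reference, the whole sequence can be condensed into a dry-run shell sketch. The `run` wrapper only echoes each command, so this is safe to execute as-is; drop the wrapper to perform the real operations. The names md2, sdh1, sdj1 and backup are the ones from this question, while sdX1/sdY1 stand in for whatever names the new disks get:

```shell
# Dry-run sketch of the replacement sequence; "run" echoes instead of executing.
run() { echo "+ $*"; }

replace_member() {
    old_part=$1
    new_part=$2
    old_disk=${old_part%1}
    run mdadm --manage /dev/md2 --fail "/dev/$old_part"
    run mdadm --manage /dev/md2 --remove "/dev/$old_part"
    run sh -c "echo 1 > /sys/block/$old_disk/device/delete"
    # ...physically swap the disk, then partition it (GPT, type "Linux RAID")...
    run mdadm --manage /dev/md2 --add "/dev/$new_part"
    run cat /proc/mdstat   # repeat until the resync is complete
}

replace_member sdh1 sdX1   # first disk
replace_member sdj1 sdY1   # second disk (sdY1 is hypothetical, like sdX1)

# Only after both members are replaced and fully in sync:
run mdadm --grow /dev/md2 --size=max
run cryptsetup resize /dev/mapper/backup
run resize2fs /dev/mapper/backup   # or xfs_growfs /media/backup for XFS
```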
Replacing soft RAID1 LUKS disks with larger ones
I have this line in /etc/crypttab for my swap partition:

```
luks-4205519b-f3fe-468f-b05e-44f25f6882a4 UUID=4205519b-f3fe-468f-b05e-44f25f6882a4 /crypto_keyfile.bin luks,keyscript=/bin/cat
```

I commented it out, so it now looks like this:

```
# luks-4205519b-f3fe-468f-b05e-44f25f6882a4 UUID=4205519b-f3fe-468f-b05e-44f25f6882a4 /crypto_keyfile.bin luks,keyscript=/bin/cat
```

I rebooted, but the mapper file /dev/mapper/luks-4205519b-f3fe-468f-b05e-44f25f6882a4 still gets created. Why?
You need to update your initramfs; there's a copy of crypttab embedded there. On Debian derivatives, run:

```
sudo update-initramfs -u
```
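You can confirm that the initramfs embeds a copy of crypttab by listing its contents. Sketched here with a dry-run wrapper that only echoes the commands (drop the wrapper to run them for real); the initrd path is the usual Debian/Ubuntu one, adjust if yours differs:

```shell
# "run" echoes instead of executing, so this is safe to paste anywhere.
run() { echo "+ $*"; }

# rebuild the initramfs for the running kernel
run update-initramfs -u
# then look for the embedded crypttab copy; after the rebuild, the
# commented-out entry should no longer appear in it
run sh -c 'lsinitramfs /boot/initrd.img-$(uname -r) | grep -i crypttab'
```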
Why does the swap mapper file get created even though I removed it from /etc/crypttab?
Working on creating a systemd service to retrieve a key file from a remote SSH server and then use it to auto-mount an encrypted LUKS disk on a server (not the root drive).

```
[Unit]
Description=Open encrypted data volume
After=network-online.target
Wants=network-online.target
StopWhenUnneeded=true

[Service]
Type=oneshot
ExecStart=/bin/sh -c '/etc/luks/key.sh | /sbin/cryptsetup -d - -v luksOpen /dev/disk/by-uuid/13b051b5-7f4f-4030-92da-d59f12422f40 Data_Crypt'
RemainAfterExit=true
ExecStop=/sbin/cryptsetup -d - -v luksClose Data_Crypt
```

This appears to work correctly; however, every time I run `systemctl start unlock-data.service` I check the systemd logs and I can see it both unlocked the drive and then locked it. Both ExecStart and ExecStop are firing. If I completely remove the ExecStop line and run `systemctl start` again, it unlocks the drive exactly as expected. I've also tried changing the type to "simple", but that didn't work either; I believe oneshot is correct for what I'm doing. This is on Debian 11.3, fresh install today. Why is ExecStop firing every time this starts?
The behaviour is most likely due to the StopWhenUnneeded=true setting in the [Unit] section. As per the manpage, the definition of "unneeded" is this:

> StopWhenUnneeded=
> Takes a boolean argument. If true, this unit will be stopped when it is no longer used. Note that, in order to minimize the work to be executed, systemd will not stop units by default unless they are conflicting with other units, or the user explicitly requested their shut down. If this option is set, a unit will be automatically cleaned up if no other active unit requires it. Defaults to false.

Since no other unit depends on this custom unit, systemd stops it as soon as it starts.
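A version of the unit from the question without StopWhenUnneeded=true (everything else unchanged) would then stay active until stopped explicitly:

```ini
[Unit]
Description=Open encrypted data volume
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/bin/sh -c '/etc/luks/key.sh | /sbin/cryptsetup -d - -v luksOpen /dev/disk/by-uuid/13b051b5-7f4f-4030-92da-d59f12422f40 Data_Crypt'
ExecStop=/sbin/cryptsetup -d - -v luksClose Data_Crypt
```

With RemainAfterExit=true the unit counts as active after ExecStart finishes, so ExecStop only runs on an explicit `systemctl stop`.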
systemd ExecStop always executing?
I recently installed a fresh Ubuntu 20.04 image on my laptop. While doing so I chose full disk encryption using a password (I think it is LUKS but I do not know how to verify this). For some reason I cannot decrypt the disk in the standard "decrypt screen" (I believe it is because of the keyboard layout, which I chose to be Korean), but I cannot verify this because I can only see asterisks (*) when typing in the passphrase. After typing in the (apparently wrong) password I get the error message:

```
cryptsetup: ERROR: keystore-rpool: cryptsetup failed, bad password or options?
```

After three failed attempts the screen closes and a terminal shows up with initramfs as prompt. I tried figuring out which device is the encrypted device, but standard commands such as df -h or lsblk don't work in this environment. Because my keyboard seems to be working fine in this prompt, my question is how to decrypt the encrypted disk and continue the normal boot process (presumably by mounting the device?). I figured out that there is a command called cryptsetup, but I am unsure how to use it and on which device. When I type cryptsetup --help the output is too big for my laptop screen and I can't pipe the output into a pager to read the manual. I am unsure how to proceed; any suggestions are welcome.

Update: The only problem was me noting down a wrong password in my password manager. But I want to summarize all the useful information gathered on the way: As suggested in the answer in https://askubuntu.com/questions/1087230/ubuntu-18-04-cryptsetup-fails-to-recognize-passphrase-unlocking-from-live-usb I used `cryptsetup --debug luksDump /dev/<device>` to find the encrypted device. Also useful was knowing that I can switch away from the password screen with F1 or Alt + Tab to look for other debug messages.
You can open and map the device with:

```
cryptsetup --verbose luksOpen /dev/sda1 SECRET
```

where /dev/sda1 is your device and SECRET is the mapping name. If you succeed in unlocking your device, you need to mount it:

```
sudo mount /dev/mapper/SECRET /mnt
```

Maybe it would be more comfortable to plug in a live USB and try opening the device from another system. Also, a link you may find helpful: https://askubuntu.com/questions/1087230/ubuntu-18-04-cryptsetup-fails-to-recognize-passphrase-unlocking-from-live-usb
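A possible session at the initramfs prompt, sketched as a dry run: the `run` wrapper only echoes each command, so the sketch is safe to paste anywhere; drop it to execute for real. /dev/sda1 and SECRET are placeholders for your own device and mapping name:

```shell
# "run" echoes instead of executing; remove it for real use.
run() { echo "+ $*"; }

# check whether a given block device is a LUKS container (exit status 0 if so)
run cryptsetup isLuks /dev/sda1
# unlock and mount it
run cryptsetup --verbose luksOpen /dev/sda1 SECRET
run mount /dev/mapper/SECRET /mnt
# in many initramfs environments, typing "exit" resumes the normal boot
# once the root device has been set up
```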
Decrypting a full disk LUKS encryption manually from initramfs
I need to resize LVM on LUKS on Debian to take space from home and give it to var.

```
└─sda5               8:5    0   931G  0 part
  └─sda5_crypt     253:0    0   931G  0 crypt
    ├─my-vg-root   253:1    0  23.3G  0 lvm   /
    ├─my-vg-var    253:2    0   9.3G  0 lvm   /var
    ├─my-vg-swap_1 253:3    0   976M  0 lvm   [SWAP]
    ├─my-vg-tmp    253:4    0   1.9G  0 lvm   /tmp
    └─my-vg-home   253:5    0 802.8G  0 lvm   /home
```

I'm following the ResizeEncryptedPartitions tutorial. Boot the desktop live CD, then:

1. Install & configure the tools (lvm2 and cryptsetup).
2. Reduce the (root) file system with resize2fs.
3. Reduce the (root) (LVM) Logical Volume with lvreduce.
4. Reduce the (LVM) Physical Volume with pvresize.
5. Reduce the Crypt with cryptsetup.
6. Reboot to reduce the Partition storing the crypt with fdisk.

The tutorial continues, instructing the reverse:

> Detailed resizing ~ Enlarging an encrypted partition
> This section will be shorter, it is basically the reverse of the above.

My question: do I need to reduce the (LVM) Physical Volume (step 4) and reduce the Crypt (step 5), if I'm giving this space over to another partition? The tutorial gives a reason for resizing the LVM Physical Volume:

> Resize your (LVM) Physical Volume. The physical volume used by LVM can become "fragmented" in that the (LVM) Logical Volumes within the (LVM) Physical Volume are not always in order. There is no defragmentation tool, so you may need to manually move the logical partitions (back up the data, delete the (LVM) Logical Volume, re-create a replacement (LVM) Logical Volume, restore data from backup).

I'm thinking of taking home down from 800G to 200G, and var up from 9G to 200G, leaving 400G free to allocate later depending on how they both fill up. I get the idea: delete my swap and tmp LVM partitions, then change the var size. But the article seems more generic, so I'm asking here about my particular case. Also on SE: Resize an existing LVM partition and add the space to another LVM partition
You don't need to resize the LVM physical volume, the LUKS device, or the partition. If all you want is to "move" some space from the /home logical volume to the /var logical volume, you'll be working only at the logical volume level. Your steps will be (from a live CD; you'll need to unlock the encrypted drive first, either from the file manager or manually with cryptsetup): `lvreduce --resizefs -L 200g my-vg/home` to reduce /home to 200 GiB (--resizefs will take care of resizing the filesystem), then `lvextend --resizefs -L 200g my-vg/var` to grow /var to 200 GiB. And that's all. As always, something can go wrong with storage operations, so backing up your data is recommended.
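The whole live-CD session might look like this, sketched as a dry run: the `run` wrapper only echoes each command, so the block is safe to execute as-is; drop the wrapper for the real thing. The device, VG and LV names are the ones from the question:

```shell
# "run" echoes instead of executing; remove it for real use.
run() { echo "+ $*"; }

run cryptsetup luksOpen /dev/sda5 sda5_crypt   # unlock the LUKS layer first
run vgchange -ay my-vg                         # activate the volume group
run lvreduce --resizefs -L 200g my-vg/home     # shrinks the fs, then the LV
run lvextend --resizefs -L 200g my-vg/var      # grows the LV, then the fs
```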
Moving Gigs on LVM-on-Luks from one partition to another
I've set up a server with LUKS devices (not used for the root partition); they are listed in /etc/crypttab this way:

```
# <target name> <source device> <key file> <options>
luks_device_1 /dev/mapper/vg-lv_1 none luks
...
```

I've also set up a tang server and bound the devices to tang using the command:

```
clevis luks bind -d /dev/mapper/vg-lv_1 tang '{"url":"http://svr"}'
```

Finally, I enabled the units clevis-luks-askpass.path and clevis-luks-askpass.service to have the automatic unlocking mechanism working at boot. However, the devices are not unlocked at boot; the password is asked on the console, unless I add the string _netdev in the options section of /etc/crypttab. But I'm not really fond of that, because _netdev is supposed to be used for network devices. Did I miss something?
Actually, according to the manpage clevis-luks-unlockers(7), having the option _netdev in /etc/crypttab is necessary to trigger the automatic unlocking:

> After a reboot, Clevis will attempt to unlock all _netdev devices listed in /etc/crypttab when systemd prompts for their passwords. This implies that systemd support for _netdev is required.
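So the crypttab entry from the question only needs _netdev appended to its options for Clevis to pick the device up:

```
# <target name>  <source device>       <key file>  <options>
luks_device_1    /dev/mapper/vg-lv_1   none        luks,_netdev
```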
tang / clevis: automatic unlocking for luks device not triggered unless defined as _netdev
I've created a unit file to mount the /srv partition automatically. It first checks whether /dev/mapper/srv exists and then starts. I'd like to take it one step further and only let it start if /dev/mapper/srv is a LUKS-encrypted block device, using the ConditionPathIsEncrypted option. But I get the warning:

```
/etc/systemd/system/srv.mount:4: Unknown lvalue 'ConditionPathIsEncrypted' in section 'Unit'
```

I tried giving it a boolean value; that also didn't work. Putting it in the [Mount] section also didn't solve it.

```
[Unit]
Description=srv mount
ConditionPathExists=/dev/mapper/srv
#ConditionPathIsEncrypted=/dev/mapper/srv

[Mount]
What=/dev/mapper/srv
Where=/srv
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target
```

What am I doing wrong?
ConditionPathIsEncrypted= only exists in systemd v246 and newer. If you want to see which conditions the version you are using supports, I would suggest taking a look at the systemd.unit manpage:

```
man systemd.unit
```

There is a section "Conditions and Asserts". The systemd version shipping with Ubuntu 20.04, for example, is v245 and is thus missing the ConditionPathIsEncrypted= condition.
ConditionPathIsEncrypted not supported?
So I am trying to install Arch Linux with separate root and home partitions (also swap and boot, of course). Basically I've partitioned them, I've mounted them, and I've encrypted / and /home (using cryptsetup luksFormat). It looks like this under lsblk (sorry, I cannot copy the text from the virtual machine at this stage).

Now I am trying to achieve the following things:

1. I want to decrypt all the partitions at system startup, without having to type the passphrase for each one (I've made them identical, by the way).
2. I want to configure GRUB for the encrypted partitions, but so far I've only seen configurations with `GRUB_CMDLINE_LINUX="cryptdevice=/dev/sdXY:cryptroot"` and I have two encrypted partitions, so I don't really know what I should put here (maybe only the / one?).

For now I am stuck at the point where I want to run mkinitcpio and grub-install/grub-mkconfig, but I can't, since I probably won't be able to boot my system without a proper GRUB configuration. Do you know how I would achieve this? The second point is more important, since there are docs on the first issue; I just wanted to put it there for a one-liner advice, I guess. It's the second one I've been scratching my head about for the last two hours.
Welcome to the Unix & Linux StackExchange!

The job of the initramfs file generated by mkinitcpio is only to unlock and mount the root filesystem; mounting other filesystems like /home happens a bit later in the boot process, after the root filesystem is unlocked and mounted. GRUB does not need to know anything about the /home filesystem.

The cryptdevice option supplies information for the scripts within the initramfs file for unlocking the encryption of the root filesystem. This allows you to easily change the name of the device that is assumed to hold the encrypted root filesystem, should your system configuration change later. For robustness in the face of unexpected changes to system configuration, you might actually want to use the `UUID=<UUID_of_sda3>` syntax in place of the device name.

So you could configure the encryption of /home to use a key file stored somewhere within the root filesystem. Since the key file would be located within an encrypted partition, it will be protected when the system is not running. And after the root filesystem is accessible, /etc/crypttab will be able to refer to that file, and so the encryption of the /home filesystem can be unlocked automatically.

According to the crypttab paragraph of the Arch wiki, the entry for your /home filesystem in /etc/crypttab might look like this:

```
crypthome /dev/sda4 /etc/cryptsetup-keys.d/crypthome.key
```

You might want to use `UUID=<UUID of sda4>` here also, instead of the device name. You would ensure /etc/cryptsetup-keys.d/ is accessible by root only (chmod 700), and write the passphrase for the /home filesystem into the crypthome.key file. If an intruder could read this file, it would mean the intruder effectively has root access, so they could e.g. replace your cryptsetup command with one that emails any passphrases to the intruder, no matter whether they are typed or read from a file; at that point you would have bigger worries anyway.
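Setting up such a key file might look like the following dry-run sketch; the `run` wrapper only echoes each command, so this is safe to execute as-is, and should be dropped for real use. /dev/sda4 is the /home partition, and the key path matches the crypttab entry discussed in this answer; using a random key file (rather than writing the passphrase into the file) is one common variant:

```shell
# "run" echoes instead of executing; remove it for real use.
run() { echo "+ $*"; }

run install -d -m 700 /etc/cryptsetup-keys.d
# a 512-byte random key; alternatively write your passphrase into the file
run dd if=/dev/urandom of=/etc/cryptsetup-keys.d/crypthome.key bs=512 count=1
run chmod 600 /etc/cryptsetup-keys.d/crypthome.key
# enrol the key file in the /home LUKS header (prompts for an existing passphrase)
run cryptsetup luksAddKey /dev/sda4 /etc/cryptsetup-keys.d/crypthome.key
```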
Encrypt separate root and home partitions
I have an encrypted external disk on a Linux server. On the server, I can decrypt it locally with:

```
cryptsetup -d keyfile luksOpen /dev/sdx1 /mnt/decrypted
```

but I prefer to avoid doing that on the server side. I want to access the server (via ssh/sshfs) and only decrypt the data remotely, on my client machine. To access and decrypt the data remotely, I have to:

1. mount the encrypted /dev/sdx1 locally on the server (without decrypting it!!) to /mnt/encrypted
2. mount /mnt/encrypted via sshfs on a client machine (then use luksOpen to decrypt)

How can I do step 1 without decrypting the data?

Thanks, Chris

PS: maybe I should just use an encrypted container (a file on the server's file system) and not a whole partition? This way I could mount the folder containing the encrypted container/file remotely via sshfs (and only decrypt it on the client machine).
I can mount and decrypt LUKS remotely (via sshfs) if I use a LUKS container (and not a LUKS partition) to hold the encrypted data. I just had to create a LUKS container (a file that internally holds the encrypted filesystem); this file is a normal file on a mounted partition, so it can be mounted remotely via sshfs and decrypted later (via loop device -> mapper device -> mount). I have tested this and can confirm it works.
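The container approach might look like this dry-run sketch; the `run` wrapper only echoes each command, so the block is safe to execute as-is, and should be dropped for real use. The paths and the 10G size are illustrative, not from the question:

```shell
# "run" echoes instead of executing; remove it for real use.
run() { echo "+ $*"; }

# on the server: create a plain file that will hold the encrypted filesystem
run fallocate -l 10G /srv/encrypted/container.img
run cryptsetup luksFormat /srv/encrypted/container.img

# on the client, after mounting with: sshfs server:/srv/encrypted /mnt/encrypted
run losetup --find --show /mnt/encrypted/container.img   # prints e.g. /dev/loop0
run cryptsetup luksOpen /dev/loop0 remote_data
run mount /dev/mapper/remote_data /mnt/decrypted
```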
mount crypto_LUKS partition without decrypting (locally)
I wanted to create a bootable USB stick (`dd bs=4M if=input.iso of=/dev/sdc`), but sdc was my hard disk with two partitions: the first plain ext4 and the second encrypted with LUKS. After this action I ended up with Windows installed on my hard disk. How can I recover the encrypted partition after running the dd command?
It depends on whether you have overwritten the beginning of the encrypted partition. The beginning of the encrypted partition is where the key is stored¹. If you've lost that, the data is undecipherable, and the only solution is to restore from a backup.

If you only overwrote the first 4 MB of the disk, and if the non-encrypted partition was before the encrypted partition, then you've lost the non-encrypted partition but not the encrypted partition. (You may be able to recover some files from the non-encrypted partition even if the beginning has been overwritten, but don't get your hopes up: it's unreliable.)

If the encrypted partition is intact, all you need to do is find where it started. When you overwrote the beginning of the disk, that overwrote the partition table, which indicates where partitions are located. Get Testdisk and ask it to locate partitions; it looks for magic values that indicate the beginning of a filesystem or other volume types, including LUKS volumes. Provided that the partition is intact, Testdisk should find it and you should be able to recover it.

However, you mention installing Windows; that is likely to have overwritten the encrypted partition as well. If you've installed Windows, just write off the data on the disk and restore from backup.

¹ The encryption key is not directly derived from your password. Rather, the encryption key is stored at the beginning of the volume, itself encrypted with a key derived from the password. This allows having multiple passwords (keep multiple copies of the encrypted key, each encrypted with a different password) and changing the password without re-encrypting the whole partition (just re-encrypt the key slot).
How to recover encrypted partition after dd command [closed]
A FIDO2 security token should be used for decrypting all disks in a Linux machine at boot; systemd allows this since version 248. Can the FIDO2 security token be removed after boot when using LUKS for full disk encryption, or does it need to remain plugged in for the disk to be usable for read/write operations?
Accessing an external FIDO2 token on every disk operation would make an encrypted disk device very slow, to the point of being practically unusable.

With LUKS, any password, FIDO2 token or other means of unlocking the encryption is used to decrypt an encrypted master key from one of the keyslots in the LUKS header. This master key is then used with a symmetric cipher that is suited for block-oriented use; this cipher is then used for encrypting/decrypting any blocks outside the LUKS header on the encrypted device. Once the device is unlocked, the decrypted master key must be kept in RAM to access the encrypted device. As a result, unplugging the security token won't remove the kernel's access to the master key of the LUKS device.

So the answer is: yes, the token can be removed.

Of course, it would be possible to implement a watchdog program that would perform the necessary steps to stop accessing the encrypted device and destroy the in-memory key on token removal, but that would be separate from systemd-cryptsetup. Such a watchdog program would have to be prepared to kill -9 any processes that would be in the way of closing the encrypted device, if necessary; otherwise it could not guarantee closing the device quickly.
Can a FIDO2 Security Token be removed after unlocking a LUKS volume at boot?
I'm trying to automate fdisk with my Bash scripts. In my script, I have the following code block:

```shell
echo "Creating root filesystem partition..."
(
echo n
echo 3
echo
echo
echo w
) | fdisk ${DEVICE}
```

where DEVICE is a physical disk like /dev/sda, /dev/nvme0n1, etc., but not a partition. However, if the disk has an encrypted file system created before, fdisk asks about removing the crypto_LUKS signature in a simple Yes/No prompt. I could simply add an `echo Y` line to the block, but that would lead to a problem for disks that do not contain any crypto_LUKS signature. I've tried calling `wipefs --all --force ${DEVICE}` and `wipefs --all --force ${DEVICE}[1-9]*` before calling that block; however, it only removes some ordinary file systems and partition tables, and does not work for LUKS signatures.
Echoing options to fdisk through standard input is a very hardcoded approach; as suggested by @phuclv, sgdisk is a much more versatile way to automate partitioning. For example, I changed those lines in my main post to sgdisk-based lines, which works very well:

```shell
# Create a GPT on the disk:
sgdisk -o ${DEVICE}
# Create the EFI partition:
sgdisk -n 1:0:+512M ${DEVICE}
# Create the /boot partition:
sgdisk -n 2:0:+512M ${DEVICE}
# Create the / partition:
sgdisk -n 3:0:0 ${DEVICE}
```

In the triple number group passed to -n, the first number is the desired partition number, the second is the first sector (put zero to use the first available sector), and the last is the end of the partition (you can use a fixed sector, a size relative to the first sector as I did with +512M, or zero to extend the partition to the last available sector). If you ask how sgdisk helps with the LUKS signatures: sgdisk simply overwrites them without asking, unlike fdisk's CLI prompt.
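To review the exact invocations before anything touches a disk, the commands can first be generated as plain strings. make_layout below is a hypothetical helper, not part of sgdisk; the extra `sgdisk -Z` (--zap-all) step destroys any existing GPT and MBR structures up front:

```shell
# Builds the sgdisk command lines as strings for inspection; echo them
# through "sh" (or call sgdisk directly) once the layout looks right.
make_layout() {
    dev=$1
    echo "sgdisk -Z $dev"             # wipe existing partition tables
    echo "sgdisk -o $dev"             # create a fresh GPT
    echo "sgdisk -n 1:0:+512M $dev"   # EFI partition
    echo "sgdisk -n 2:0:+512M $dev"   # /boot partition
    echo "sgdisk -n 3:0:0 $dev"       # / partition, rest of the disk
}

make_layout /dev/sda
```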
Scripting fdisk with filesystem signature issues
I've read that hibernation often causes trouble in Linux environments, e.g. the system fails to wake up or freezes, and sometimes even refuses to boot after a reset. I really like the idea of hibernating the system into a zero-power state, especially for traveling, but I don't wanna hurt my system's stability. So I'm wondering, how is the situation nowadays? Is hibernation in Ubuntu reliable? I'll also be using LUKS for full-disk encryption, if that changes the equation.
Try it (after you have saved all your files). The problem is that not all hardware is (fully) supported, and Ubuntu doesn't know much about some devices (e.g. whether it is safe to switch them off). There are two main classes of problems: not all devices can be put into hibernation (often some external devices, where Ubuntu doesn't know if it can switch them off), and not all devices can restore their status after hibernation (or they will not start automatically).

So the best way is to test it; you will see whether your hardware supports it. And if some hardware is not 100% OK with it, you can search this site (and others) for a workaround (e.g. putting modules on a blacklist/whitelist, forcing modules to reload after hibernation, etc.).

It should be safe to test: we have had standardized ACPI and other power-control tools for a long time, and the more standard components (protocol-wise), such as CPUs, motherboards and disks, should be fully supported. So test it, and if you are not fully happy, remember to do a full power-off and restart so as to have the hardware in a well-defined state.
Is it risky to use hibernation in Ubuntu?
I'm using EndeavourOS (basically Arch), but with systemd-boot and dracut for the initrd. I have a simple setup with an unencrypted boot partition and LUKS-encrypted root and swap partitions. Specifically, the setup is described in the output below:

```
$ cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=8A2F-4076                                        /efi  vfat  defaults,noatime          0 2
/dev/mapper/luks-81733cbe-81f5-4506-8369-1c9b62e7d6be /     ext4  defaults,noatime          0 1
/dev/mapper/luks-9715a3f9-f701-47b8-9b55-5143ca88dcd8 swap  swap  defaults                  0 0
tmpfs                                                 /tmp  tmpfs defaults,noatime,mode=1777 0 0

$ lsblk -f
NAME        FSTYPE      FSVER LABEL       UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
nvme0n1
├─nvme0n1p1 vfat        FAT32             8A2F-4076                            915.6M      8% /efi
├─nvme0n1p2 crypto_LUKS 1                 81733cbe-81f5-4506-8369-1c9b62e7d6be
│ └─luks-81733cbe-81f5-4506-8369-1c9b62e7d6be
│           ext4        1.0   endeavouros d8d14c59-8704-4fb8-ad02-7d20a26bc1e1 843.6G      2% /
└─nvme0n1p3 crypto_LUKS 1                 9715a3f9-f701-47b8-9b55-5143ca88dcd8
  └─luks-9715a3f9-f701-47b8-9b55-5143ca88dcd8
            swap        1     swap        b003ea64-a38d-464c-8609-7278e21f8a0f               [SWAP]
```

The problem is that each time I boot up the computer, I need to enter my password twice: once for the root partition and once for the swap (note I use the same password for both, if that helps). This has become a nuisance. So my question is: is there a way to automatically decrypt my swap partition upon a successful passphrase for the root?

There has been a question very similar to this with a sensible answer, but it did not work. The first part of the answer is Debian-centric, with a script option not present in other distributions. The second part uses crypttab to specify the location of a keyfile used to decrypt other partitions. As of now, my crypttab in the initrd looks like this, which specifies a /crypto_keyfile.bin that exists in the root partition to open either of the partitions:

```
$ lsinitrd --file /etc/crypttab
luks-81733cbe-81f5-4506-8369-1c9b62e7d6be /dev/disk/by-uuid/81733cbe-81f5-4506-8369-1c9b62e7d6be /crypto_keyfile.bin luks
luks-9715a3f9-f701-47b8-9b55-5143ca88dcd8 /dev/disk/by-uuid/9715a3f9-f701-47b8-9b55-5143ca88dcd8 /crypto_keyfile.bin luks
```

This approach does not work, for two reasons:

1. Contrary to what the linked answer suggests (that the user is queried for the partitions in the order of the crypttab entries), the order is random at each boot. Even if I could automatically open my swap partition after opening the root, if swap comes first then I am still forced to enter the password for root, since the keyfile is on root.
2. It seems to me that after entering the password for root, the filesystem is not mounted immediately. The /crypto_keyfile.bin is actually searched for inside the initrd filesystem, which explains the following error appearing twice in the journal:

```
systemd-cryptsetup[460]: Failed to activate, key file '/crypto_keyfile.bin' missing.
```

So if I am on the right track, how could I ensure that systemd-cryptsetup queries me first for the root partition and second for the swap each time, and how can I ensure that after opening root, the filesystem is mounted and /crypto_keyfile.bin is successfully found to open the swap partition? Otherwise, if I am completely off track here, is there a way to achieve what I want? Thanks.
I'm not an Arch expert (the Debian-centric script works well for me on Debian), but according to this Archwiki page it should, or at least is expected to, work:

> Passwords entered during boot are cached in the kernel keyring by systemd-cryptsetup(8), so if multiple devices can be unlocked with the same password (this includes devices in crypttab that are unlocked after boot), then you will only need to input each password once.

What seems wrong in your setup is that you have your crypttab configured to use a keyfile for the root partition while the keyfile is stored in the encrypted root partition itself. Since you don't mind entering a password and use the same password for both, setting `none` as the keyfile in crypttab might solve your problem. The systemd-cryptsetup manpage also explicitly mentions password caching, so the order in which the devices are opened should not matter for you.

Just in case: if you do not use hibernate/resume, you could also encrypt the swap partition with a random key.
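With the UUIDs from the question, the initrd crypttab would then look like this: both entries prompt for a password, and systemd-cryptsetup's keyring caching should reuse the first successful entry for the second device:

```
luks-81733cbe-81f5-4506-8369-1c9b62e7d6be /dev/disk/by-uuid/81733cbe-81f5-4506-8369-1c9b62e7d6be none luks
luks-9715a3f9-f701-47b8-9b55-5143ca88dcd8 /dev/disk/by-uuid/9715a3f9-f701-47b8-9b55-5143ca88dcd8 none luks
```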
How to open all LUKS volumes with use of a single password?
I know that by configuring crypttab I can automatically unlock LUKS drives using a keyfile stored on another drive in the machine which has been manually unlocked in advance. I have been wondering whether a keyfile stored on a different machine/device can also be used, accessed e.g. over SSH.

Here is what I would like to achieve: a machine with an encrypted LUKS drive would at startup look at a particular IP address in the local network, and if a device (an Android phone) is present, it would read a keyfile from the device and use it to unlock its drive; otherwise it would ask for a password.

Are there any better approaches to this? I have found that there exists blueproximity, which does something similar for the lock screen.
This is actually possible with LUKS 2 and cryptsetup-ssh. You can simply add the SSH token with cryptsetup-ssh add --ssh-keypath=<path> --ssh-path=<path> --ssh-server=<ip/url> --ssh-user=<username> <device> Where ssh-keypath is the path to the SSH key used to connect to the server, ssh-server is the IP address or URL of the server, ssh-user is the username to use when connecting to the server and ssh-path is the path on the server where the key file can be found. This will add a so-called token to the LUKS metadata (basically a small metadata blob containing the information above) and when you try to open the device with cryptsetup open it will try to use the data from the token to get the password. Note that this is a relatively new feature (it was added in cryptsetup 2.4.0) and is still considered experimental, and it is possible that your distribution doesn't ship cryptsetup-ssh. I am also not sure how well this works with systemd during boot. An alternative is called Network Bound Disk Encryption (NBDE), which uses the Clevis-Tang client and server together with LUKS (both LUKS 1 and LUKS 2 are supported). You can read more about it in this RHEL documentation.
Unlock luks by other device
1,526,978,710,000
I have a Debian 11 installation with the following partition layout: path format mount point /dev/nvme0n1p7 ext4 (no encryption) /boot (Debian 11) /dev/nvme0n1p8 dm-crypt LUKS2 LVM2 (named vg_main) /dev/mapper/vg_main-lv_swap swap - /dev/mapper/vg_main-lv_debian ext4 / (Debian 11) /dev/mapper/vg_main-lv_ubuntu ext4 / (Ubuntu 22.04) The /boot for Ubuntu lives inside its root file system (/dev/mapper/vg_main-lv_ubuntu). I'd like to kexec the Ubuntu kernel after booting the Debian kernel that lives in the unencrypted /boot partition and unlocks the LUKS2 partition. I'd like to use the systemd kexec strategy described here. Is there a way to pass a specific kernel parameter to Debian 11 (I will do this in a specially created GRUB2 entry) to tell systemd to simply kexec the Ubuntu 22.04 kernel? Solution: Worked as per @telcoM's suggestion, with just a few adjustments: /etc/systemd/system/ubuntu-kexec.target [Unit] Description=Ubuntu kexec target Requires=sysinit.target ubuntu-kexec.service After=sysinit.target ubuntu-kexec.service AllowIsolate=yes /etc/systemd/system/ubuntu-kexec.service [Unit] Description=Ubuntu kexec service DefaultDependencies=no Requires=sysinit.target After=sysinit.target Before=shutdown.target umount.target final.target [Service] Type=oneshot ExecStart=/usr/bin/mount -o defaults,ro /dev/mapper/vg_main-lv_ubuntu /mnt ExecStart=/usr/sbin/kexec -l /mnt/boot/vmlinuz --initrd=/mnt/boot/initrd.img --command-line="root=/dev/mapper/vg_main-lv_ubuntu resume=UUID=[MY-UUID-HERE] ro quiet splash" ExecStart=/usr/bin/systemctl kexec [Install] WantedBy=ubuntu-kexec.target
You might want to set up a ubuntu-kexec.target which would be essentially a stripped-down version of multi-user.target, with basically: [Unit] Description=Kexec an Ubuntu kernel from within an encrypted partition Requires=basic.target #You might get by with just sysinit.target here Conflicts=rescue.service rescue.target Wants=ubuntu-kexec.service After=basic.target rescue.service rescue.target ubuntu-kexec.service AllowIsolate=yes This would invoke a ubuntu-kexec.service, which you would create to run your kexec command. The kernel parameter would then be: systemd.unit=ubuntu-kexec.target, similar to how rescue.target or emergency.target can be invoked when necessary. The idea is that ubuntu-kexec.target will pull in basic.target (or even just sysinit.target) to get the filesystems mounted, and then pull in the ubuntu-kexec.service which runs the actual kexec command line. As far as I know, you can specify just one systemd.unit= option, and since you need to specify "boot as usual up to sysinit.target/basic.target, then pull in ubuntu-kexec.service, you'll need a unit of type *.target to specify all the necessary details.
How to chainload another kernel with kexec inside a LUKS2 + LVM2 partition?
1,526,978,710,000
I have an Ubuntu machine that has been through several kernel upgrades. At the start of the day, I had 3 kernels installed: 5.11.0-34, 5.11.0-46, and 5.11.0-49. I had to upgrade a bunch of packages, and afterward took the opportunity to remove the middle kernel to open up room in my boot partition. Now, I cannot get either remaining kernel to boot. Neither of them prompts for the password to decrypt the drive where Linux is installed. It doesn't matter whether I boot into recovery mode or not, they print messages and eventually drop into a shell like this: Unable to init MCE device (rc: -5) Volume group "vgubuntu" not found Cannot process volume group vgubuntu Gave up waiting for suspend/resume device Gave up waiting for root file system device. Common problems: - Boot args (cat /proc/cmdline) - Check rootdelay= (did the system wait long enough?) - Missing modules (cat /proc/modules; ls /dev) ALERT! /dev/mapp/vgubuntu-root does not exist. Dropping to a shell! BusyBox v1.30.1 (Ubuntu 1:1.30.1-6ubuntu2.1) built-in shell (ash) Long ago I added mce=off as a kernel parameter. It is present in every GRUB menu option. How can I fix my installation to boot?
Something got borked somewhere and I had to run update-initramfs. I found very similar instructions in three separate places: https://ubuntuforums.org/showthread.php?t=2409754&s=e1f324bf5e566b3bb93374cd07bdcc17&p=13828993 https://askubuntu.com/a/868726/538768 https://feeding.cloud.geek.nz/posts/recovering-from-unbootable-ubuntu-encrypted-lvm-root-partition/ Here's how I got there. I loaded Ubuntu off a live USB and ran fdisk -l to see my partitions and guess which one was encrypted. I saw these (among others): /dev/nvme2n1p1: 512M EFI System /dev/nvme2n1p2: 732M Linux filesystem /dev/nvme2n1p3: 1.8T Linux filesystem <-- I guessed it was this one. Then I decrypted the partition and mounted it like this: sudo -i cryptsetup open /dev/nvme2n1p3 $name vgchange -ay mkdir /mnt/root mount /dev/mapper/$name /mnt/root That let me inspect /etc/crypttab to see which device name to use when decrypting the partition (nvme0n1p3_crypt in this case): nvme0n1p3_crypt UUID=743ab129-75bb-429b-8366-9c066f00c4fe none luks,discard Then I looked at /etc/fstab to see which partitions were the boot partition and EFI partition: # /boot was on /dev/nvme0n1p2 during installation UUID=773ceeb2-5c0f-4838-baad-a1182d7fdd80 /boot ext4 defaults 0 2 # /boot/efi was on /dev/nvme0n1p1 during installation UUID=5C17-FB32 /boot/efi vfat umask=0077 0 1 At installation, these partitions were named like nvme0n1p*, but no longer. I could find their current names by listing /dev/disk/by-uuid: $ ls -l /dev/disk/by-uuid/ lrwxrwxrwx 1 root root 15 Jan 31 12:29 5C17-FB32 -> ../../nvme2n1p1 lrwxrwxrwx 1 root root 15 Jan 31 12:29 743ab129-75bb-429b-8366-9c066f00c4fe -> ../../nvme2n1p3 lrwxrwxrwx 1 root root 15 Jan 31 12:29 773ceeb2-5c0f-4838-baad-a1182d7fdd80 -> ../../nvme2n1p2 Now I had all the pieces I needed to follow the instructions. 
Here are the actual commands I executed: sudo -i cryptsetup open /dev/nvme2n1p3 nvme0n1p3_crypt mount /dev/mapper/nvme0n1p3_crypt /mnt/root mount /dev/nvme2n1p2 /mnt/root/boot mount /dev/nvme2n1p1 /mnt/root/boot/efi mount --bind /dev /mnt/root/dev mount --bind /run /mnt/root/run chroot /mnt/root mount -t proc proc /proc mount -t sysfs sys /sys update-initramfs -c -k all Then I was able to restart the machine and boot into one of the installed kernels.
How do I get Linux to find the root filesystem on an encrypted partition?
1,526,978,710,000
The commands I invoke are the following Create image file dd if=/dev/zero of=benj.luks bs=1k count=666000 Set up LUKS container cryptsetup luksFormat benj.luks Set up loop device and open the LUKS container cryptsetup luksOpen benj.luks benjImage Check that the loop device has been set up and mapped lsblk Output NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS loop0 7:0 0 650.4M 0 loop └─benjImage 254:1 0 634.4M 0 crypt Create file system ext4 on benjImage sudo mkfs.ext4 -b 2048 -F -F /dev/mapper/benjImage Command fails mke2fs 1.46.5 (30-Dec-2021) mkfs.ext4: Invalid argument while setting blocksize; too small for device
cat /sys/block/loop0/queue/physical_block_size and cat /sys/block/loop0/queue/logical_block_size revealed that the loop device was set up as a 4096-byte block device, on which no 2048-byte file system can be created. Hence the solution is to set up the loop device manually and define the sector size as 2048 via the -b option, as in sudo losetup -b 2048 -f benj.luks before step 2, and then apply the subsequent commands to /dev/loop0 (or whichever loop device is assigned) instead of the image file, i.e. cryptsetup luksFormat /dev/loop0 cryptsetup luksOpen /dev/loop0 benjImage sudo mkfs.ext4 -b 2048 /dev/mapper/benjImage Voilà.
Why can mkfs.ext4 not create a 2048 block size file system on 650 MB image file?
1,526,978,710,000
I have data on a disk that I want to encrypt by cloning the full filesystem of that disk (source) to a virtual block device (devicemapper/cryptsetup) based on an additional disk (target) of identical capacity. I have already set up the LUKS device on the target disk. The source disk has been initialized as a partitionless filesystem. That means I would need to shrink that filesystem by 16MiB (4096 blocks of 4096 bytes each) to account for the additional LUKS2 header, and then dd the data from the filesystem on the source disk to the LUKS device. I did a resize2fs /dev/sda <newsize> with <newsize> being the number of total blocks minus 4096, which seemed to work as expected. However, since the source disk is partitionless, dd would still copy the full disk - including the 4096 blocks by which the filesystem has been shrunk. My question now is: can I safely assume that the free blocks from the resize2fs operation are located at the end of the physical device (source), and thus pass count=<newsize> bs=4096 as arguments to dd? Will this clone/copy the complete filesystem? Are there any other pitfalls I did not consider? Bonus question: in order to double check, is there already a tool available that computes the md5sums of a disk block-wise (instead of file-wise on a filesystem)?
My question now is: can I safely assume that the free blocks from the resize2fs operation are located at the end of the physical device Yes, that's the assumption you'd need to make even if it was a partition and you were going to shrink it. and thus pass count=<newsize> bs=4096 as argument to dd? Well, probably. dd is a bit weird in that dd count=N bs=M does not mean that N*M bytes will be copied, just that it will issue N reads of M bytes each, and a corresponding write for each. The reads might return less than the requested number of bytes, in which case the total read and written would be less than what you wanted. In practice, I've never seen Linux block devices return partial reads, so it should work. You should check the output, it should say something like "N+M records in" where the first number is the amount of full blocks read, and the second the number of partial blocks read. GNU dd should also warn about incomplete reads. In any case, you might as well use head -c $(( nblocks * 4096 )). See: dd vs cat -- is dd still relevant these days? and When is dd suitable for copying data? (or, when are read() and write() partial) (Anyway, double-check your numbers before doing anything based on a stranger's post on the Internet. It's your filesystem, and you don't want to mess it up due to someone else's typo. You probably knew that already, but anyway.) In order to double check, is there already a tool available that computes the md5sums of a disk block-wise You should be able to just run md5sum /dev/sdx, or head -c $bytes /dev/sdx | md5sum. MD5 should work fine for checking accidental corruption or a truncated copy, but note that in general, it's considered broken. Distinct files with the same hash can be created with some ease. For serious use, use the SHA-2 hashes instead, i.e. sha256sum or sha512sum.
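The head -c approach can be sanity-checked on a throwaway file before touching the real disk. A minimal sketch, where the file names and the block count are invented for the demo (on the real system the source would be the disk device and nblocks the shrunken filesystem size):

```shell
# Create a fake 8-block "disk", copy only its first 6 blocks with head -c,
# then verify the copy against a checksum of the same region of the source.
src=$(mktemp) && dst=$(mktemp)
dd if=/dev/urandom of="$src" bs=4096 count=8 status=none  # stand-in for the source disk
nblocks=6                                                 # stand-in for <newsize>
head -c $(( nblocks * 4096 )) "$src" > "$dst"             # exact byte-count copy
bytes=$(wc -c < "$dst")                                   # should be nblocks * 4096
expected=$(head -c $(( nblocks * 4096 )) "$src" | sha256sum | cut -d' ' -f1)
copied=$(sha256sum "$dst" | cut -d' ' -f1)
[ "$copied" = "$expected" ] && echo "copy verified"
rm -f "$src" "$dst"
```

The same two sha256sum invocations are what you would run against the source device and the opened LUKS mapping after the real copy to confirm nothing was truncated.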
Shrink partitionless filesystem
1,526,978,710,000
I have successfully removed the second slot (slot 1, because LUKS slots start from 0) from an encrypted device. $> echo mypass | cryptsetup luksKillSlot /dev/loop0 1 Now I have this situation $> cryptsetup luksDump /dev/loop0 |grep -iw luks2 0: luks2 2: luks2 The question is: is it possible to rename 2: luks2 to 1: luks2 to get something like this? $> cryptsetup luksDump /dev/loop0 |grep -iw luks2 0: luks2 1: luks2
The 2 is key slot ID so it's not possible to "rename" it -- the second key is in slot 2 and the slot 1 is still there just empty (because you wiped it with kill slot). (You always have 8 slots (with LUKS 1, with LUKS 2 you can have up to 32 slots), some of them unused and some of them with keys, luksKillSlot just wipes the content of the slot, it doesn't remove the slot.) It's not possible to simply move the key from one key slot to the other so if you really want to change this, you need to add a new key to the key slot number 1 with cryptsetup luksAddKey --key-slot 1 (and with the same passphrase you use for the slot 2) and then remove the key slot number 2. Note: working with key slots can be dangerous, if you make a mistake you can easily destroy the last key slot, so I would avoid doing that just to make the luksDump output "prettier", but if you want to do that, you can, but you should make a header backup first.
Is it possible to rename a LUKS slot?
1,526,978,710,000
I am currently trying to solve a forensic file hunt training question. There were hints given that there is a LUKS container and that we need to find the LUKS header in order to go on with the next training question. From the information given along with the question, the header can be identified by this magic number (hex) 4c554b53babe I am now searching the full device for this magic number. To do so, I started with xxd -g0 -c 32 /mnt/luksTraining/training001.dd | grep -C 1 4c554b53babe But then I realized that this only works when the header is not split across two lines, for example when the first half of the header is at the very end of one line and the other half at the start of the following line. Is there a "smart" way to search for a specific file header?
Is there a "smart" way to search for a specific file header? Basically you are searching for a specific bytestring, so it would best to search the binary data directly instead of xxd output, which can produce false negatives (as well as potential false positives), i.e with: grep -oba "$(printf '\x4c\x55\x4b\x53\xba\xbe')" /mnt/luksTraining/training001.dd The above will output the byte offset in training001.dd where the given bytestring is located. More ways to grep bytestrings described in this Stack Overflow question.
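The approach is easy to verify on a synthetic file: plant the six magic bytes at a known offset inside filler data and let grep report that offset back. The file name and offsets below are invented for the demo; LC_ALL=C keeps grep matching raw bytes regardless of locale:

```shell
# Build a test "image": 1000 filler bytes, the LUKS magic, 500 more filler bytes.
img=$(mktemp)
{ head -c 1000 /dev/zero; printf '\x4c\x55\x4b\x53\xba\xbe'; head -c 500 /dev/zero; } > "$img"
magic=$(printf '\x4c\x55\x4b\x53\xba\xbe')
hit=$(LC_ALL=C grep -oba "$magic" "$img")  # e.g. "1000:LUKS.." -- offset before the colon
offset=${hit%%:*}
echo "magic found at byte $offset"
rm -f "$img"
```

Because grep works on the raw bytes instead of hex-dump lines, there is no line boundary for the header to straddle, which is exactly the false negative the xxd pipeline suffers from.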
Search full device for magic number as hex
1,526,978,710,000
The problem I erroneously removed several files from my /home/username with rm. I realized the mistake as soon as I hit enter, but the damage was done. I immediately created a full disk image with sudo dd if=/dev/sda of=/media/username/external_drive/image.iso and copied it to another PC and prepared to follow a very long path towards data recovery. And then realized I had no idea about where to start from. What I did I read some guides online and eventually extundelete /dev/partition_to_recover_from --restore-directory /path/to/restore came up as the most promising solution, so I tried it. The first problem I encountered was that I had encrypted my drive with LUKS (during OS install) and had to decrypt it. After some more research, I prepared the partition with the following commands (here I changed the real volume group name from the real value of <my_pc_name>-vg to pc-vg). $ sudo kpartx -a -v image.iso # map the disk image partitions add map loop0p1 (254:0): 0 997376 linear 7:0 2048 add map loop0p2 (254:1): 0 2 linear 7:0 1001470 add map loop0p5 (254:2): 0 975769600 linear 7:0 1001472 $ sudo cryptsetup luksOpen /dev/mapper/loop0p5 img # unlock the partition with my data Enter passhprase for /dev/mapper/loop0p5: $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 465,8G 0 loop ├─loop0p1 254:0 0 487M 0 part ├─loop0p2 254:1 0 1K 0 part └─loop0p5 254:2 0 465,3G 0 part └─img 254:3 0 465,3G 0 crypt ├─pc--vg-root 254:4 0 464,3G 0 lvm └─pc--vg-swap_1 254:5 0 980M 0 lvm [...omitting other lsblk output...] $ sudo vgchange -a y pc-vg 2 logical volume(s) in volume group "pc-vg" now active and then tried to recover with $ sudo extundelete /dev/mapper/pc--vg-root --restore-directory /home/username/path/to/restore NOTICE: Extended attributes are not restored. WARNING: EXT3_FEATURE_INCOMPAT_RECOVER is set. The partition should be unmounted to undelete any files without further data loss. 
If the partition is not currently mounted, this message indicates it was improperly unmounted, and you should run fsck before continuing. If you decide to continue, extundelete may overwrite some of the deleted files and make recovering those files impossible. You should unmount the file system and check it with fsck before using extundelete. Would you like to continue? (y/n) However, the partition was not mounted and df confirmed that. Also, sudo fsck -N only wanted to operate on /dev/sdaX. In doubt, I rebooted the system and repeated the above steps. I received exactly the same output, and considering that I was working on a copy of the original disk image (so I had a backup to use in case of data loss) this time I answered y. The result was: $ sudo extundelete /dev/mapper/pc--vg-root --restore-directory /home/username/path/to/restore NOTICE: Extended attributes are not restored. WARNING: EXT3_FEATURE_INCOMPAT_RECOVER is set. The partition should be unmounted to undelete any files without further data loss. If the partition is not currently mounted, this message indicates it was improperly unmounted, and you should run fsck before continuing. If you decide to continue, extundelete may overwrite some of the deleted files and make recovering those files impossible. You should unmount the file system and check it with fsck before using extundelete. Would you like to continue? (y/n) y Loading filesystem metadata ... extundelete: Extended attribute has an invalid value length when trying to examine filesystem I did do other research, but I couldn't understand what that means. The questions I'll try to avoid the XY problem. Is the method I used to try to recover my data corect? If so, what is extundelete complaining about and how can I resolve it? If not, how can I (try to) restore my data from the LUKS-encrypted disk in Debian? If any additional info is required, please ask for it. P. 
S.: «Restore from your recent backup you obviously have» is the correct answer, I know =). I do have a full backup of my home taken a couple of days before the data loss (not such a n00b), but I lost the product of more than twenty hours of work and I would like to have it back. Update I tried running fsck on the partition with my data, and the result was $ sudo fsck -r /dev/mapper/pc--vg-root fsck from util-linux 2.36.1 e2fsck 1.46.2 (28-Feb-2021) /dev/mapper/pc--vg-root: recovering journal Clearing orphaned inode 7077927 (uid=1000, gid=1000, mode=0100600, size=0) Clearing orphaned inode 7077925 (uid=1000, gid=1000, mode=0100600, size=65536) Clearing orphaned inode 19794062 (uid=1000, gid=1000, mode=040775, size=4096) Clearing orphaned inode 18366502 (uid=1000, gid=1000, mode=040755, size=4096) Clearing orphaned inode 18366515 (uid=1000, gid=1000, mode=040755, size=4096) Clearing orphaned inode 18366503 (uid=1000, gid=1000, mode=040755, size=4096) Clearing orphaned inode 18366504 (uid=1000, gid=1000, mode=040755, size=4096) Clearing orphaned inode 18366511 (uid=1000, gid=1000, mode=040755, size=4096) Clearing orphaned inode 18366512 (uid=1000, gid=1000, mode=040755, size=4096) Clearing orphaned inode 18351755 (uid=1000, gid=1000, mode=0100444, size=15383322) Clearing orphaned inode 18351757 (uid=1000, gid=1000, mode=0100444, size=12832) Clearing orphaned inode 18366521 (uid=1000, gid=1000, mode=040755, size=4096) Clearing orphaned inode 7078039 (uid=1000, gid=1000, mode=0100600, size=0) Clearing orphaned inode 7077945 (uid=1000, gid=1000, mode=0100600, size=65536) Clearing orphaned inode 11927591 (uid=0, gid=0, mode=0100644, size=147932) Clearing orphaned inode 18096551 (uid=0, gid=0, mode=0100644, size=2456) Clearing orphaned inode 11535970 (uid=0, gid=0, mode=0100644, size=335240) Setting free inodes count to 29879660 (was 29737485) Setting free blocks count to 41417686 (was 20072881) /dev/mapper/pc--vg-root: clean, 553620/30433280 files, 80298026/121715712 
blocks /dev/mapper/pc--vg-root: status 0, rss 6876, real 38.344677, user 0.482391, sys 0.290328 I don't know how filesystems work, but to my understanding of what I read in the last hours, it looks like fsck just removed the data I was trying to restore? Now extundelete runs without complaints, but $ sudo extundelete /dev/mapper/pc--vg-root --restore-directory /home/username/path/to/restore NOTICE: Extended attributes are not restored. Loading filesystem metadata ... 3715 groups loaded. Loading journal descriptors ... 0 descriptors loaded. Searching for recoverable inodes in directory /home/username/path/to/restore... 0 recoverable inodes found. Looking through the directory structure for deleted files ... 0 recoverable inodes still lost. No files were undeleted. I know I can not restore overwritten data, but I erroneously removed more than 100GB, I don't think they can have been all overwritten before I created the disk image with dd...
I managed to restore many, maybe all, of the files I lost with photorec. I read about it in a forum page I can't find anymore and thought "let's give this other tool a try as well". I ran $ sudo kpartx -a -v image.iso # map the disk image partitions add map loop0p1 (254:0): 0 997376 linear 7:0 2048 add map loop0p2 (254:1): 0 2 linear 7:0 1001470 add map loop0p5 (254:2): 0 975769600 linear 7:0 1001472 $ sudo cryptsetup luksOpen /dev/mapper/loop0p5 img # unlock the partition with my data Enter passphrase for /dev/mapper/loop0p5: $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 465,8G 0 loop ├─loop0p1 254:0 0 487M 0 part ├─loop0p2 254:1 0 1K 0 part └─loop0p5 254:2 0 465,3G 0 part └─img 254:3 0 465,3G 0 crypt ├─pc--vg-root 254:4 0 464,3G 0 lvm └─pc--vg-swap_1 254:5 0 980M 0 lvm [...omitting other lsblk output...] $ sudo vgchange -a y pc-vg 2 logical volume(s) in volume group "pc-vg" now active to prepare the partition, then $ sudo photorec The first time I got an error about EXT3 I don't remember much about. I tried to run sudo fsck -r /dev/mapper/pc--vg-root and got the same output I wrote in the update to the question. However, after that photorec worked. I don't know exactly what I did but it worked, so I won't complain. At the second run of photorec I just followed the wizard. I picked the options I supposed would give me the best results and I don't have the complete history of them, so I will just write what I am sure I did. pick the right device (/dev/mapper/pc--vg-root) pick the right partition (ext4) pick the right partition filesystem ([ ext/ext3 ] ext2/ext3/ext4 filesystem) choose to scan the unallocated space only ([ Free ] Scan for file from ext2/ext3 unallocated space only), as I didn't need to restore files that were not deleted select the location to write restored files to At some point I also chose the types of file I wanted to restore (text and PDF).
After some hours of analysis and restoring, photorec delivered some hundreds of subdirectories (recup_dir.<nnn>, where <nnn> is an incremental number) filled with files with random-looking names and correct extensions (for example, f582010347.txt is a text file I know I saved with a proper name). I checked some random files and it looks like I got back all the text files I had on the disk, even the undeleted ones (including /etc/ssh/sshd_config, for example), renamed with random-looking names and apparently randomly sorted into subdirectories, and they add up to more than 80GB in total. But at least I have them back. In the next days I will try to automatically filter them to find the ones I need. I did some quick tries and achieved good results with $ grep --ignore-case --recursive -B5 -A5 'string' restore_path where: string is a string I know is present in the file I want to restore restore_path is the path I instructed photorec to write the recovered files to Adding --text to the grep options let me identify some of the PDFs I need to restore as well. However, since PDFs are binary files, I know this will probably not allow me to get back all of them. After all, it went much better than I imagined. The lesson I learned: my regular backups will not protect me against myself. Also, aliasing rm to something less dangerous might be a good idea (this looks interesting).
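That filtering step can be automated by looping over grep's matching file names and collecting the hits into a separate directory. A hypothetical sketch where the directories, file names, and search string are all placeholders standing in for the real photorec output:

```shell
# Simulate a photorec output tree with one interesting and one junk file,
# then collect every file containing the known string into a "keep" dir.
restore_path=$(mktemp -d)   # stands in for photorec's output directory
keep=$(mktemp -d)           # where the interesting files are collected
mkdir -p "$restore_path/recup_dir.1"
printf 'chapter one of my thesis\n' > "$restore_path/recup_dir.1/f582010347.txt"
printf 'unrelated recovered junk\n' > "$restore_path/recup_dir.1/f999999999.txt"
# --files-with-matches (-l) prints only the names of files that match
grep --ignore-case --recursive --files-with-matches 'thesis' "$restore_path" |
while IFS= read -r f; do
    cp "$f" "$keep/"
done
ls "$keep"   # -> f582010347.txt
```

For the binary PDFs, adding --text (as noted above) makes grep treat them as text so the same loop applies, at the cost of missing files whose search string is compressed inside the PDF stream.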
Restore files removed with rm (even from a LUKS-encrypted disk)
1,526,978,710,000
I have an Arch linux running windows 10 disk-based VMs. The disk is on a different volume group and is luks encrypted. I have a logical volume for each VM with ext4 file system. I manually edited the fstab with the correct UUIDs and I set the type to ext4. Before I installed windows on the VMs I rebooted to make sure the fstab was configured properly. After the installations, I'm getting this error for each partition right after I correctly type in the password for the disk: [TIME] Timed out waiting for device /dev/disk/by-uuid/1bdc0382-d2a4-4581-b737-feec147dec40. [DEPEND] Dependency failed for /disk0. [DEPEND] Dependency failed for Local File Systems. [DEPEND] Dependency failed for File System Check on /dev/disk/by-uuid/1bdc0382-d2a4-4581-b737-feec147dec40. After those errors I get: You are in emergency mode. After logging in type [...] I'm not a linux expert so the answer might be simpler than it seems. Anyone has any suggestion? EDIT #1: fstab snippet: # /dev/mapper/volgroup0-lv_disk0 UUID=1bdc0382-d2a4-4581-b737-feec147dec40 /disk0 ext4 rw,relatime 0 2 EDIT #2: lsblk -f snippet: NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS sda `-sda1 crypto_LUKS 2 48bd9c70-c5cd-42c0-a58e-f0257be18d44 `-disk LVM2_member LVM2 001 IVCIiW-5r2w-AzHY-hWyE-iJ7g-IqPB-lUdP9o |-volgroup0-lv_disk0 | `-volgroup0-lv_disk1 blkid snippet: /dev/sda1: UUID="48bd9c70-c5cd-42c0-a58e-f0257be18d44" TYPE="crypto_LUKS" PARTUUID="fe7085b2-c19b-1f48-908c-c59dd96bcfc9" /dev/mapper/disk: UUID="IVCIiW-5r2w-AzHY-hWyE-iJ7g-IqPB-lUdP9o" TYPE="LVM2_member" /dev/mapper/volgroup0-lv_disk0: PTUUID="3421c065-23d3-48a1-8274-951444ce8d5c" PTTYPE="gpt" EDIT #3: fdisk -l snippet: Disk /dev/mapper/disk: 447.12 GiB, 480086138368 bytes, 937668239 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes The primary GPT table is corrupt, but the backup appears OK, so that will be used. 
Disk /dev/mapper/volgroup0-lv_disk0: 200 GiB, 214748364800 bytes, 419430400 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: gpt Disk identifier: 3421C065-23D3-48A1-8274-951444CE8D5C Device Start End Sectors Size Type /dev/mapper/volgroup0-lv_disk0-part1 2048 206847 204800 100M EFI System /dev/mapper/volgroup0-lv_disk0-part2 206848 239615 32768 16M Microsoft reserved /dev/mapper/volgroup0-lv_disk0-part3 239616 418403031 418163416 199.4G Microsoft basic data /dev/mapper/volgroup0-lv_disk0-part4 418404352 419426303 1021952 499M Windows recovery environment The primary GPT table is corrupt, but the backup appears OK, so that will be used.
If you use a logical volume as a backing storage for virtual machines, the LV will be used "directly" as disk for the VM -- the ext4 filesystem you created was overwritten by the Windows installation so you can no longer mount it, because instead of ext4 your /dev/mapper/volgroup0-lv_disk0 LV now contains a partition table with Windows partitions. If you want to access data from your Windows VM you can use libguestfs. To fix your boot problem, remove the /dev/mapper/volgroup0-lv_disk0 entry from your fstab.
KVM disk-based VM on a luks-encrypted disk
1,526,978,710,000
I have an external NVMe disk that was in my old laptop which is encrypted with LUKS. I need to mount that disk and extract some data out of it so this is what I have tried fdisk -l /dev/sdc3 2549760 2000408575 1997858816 952.7G Linux filesystem udisksctl unlock -b /dev/sdc3 Unlocked /dev/sdc3 as /dev/dm-1. So far so good, however, now I am trying to issue udisksctl mount -b and it won't work with neither /dev/dm-1 or /dev/mapper/luks-96a2dfa5-1f16-45fd-895c-f2dd0505dde9 or /dev/sdc3, it always says that Object /org/freedesktop/UDisks2/block_devices/dm_2d1 is not a mountable filesystem. lsblk -l output sdc ├─sdc2 ext4 8df22661-a1f9-4fc6-aa2d-204c605a1626 ├─sdc3 crypto_LUKS 96a2dfa5-1f16-45fd-895c-f2dd0505dde9 │ └─luks-96a2dfa5-1f16-45fd-895c-f2dd0505dde9 LVM2_member 5EOtDn-9iM0-630j-1gqO-73cc-5FgB-Wk8SlY └─sdc1 vfat 86F0-B82B Output of vgs and lvs pmensik-Inspiron-7566% sudo vgs /run/lvm/lvmetad.socket: connect failed: No such file or directory WARNING: Failed to connect to lvmetad. Falling back to internal scanning. VG #PV #LV #SN Attr VSize VFree elementary-vg 1 2 0 wz--n- 952.65g 21.33g pmensik-Inspiron-7566% sudo lvs /run/lvm/lvmetad.socket: connect failed: No such file or directory WARNING: Failed to connect to lvmetad. Falling back to internal scanning. LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert root elementary-vg -wi------- 930.37g swap_1 elementary-vg -wi------- 976.00m Is it because the disk was used for running the Elementary OS and there are several partitions mounted as different filesystems? How can I mount /home from such a disk and extract data out of it? Thanks a lot
You have an LVM setup so after unlocking the LUKS device you need to mount the root logical volume and not the unlocked device itself. In your case the logical volumes were not auto-activated because lvmetad is not running, you can activate them (= tell the system to actually create the logical volume block devices) using vgchange -ay elementary-vg and then mount the root logical volume /dev/elementary-vg/root using either mount or udisksctl mount -b /dev/elementary-vg/root.
Mount LUKS encrypted drive
1,526,978,710,000
I have a LUKS encrypted disk. It has 3 keyslots, and I know the pass phrases for two of them. How do I determine which keyslots I know the pass phrases for?
Simple way would be to use --debug when unlocking the device, it prints which keyslot it tries to use, so with two passphrases you need just two runs of luksOpen to see which keyslot which passphrase "belongs" to. Example where I provided password for third keyslot: $ sudo cryptsetup open /dev/sde a --debug ... # Trying to open LUKS2 keyslot 0. ... # Verifying key from keyslot 0, digest 0. # Digest 0 (pbkdf2) verify failed with -1. # Trying to open LUKS2 keyslot 1. ... # Digest 0 (pbkdf2) verify failed with -1. # Trying to open LUKS2 keyslot 2. ... Key slot 2 unlocked. If you want to check from a script, you can use --key-slot <num> with luksOpen and cycle for every keyslot for every passphrase you know, unlocking wrong keyslot with a wrong passphrase will simply fail (you can also use this together with --test-passphrase to just check whether the passphrase is correct or not without actually unlocking the device). This will also help if you have two keyslots with the same passphrase, the --debug example above won't tell you that. So something similar to this should do the trick: for i in {0..2}; do for pass in "a" "b" "c"; do echo $pass | cryptsetup open /dev/sde a -q --test-passphrase --key-slot $i >/dev/null 2>&1 ret=$? [ $ret -eq 0 ] && echo "$pass is passphrase for keyslot $i" && break done done a is passphrase for keyslot 0 b is passphrase for keyslot 1 c is passphrase for keyslot 2
LUKS: Determine which keyslots match which pass phrase
1,526,978,710,000
I made a full-disk encryption setup of Ubuntu 20.04 LTS according to this article. Since it is highly recommended to add further passphrases to avoid losing all data if the initial passphrase becomes unavailable, I did so, adding several passphrases with the cryptsetup luksAddKey /dev/sda1 command. To be fully sure that I had done everything right, I checked the validity of each passphrase by running cryptsetup luksOpen --test-passphrase /dev/sda1, and all passphrases passed the test successfully. The problem is that while booting and trying to unlock the master key I can use only the first entered passphrase. The others are not working, while they are OK if checked after system load by using cryptsetup luksOpen --test-passphrase /dev/sda1. Are there any known issues regarding this, or maybe there are some things that were out of my scope? UPDATE: There are no lines in my /etc/crypttab file except those mentioned in the HOWTO article. Here they are: LUKS_BOOT UUID=<UUID-VALUE> /etc/luks/boot_os.keyfile luks,discard sda5_crypt UUID=<UUID-VALUE> /etc/luks/boot_os.keyfile luks,discard According to the systemd crypttab manual, if no key-slot option is used, then: The default is to try all key slots in sequential order. But for some reason this doesn't work.
It turns out that all passphrases work as they should. BUT the catch is the response time: the first (initial) passphrase unlocks in about 5-7 seconds, while for every passphrase added after it, the time from pressing Enter until the system actually starts is about 25-35 (!) seconds. The root cause of this behavior is unknown. The main reason the initial question arose is that I didn't wait that long for the system to start and mistakenly thought the other passphrases didn't work at all.
Only first slot password is valid during decryption of boot device while all others are valid while testing to open a device
1,526,978,710,000
I have a laptop that dual boots Arch and Windows 10. My boot select is set to a GRUB screen where I'm given a few seconds to change my selection (default is Arch) until it boots to a LUKS unlock screen. Often times I'll leave the laptop on my Windows partition and it'll decide to reboot, causing GRUB to select the Arch partition and then sit at the LUKS password input for hours (which makes the laptop run hard for some reason). Is there an option within GRUB to automatically shutdown if a selection is not entered in time, say 30 seconds? If not, is this something I can set with LUKS? Thanks!
Probably should have looked harder before asking :-) This is what I did:

1. Backed up /boot/grub/grub.cfg
2. Edited the timeout from 5 to 15 seconds to give me enough time
3. Added a menu entry at the top (before Arch Linux) with the setting:

menuentry "Shutdown" {
    halt
}

4. Saved grub.cfg

Now whenever the computer boots I have 15 seconds to hit the down arrow to select Arch or Windows, else it selects shutdown.
Option to automatically shutdown at GRUB without user input?
1,526,978,710,000
I have a Raspberry Pi with an HDD attached via USB, inside an Orico chassis with a separate power source. I encrypted this drive with the command:

cryptsetup luksFormat /dev/sda

Then, I created a key file with the command:

dd if=/dev/random bs=32 count=1 of=/home/ubuntu/luks/luks.key

added this file as a second key with the command:

cryptsetup luksAddKey /dev/sda /home/ubuntu/luks/luks.key

and added this line to /etc/crypttab:

vault /dev/sda none

I think that I did everything to make this drive auto-decrypt during system boot, but that doesn't happen. I have to do it manually after every reboot. The other thing that bothers me is the output from cryptsetup luksDump /dev/sda. I would expect two slots with status "enabled", but I don't see that in the output:

ubuntu@ubuntu:~$ sudo cryptsetup luksDump /dev/sda
sudo: unable to resolve host ubuntu: Temporary failure in name resolution
LUKS header information
Version:        2
Epoch:          4
Metadata area:  16384 [bytes]
Keyslots area:  16744448 [bytes]
UUID:           ab24c6e5-9286-4e6d-a874-29755338afa1
Label:          (no label)
Subsystem:      (no subsystem)
Flags:          (no flags)

Data segments:
  0: crypt
        offset: 16777216 [bytes]
        length: (whole device)
        cipher: aes-xts-plain64
        sector: 512 [bytes]

Keyslots:
  0: luks2
        Key:        512 bits
        Priority:   normal
        Cipher:     aes-xts-plain64
        Cipher key: 512 bits
        PBKDF:      argon2i
        Time cost:  4
        Memory:     270573
        Threads:    4
        Salt:       b8 50 50 6c b2 54 45 ea 36 45 66 1d 61 d1 e9 94
                    87 7c 67 d3 a8 f3 3b 54 04 b6 46 7b 25 0d d2 89
        AF stripes: 4000
        AF hash:    sha256
        Area offset:32768 [bytes]
        Area length:258048 [bytes]
        Digest ID:  0
  1: luks2
        Key:        512 bits
        Priority:   normal
        Cipher:     aes-xts-plain64
        Cipher key: 512 bits
        PBKDF:      argon2i
        Time cost:  4
        Memory:     268825
        Threads:    4
        Salt:       55 e6 be a8 55 45 61 3c 1b 6e 6d 7a b3 70 40 32
                    fc 4f 95 71 f0 13 52 c7 a1 69 cb 73 66 0b a9 6f
        AF stripes: 4000
        AF hash:    sha256
        Area offset:290816 [bytes]
        Area length:258048 [bytes]
        Digest ID:  0
Tokens:
Digests:
  0: pbkdf2
        Hash:       sha256
        Iterations: 39527
        Salt:       62 9b 83 b6 04 f3 b0 aa 36 21 bc bf 28 aa 1d 3c
                    ad 89 8a 5c 0d 7a d2 f4 0f 6e d4 09 b2 33 0b d4
        Digest:     45 42 fc 30 22 95 12 26 3f 78 8c 56 d7 b0 c3 d9
                    10 4e 32 99 93 3c 10 48 a3 df ab 89 77 89 14 1f

Do you think this problem might be related to the fact that the HDD is connected as a USB drive? As mentioned previously, opening this LUKS volume manually works fine. Please help me debug this problem.
You need to specify the key file in /etc/crypttab, if you put none there, it will be interpreted as "ask for passphrase". From man crypttab: The third field specifies the encryption password. If the field is not present or the password is set to "none" or "-", the password has to be manually entered during system boot. Otherwise, the field is interpreted as an absolute path to a file containing the encryption password. Note that luksAddKey doesn't mean you are adding a new passwordless key slot to the LUKS device, you are adding a new keyslot protected with a new password or in your case a passphrase read from a file /home/ubuntu/luks/luks.key -- this is not the key used by LUKS/dm-crypt, it's just a "binary password", you still need to provide it when unlocking/opening the LUKS device. Changing your crypttab entry to vault /dev/sda /home/ubuntu/luks/luks.key should do the trick. The luksDump output is ok, it changed with LUKS version 2 and it no longer prints the Key Slot X: ENABLED/DISABLED line (for LUKS 2, it still prints it for LUKS 1).
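With the paths from the question, the corrected setup can be sketched like this; the --test-passphrase run is just a safe way to verify the key file before rebooting.

```shell
# /etc/crypttab -- the third field must point at the key file, not "none":
#   vault  /dev/sda  /home/ubuntu/luks/luks.key

# Verify that the key file unlocks a keyslot without actually mapping the device:
sudo cryptsetup open --test-passphrase --key-file /home/ubuntu/luks/luks.key /dev/sda
```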
LUKS auto decryption with key file fails, please help me debug
1,526,978,710,000
I want to use rsync for creating complete, bootable backups of a LUKS-encrypted Linux disk. Does it support hot-transferring of files, i.e. can I use rsync from a running (idling) system with opened files, processes etc.?
Yes. There is rarely exclusive file locking as there is on Windows systems. The upside of this is that it's easy to copy files that are open in other processes, even files that are being written. The downside is that it's easy to copy files that are open in other processes and that are being written. rsync does notice when a file it's copying has been updated underneath it, and will fail the copy of that file. A re-run will usually succeed, provided of course that the file isn't still being updated. Remember rsync -a to ensure you copy timestamps and permissions. Also be aware that rsync between two local devices/filesystems is nowhere near as efficient as rsync between two systems: locally it skips its delta-transfer algorithm and copies whole files, since that algorithm trades extra disk reads for network efficiency.
Does rsync support hot transfer while the system is running?
1,526,978,710,000
I have looked at how to imbue a crypttab stanza (referring to a LUKS device) with something that allows a specific user, or any unprivileged user, to map it. Roughly, I am looking for the equivalent of user in the mount options in /etc/fstab, but for mapping the LUKS device (i.e. before mounting it). The only (halfway sensible) approach I have come up with so far was to let my unprivileged user run a wrapper script which runs cryptdisks_start, using its absolute path, as root without a password, but with the name hardcoded in it. Obviously the script permissions make it impossible for the unprivileged user to tamper with it. Is there a more straightforward solution, perhaps akin to user in fstab? This is on Ubuntu 20.04, so perhaps systemd offers some way to achieve this? After all, there is a unit such as systemd-cryptsetup@<name>.service being auto-generated by it, based on the entry in crypttab. As a side note: I am aware of pam_exec, and I am currently in the process of pondering which one works better for me. Either way it appears that there is no way to run it without superuser privileges.
As far as I know there is nothing like that in systemd. You could use udisksctl unlock together with a polkit configuration that allows certain users or groups to unlock the device without authenticating as root.
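A sketch of such a polkit rule. The action ID is the one udisks2 uses for unlocking encrypted devices; the group name storage is a hypothetical choice, pick or create your own:

```javascript
// /etc/polkit-1/rules.d/50-luks-unlock.rules
// Let members of the (hypothetical) "storage" group run e.g.
//   udisksctl unlock -b /dev/sdXN
// without authenticating as an administrator.
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.udisks2.encrypted-unlock" &&
        subject.isInGroup("storage")) {
        return polkit.Result.YES;
    }
});
```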
How can I achieve being able to run cryptdisks_start as normal user?
1,526,978,710,000
I have read `cryptsetup luksOpen <device> <name>` fails to set up the specified name mapping and https://www.saout.de/pipermail/dm-crypt/2014-August/004272.html, and tried

cryptsetup open --type luks <device> <dmname> --key-file /root/luks.key

but am still getting error 22. cryptsetup luksFormat <device> --key-file /root/luks.key -q reports the command completed successfully. Followed the steps here: https://gist.github.com/huyanhvn/1109822a989914ecb730383fa0f9cfad Created the key with:

openssl genrsa -out /root/luks.key 4096
chmod 400 /root/luks.key

$ sudo dmsetup targets
striped          v1.6.1
linear           v1.3.1
error            v1.5.1

Edit 1: Realised dm_crypt was not loaded, so did modprobe dm_crypt. To check: lsmod | grep -i dm_mod and which cryptsetup. Also checked:

$ blkid /dev/data
/dev/data: UUID="xxxxxxxxxxxx" TYPE="crypto_LUKS"

Edit 2: More missing modules: modprobe aes_generic and modprobe xts. Kernel: uname -r gives 4.9.0-12-amd64; the OS is Debian Stretch. It's an Azure-provided image, and I'm not sure if they have patched anything related to this.
It's a naming conflict: I already had /dev/mapper/data from previous testing, so I had to test with another name.

cryptsetup open --type luks /dev/data new_name   # 1st time: success
cryptsetup open --type luks /dev/data new_name   # 2nd time: fails, name already in use
cryptsetup failed with code 22 invalid argument
1,526,978,710,000
On the Fedora wiki it is mentioned that LUKS offers this protection. LUKS does provide passphrase strengthening but it is still a good idea to choose a good (meaning "difficult to guess") passphrase. What is it exactly and how is it accomplished?
A similar phrase appears in other places (e.g., this Red Hat 5 page), where a bit more detail is given: LUKS provides passphrase strengthening. This protects against dictionary attacks. Just from that I would expect it to mean that the password is being salted and probably has other improvements applied to the process (e.g., hashing it N times to increase the cost). Googling around, this phrase seems to have first appeared in conjunction with LUKS around 2006 in the Wikipedia article on Comparison of disk encryption software. There the description of "passphrase strengthening" goes to the article on "Key stretching", which is about various techniques to make passwords more resilient to brute-force attacks, including using PBKDF2. And indeed, LUKS1 did use PBKDF2 (LUKS2 switched to Argon2), according to the LUKS FAQ. So that's what passphrase strengthening means in this context: using PBKDF2 and similar to make passwords more difficult to crack. The FAQ also has a short description: If the password has lower entropy, you want to make this process cost some effort, so that each try takes time and resources and slows the attacker down. LUKS1 uses PBKDF2 for that, adding an iteration count and a salt. The iteration count is per default set to that it takes 1 second per try on the CPU of the device where the respective passphrase was set. The salt is there to prevent precomputation. For specifics, LUKS used SHA1 as the hashing mechanism in PBKDF2 (since 1.7.0 it's SHA256), with iteration count set so that it takes about 1 second. See also section 5.1 of the FAQ: How long is a secure passphrase? for a comparison of how using PBKDF2 in LUKS1 made for a considerable improvement over dm-crypt: For plain dm-crypt (no hash iteration) this is it. 
This gives (with SHA1; the plain dm-crypt default is ripemd160, which seems to be slightly slower than SHA1):

Passphrase entropy    Cost to break
60 bit                EUR/USD 6k
65 bit                EUR/USD 200k
70 bit                EUR/USD 6M
75 bit                EUR/USD 200M
80 bit                EUR/USD 6B
85 bit                EUR/USD 200B
...                   ...

For LUKS1, you have to take into account hash iteration in PBKDF2. For a current CPU, there are about 100k iterations (as can be queried with cryptsetup luksDump). The table above then becomes:

Passphrase entropy    Cost to break
50 bit                EUR/USD 600k
55 bit                EUR/USD 20M
60 bit                EUR/USD 600M
65 bit                EUR/USD 20B
70 bit                EUR/USD 600B
75 bit                EUR/USD 20T
...                   ...
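The scaling is easy to reproduce with shell arithmetic. This sketch assumes a hypothetical attacker computing 10^9 raw hashes per second and a LUKS1 header with 100,000 PBKDF2 iterations; both numbers are illustrative, not taken from the FAQ tables.

```shell
raw_hashes_per_sec=1000000000            # assumed attacker speed
iterations=100000                        # typical LUKS1 iteration count
tries_per_sec=$(( raw_hashes_per_sec / iterations ))   # 10000 guesses/s

# Exhausting a full 50-bit passphrase space at that rate:
secs=$(( (1 << 50) / tries_per_sec ))
echo "$secs"      # prints 112589990684 (seconds, roughly 3500 years)
```

Without the 100,000x iteration factor, the same search would take only about 13 days, which is the whole point of passphrase strengthening.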
In regards to dm-crypt with LUKS, what is meant by "passphrase strengthening"?
1,592,658,112,000
I have an external backup hard drive that is encrypted using LUKS. As I was re-organising my backups, I copied the data to another encrypted drive and did a kind of "quick wipe" on the original drive by replacing the key in the key slot with random data. The goal was to use the drive afterwards as a second backup, but at the time I failed to properly clean up the drive and do the second copy. Unfortunately, in the meantime, my second backup drive and computer were stolen. I'm left with the original backup drive, which theoretically contains the data, but behind a key that I don't know. However, I still know the original key, the one that was in the keyslot before the replacement. Is there a chance to get back this old keyslot? The drive is a standard magnetic 2.5" USB3 drive, not an SSD, so I don't know if it uses some kind of copy-on-write for such metadata, or if some tools could find the data buried underneath the new keyslot. The internal FS is ext4, for what it's worth.
The problem is the content of the key slot. In order to access the data you need the master key. The people holding one of the slot passphrases are not supposed to learn the master key itself (because then you could not lock somebody out without re-encrypting all the data). Instead, each key slot stores its own copy of the master key, anti-forensically split and encrypted with a key derived from that slot's passphrase. In other words: even when you know the passphrase, you still need the slot data to reconstruct the master key. Without the slot data, the passphrase is completely useless. Sorry. If you had a dump of the LUKS header, then you could restore that and use the old passphrase.
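For anyone reading this later, a header backup is two commands (the device and file paths here are examples):

```shell
# Save the LUKS header (keyslot areas included)
sudo cryptsetup luksHeaderBackup /dev/sdX1 \
    --header-backup-file /safe/place/sdX1-header.img

# Restoring it brings the old keyslots back, so the old
# passphrases work again (and any newer ones stop working)
sudo cryptsetup luksHeaderRestore /dev/sdX1 \
    --header-backup-file /safe/place/sdX1-header.img
```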
LUKS: find a deleted key(slot)
1,592,658,112,000
I have two disks as RAID-1, encrypted via LUKS:

# blkid
...
/dev/md0: UUID="x-x-x-x-x" TYPE="crypto_LUKS"

Accidentally I executed cryptsetup luksFormat /dev/md0 instead of cryptsetup luksOpen /dev/md0 secure. luksFormat returned:

WARNING: Device /dev/md0 already contains a 'crypto_LUKS' superblock signature.

Now I can't use open anymore; the following lines provide more information:

# cryptsetup luksOpen /dev/md0 secure
Device /dev/md0 is not a valid LUKS device.
# cryptsetup luksDump /dev/md0
Device /dev/md0 is not a valid LUKS device.
# hexdump -C /dev/md0 | grep LUKS
00000000  4c 55 4b 53 ba be 00 02  00 00 00 00 00 00 40 00  |LUKS..........@.|
hexdump: /dev/md0: Input/output error

Is there anything I can do to get my data back?
From man cryptsetup (section luksFormat):

WARNING: Doing a luksFormat on an existing LUKS container will make all data of the old container permanently irretrievable, unless you have a header backup.

I guess you don't have a header backup and therefore your data will be history. Sorry for the bad news. Nevertheless, the hexdump: /dev/md0: Input/output error indicates a problem with one of your drives!
Accidentally executed luksFormat instead of luksOpen
1,592,658,112,000
I have tried different guides and tutorials to enlarge an encrypted LUKS LVM partition, but I am unable to move forward and finish the process as I don't have complete clarity over the structure of the volumes/partitions. This is a Debian 10 on a VirtualBox VM, I have properly enlarged the virtual disk by approx 10GB, and resized the partition, but when I start the VM and issue df I still see the root partition/volume not enlarged. I have performed all the operations with a live gparted image, mounting the virtual disk to modify it. Below is the output of lsblk performed from gparted: How can I complete the process to enlarge the root volume/partition?
I actually figured out the issue. The procedure I was following was correct; I was just missing one tiny step: pvchange -x y /dev/mapper/encryptedvolume (this makes the physical volume allocatable again). I recommend following the instructions in question #320957 on Unix & Linux Stack Exchange, where I found the above step in the second comment of the accepted answer.
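For reference, the general bottom-up sequence for this kind of enlargement (partition, then LUKS mapping, then PV, then LV, then filesystem) can be sketched as follows; the device, mapping and VG/LV names are examples, not taken from the question:

```shell
# 1. Grow the partition holding the LUKS container (here: partition 3)
sudo parted /dev/sda resizepart 3 100%

# 2. Grow the LUKS mapping to fill the enlarged partition
sudo cryptsetup resize encryptedvolume

# 3. Grow the LVM physical volume inside the mapping
sudo pvresize /dev/mapper/encryptedvolume

# 4. Grow the logical volume and, finally, the filesystem (ext4 here)
sudo lvextend -l +100%FREE /dev/vgname/root
sudo resize2fs /dev/vgname/root
```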
Unable to enlarge/resize LUKS partition on virtual disk
1,592,658,112,000
I have an old laptop with Windows 7 on which I installed CentOS 8 in dual boot. Upon first reboot, GRUB showed only entries for Linux. Therefore I used Boot-Repair-Disk, but it somehow failed to install GRUB, as now the laptop boots into Windows directly. Disk partitions are as follows (as seen by Boot-Repair-Disk):

Partition    Boot  Start Sector  End Sector   # of Sectors  Id  System
/dev/sda1          2,048         3,074,047    3,072,000     27  Hidden NTFS (Recovery Environment)
/dev/sda2    *     3,074,048     629,905,407  626,831,360   7   NTFS / exFAT / HPFS
/dev/sda3          629,905,408   632,002,559  2,097,152     83  Linux
/dev/sda4          632,002,560   976,773,119  344,770,560   5   Extended
/dev/sda5          632,004,608   975,978,495  343,973,888   8e  Linux LVM

and this is their rough size and what they're used for:

/dev/sda1  1.5 GB  Windows recovery partition
/dev/sda2  300 GB  Windows 7 partition
/dev/sda3  1 GB    Linux /boot partition
/dev/sda4  164 GB  Extended partition containing /dev/sda5
/dev/sda5  4 GB /swap, 130 GB /, 30 GB /home, all LVM and LUKS-encrypted

It is worth noting that Windows sees /dev/sda4 as a primary (not extended) partition. This is part of the output of Boot-Repair-Disk:

Is there RAID on this computer? no
File descriptor 8 (/proc/17432/mountinfo) leaked on lvs invocation. Parent PID 19248: /bin/sh
Error: /dev/mapper/cl-00: unrecognised disk label
Error: /dev/mapper/cl-01: unrecognised disk label
Error: /dev/mapper/cl-02: unrecognised disk label
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
Error: Invalid partition table - recursive partition on /dev/sr0.
boot-repair is executed in live-session (Boot-Repair-Disk 64bit 1oct2017, zesty, Ubuntu, x86_64) CPU op-mode(s): 32-bit, 64-bit file=/cdrom/preseed/lubuntu.seed boot=casper initrd=/casper/initrd.lz quiet splash -- ls: cannot access '/home/usr/.config': No such file or directory Set sda as corresponding disk of mapper/cl-00 Set sda as corresponding disk of mapper/cl-01 Set sda as corresponding disk of mapper/cl-02 mount: /mnt/boot-sav/mapper/cl-00: unknown filesystem type 'crypto_LUKS'. mount /dev/mapper/cl-00 : Error code 32 mount -r /dev/mapper/cl-00 /mnt/boot-sav/mapper/cl-00 mount: /mnt/boot-sav/mapper/cl-00: unknown filesystem type 'crypto_LUKS'. mount -r /dev/mapper/cl-00 : Error code 32 mount: /mnt/boot-sav/mapper/cl-01: unknown filesystem type 'crypto_LUKS'. mount /dev/mapper/cl-01 : Error code 32 mount -r /dev/mapper/cl-01 /mnt/boot-sav/mapper/cl-01 mount: /mnt/boot-sav/mapper/cl-01: unknown filesystem type 'crypto_LUKS'. mount -r /dev/mapper/cl-01 : Error code 32 mount: /mnt/boot-sav/mapper/cl-02: unknown filesystem type 'crypto_LUKS'. mount /dev/mapper/cl-02 : Error code 32 mount -r /dev/mapper/cl-02 /mnt/boot-sav/mapper/cl-02 mount: /mnt/boot-sav/mapper/cl-02: unknown filesystem type 'crypto_LUKS'. 
mount -r /dev/mapper/cl-02 : Error code 32 =================== os-prober: /dev/sda1:Windows 7:Windows:chain /dev/sda2:Windows 7:Windows1:chain =================== blkid: /dev/sda1: LABEL="System" UUID="FC30DADA30DA9B4A" TYPE="ntfs" PARTUUID="e7d2fa64-01" /dev/sda2: LABEL="Main disk" UUID="E6C200E1C200B837" TYPE="ntfs" PARTUUID="e7d2fa64-02" /dev/sda3: UUID="b43f57d3-c143-47b0-ad99-a5b12a0416be" TYPE="ext4" PARTUUID="e7d2fa64-03" /dev/sr0: UUID="2017-10-29-00-56-18-00" LABEL="Boot-Repair-Disk 64bit" TYPE="iso9660" PTUUID="6b8b4567" PTTYPE="dos" /dev/loop0: TYPE="squashfs" /dev/sda5: UUID="sWZAY3-8hDE-wcdv-rEsu-pTcr-9lPV-QEfxlo" TYPE="LVM2_member" PARTUUID="e7d2fa64-05" /dev/zram0: UUID="462ef96d-8ed3-405e-92c4-043654187abd" TYPE="swap" /dev/zram1: UUID="df6b8b51-4029-4d37-86ec-d70532265f9b" TYPE="swap" /dev/zram2: UUID="9d842b41-46c1-4ed3-aefb-447d99a6321f" TYPE="swap" /dev/zram3: UUID="65d9a196-c801-4e87-9283-314b665108d6" TYPE="swap" /dev/zram4: UUID="b2cb39b0-9fe1-4485-b044-7d408527117f" TYPE="swap" /dev/zram5: UUID="f6b1cecd-603e-40a3-8eb3-2bc1ddaca8c1" TYPE="swap" /dev/zram6: UUID="2fc865d0-0b3b-4d11-bb8d-c272a21e5c39" TYPE="swap" /dev/zram7: UUID="5be79bcf-4b46-4d62-81b2-952b79e0ecda" TYPE="swap" /dev/mapper/cl-00: UUID="6eac3a8f-7854-40c7-ae94-c1d288a60698" TYPE="crypto_LUKS" /dev/mapper/cl-01: UUID="d0e378ae-9140-4335-94e6-2d73a3cb7bd1" TYPE="crypto_LUKS" /dev/mapper/cl-02: UUID="00e698ad-ac0b-4e27-9bc7-bdcb524ce4ba" TYPE="crypto_LUKS" 1 disks with OS, 2 OS : 0 Linux, 0 MacOS, 2 Windows, 0 unknown type OS. Is it really necessary to decrypt the LUKS LVM partitions before trying to reinstall GRUB? 
EDIT: To answer the questions in the comments:

Output of fdisk -l /dev/sda:

Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Disk model: Seagate ST950056
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe7d2fa64

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1            2048   3074047   3072000   1.5G 27 Hidden NTFS WinRE
/dev/sda2  *      3074048 629905407 626831360 298.9G  7 HPFS/NTFS/exFAT
/dev/sda3       629905408 632002559   2097152     1G 83 Linux
/dev/sda4       632002560 976773119 344770560 164.4G  5 Extended
/dev/sda5       632004608 975978495 343973888   164G 8e Linux LVM

It uses the default GRUB from CentOS 8, i.e. GRUB2.
The LVM partitions were created directly from the CentOS installer.
The laptop (almost 10 years old) uses BIOS, not UEFI.
The installer never asked where to put GRUB; it did so automatically.
I cannot explain why Boot-Repair-Disk reports an error with the partition table, nor why Windows thinks that /dev/sda4 is a primary partition. The output of fdisk reports no errors with the MBR partition table, and it is also able to read the extended partition table in /dev/sda5. So, all we have to do is: recover access to the grub2 that was installed first (or install a new one), and set up the chainload commands needed to boot Windows from grub.

Re-install grub2

Grub2 was installed to the sectors after the MBR (and before sector 2048). That grub2 on the disk has the address of the /boot/grub directory hard-coded. Sadly, the contents of /boot/grub are both inside LVM and encrypted with LUKS. Grub needs to: read inside LVM disks, which is done with the lvm.mod module, and decrypt LUKS disks, which is done with the luks.mod module. Both modules also reside inside /boot/grub. That creates a catch-22 situation where the keys to the car are inside the car. The solution is to write some modules to fixed disk sectors and load them on boot. Then ask for the LUKS password, decrypt the /boot/grub directory, load more modules, and finally load /boot/grub/grub.cfg to know which other modules need to be loaded and to present an OS selection list to the user. However, the modules that need to be written to disk live in /usr/lib/grub/i386-pc/ (lvm.mod, for example; grub may be used on several architectures), which is also encrypted on the system you installed. So, the only solution is to:

boot into some live ISO (that has all the current grub files in /usr/lib)
mount all (decrypted and system) disks in the correct places
use chroot to switch to the 'real' system on disk
use grub-install to write grub back to the disk

This is the guide to rescue CentOS that you should follow. It is simplified, except, sadly, for the part about decrypting LVM-LUKS encrypted partitions, which adds another twist.

After that has been done and your system boots to CentOS via grub, you may need (if Windows was not auto-detected by grub-install) to add a couple of entries to the grub.cfg file. Use the entries from this page.

Related:
Full disk encryption, including /boot: Unlocking LUKS devices from GRUB
Decrypt and mount LUKS disk from GRUB rescue mode
How to re-install bootstrap code (GRUB)
CentOS / RHEL 7 : How to reinstall GRUB2 from rescue mode
Dual boot: Windows 7 and CentOS 7 - Tutorial
Unable to install GRUB on dual boot Windows / CentOS 8 (LVM+LUKS)
1,592,658,112,000
I want to change one of my partitions from ext4 to something else (XFS or OpenZFS). The partition is sdc5_crypt and is mounted at startup. My intuition says that I can format it with something like zpool create sdc_pool sdc5_crypt without any impact on the LUKS encryption, so that it will still be mapped normally as it is now, but the /dev/mapper/sdc5_crypt device will hold ZFS instead of ext4. This is my configuration right now:

>lsblk
...
sdc              8:32   0  1.9T  0 disk
├─sdc1           8:33   0    1K  0 part
└─sdc5           8:37   0  1.9T  0 part
  └─sdc5_crypt 254:2    0  1.9T  0 crypt /mnt/ssd-3

>mount | grep sdc
/dev/mapper/sdc5_crypt on /mnt/ssd-3 type ext4 (rw,relatime,discard)

Am I correct in assuming that the above approach will reformat the sdc5_crypt device without messing up the LUKS headers (which are backed up in any case) or anything else? Is it safe to do it like this? If not, what would be the suggested way to change the mounted ext4 partition to be used in a ZFS pool?
Any operations on /dev/mapper/sdc5_crypt will not harm your LUKS configuration; the LUKS header lives on the underlying /dev/sdc5, below the mapping. You can treat the unlocked mapping like a normal drive, so yes, you should be able to use that command to reformat it (use the full path /dev/mapper/sdc5_crypt, and note that reformatting destroys the existing ext4 data).
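A sketch of the switch, using the device names from the question (this destroys the ext4 contents, so copy the data off first; the pool name and the -m mountpoint are arbitrary choices):

```shell
# Unmount the old ext4 filesystem
sudo umount /mnt/ssd-3

# Create the pool on the unlocked mapping -- note the full /dev/mapper
# path; the LUKS header on /dev/sdc5 underneath is not touched
sudo zpool create -m /mnt/ssd-3 sdc_pool /dev/mapper/sdc5_crypt

# ZFS mounts its datasets itself, so drop the old entry from /etc/fstab
```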
Change partition without affecting LUKS encryption
1,592,658,112,000
I have an encrypted LUKS partition with btrfs on it, used for backups. A kernel panic happened while the system was mounted and performing receive/send operations during a backup. After restart there is no problem with LUKS itself, but btrfs check found some errors and warnings:

btrfs ch -p /dev/mapper/bckp
Opening filesystem to check...
Checking filesystem on /dev/mapper/bckp
UUID: 4b793176-530a-4a82-b156-3363db035760
[1/7] checking root items (0:01:24 elapsed, 5200849 items checked)
ref mismatch on [2351455076352 16384] extent item 0, found 1
tree backref 2351455076352 parent 6690 root 6690 not found in extent tree
backpointer mismatch on [2351455076352 16384]
[2/7] checking extents (0:04:33 elapsed, 997158 items checked)
ERROR: errors found in extent allocation tree or chunk allocation
[3/7] checking free space cache (0:00:30 elapsed, 4870 items checked)
[4/7] checking fs roots (0:09:36 elapsed, 724856 items checked)
[5/7] checking csums (without verifying data) (0:00:42 elapsed, 1691026 items checked)
[6/7] checking root refs (0:00:00 elapsed, 223 items checked)
[7/7] checking quota groups skipped (not enabled on this FS)
found 3669056704512 bytes used, error(s) found
total csum bytes: 3565923444
total tree bytes: 16333701120
total fs tree bytes: 11922030592
total extent tree bytes: 561364992
btree space waste bytes: 2285778795
file data blocks allocated: 49969135439872 referenced 5223974543360

I also ran it with another superblock and with the backup of the root tree, like:

btrfs ch -p -s 1 /dev/mapper/bckp
btrfs ch -p -b /dev/mapper/bckp

but with exactly the same result and figures. I didn't run the repair option because it is marked as dangerous. Are these errors repairable? How can I save the filesystem?
Thanks Emmanuel Rosa for your comment, you've pointed me in the right direction. After I ran a scrub on the mounted volume with btrfs sc start -Bd /dev/mapper/bckp I got this result:

scrub device /dev/mapper/bckp (id 1) done
    scrub started at Thu Sep 12 13:28:38 2019 and finished after 05:12:29
    total bytes scrubbed: 3.02TiB with 0 errors

No errors or warnings in the logs. So I ran btrfs check again and finally got clean output:

bf ch -p /dev/mapper/bckp
Opening filesystem to check...
Checking filesystem on /dev/mapper/bckp
UUID: 4b793176-530a-4a82-b156-3363db035760
[1/7] checking root items (0:01:27 elapsed, 5102746 items checked)
[2/7] checking extents (0:04:15 elapsed, 969366 items checked)
[3/7] checking free space cache (0:00:32 elapsed, 4871 items checked)
[4/7] checking fs roots (0:09:14 elapsed, 720718 items checked)
[5/7] checking csums (without verifying data) (0:00:29 elapsed, 1557593 items checked)
[6/7] checking root refs (0:00:00 elapsed, 222 items checked)
[7/7] checking quota groups skipped (not enabled on this FS)
found 3308474904576 bytes used, no error found
total csum bytes: 3214237436
total tree bytes: 15878373376
total fs tree bytes: 11854004224
total extent tree bytes: 553287680
btree space waste bytes: 2253478467
file data blocks allocated: 49609008967680 referenced 4863848071168

My software:

btrfs version
btrfs-progs v4.19
uname -rom
4.19.57-gentoo x86_64 GNU/Linux

So the FS is clean and usable again without any warnings. Thank you for helping me.
btrfs ref and backpointer mismatch, ERROR in extent allocation tree
1,592,658,112,000
I have the following requirements: I have an LVM volume A, encrypted with key K1 using LUKS. I need to make a copy-on-write snapshot of A such that Writes to A will continue to be encrypted under K1 Writes to the snapshot will be encrypted under K2, which is different from K1. The use case is to allow the snapshot to be securely deleted by deleting the encryption key. Is this possible?
No, it is not possible to change the LUKS encryption key when making an LVM snapshot. LUKS is unaware of LVM, so this would be no different than cloning a partition and expecting to be able to change the encryption key. Now, you MAY be able to achieve your goal if you flip LVM and LUKS. It's a complex setup that goes something like this: Create multiple LUKS containers in partitions, each with a different key, of course. Create an LVM volume group which uses the unlocked LUKS containers as physical volumes. When you create logical volumes, specify which physical volume to use; This will determine which LUKS key is used for the logical volume. When you create a snapshot, specify a different physical volume; This means writes to this volume would be encrypted with a different LUKS key.
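A sketch of that flipped layout, with hypothetical partition names and sizes throughout:

```shell
# Two LUKS containers with different keys
sudo cryptsetup luksFormat /dev/sdb1          # key K1
sudo cryptsetup luksFormat /dev/sdb2          # key K2
sudo cryptsetup open /dev/sdb1 crypt1
sudo cryptsetup open /dev/sdb2 crypt2

# One volume group spanning both unlocked containers
sudo vgcreate securevg /dev/mapper/crypt1 /dev/mapper/crypt2

# Volume A lives on crypt1, so its data is encrypted under K1
sudo lvcreate -n A -L 10G securevg /dev/mapper/crypt1

# Snapshot of A allocated on crypt2: its copy-on-write data
# is encrypted under K2 and dies with that key
sudo lvcreate -s -n A-snap -L 2G securevg/A /dev/mapper/crypt2
```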
Encrypting data written to an LVM snapshot with a different key
1,592,658,112,000
So I have LVM on LUKS in Arch Linux. My whole disk is encrypted and my boot partition is on an external disk. So do I need to encrypt the swap partition if the disk is encrypted when powered off?
If you’re swapping to a logical volume in your encrypted LVM on top of LUKS, then no, you don’t need additional encryption for your swap, it’s already encrypted.
If I have full disk encryption do I need to encrypt the swap partition?
1,592,658,112,000
I'm trying to set up a new Arch Linux installation with an encrypted /boot partition, as described here: https://wiki.archlinux.org/index.php/Dm-crypt/Encrypting_an_entire_system#Encrypted_boot_partition_.28GRUB.29 I'm creating three partitions with cgdisk:

/dev/sda1 - Type ESP (ef00), size 100MiB
/dev/sda2 - Type Linux (8300), size 200MiB - for /boot (after encryption)
/dev/sda3 - Type Linux LVM (8e00), size 12GiB - for / (after encryption)

Then I'm following with these commands:

mkfs.fat -F32 /dev/sda1
cryptsetup luksFormat /dev/sda2
cryptsetup open /dev/sda2 cryptoboot
mkfs.ext2 /dev/mapper/cryptoboot
mkdir /mnt/boot
mount /dev/mapper/cryptoboot /mnt/boot
mkdir /mnt/boot/efi
mount /dev/sda1 /mnt/boot/efi
cryptsetup luksFormat /dev/sda3
cryptsetup open /dev/sda3 cryptosystem
mkfs.f2fs /dev/mapper/cryptosystem
mount /dev/mapper/cryptosystem /mnt
# edit "/etc/pacman.d/mirrorlist" as needed
pacstrap /mnt base grub-efi-x86_64 efibootmgr dosfstools f2fs-tools
genfstab -U /mnt >> /mnt/etc/fstab
arch-chroot /mnt
# remember to configure time, locale, language and hostname
# edit "/etc/mkinitcpio.conf"
# HOOKS="base udev autodetect modconf block keymap encrypt lvm2 filesystems keyboard fsck"
mkinitcpio -p linux
# edit "/etc/default/grub"
# GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda3:lvm"
# GRUB_ENABLE_CRYPTODISK=y
grub-mkconfig -o /boot/grub/grub.cfg
grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=grub --recheck

I'm getting this error:

Installing for the x86_64 platform.
grub-install: error: failed to get canonical path of '/boot/efi'.

Already tried: installing the fuse2 and mtools packages; re-creating the /boot/efi directory and re-mounting /dev/sda1 to it while in the chroot environment. When using ext4 for the root partition, this last procedure works and GRUB installs, and even boots (and oddly enough, re-mounting isn't necessary, only mkdir). But for F2FS, it's not enough, although it manages to change the error message to:

Installing for the x86_64 platform.
grub-install: error: unknown filesystem.

According to the Arch Wiki ([1], [2]) it should be possible to use F2FS for root, provided that GRUB is installed to a separate partition with another filesystem which it supports. My /boot partition is ext2. So, why won't it install? I appreciate your help immensely.
The solution is to pay attention to the /etc/fstab upon its generation, since genfstab doesn't add entries for /boot and /boot/efi and it must be done by hand. After chroot, we must re-mount not only the ESP, but also the /boot partition. Then grub-install will work. Update: Mounting /boot and the ESP should really be done AFTER mounting the root filesystem to /mnt, i.e. # format the ESP mkfs.fat -F32 /dev/sda1 # set up LUKS for the boot partition cryptsetup luksFormat /dev/sda2 cryptsetup open /dev/sda2 cryptoboot mkfs.ext2 /dev/mapper/cryptoboot # same for the root partition cryptsetup luksFormat /dev/sda3 cryptsetup open /dev/sda3 cryptosystem mkfs.f2fs /dev/mapper/cryptosystem # mount root, and only then, mount /boot and the ESP, in that order mount /dev/mapper/cryptosystem /mnt mkdir /mnt/boot mount /dev/mapper/cryptoboot /mnt/boot mkdir /mnt/boot/efi mount /dev/sda1 /mnt/boot/efi # edit "/etc/pacman.d/mirrorlist", then continue with pacstrap etc It is a matter of logic. If we do things in that order, genfstab will correctly generate entries for all partitions and everything will work just fine.
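For reference, the hand-written entries in /mnt/etc/fstab would look roughly like this (the UUIDs are placeholders; substitute the values reported by blkid):

```
# /boot and /boot/efi, added by hand when genfstab misses them
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /boot      ext2  defaults  0  2
UUID=XXXX-XXXX                             /boot/efi  vfat  defaults  0  2
```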
Installing GRUB to encrypted partition doesn't work if (root) is F2FS
1,592,658,112,000
I have a Linux Mint 18 installation with LUKS-encrypted / and swap partitions. /, which is /dev/sda6, unlocks and mounts fine at boot. The system then goes into emergency mode; journalctl says it timed out trying to reach the swap partition. I tried running cryptsetup open --type luks /dev/sda5 sda5_crypt and that returns Device /dev/sda5 is not a valid LUKS device.
DopeGhoti is correct. To confirm a corrupted LUKS header, you can use the following command: cryptsetup luksDump /dev/sda5 You should get the same error message. To fix it, re-create the LUKS container, set up the swap again, and take a backup of the LUKS header. Something like this: cryptsetup luksFormat /dev/sda5 cryptsetup open --type luks /dev/sda5 sda5_crypt mkswap -L SWAP /dev/mapper/sda5_crypt swapon -L SWAP cryptsetup luksHeaderBackup /dev/sda5 --header-backup-file /root/sda5_luks_header.img The LUKS header is the weak point: there's only one copy, so when you lose it, there's no way to unlock the device. Unless... you have a backup ;)
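If the header ever gets corrupted again, that backup can be written back with luksHeaderRestore. Note this overwrites the on-disk header, so be sure the backup file really belongs to this device:

```shell
cryptsetup luksHeaderRestore /dev/sda5 --header-backup-file /root/sda5_luks_header.img
```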
LUKS encrypted swap partition is no longer recognized after power failure
1,592,658,112,000
I'm wondering where the LUKS key password is stored. For example, the password for root is in /etc/shadow. Where can I find the file with the LUKS key password? I added a key with cryptsetup luksAddKey --key-slot 1 /dev/sda5. What is the directory of the file with this password?
The key material is stored after the LUKS partition header (so no file). The key slots can be viewed using: cryptsetup luksDump /dev/<your_disk> See: https://gitlab.com/cryptsetup/cryptsetup/wikis/LUKS-standard/on-disk-format.pdf
Directory to file with Luks key password
1,592,658,112,000
Using a liveCD. Benchmark says the fastest disk IO on my notebook will be: aes-xts 256b root@ubuntu:~# cryptsetup benchmark # Tests are approximate using memory only (no storage IO). PBKDF2-sha1 1008246 iterations per second PBKDF2-sha256 615361 iterations per second PBKDF2-sha512 458293 iterations per second PBKDF2-ripemd160 585142 iterations per second PBKDF2-whirlpool 215578 iterations per second # Algorithm | Key | Encryption | Decryption aes-cbc 128b 517.0 MiB/s 2130.7 MiB/s serpent-cbc 128b 69.3 MiB/s 240.2 MiB/s twofish-cbc 128b 157.3 MiB/s 294.5 MiB/s aes-cbc 256b 398.4 MiB/s 1785.7 MiB/s serpent-cbc 256b 70.4 MiB/s 234.5 MiB/s twofish-cbc 256b 158.3 MiB/s 290.5 MiB/s aes-xts 256b 1964.8 MiB/s 1968.9 MiB/s serpent-xts 256b 246.5 MiB/s 240.0 MiB/s twofish-xts 256b 290.2 MiB/s 293.9 MiB/s aes-xts 512b 1372.7 MiB/s 1403.4 MiB/s serpent-xts 512b 244.9 MiB/s 240.0 MiB/s twofish-xts 512b 272.5 MiB/s 296.2 MiB/s root@ubuntu:~# my current settings for the LUKS device: root@ubuntu:~# cryptsetup luksDump /dev/sda5 LUKS header information for /dev/sda5 Version: 1 Cipher name: aes Cipher mode: xts-plain64 Hash spec: sha256 Payload offset: 4096 MK bits: 512 MK digest: 48 86 e6 b3 6b 4c 4b 9e 2c ce ce ed c3 57 13 11 ab b4 fd 2d MK salt: 83 d4 35 64 d8 01 75 9d 58 76 8d 2e ac eb 3a 9c a4 11 3b 9f f4 79 1d 56 5c 57 25 23 39 d8 b5 ab MK iterations: 80375 UUID: df2f64fa-5bce-4d8c-9dcb-274435c8180a Key Slot 0: ENABLED Iterations: 323231 Salt: ca 08 b2 1b 43 a3 0f 41 df 3b 13 95 fa 80 03 33 ba 28 70 a5 36 6f a2 0d 94 ae 25 55 ee 1b 62 b0 Key material offset: 8 AF stripes: 4000 Key Slot 1: DISABLED Key Slot 2: DISABLED Key Slot 3: DISABLED Key Slot 4: DISABLED Key Slot 5: DISABLED Key Slot 6: DISABLED Key Slot 7: DISABLED root@ubuntu:~# but when I try to set it for fast disk IO but slow brute-force attack speeds against the LUKS password (increasing iteration time to 10 seconds - according to the manpages, the default is 1 second): root@ubuntu:~# cryptsetup-reencrypt /dev/sda5 
-c aes-xts -s 512 -h sha512 -i 10000 WARNING: this is experimental code, it can completely break your data. Enter passphrase for key slot 0: device-mapper: reload ioctl on failed: Invalid argument Activation of temporary devices failed. root@ubuntu:~# It trashes my LUKS device: root@ubuntu:~# cryptsetup luksDump /dev/sda5 Device /dev/sda5 is not a valid LUKS device. root@ubuntu:~# Q: what am I missing?
Your live CD must be very old, judging by the warning message from cryptsetup-reencrypt. I have used this tool many times without any such message or problems. Also, your command line is not correct: the cipher must be changed from aes-xts to aes-xts-plain64.
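With the cipher spec corrected, the invocation from the question would look roughly like this (cryptsetup-reencrypt rewrites the whole device, so back up first):

```shell
cryptsetup-reencrypt -c aes-xts-plain64 -s 512 -h sha512 -i 10000 /dev/sda5
```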
cryptsetup-reencrypt crashes my LUKS device
1,592,658,112,000
I want to set up my Linux laptop with LUKS encryption. OS: Mint 17. Is it better to LUKS-encrypt the disk during the installation process, or after it?
During the installation, the installer will show you an option to encrypt the disk with LUKS.
Mint, install and LUKS/LVM
1,592,658,112,000
My root device is LUKS-encrypted, and it uses a key file instead of a user-entered password to unlock it. I know that in GRUB2 I can use insmod cryptodisk insmod luks cryptomount -u <uuid> But I have no idea how to use a key file to unlock the disk.
Try insmod (hd0,1)/routeto/keyfile.file. Change (hd0,1) if necessary, and put in the mount point of the filesystem holding the keyfile, sda or whatever...
grub2 and cryptsetup
1,690,742,708,000
I need advice about a bad situation. I was installing a new Linux distro using my main PC. I booted with the ISO and then installed the distro on /dev/sdc, which was an external USB drive. My bad: I didn't realize when installing that I hadn't changed the boot loader installation to the same drive, but left it on /dev/sda, which holds my main operating system (LUKS encrypted). Obviously after this, the main operating system is not booting any more, and I see an error message from GRUB saying: error: no such cryptodisk found Now, I have a clonezilla backup of my /dev/sda, and in the file list, among other files, I see 12/04/2022 04:59 AM 10 parts 12/03/2022 11:41 PM 38 sda-chs.sf 12/03/2022 11:41 PM 1,048,064 sda-hidden-data-after-mbr 12/03/2022 11:41 PM 512 sda-mbr 12/03/2022 11:41 PM 391 sda-pt.parted 12/03/2022 11:41 PM 338 sda-pt.parted.compact 12/03/2022 11:41 PM 267 sda-pt.sf 12/03/2022 11:41 PM 118,196,482 sda1.ext4-ptcl-img.gz.aa 12/04/2022 04:59 AM 512 sda2-ebr I hope some of these backups can help me restore the situation (maybe sda-mbr and sda-hidden-data-after-mbr?), but I'd like to ask your help before doing anything, in order to avoid more damage. Can anyone advise how to recover the situation? Thanks so much!!
Is this a UEFI or a BIOS install? It looks more like BIOS, as you have an MBR and maybe grub's core.img in the sectors after the MBR. Most systems now are UEFI and prefer gpt partitioning. If you plug in the external drive, does it boot? Update it so it can see the encrypted install: sudo apt-get update && sudo apt-get install lvm2 cryptsetup sudo modprobe dm-crypt sudo cryptsetup luksOpen /dev/sdXY my-crypt sudo os-prober sudo update-grub Then install grub from the external drive to the external drive's MBR, and from the encrypted install, install grub to sda so the internal drive boots normally. Also, what version of grub? Nowadays GRUB (v2.06) can boot Linux from a LUKS-encrypted /boot partition (even one inside LVM) with the GRUB_ENABLE_CRYPTODISK=y key and the appropriate LUKS or LUKS2 modules loaded. Otherwise you can use a live installer. https://askubuntu.com/questions/63594/mount-encrypted-volumes-from-command-line http://askubuntu.com/questions/719409/how-to-reinstall-grub-from-a-liveusb-if-the-partition-is-encrypted-and-there-i?rq=1
Restore lost bootloader on a luks partition
1,690,742,708,000
System: Fedora 37, Gnome 43 I enabled LUKS encryption on setup and enabled auto-decrypt via TPM 2, following an article from Fedora Magazine. Auto-decrypt works, but while it decrypts it shows the passphrase screen until the system boots. How can I hide this screen?
This worked for me. After reading up and working out what was asking for the password, I added a sleep to the ask-password service. I tried a few values: with 8 seconds the boot screen showed and then switched to the password prompt before continuing; with 15 it didn't show up at all. It also means that if the TPM decrypt fails, the password prompt will eventually show up. To hide the password prompt on startup for 15 seconds (enough time for the automatic unlock), edit systemd-ask-password-plymouth.service in /usr/lib/systemd/system and add: [Service] ExecStartPre=/bin/sleep 15 Don't forget to run dracut -f to rebuild the initramfs, and then rebind your disk to the TPM chip, depending on which PCR IDs you used.
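An alternative that avoids editing the packaged unit file is a drop-in override. Whether dracut copies /etc overrides into the initramfs depends on your configuration, so treat this as a sketch (the path and value are illustrative):

```
# /etc/systemd/system/systemd-ask-password-plymouth.service.d/override.conf
[Service]
ExecStartPre=/bin/sleep 15
```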
I Have LUKS Enabled And Integrated With TPM 2. How To Hide Passphrase Screen?
1,690,742,708,000
I followed my own instructions here to shrink down a LUKS-encrypted Ubuntu 20.04 partition and its inner LVM volume so I could install Ubuntu 22.04 in a new LUKS-encrypted partition next to it, but after installing the LUKS-encrypted Ubuntu 22.04 OS, the LUKS-encrypted 20.04 installation (in a separate encrypted partition) is no longer in the Grub boot menu. Why? How do I get this dual boot to work properly? Should I have put both OS's in the same LUKS-encrypted partition, just in different LVM volumes within that partition? Here's my disk, as shown in gparted while logged into the new Ubuntu 22.04 OS. Description: /dev/nvme0n1p1 is the 512 MiB EFI partition /dev/nvme0n1p2 is the ext4 /boot non-encrypted partition for the old Ubuntu 20.04 OS /dev/nvme0n1p3 is the LUKS-encrypted partition containing a single LVM volume with Ubuntu 20.04 in it (no longer in the grub menu) /dev/nvme0n1p4 is the ext4 /boot non-encrypted partition for the new Ubuntu 22.04 OS /dev/nvme0n1p5 is the LUKS-encrypted partition containing a single LVM volume with Ubuntu 22.04 in it (is in the grub menu, and is the OS running right now) These look potentially useful: Ask Ubuntu: How can I install Ubuntu encrypted with LUKS with dual-boot? Ask Ubuntu: how do I boot into a LUKS-encrypted environment? - helps me clearly see the definitions of LUKS-partition vs LVM vs logical volumes within it
I figured it out! How to add other LUKS-encrypted Linux distributions back to your Grub bootloader startup menu: quick summary # 1. Open your `/etc/default/grub` file. sudo gedit /etc/default/grub # Then manually add these lines to the bottom of that file: # (required) GRUB_DISABLE_OS_PROBER=false # (optional) GRUB_ENABLE_CRYPTODISK=y # 2. Unlock your LUKS-encrypted partitions which contain other bootable # operating systems. In my case: sudo cryptsetup luksOpen /dev/nvme0n1p3 nvme0n1p3_crypt # 3. Update your Grub bootloader in your `/boot` partition. sudo update-grub # When I run `update-grub`, my output now includes this line: # # Found Ubuntu 20.04.5 LTS (20.04) on /dev/mapper/system-root # 4. Done. Reboot to see and use the new Grub entries! reboot For a ton more details and information, see my much longer answer here: Ask Ubuntu: How to get old LUKS-encrypted Ubuntu version back into Grub menu after installing new Ubuntu version in new LUKS partition
How to get a dual boot (2 Linux OSs) system working when both are LUKS-encrypted
1,690,742,708,000
I'm relatively new to Linux. I have just wiped my drive and am installing Arch Linux from scratch. When I run: cryptsetup luksFormat /dev/sda3 I get the following warning: WARNING: Locking directory /run/cryptsetup is missing! It apparently lets me continue, but I decided to check whether that would create any problems in the future. So should I fix this now? What are the consequences of that situation? How can I fix it? All the threads I've seen seem to be about more serious problems, which doesn't seem to be the case here (I'm guessing/hoping). Thanks in advance.
You can ignore the warning, cryptsetup will create the directory if it doesn't exist. There were some discussions between systemd and cryptsetup who should be in charge of creating the directory. The warning was changed to a debug message with a different wording on cryptsetup 2.3.5 and newer. The directory itself is used for header locking. From cryptsetup manpage: The LUKS2 on-disk metadata is updated in several steps and to achieve proper atomic update, there is a locking mechanism. For an image in file, code uses flock(2) system call. For a block device, lock is performed over a special file stored in a locking directory (by default /run/lock/cryptsetup). The locking directory should be created with the proper security context by the distribution during the boot-up phase. Only LUKS2 uses locks, other formats do not use this mechanism. (It says default is /run/lock/cryptsetup which is no longer true, default is now /run/cryptsetup, but that's just a documentation issue.)
arch - /run/cryptsetup missing while formatting luks partition
1,690,742,708,000
At reboot, with USB sticks inserted, the TPM will not allow passphraseless booting of the server. With a USB HDD inserted passphraseless booting of the server is possible. Our servers are running Centos 8.3 with Linux kernel version 4.18.0-240. TPM2 modules are used with LUKS version 1 encryption of an LVM group consisting of several partitions including / /home swap etc. LUKS header slot 0 is used for the passphrase, and slot 1 for the TPM. The LUKS header slot 1 is bound to PCR values 0,1,2 and 3. Thus the BIOS (0), BIOS configuration (1), Options ROM (2) and Options ROM configuration (3). From what I read the Options ROM consists of the firmware that is loaded during the POST boot process. If any changes occur from the state of the system when the TPM was signed, the TPM won't allow the system to boot without a passphrase. As USB sticks have firmware that might get loaded during the boot process, I initially thought that binding only to PCR values 0 and 1, ie without the Options ROM, would solve the problem. This did not work. Any advice on why it won't boot from the TPM with a USB stick attached will be appreciated.
We have figured out which PCR value changes after a USB stick gets inserted and the system reboots. By not binding to that PCR value we managed to boot off the TPM with a USB stick inserted. In our case PCR 1 (BIOS config) kept changing when a USB stick was inserted and the system rebooted. The command we used to query the PCR values was tpm2_pcrread. This lists the PCR values in sha1 and sha256. We redirected the stdout of the PCR values to a file before and after the changes and using diff command we monitored the changes between each file.
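The procedure described above, as commands (paths are illustrative):

```shell
tpm2_pcrread > /root/pcr_before.txt
# ... insert the USB stick and reboot ...
tpm2_pcrread > /root/pcr_after.txt
diff /root/pcr_before.txt /root/pcr_after.txt
```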
Not booting off TPM with USB disk inserted
1,690,742,708,000
I have added a second disk to my LVM system. I created a physical volume there, added it to the volume group of ubuntu, 'vgubuntu', extended logical volume to fill the whole disk. How do I extend the LUKS system partition to fill the whole logical volume? Here's more info provided by pvdisplay, vgdisplay and lvdisplay: --- Physical volume --- PV Name /dev/mapper/nvme0n1p3_crypt VG Name vgubuntu PV Size <464.53 GiB / not usable 0 Allocatable NO PE Size 4.00 MiB Total PE 118919 Free PE 0 Allocated PE 118919 PV UUID DwO3R1-DeRo-c83D-qx5F-xjC5-icXG-x3j28i --- Physical volume --- PV Name /dev/nvme1n1p1 VG Name vgubuntu PV Size <476.94 GiB / not usable 0 Allocatable yes (but full) PE Size 4.00 MiB Total PE 122096 Free PE 0 Allocated PE 122096 PV UUID 9UyJR4-m0G9-sYPG-BBkW-2WEg-TBdR-DAj0u3 root@omen15:~# vgdisplay --- Volume group --- VG Name vgubuntu System ID Format lvm2 Metadata Areas 2 Metadata Sequence No 8 VG Access read/write VG Status resizable MAX LV 0 Cur LV 2 Open LV 2 Max PV 0 Cur PV 2 Act PV 2 VG Size 941.46 GiB PE Size 4.00 MiB Total PE 241015 Alloc PE / Size 241015 / 941.46 GiB Free PE / Size 0 / 0 VG UUID ANNTFf-p9hU-O4R3-jwDQ-bZhP-v8tm-hVL8Fn root@omen15:~# lvdisplay --- Logical volume --- LV Path /dev/vgubuntu/root LV Name root VG Name vgubuntu LV UUID rxnIOU-yNg2-ythJ-Dz5V-N3Sr-X7DQ-WzbUUF LV Write Access read/write LV Creation host, time ubuntu, 2021-07-24 17:25:39 +0300 LV Status available # open 1 LV Size <940.51 GiB Current LE 240770 Segments 2 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:1 --- Logical volume --- LV Path /dev/vgubuntu/swap_1 LV Name swap_1 VG Name vgubuntu LV UUID MOvhEP-64w3-wHHO-wmDh-YkSU-XARL-7hRQIf LV Write Access read/write LV Creation host, time ubuntu, 2021-07-24 17:25:39 +0300 LV Status available # open 2 LV Size 980.00 MiB Current LE 245 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:2 And here's what df -h prints: root@omen15:~# df -h 
Filesystem Size Used Avail Use% Mounted on tmpfs 1.6G 2.1M 1.6G 1% /run /dev/mapper/vgubuntu-root 925G 7.3G 871G 1% / tmpfs 7.6G 12M 7.6G 1% /dev/shm tmpfs 5.0M 4.0K 5.0M 1% /run/lock tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup /dev/nvme0n1p2 705M 251M 403M 39% /boot /dev/nvme0n1p1 511M 5.3M 506M 2% /boot/efi tmpfs 1.6G 2.0M 1.6G 1% /run/user/1000
You have LUKS configured at the PV level, i.e. "under" your LVM setup, so unfortunately you need to start over: each PV must be encrypted individually, and you can't "extend" the existing LUKS/dm-crypt device to the second disk. The structure should look like disk -> partition -> LUKS -> PV -> VG -> LV (it is possible to configure encryption at the LV level, but your existing configuration is encrypted at the PV level). So you need to shrink your root LV back, remove your newly created PV from vgubuntu, and then create LUKS on nvme1n1p1 (cryptsetup luksFormat /dev/nvme1n1p1), unlock it (cryptsetup luksOpen /dev/nvme1n1p1 nvme1n1p1_crypt) and use /dev/mapper/nvme1n1p1_crypt as the second PV. You'll also need to add the new LUKS device to /etc/crypttab.
How do I extend LUKS partition to fill the whole logical volume of 2 disks on LVM?
1,690,742,708,000
I am trying to set up SELinux and an encrypted additional partition that I mount at startup using a systemd service. If I run SELinux in permissive mode, everything runs ok (partition is correctly mounted, data can be accessed and service runs properly). If I run SELinux in enforcing mode (enforcing=1), I am not able to mount such partition with the error: /dev/mapper/temporary-cryptsetup-1808: chown failed: Permission denied sh[1777]: Failed to open temporary keystore device. sh[1777]: Command failed with code 5: Input/output error Any ideas to fix that? Audit2allow does not return any additional rules to be added Edit 1 after @A.B comment: I used cat instead of tail. Audit2allow suggest no additional allow rules, but analyzing the log file I find some denial of interest: type=AVC msg=audit(1624863678.748:72): avc: denied { getattr } for pid=1894 comm="cryptsetup" path="/dev/dm-0" dev="devtmpfs" ino=5388 scontext=system_u:system_r:sysadm_t:s0-s15:c0.c1023 tcontext=system_u:object_r:fixed_disk_device_t:s15:c0.c1023 tclass=blk_file permissive=1 type=AVC msg=audit(1624863678.748:73): avc: denied { read } for pid=1894 comm="cryptsetup" name="dm-0" dev="devtmpfs" ino=5388 scontext=system_u:system_r:sysadm_t:s0-s15:c0.c1023 tcontext=system_u:object_r:fixed_disk_device_t:s15:c0.c1023 tclass=blk_file permissive=1 Searching for every "cryptsetup" entry in the audit log I find this: ~# cat /var/log/audit/audit.log | grep "cryptsetup" type=AVC msg=audit(1624863678.748:72): avc: denied { getattr } for pid=1894 comm="cryptsetup" path="/dev/dm-0" dev="devtmpfs" ino=5388 scontext=system_u:system_r:sysadm_t:s0-s15:c0.c1023 tcontext=system_u:object_r:fixed_disk_device_t:s15:c0.c1023 tclass=blk_file permissive=1 type=SYSCALL msg=audit(1624863678.748:72): arch=14 syscall=195 success=yes exit=0 a0=bfebd34c a1=bfebd2e0 a2=bfebd2e0 a3=bfebd370 items=0 ppid=1891 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cryptsetup" 
exe="/usr/sbin/cryptsetup" subj=system_u:system_r:sysadm_t:s0-s15:c0.c1023 key=(null) type=AVC msg=audit(1624863678.748:73): avc: denied { read } for pid=1894 comm="cryptsetup" name="dm-0" dev="devtmpfs" ino=5388 scontext=system_u:system_r:sysadm_t:s0-s15:c0.c1023 tcontext=system_u:object_r:fixed_disk_device_t:s15:c0.c1023 tclass=blk_file permissive=1 type=SYSCALL msg=audit(1624863678.748:73): arch=14 syscall=5 success=yes exit=6 a0=bfebf7ac a1=131000 a2=0 a3=10022cc0 items=0 ppid=1891 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cryptsetup" exe="/usr/sbin/cryptsetup" subj=system_u:system_r:sysadm_t:s0-s15:c0.c1023 key=(null) Edit 2: Looking for any changes in the refpolicy repo, I found this Novembre 2020 commit and this February 2021 commit. I don't know if they may apply to the case in hand.
Solved by assigning cryptsetup the lvm_exec_t context. In the lvm.fc file, cryptsetup was defined as /bin/cryptsetup, but I had to change it to /usr/sbin/cryptsetup, where it actually is.
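On a running system with the policy management utilities installed, the same relabel can be attempted without rebuilding the policy sources. A sketch, assuming semanage and restorecon are available:

```shell
semanage fcontext -a -t lvm_exec_t /usr/sbin/cryptsetup
restorecon -v /usr/sbin/cryptsetup
```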
SELinux and cryptsetup: chown failed and can't access temporary keystore
1,690,742,708,000
I am trying to setup lvm on luks2 with boot inside lvm. NAME FSTYPE FSVER FSAVAIL FSUSE% MOUNTPOINT nvme0n1 ├─nvme0n1p1 vfat FAT32 510.7M 0% /mnt/efi └─nvme0n1p2 crypto_LUKS 2 └─cryptlvm LVM2_member LVM2 001 ├─ArchNVMe-swap swap 1 [SWAP] ├─ArchNVMe-root ext4 1.0 27G 8% /mnt └─ArchNVMe-home ext4 1.0 395.5G 0% /mnt/home cryptomount works, and ls in grub rescue shows all the volumes, but it can't identify their filesystem (error: unknown filesystem), including ArchNVMe-root and nvme0n1p2. On wiki it says that it can happen if BIOS boot partition outside of the first 2TiB. But I didn't create BIOS boot partition because it also says that UEFI systems don't need one. I have tried with BIOS boot partition, it didn't change anything, still getting that error. Thanks in advance.
Thanks to telcoM for their comment -- it appears that I didn't have ext2.mod loaded.
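For anyone landing here with the same error, the fix from the grub rescue prompt looks roughly like this (volume names as in the layout above):

```
grub rescue> insmod lvm
grub rescue> insmod ext2
grub rescue> ls (lvm/ArchNVMe-root)/
```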
GRUB error: unknown filesystem
1,690,742,708,000
My disks (ZFS on Linux on encrypted LUKS) are not staying in standby and I'm not able to identify which process is waking them up. iotop is showing command txg_sync which is related to ZFS. So I tried fatrace. But even with fatrace -c I don't get any output. This is related to ZFS and a known issue. Next try was using the iosnoop script (https://github.com/brendangregg/perf-tools). With this I was only able to identify that dm_crypt is writing when the disks are becoming active again. So it seems I'm not really able to identify the process nor the file which is accessed due to the combination of ZFS and LUKS. What else can I do to identify which process is waking up my drives?
With the following you are able to identify I/O per process: cut -d" " -f 1,2,42 /proc/*/stat | sort -n -k +3
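Field 42 of /proc/&lt;pid&gt;/stat is delayacct_blkio_ticks, the cumulative time the process has spent waiting on block I/O. The cut one-liner above miscounts fields when a comm name (field 2) contains spaces, so a more careful parse splits at the closing paren instead. A sketch, exercised on a fabricated stat line:

```shell
# Extract delayacct_blkio_ticks (field 42) from one /proc/<pid>/stat line.
blkio_ticks() {
  line=$1
  rest=${line##*) }     # drop pid and "(comm)", even if comm has spaces
  set -- $rest          # field 3 is now $1, so field 42 is ${40}
  echo "${40}"
}

# Fabricated stat line: pid 42, comm with a space, state S,
# 38 zero fields, then 777 in position 42
zeros=$(printf '0 %.0s' $(seq 1 38))
fake="42 (kworker u8:3) S ${zeros}777"
blkio_ticks "$fake"     # → 777
```

On a real system you would loop over /proc/[0-9]*/stat and sort by the extracted value.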
How to identify which process is writing on encrypted disk with ZFS
1,604,269,152,000
loop0p2 contains a luks2 that contains an ext4. The question is: how to calculate the minimum size the outer partition needs in order to hold the inner. Given is the size of the inner partition (in the example: 25614K) Device Start End Sectors Size Type /dev/loop0p1 2048 4095 2048 1M Microsoft basic data /dev/loop0p2 4096 87273 83178 40,6M unknown 83178 Sectors (a 512 B) = 42587136 B cryptsetup luksDump /dev/loop0p2 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] 42587136 B - 16760832 B (luks2 overhead) = 25826304 B 25826304 B == 50442 sectors == 25221 KiB dumpe2fs -h /dev/mapper/test| grep -iE '^(block|inode) (count|size)' dumpe2fs 1.45.6 (20-Mar-2020) Inode count: 5232 Block count: 24567 Block size: 1024 Inode size: 128 inner partition: 1285232 + 245671024 = 25156608 B = 24567 KiB resize2fs /dev/mapper/test 25614K resize2fs 1.45.6 (20-Mar-2020) The containing partition (or device) is only 25205 (1k) blocks. You requested a new size of 25614 blocks. 25221 KiB - 25205 KiB = 16 KiB So: the outer partition holds 41589 KiB, the luks overhead is 16368 KiB Makes for the inner partition: 25221 KiB Actual size of the inner partition seems to be: 24567 KiB. The inner partition says, it cannot grow over 25205 KiB. What is in the 16 KiB stored? Did I miscalculate the LUKS overhead? Is that something ext4 related?
The "extra" 16 KiB is secondary LUKS2 header. The "Metadata area: 16384 [bytes]" is size of the header, but you have two on the disk. See LUKS2 On-Disk Format Specification for more details.
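The arithmetic from the question, redone with both header copies included, reproduces the 25205 KiB limit reported by resize2fs:

```shell
sectors=83178
part_bytes=$((sectors * 512))               # 42587136 bytes in the partition
meta=16384                                  # size of ONE LUKS2 header copy
keyslots=16744448
overhead=$((2 * meta + keyslots))           # 16777216, counting both copies
payload_kib=$(( (part_bytes - overhead) / 1024 ))
echo "$payload_kib"                         # → 25205, matching resize2fs
```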
ext4 in luks: how to calculate the minimum 'outer partition's size
1,604,269,152,000
Current setup is two 4TB hard drives in a RAID 1 setup using MDADM in Debian. Would like to encrypt the MD0 mount with whatever encryption is good. I was going to use something like this: sudo cryptsetup luksFormat /dev/md0 sudo cryptsetup luksOpen /dev/md0 secure sudo mkfs.ext4 /dev/mapper/secure 1) From what I understand, this will wipe out my data before encrypting. Is there a way to encrypt my MD0 without losing or moving all the data around? 2) Would the array being encrypted impact the performance of grep/ripgrep commands?
Generally speaking, it's not possible with the classic Linux tools to encrypt in place, that is, to encrypt the device while it has data on it without losing that data. You'd normally copy the data to other media, encrypt the device, and then copy it back. (Recent cryptsetup releases do ship an in-place mode, cryptsetup reencrypt --encrypt, but even then a full backup is essential.) Encryption will affect the performance of all operations on that disk: every block read or written has to be decrypted or encrypted first, which has overhead.
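If copying the data off is not an option, newer cryptsetup (roughly version 2.2 and later) can encrypt an existing filesystem in place. A rough command sketch with illustrative sizes; this is destructive if interrupted, so take a verified backup first:

```shell
# Make room for the LUKS2 header by shrinking the filesystem first
e2fsck -f /dev/md0
resize2fs -M /dev/md0                 # shrink; must free at least 32 MiB

# Encrypt in place, carving 32 MiB off the device for the header
cryptsetup reencrypt --encrypt --reduce-device-size 32M /dev/md0

# Unlock and grow the filesystem back to fill the mapped device
cryptsetup open /dev/md0 secure
resize2fs /dev/mapper/secure
```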
MDADM with Encryption without losing files
1,604,269,152,000
After updating to kernel 5.5.10-200.fc31, Fedora 31 can't decrypt the root file system on boot. After entering the decryption passphrase, the filesystem fails to decrypt. The same happens with kernel 5.5.11. However, if I boot with kernel 5.5.8 there's no problem. These are the error messages I get with 5.5.11 when running 'journalctl': localhost.localdomain systemd-cryptsetup[436]: device-mapper: reload ioctl on failed: Invalid argument localhost.localdomain kernel: device-mapper: table: 253:0: crypt: unknown target type localhost.localdomain kernel: device-mapper: ioctl: error adding target to table localhost.localdomain systemd-cryptsetup[436]: Failed to activate with specified passphrase: Invalid argument localhost.localdomain systemd[1]: systemd-cryptsetup@luks\.... .service: Main process exited, code=exited, status=1/FAILURE localhost.localdomain systemd[1]: systemd-cryptsetup@luks\.... .service: Failed with result 'exit-code'. localhost.localdomain systemd[1]: Failed to start cryptography setup for luks-.... localhost.localdomain systemd[1]: Dependency failed for Local Encrypted Volumes. localhost.localdomain systemd[1]: Job cryptsetup.target/start failed with result 'dependency' I left out the luks ids as I'm typing this up by hand. Any help appreciated!
You should head over to https://bugzilla.redhat.com and report this as a bug. It is very unlikely that we here can help. The only advice (for now) is to delete the oldest offending kernel(s), so you only keep the very last one and one (or two) working kernels; that way an update won't erase a working kernel.
Fedora 31 kernels 5.5.10 and 5.5.11 fail when trying to decrypt luks root filesystem after kernel update, but kernel 5.5.8 works
1,604,269,152,000
Everytime i decrypt my luks drive the partition is not showing up: cryptsetup -v luksOpen /dev/md0 md0_crypt lsblk sdb 8:16 0 3,7T 0 disk └─sdb1 8:17 0 3,7T 0 part └─md0 9:0 0 7,3T 0 raid5 └─md0_crypt 253:11 0 7,3T 0 crypt sdc 8:32 0 3,7T 0 disk └─sdc1 8:33 0 3,7T 0 part └─md0 9:0 0 7,3T 0 raid5 └─md0_crypt 253:11 0 7,3T 0 crypt sdd 8:48 0 3,7T 0 disk └─sdd1 8:49 0 3,7T 0 part └─md0 9:0 0 7,3T 0 raid5 └─md0_crypt 253:11 0 7,3T 0 crypt when i run partprobe partprobe lsblk sdb 8:16 0 3,7T 0 disk └─sdb1 8:17 0 3,7T 0 part └─md0 9:0 0 7,3T 0 raid5 └─md0_crypt 253:11 0 7,3T 0 crypt └─md0_crypt1 253:12 0 7,3T 0 part sdc 8:32 0 3,7T 0 disk └─sdc1 8:33 0 3,7T 0 part └─md0 9:0 0 7,3T 0 raid5 └─md0_crypt 253:11 0 7,3T 0 crypt └─md0_crypt1 253:12 0 7,3T 0 part sdd 8:48 0 3,7T 0 disk └─sdd1 8:49 0 3,7T 0 part └─md0 9:0 0 7,3T 0 raid5 └─md0_crypt 253:11 0 7,3T 0 crypt └─md0_crypt1 253:12 0 7,3T 0 part fdisk: Disk /dev/mapper/md0_crypt: 7,3 TiB, 8001299677184 bytes, 15627538432 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 524288 bytes / 1048576 bytes Disklabel type: gpt Disk identifier: A599A15F-07DA-B340-ADDC-AA56AE2E9249 Device Start End Sectors Size Type /dev/mapper/md0_crypt-part1 2048 15627536383 15627534336 7,3T Linux I want to mount the md0_crypt1 partition every time i boot. But without runing partprobe every time. Did i miss something?
It's unusual to partition LUKS/LVM devices and as such, this is not covered by most standard tools. In fact, keeping partitions hidden on Device Mapper devices is a feature, as Virtual Machines using Logical Volumes as backing devices would usually partition those but NOT want their partitions to show up on the host. I want to mount the md0_crypt1 partition every time i boot. But without runing partprobe every time. You're pretty much stuck with it. And once you snuggle it into your init script somewhere, you won't even notice the difference... (in other words: automate it) Make a backup before trying anything below this point. Also do this from a LiveCD where nothing is mounted. Option 1) You could remove the superfluous partition table altogether (shift all data by 2048s, i.e. the offset of the first and only partition). Highly Dangerous dd command: dd status=progress bs=1M if=/dev/mapper/md0_crypt1 of=/dev/mapper/md0_crypt Note: dd'ing in place like this has to make sure not to overwrite data that has yet to be read, so this wouldn't work in the other direction. Option 2) Converting the partition table to LVM might also be doable (and not require relocating any data), however LVM prefers a larger metadata area these days and also likes to zero and wipe signatures. So you have to be careful to avoid those and make sure the 1st PE starts at 1M, not 2M, or other larger defaults. [ replace /dev/loop0 with /dev/mapper/md0_crypt ] # vgcreate --dataalignment 1M --metadatasize 128K vgname /dev/loop0 [ this step will wipe GPT signature ] # pvs -o +pe_start /dev/loop0 PV VG Fmt Attr PSize PFree 1st PE /dev/loop0 foobar lvm2 a-- 1020.00m 0 1.00m ^^^^^^^ [ 1st PE must be 1.00m (2048s) otherwise abort mission! ] # lvcreate --wipesignatures n --zero n -l100%FREE -n lvname vgname # file -sL /dev/vgname/lvname /dev/vgname/lvname: Linux rev 1.0 ext4 filesystem data [...] 
[ if there's no filesystem, something went wrong ] It can be done but should not be attempted without a full backup.
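One way to automate the workaround: a small oneshot unit that runs partprobe once the crypt device exists (the unit name and device path are illustrative):

```
# /etc/systemd/system/md0-crypt-partprobe.service
[Unit]
Description=Re-read partition table on /dev/mapper/md0_crypt
After=cryptsetup.target
Requires=cryptsetup.target

[Service]
Type=oneshot
ExecStart=/sbin/partprobe /dev/mapper/md0_crypt

[Install]
WantedBy=multi-user.target
```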
Partition not showing up after Luks decryption
1,604,269,152,000
BACKGROUND: I have already created a passphrase list to use with bruteforce-luks. Unfortunately, that list was not enough to find the correct passphrase. I am assuming I missed out a word or 2. I am fairly sure some or all of the words are in the actual passphrase, but I am unsure of the remaining possible 1 to 3 words. REVISED QUESTION: I think between 1 to 3 words are missing. What's the best approach for me to use? I can't seem to remember what those words would be. ORIGINAL QUESTION: How would I go about attempting to use bruteforce-luks across 4 servers? The servers are in a single building, and are currently loaded using a Linux Live Disk. I have a usb stick which is luks encrypted and have forgotten the passphrase. Tried to bruteforce it at home, but my 2 core CPU was only giving me 4 passphrase attempts per second. I'm hoping to make use of the 8 cores per server (32 cores in total) at the same time to attempt to speed up the bruteforce processing.
Solution: Create a program which allows you to type in all the words you usually use in passwords. Run the program, which will do a sort of permutation: not only changing the order of the words, but also building up the number of words used per permutation. Save the generated passphrase text file (it ended up being over 5,000,000 passphrases) Use the 32 cores I had available and split the passphrase text file into 4, one file per server. Use dd to create an additional 3 copies of the USB stick which needs to be brute-forced Plug 1 USB stick into each of the 4 servers, with one fourth of the text file per server. Install and run bruteforce-luks with all cores on all servers, and around 24 hours later, the passphrase was found. Thank you for all the close votes instead of helping. Gotta love all the helpful people in this place. :)
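The word-combination step can be sketched in plain shell (the word list and output filename are made up for illustration; a real candidate list will be far larger):

```shell
#!/bin/sh
# Sketch: emit every ordered combination of candidate words, growing from
# 1-word to 2-word phrases, as input for bruteforce-luks.
words="alpha bravo charlie"   # hypothetical words you usually use
out=wordlist.txt
: > "$out"
for a in $words; do
  echo "$a" >> "$out"          # 1-word candidates
  for b in $words; do
    [ "$a" = "$b" ] && continue
    echo "$a$b" >> "$out"      # 2-word permutations
  done
done
wc -l < "$out"
```

Extending the nesting one level deeper gives the 3-word phrases; split -n l/4 wordlist.txt part_ then produces the four per-server chunks.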
Using cores on multiple servers for bruteforce-luks [closed]
1,604,269,152,000
I have LVM partitions which are encrypted using LUKS. My root partition is /dev/HDD/root: LVM group "HDD" (at /dev/HDD): Encrypted LUKS device "root" at /dev/HDD/root I'm trying to decrypt it via grub parameters, GRUB_CMDLINE_LINUX: dolvm crypt_root=/dev/HDD/root root=/dev/mapper/root root_keydev=UUID=<usb-uuid> root_key=hdd.key The key for LUKS is located on usb-device at /hdd.key But when it is loading, it shows error message: >> Scanning for and activating Volume Groups Reading all physical volumes. This may take a while... Found volume group "HDD" using metadata type lvm 3 logical volume(s) in volume group "HDD" now active >> Using key device /dev/sdc2. >> Removable device /dev/sdc2 mounted. >> hdd.key on device /dev/sdc2 found No key available with this passphrase. !! Failed to open LUKS device /dev/HDD/root !! Could not find the root in /dev/HDD/root. !! Please specify another value or: !! - press Enter for the same !! - type "shell" for a shell !! - type "q" to skip... According to logs the volume group was found, the device with key was mounted successfully, the key on the device was found, but then it failed to open it using LUKS. Then I'm using shell option to fix it manually. Device with the key is mounted to /mnt/key, so I can find it here and open LUKS device with this key: cat /mnt/key/hdd.key | cryptsetup luksOpen /dev/HDD/root root this command opens LUKS device at /dev/mapper/root, so I can exit the shell and press q to skip this error. After that the system is booting successfully. It seems that something is wrong with my grub configuration, because it is possible to open LUKS device and mount it using my key manually, so my question is how to fix it to open LUKS device automatically on boot using grub2?
The issue was that I used the key as a passphrase for LUKS instead of as a keyfile. To convert it to a keyfile, add the key file via cryptsetup and then remove the passphrase entry: cat /boot/hdd.key | cryptsetup luksAddKey -d /boot/hdd.key /dev/root cat /boot/hdd.key | cryptsetup luksRemoveKey /dev/root
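The underlying gotcha can be sketched like this: cryptsetup reads a piped/typed passphrase only up to the first newline, while a keyfile is read byte-for-byte, trailing newline included, so the very same file can derive two different keys (file name is an example):

```shell
#!/bin/sh
# Sketch: the same secret as a keyfile vs. as a passphrase.
printf 'secret\n' > hdd.key
# Keyfile semantics: the file is used verbatim, newline included (7 bytes).
echo "bytes used as a keyfile:    $(wc -c < hdd.key)"
# Passphrase semantics: reading stops at the newline ("secret", 6 bytes);
# command substitution strips the trailing newline the same way.
echo "bytes used as a passphrase: $(printf '%s' "$(cat hdd.key)" | wc -c)"
```

That one-byte difference is why a key that was enrolled via stdin (passphrase semantics) fails when the initramfs later feeds it as a keyfile.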
Failed to open root LUKS device on LVM during boot
1,604,269,152,000
What I have: Solus OS install with an encrypted LVM2 on a 56G SSD w/o swap - works pretty good. I have 32G RAM, so swap isn't an issue right now - it's my future main rig and it is mainly intended to being used as desktop for office, web, daw and rust programming (not everything at the same time). What I want to do: Add two 1T hds formatted with btrfs in a raid 1 configuration to the actual lvm2 volume group and they should contain /home (with all the stuff that's being already there) and being mounted as /home during boot so that I'll have 1T space for /home with software mirroring. The raid level 1 has to be for data and metadata. /home should stay encrypted with the already used key phrase. Also I'd like to mount the btrfs' with -o compression-force that has to be done in fstab and fscrypt. I'm currently unsure whether it was fscrypt or something else sounding similar. What I've understood so far: create the btrfs raid copy everything from /home to the temporary mounted /home-btrfs do some magic to get: /home on ssd gone, unmount /home-btrfs add btrfs-raid to the volume group and mount the btrfs-raid as /home - everything is encrypted again, but with more space Is there anybody who can explain it to me? I'm unsure that I understood it well enough to get started. I am not afraid of the terminal or any cli. I've just decided to opt out of the vendor lock-in of Windows 10 and go for Linux. And I know that I'll get some performance hits with that config but that is okay for me. 
My plan is currently to do this: gparted will create a partition table (gpt) and format /dev/sdb1 with btrfs open the terminal/shell sudo mount /dev/sdb1 /home-btrfs copy everything from /home to /home-btrfs with cp -var /home /home-btrfs gparted will create a partition table (gpt) on /dev/sdc -> /dev/sdc1 btrfs device add /dev/sdc1 /home-btrfs btrfs fi balance start -mconvert=raid1,soft -dconvert=raid1,soft /home-btrfs open a second shell to watch the raid conversion progress btrfs filesystem balance status /home-btrfs btrfs balance start -dusage=0 -musage=0 /mnt/btrfs (get rid of empty chunks) I'm stuck because I can't figure out how to fit lvextend, pvcreate, vgextend and the other lvm2 pieces into my plan. I apologize for my bad grammar. And yes, I've spent quite a lot of time with the search function here and Google but couldn't find the answers I need.
Solus OS uses systemd, therefore /etc/crypttab is used to configure LUKS devices that need to be unlocked so that filesystems can be mounted from them using /etc/fstab. Here's the procedure. Mirroring (raid1) /home with LUKS and BTRFS Using Software Center, install btrfs-progs. Create a LUKS key file which will be stored on your encrypted / and used to unlock the new LUKS containers for /home: sudo dd bs=512 count=4 if=/dev/urandom of=/root/home.key. Create LUKS containers on both devices using the key file: sudo cryptsetup luksFormat /dev/sdb /root/home.key && sudo cryptsetup luksFormat /dev/sdc /root/home.key Unlock both LUKS containers: sudo cryptsetup open --type luks /dev/sdb home0 --key-file /root/home.key && sudo cryptsetup open --type luks /dev/sdc home1 --key-file /root/home.key Create the BTRFS filesystem: sudo mkfs.btrfs -d raid1 -m raid1 /dev/mapper/home0 /dev/mapper/home1 Mount the BTRFS filesystem somewhere (you only need to specify one of the devices): mount /dev/mapper/home0 /mnt Create a /home subvolume, to give you more flexibility with BTRFS: sudo btrfs subvol create /mnt/home Copy your home directory to the subvolume: cp -var /home /mnt Create/modify /etc/crypttab so it unlocks the new LUKS containers (note: sudo doesn't apply to a shell redirection, hence tee): echo "home0 /dev/sdb /root/home.key" | sudo tee -a /etc/crypttab && echo "home1 /dev/sdc /root/home.key" | sudo tee -a /etc/crypttab Modify /etc/fstab so that it mounts your new home: echo "/dev/mapper/home0 /home btrfs defaults,subvol=/home 0 0" | sudo tee -a /etc/fstab Reboot. Your new raid1 BTRFS filesystem will be mounted at /home when you reboot. The remaining item is removing the old /home. To do that: Removing the old /home Reboot but when the systemd-boot menu shows up press the e key. Now you'll be able to edit the kernel command line. Add "systemd.unit=rescue.target" Press ENTER to boot with the added kernel command line so that you boot into single-user mode. That will allow you to un-mount /home. Un-mount /home: umount /home. Remove the old /home.
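For reference, the resulting files should end up looking like this sketch (device names and key path taken from the steps above):

```
# /etc/crypttab
home0  /dev/sdb  /root/home.key
home1  /dev/sdc  /root/home.key

# /etc/fstab (appended line)
/dev/mapper/home0  /home  btrfs  defaults,subvol=/home  0 0
```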
Be careful, I recommend having backups: find /home -mindepth 1 -delete (modern rm refuses to operate on "." so a cd /home && rm -fR . will not work). Reboot. Notice that neither partitions nor LVM are needed since you're using the entire devices for BTRFS only. You also don't need to re-balance BTRFS because it's created with both devices and in RAID1 configuration from the start. In addition, the LUKS containers are unlocked with a key file so that you're not prompted for the passphrase three times. But you may want to add your passphrase into another LUKS slot in case something happens to the key file. Tip Finally, I highly recommend backing up all three LUKS headers. If any of the headers get damaged and you don't have a backup, then you might as well send your disks to the landfill.
How to add a btrfs raid 1 to an encrypted lvm2 volume group under Solus OS (Linux)?
1,604,269,152,000
I am facing booting problem after upgrading kernel to 4.13.3 on CentOS 6 having LUKS encryption (cryptsetup-luks-1.2.0-11.el6.x86_64). I tried compiling the same kernel on another CentOS6 which does not have LUKS volume, that works without any issue. But I am facing issue on the servers having LUKS volume. During booting, the system is asking encryption password and after that there is no progress. I have attached the screenshot. Kindly suggest.
Somehow the dm_crypt and dm_mod modules were not included in the initramfs while running make. The issue was resolved after creating the initramfs with the dm_crypt and dm_mod modules # mkinitrd initramfs-4.13.3.img 4.13.3 --with=dm_crypt --with=dm_mod
System Boot Problem - LUKS+kernel-4.13.3 [closed]
1,604,269,152,000
I've got a second hard drive in my laptop. However, it only mounts itself when I load the GUI and click on the device in Nemo. What I'd like is for it to automount on boot. sudo lsblk -f NAME FSTYPE LABEL UUID MOUNTPOINT sdb ├─sdb2 ntfs BIOS_RVY F61C92C71C9281F3 └─sdb1 crypto_LUKS 3df2999e-9b64-46ec-b634-7986877c57f5 └─luks-3df2999e-9b64-46ec-b634-7986877c57f5 ext4 32c29f17-28fd-4288-8680-2fc62027586a /run/media/bill/32c29f17-28fd-4288-8680-2fc62027586a sr0 sda ├─sda4 ntfs WinRE tools 0CDA8AEEDA8AD2FE ├─sda2 ├─sda5 crypto_LUKS 28c449da-d8ba-42be-8a4e-17822270b7bd │ └─luks-28c449da-d8ba-42be-8a4e-17822270b7bd LVM2_member LQR013-0T1K-E5QL-8sVa-94rN-C8cE-Agfbtn │ ├─fedora-root ext4 047ddca4-cfb8-4307-9c86-a8de31c0bc68 / │ ├─fedora-swap swap 18e032b2-eb2c-485c-97ce-b500c675dfda [SWAP] │ └─fedora-home ext4 19caa2b4-d5a3-4c0d-bd76-c11ec303dd0c /home ├─sda3 ext4 f11b0191-49b9-41c2-a8f2-f26851442b17 /boot └─sda1 vfat SYSTEM 1288-7285 /boot/efi (Ignore the NTFS partitions, these are the recovery partitions from the original OEM WIndows setup, just in case I ever want to restore it to factory.) My fstab is: /dev/mapper/fedora-root / ext4 defaults,x-systemd.device-timeout=0 1 1 UUID=f11b0191-49b9-41c2-a8f2-f26851442b17 /boot ext4 defaults 1 2 UUID=1288-7285 /boot/efi vfat umask=0077,shortname=winnt 0 2 /dev/mapper/fedora-home /home ext4 defaults,x-systemd.device-timeout=0 1 2 /dev/mapper/fedora-swap swap swap defaults,x-systemd.device-timeout=0 0 0 /extraswap none swap sw 0 0 and crypttab is luks-28c449da-d8ba-42be-8a4e-17822270b7bd UUID=28c449da-d8ba-42be-8a4e-17822270b7bd none discard luks-3df2999e-9b64-46ec-b634-7986877c57f5 UUID=3df2999e-9b64-46ec-b634-7986877c57f5 none luks Both drives are encrypted with the same passphrase, and I only have to enter it once during boot. I tried adding the following to fstab /dev/mapper/luks-3df2999e-9b64-46ec-b634-7986877c57f5 /run/media/bill/32c29f17-28fd-4288-8680-2fc62027586a ext4 defaults 0 2 However, it got an error on boot. 
My guess is that it's something to do with LVM and I need to add a reference in there?
As @Thomas points out in comments, the following worked: sudo mkdir /mnt/data Then in fstab: /dev/mapper/luks-3df2999e-9b64-46ec-b634-7986877c57f5 /mnt/data ext4 defaults 0 2
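Equivalently, the fstab entry can reference the filesystem UUID shown by lsblk -f above (a sketch; nofail is optional but keeps boot from failing if the drive is ever absent):

```
# /etc/fstab
UUID=32c29f17-28fd-4288-8680-2fc62027586a  /mnt/data  ext4  defaults,nofail  0 2
```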
Secondary LUKS physical volume won't automount
1,604,269,152,000
So I'm trying to encrypt a partition sda2 while installing Arch Linux. root@archiso ~ # cryptsetup luksFormat /dev/sda2 -c aes-xts-plain -y -v -s 512 -h sha512 But it fails: Cannot format device /dev/sda2 which is still in use. How do I fix this?
If it's in use, you should check whether it's mounted, loop-device'd, still cryptsetup-opened, active in LVM, part of a RAID set, etc. and then stop all these things. Also quit any running processes that might be using the device (partitioners, installers, ddrescue, badblocks, ...). The list of possibilities as to what could be using a device is nearly endless. lsof or fuser are able to catch some of them... # example only, none of these are accurate umount /dev/sda2 losetup -D vgchange -a n cat /proc/mdstat | grep -C 2 sda2 mdadm --stop /dev/md?? ... Or, if you want to deliberately ignore the issue, you could explicitly put a loop device on top and then format the loop device. You should reboot afterwards to see if whatever was still using the device would corrupt your LUKS header because of it (if you can't open it after reboot, that's what happened). Without rebooting, you might be able to copy data happily onto the device but it's all gone later... # dangerous hack cryptsetup luksFormat $(losetup --find --show /dev/sda2) -s 512 -h sha512 ... reboot Also triple-check that you're actually working with the correct device in the first place. In your post you somehow mentioned both sda1 and sda2 so which is it? This is not part of your issue, but aes-xts-plain is deprecated in favour of aes-xts-plain64. It's also the default cipher so you don't have to specify it at all. (see cryptsetup --help or luksDump afterwards.)
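To ask the kernel directly what is stacked on a device, sysfs exposes a holders directory per block device; a small sketch (the device name is just an example):

```shell
#!/bin/sh
# Sketch: list what the kernel says sits on top of a block device
# (dm-crypt mappings, md arrays, LVM PVs, ...). "sda2" is an example.
holders() {
    d="/sys/class/block/$1/holders"
    if [ -d "$d" ] && [ -n "$(ls -A "$d" 2>/dev/null)" ]; then
        ls "$d"      # names of the dm-*/md* devices holding it
    else
        echo "none"
    fi
}
holders sda2
```

If holders is empty and fuser/lsof show nothing, the blocker is usually something outside the process table, such as a stale device-mapper or loop mapping.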
Arch Linux setup Luks encryption
1,604,269,152,000
I've setup a Kali Linux installation on an SD card and created an encrypted LUKS partition with a LVM logical volume inside of it, then created a BTRFS filesystem inside. Almost everything works, but boot fails after decrypting the LUKS volume succeeds. Logs: Begin: Loading essential drivers ... done. Begin: Mounting root file system ... Begin: Running /scrypts/local-top ... [ 8.655803] device-mapper: ioctl: 4.28.0-ioctl (2014-09-17) initialised: [email protected] [ 8.689182] random: lvm urandom read with 113 bits of entropy available Volume group "pi" not found Skipping volume group pi Unable to find LVM volume pi/root Unlocking the disk /dev/mmcblk0p2 (picrypt) Enter passphrase: Reading all physical volumes. This may take a while... Found volume group "pi" using metadata type lvm2 ffff0000-ffff1000 r-xp 00000000 00:00 0 [vectors]: mlock failed: Cannot allocate memory 1 logical volume(s) in volume group "pi" now active ffff0000-ffff1000 r-xp 00000000 00:00 0 [vectors]: munlock failed: Cannot allocate memory cryptsetup: picrypt set up successfully done. Begin: Running /scripts/local-premount ... done. mount: mounting /dev/mapper/pi-root on /root failed: Invalid argument Begin: Running /scripts/local-bottom ... done. done. Begin: Running /scripts/init-bottom ... mount: mounting /de on /root/dev failed: No such file or directory done. Target filesystem doesn't have requested /sbin/init. No init found. Try passing init= bootarg. modprobe: module i8042 not found in modules.dep modprobe: module atkbd not found in modules.dep modprobe: module ehci-pci not found in modules.dep modprobe: module ehci-orion not found in modules.dep modprobe: module ehci-hcd not found in modules.dep modprobe: module uhci-hcd not found in modules.dep modprobe: module ohci-hcd not found in modules.dep BusyBox v1.20.2 (Debian 1:1.20.0-7) built-in shell (ash) Enter 'help' for a list of built-in commands. 
/bin/sh: can't access tty; job control turned off It seems to fail to be able to mount my root filesystem. In the initramfs, I can actually mount the BTRFS partition just fine using all the options given in fstab. It seems to fail initially, for some strange reason that I can't diagnose. In the initramfs, running the following works: mount -t btrfs -o defaults,subvol=@,compress=lzo,ssd,noatime /dev/mapper/pi-root /root Mounting works and I can see the filesystem properly. Here's my /etc/fstab: proc /proc proc defaults 0 0 /dev/mmcblk0p1 /boot vfat defaults 0 2 /dev/mapper/pi-root / btrfs defaults,subvol=@,compress=lzo,ssd,noatime 0 1 Here's my /etc/crypttab: picrypt /dev/mmcblk0p2 none luks Here's my kernel command line: dwc_otg.fiq_fix_enable=1 console=tty1 console=tty1 root=/dev/mapper/pi-root cryptopts=target=picrypt,source=/dev/mmcblk0p2,lvm=pi rootfstype=btrfs rootwait rootdelay=5 ro rootflags=noload,subvol=@ I've made sure that initramfs.gz is up to date. Again, to reiterate, here's my setup: SD Card Boot VFAT FS LUKS Encrypted FS (picrypt) LVM Logical Volume (/dev/mapper/pi-root) BTRFS Filesystem BTRFS Subvolume (subvol=@) I have a fairly identical setup as this on my main laptop, which works fine. During Boot, here's what happens and then what fails: Decrypt LUKS Volume picrypt: Works Open Volume Group pi and LV root: Works Try mounting LV root via fstab: Fails with Invalid argument. Busybox also seems to fail with not being able to access the tty, but that's irrelevant to the problem. How can I debug what's going wrong here?
The error was caused by this parameter in my kernel line: rootflags=noload,subvol=@ This parameter's value is passed directly to mount as filesystem options for mounting the root filesystem. By inserting debugging statements into /scripts/local (generated from /usr/share/initramfs-tools/scripts/local), I was able to determine what went wrong with the mount parameters and was able to then fix my boot command line.
Mounting failed - invalid argument
1,604,269,152,000
I just tried to move /usr to a small ssd I recently purchased. The ssd is formatted with LUKS and btrfs. Apparently systemd fails to start the cryptography target before the partitions are first mounted: Nov 28 16:12:33 laptop systemd[1]: Failed to start Remount Root and Kernel File Systems. Nov 28 16:12:33 laptop systemd[1]: Unit systemd-remount-fs.service entered failed state. Nov 28 16:12:33 laptop systemd[1]: systemd-remount-fs.service failed. Nov 28 16:12:31 laptop systemd-remount-fs[238]: /bin/mount for /usr exited with exit status 1. Nov 28 16:12:33 laptop systemd-remount-fs[238]: mount: UUID=... kann nicht gefunden werden (The last part translates to uuid=xyz can not be found) later: Nov 28 16:12:35 laptop systemd[1]: Starting Cryptography Setup for usr-crypt... Nov 28 16:12:36 laptop systemd-cryptsetup[313]: Set cipher aes, mode xts-plain64, key size 256 bits for device /dev/disk/by-uuid/... Any idea on how to solve this?
Systemd expects a separate /usr partition to be made available by the initramfs. I don't have a setup to reproduce this, but you probably need to follow the wiki on /usr as a separate partition together with enabling the encrypt or sd-encrypt hooks in the HOOKS variable of /etc/mkinitcpio.conf. Both of them allow you to have an encrypted / device, but they can probably be configured to work for /usr as well.
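For reference, a sketch of the relevant /etc/mkinitcpio.conf line (hook order matters: keyboard must come before encrypt so you can type the passphrase, and encrypt before filesystems; add lvm2 after encrypt if LVM sits inside the LUKS container):

```
HOOKS="base udev autodetect modconf block keyboard encrypt filesystems fsck"
```

Regenerate the image afterwards, e.g. with mkinitcpio -p linux.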
LUKS partition is unlocked after first use
1,604,269,152,000
I've been trying to string together the right flags in cryptsetup: cryptsetup -y -h sha512 -c twofish-xts-plain64 -s 512 luksFormat /dev/sdx But it hasn't been working. When I enter the command listed above, nothing happens; it doesn't even prompt me for a password. Trying this command with aes-xts-plain64 doesn't work either. Perhaps I'm doing something wrong? Or maybe there's a different program I could try. At this point I'd be willing to try anything.
With a little help from reddit, I've figured it out. tcplay did the trick. It encrypted my USB flash drive with the following command: tcplay -c -d /dev/sdx -a SHA512 -b TWOFISH-256-XTS tcplay is essentially a TrueCrypt clone. It's available in the Ubuntu repositories. -c tells tcplay to create a new volume. -d specifies the device. -a specifies the hash. -b specifies the cipher. If tcplay gives you problems, you can use eCryptfs. Here's a helpful tutorial.
How do you encrypt a USB drive partition with the Twofish cipher and SHA-512 hash without using TrueCrypt? [closed]
1,386,425,889,000
When I already have a LUKS block device opened using cryptsetup luksOpen, does invoking the command on the same machine with the same arguments including the device name for the second time just do nothing or is doing so unsafe?
If you try to open the device with the same name, cryptsetup will simply tell you that the mapped device already exists. If you try different name, the call will fail because the device is in use: $ sudo cryptsetup luksOpen /dev/sdc1 a Device a already exists. $ sudo cryptsetup luksOpen /dev/sdc1 b Enter passphrase for /dev/sdc1: Cannot use device /dev/sdc1 which is in use (already mapped or mounted).
Is it safe to run cryptsetup luksOpen twice?
1,452,633,174,000
What is the difference between Docker, LXD, and LXC? Do they offer the same services, or different ones?
No, LXC, Docker, and LXD, are not quite the same. In short: LXC LinuX Containers (LXC) is an operating system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (LXC host) https://wiki.archlinux.org/index.php/Linux_Containers low level ... https://linuxcontainers.org/ Docker by Docker, Inc a container system making use of LXC containers so you can: Build, Ship, and Run Any App, Anywhere http://www.docker.com LXD by Canonical, Ltd a container system making use of LXC containers so that you can: run LXD on Ubuntu and spin up instances of RHEL, CentOS, SUSE, Debian, Ubuntu and just about any other Linux too, instantly, ... http://www.zdnet.com/article/ubuntu-lxd-not-a-docker-replacement-a-docker-enhancement/ Docker vs LXD Docker specializes in deploying apps LXD specializes in deploying (Linux) Virtual Machines Source: http://linux.softpedia.com/blog/infographic-lxd-machine-containers-from-ubuntu-linux-492602.shtml Originally: https://insights.ubuntu.com/2015/09/23/infographic-lxd-machine-containers-from-ubuntu/ Minor technical note installing LXD includes a command line program coincidentally named lxc http://blog.scottlowe.org/2015/05/06/quick-intro-lxd/
What is the difference between Docker, LXD, and LXC [closed]
1,452,633,174,000
Are there any notable differences between LXC (Linux containers) and FreeBSD's jails in terms of security, stability & performance? On first look, both approaches look very similar.
No matter the fancy name used here, both are solutions to a specific problem: a better segregation solution than the classic Unix chroot. Operating system-level virtualization, containers, zones, or even "chroot with steroids" are names or commercial titles that define the same concept of userspace separation, but with different features. Chroot was introduced on 18 March 1982, months before the release of 4.2 BSD, as a tool to test its installation and build system, but today it still has its flaws. Since the first objective of chroot was only to provide a new root path, other aspects of the system that needed to be isolated or controlled were left uncovered (network, process view, I/O throughput). This is where the first containers (user-level virtualization) appeared. Both technologies (FreeBSD Jails and LXC) make use of userspace isolation to provide another layer of security. This compartmentalization ensures that a given process communicates only with other processes in the same container on the same host, and if it uses any network resource to achieve "outside world" communication, all of it is forwarded to the interface/channel assigned to that container. Features FreeBSD Jails: Considered stable technology, since it has been a feature of FreeBSD since 4.0; It takes the best of the ZFS filesystem, to the point where you can clone jails and create jail templates to easily deploy more jails. Some more ZFS madness; Well documented, and evolving; Hierarchical jails allow you to create jails inside a jail (we need to go deeper!). Combine with allow.mount.zfs to achieve more power, and other variables like children.max to define the maximum number of child jails.
rctl(8) will handle resource limits of jails (memory, CPU, disk, ...); FreeBSD jails can run a Linux userspace; Network isolation with vnet, allowing each jail to have its own network stack, interfaces, addressing and routing tables; nullfs to help link folders located on the real server to paths inside a jail; the ezjail utility to help with mass deployment and management of jails; Lots of kernel tunables (sysctl). security.jail.allow.* parameters will limit the actions of the root user of that jail. FreeBSD jails may extend some of the VPS project features, like live migration, in the near future. There is ongoing effort on ZFS and Docker integration. Still experimental. FreeBSD 12 supports bhyve inside a jail and pf inside a jail, creating further isolation with those tools. Lots of interesting tools have been developed over the last years. Some of them are indexed in this blog post. Alternatives: FreeBSD VPS project Linux Containers (LXC): Newer "in kernel" technology, endorsed by big players (especially Canonical); Unprivileged containers, starting from LXC 1.0, are a big step for security inside containers; UID and GID mapping inside containers; Kernel namespaces, to separate IPC, mount, pid, network and users. These namespaces can be handled in a detached way, where a process that uses a different network namespace is not necessarily isolated in other aspects like storage; Control Groups (cgroups) to manage and group resources. CGManager is the guy to achieve that. AppArmor/SELinux profiles and kernel capabilities for better enforcing which kernel features are accessible by containers. Seccomp is also available in lxc containers to filter system calls. Other security aspects here. Live migration functionality is being developed. It's really hard to say when it will be ready for production use, since docker/lxc will have to deal with userspace process pause, snapshot, migration and consolidation - ref1, ref2.
Live migration works with basic containers (no device passthrough, nor complex network services or special storage configurations). API bindings enable development in Python 3 and 2, Lua, Go, Ruby and Haskell. A centralized "What's new" area. Pretty useful whenever you need to check if some bug was fixed or a new feature got committed. Here. An interesting alternative could be lxd, which under the hood works with lxc but has some nice features like a REST API, OpenStack integration, etc. Another interesting thing is that Ubuntu seems to be shipping zfs as the default filesystem for containers on 16.04. To keep the projects aligned, lxd launched its 2.0 version, and some of the features are zfs-related. Alternatives: OpenVZ, Docker Docker. Note here that Docker uses namespaces and cgroups, creating "per app"/"per software" isolation. Key differences here. While LXC creates containers with multiple processes, Docker reduces a container as much as possible to a single process and then manages that through Docker. Effort on integrating Docker with SELinux and reducing capabilities inside a container to make it more secure - Docker and SELinux, Dan Walsh What is the difference between Docker, LXD, and LXC Docker no longer uses lxc. They now have a specific lib called runc that handles the integration with low-level kernel namespace and cgroups features directly. Neither technology is a security panacea, but both are pretty good ways to isolate an environment that doesn't require full virtualization due to a mixed operating-systems infrastructure. Security will come after a lot of documentation reading and implementation of the kernel tunables, MAC and isolation features that these OS-level virtualization solutions offer. See Also: Hand-crafted containers BSD Now: Everything you need to know about Jails ezjail – Jail administration framework A Brief History of Containers: From the 1970s to 2017 Docker Considered Harmful - Good article about the security circus around container technologies.
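The unprivileged-container point above boils down to an ID map in the container config; a sketch (the 100000/65536 range mirrors common /etc/subuid and /etc/subgid defaults, so adjust to yours; newer LXC releases spell the key lxc.idmap):

```
# /etc/lxc/default.conf (LXC 1.x syntax)
# map container root (uid/gid 0) onto an unprivileged host range
lxc.id_map = u 0 100000 65536
lxc.id_map = g 0 100000 65536
```

With this in place, a root process inside the container is uid 100000 on the host, so a container escape lands in an unprivileged account.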
Linux LXC vs FreeBSD jail
1,452,633,174,000
I'm trying to mount a folder on the host to an LXC container. The host has a folder /mnt/ssd/solr_data created (this is currently on the root filesystem, but later I'll mount an SSD drive there, so I'm prepping for that). I want that folder to mount as /data in the container. So in the containers fstab file I have the following: /mnt/ssd/solr_data /var/lib/lxc/Solr4StandAlone/rootfs/data ext4 defaults,noatime 0 0 But that's a no-go, I get this error starting the container: lxc-start: No such file or directory - failed to mount '/mnt/ssd/solr_data' on '/usr/lib/x86_64-linux-gnu/lxc//data' lxc-start: failed to setup the mounts for 'Solr4StandAlone' lxc-start: failed to setup the container lxc-start: invalid sequence number 1. expected 2 lxc-start: failed to spawn 'Solr4StandAlone'
To create the directory automatically in the container, you can also add the create=dir option in the fstab : /mnt/ssd/solr_data /var/lib/lxc/Solr4StandAlone/rootfs/data none bind,create=dir Edit : this is specific to LXC. See this thread Just like we already had "optional", this adds two new LXC-specific mount flags: create=dir (will do a mkdir_p on the path) create=file (will do a mkdir_p on the dirname + a fopen on the path) This was motivated by some of the needed bind-mounts for the unprivileged containers.
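The same bind mount can also live in the container's main config file instead of its fstab; a sketch (paths taken from the question, with the target path written relative to the rootfs as LXC expects):

```
# /var/lib/lxc/Solr4StandAlone/config
lxc.mount.entry = /mnt/ssd/solr_data data none bind,create=dir 0 0
```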
LXC: How do I mount a folder from the host to the container?
1,452,633,174,000
I'm running some services inside of Docker LXC containers on my server and I'm beginning to actually do serious things with them. One thing I'm not clear on is how user permissions work inside and outside of the container. If, for example, I'm running MySQL in a container and have its data directory set to /data, which is a Docker volume, how do permissions inside and outside of the container affect access policies? Obviously, the idea is to run MySQL as its own user in the container (ie mysql:mysql) and give it read and write rights to that directory. I assume that this would be fairly straightforward, just chmoding the directory, etc. But how does this work outside of the container? Now that I have this Docker shared volume called 'data,' how do I manage access control to it? I'm specifically looking to be able to run an unprivileged user outside of the Docker container which will periodically access the MySQL shared volume and backup the data. How can I setup permissions, users, and groups so that a specific user on the host can read/write files and folders in the Docker shared volume?
Since the release of 0.9 Docker has dropped LXC and uses its own execution environment, libcontainer. Your question's a bit old but I guess my answer still applies to the version you are using. Quick answer: To understand the permissions of volumes, you can take the analogy of mount --bind Host-Dir Container-Dir. So to fulfill your requirement you can use any traditional method for managing permissions. I guess ACLs are what you need. Long answer: So as in your example we have a container named dock with a volume /data. docker run -tid --name dock -v /usr/container/Databases/:/data \ centos:latest /bin/bash Inside the container our MySQL server has been configured to use /data as its data directory. So we have our databases in /data inside the container. And outside the container on the host OS, we have mounted this /data volume from /usr/container/Databases/ and we assign a normal user bob to take backups of the databases. From the host machine we'll configure ACLs for user bob. useradd -u 3000 bob chmod -R o-rwx /usr/container/Databases/ setfacl -R -m u:bob:rwx /usr/container/Databases/ setfacl -R -d -m u:bob:rwx /usr/container/Databases/ To test it out let's take a backup with user bob. su - bob tar -cvf container-data.tar /usr/container/Databases/ tar will list the files and you can see that our user was able to access all of them. Now from inside the container if you check with getfacl you will notice that instead of bob it shows 3000. This is because the UID of bob is 3000 and there is no such user in the container, so it simply displays the UID it receives from the metadata. Now if you create a user in your container with useradd -u 3000 bob you will notice that getfacl now shows the name bob instead of 3000. Summary: So the user permissions you assign from either inside or outside the container are reflected in both environments. So to manage the permissions of volumes, UIDs on the host machine must be coordinated with the UIDs in the container.
User permissions inside and outside of LXC containers?
1,452,633,174,000
This is stated in the man page for systemd-nspawn Note that even though these security precautions are taken systemd-nspawn is not suitable for secure container setups. Many of the security features may be circumvented and are hence primarily useful to avoid accidental changes to the host system from the container. The intended use of this program is debugging and testing as well as building of packages, distributions and software involved with boot and systems management. This very question was subsequently asked on the mailing list in 2011, but the answer seems to be outdated. systemd-nspawn contains code to execute CLONE_NEWNET using the --private-network option now. This seems to cover the private AF_UNIX namespace issue, and I guess the CAP_NET_RAW and CAP_NET_BIND issues mentioned. What issues remain at this point and what does for example LXC do in addition to what systemd-nspawn can currently do?
LXC is a little bit better because it can run containers as unprivileged users. This is possible with systemd-nspawn, but only for scenarios where you only need one user (instead of multiple), which can be difficult or less secure for multi-process workloads in a container. If you want to know why docker, lxc, and systemd-nspawn are inherently not a solid security mechanism, read this: https://opensource.com/business/14/7/docker-security-selinux. Basically, containers still have access to the kernel and any kernel exploit gains control of the entire machine. On a monolithic kernel like Linux, kernel exploits are not uncommon.
What makes systemd-nspawn still "unsuitable for secure container setups"?
1,452,633,174,000
Is it currently possible to set up LXC containers with X11 capabilities? I'm looking for the lightest available X11 container (memory-wise); hardware acceleration is a plus but not essential. If it is not currently possible, or readily available, is it known what functionality still needs to be implemented in order to support it?
Yes, it is possible to run a complete X11 desktop environment inside an LXC container. Right now, I do this on Arch Linux. I won't say it's "light" as I haven't gone as far as trying to strip out stuff from the standard package manager install, but I can confirm that it does work very well. You have to install any kernel drivers on the HOST as well as in the container. Such things as the graphics driver (I use nvidia). You have to make the device nodes in /dev accessible inside the container by configuring your container.conf to allow it. You then need to make sure that those device nodes are created inside the container (i.e. mknod). So, to answer your question: YES, it does work. If I can help any further or provide more details please do let me know. --- additional information provided --- In my container... /etc/inittab starts in run level 5 and launches "slim" Slim is configured to use vt09: # Path, X server and arguments (if needed) # Note: -xauth $authfile is automatically appended default_path /bin:/usr/bin:/usr/local/bin default_xserver /usr/bin/X xserver_arguments -nolisten tcp vt09 I am not using a second X display on my current vt, but a completely different one (I can switch between many of these using CTRL+ALT+Fn). If you aren't using slim, you can use a construct like this to start X on another vt: /usr/bin/startx -- :10 vt10 That will start X on display :10 and put it on vt10 (CTRL+ALT+F10). These don't need to match but I think it's neater if they do.
You do need your container config to make the relevant devices available, like this: # XOrg Desktop lxc.cgroup.devices.allow = c 4:10 rwm # /dev/tty10 X Desktop lxc.cgroup.devices.allow = c 195:* rwm # /dev/nvidia Graphics card lxc.cgroup.devices.allow = c 13:* rwm # /dev/input/* input devices And you need to make the devices in your container: # display vt device mknod -m 666 /dev/tty10 c 4 10 # NVIDIA graphics card devices mknod -m 666 /dev/nvidia0 c 195 0 mknod -m 666 /dev/nvidiactl c 195 255 # input devices mkdir /dev/input # input devices chmod 755 /dev/input mknod -m 666 /dev/input/mice c 13 63 # mice I also manually configured input devices (since we don't have udev in the container) Section "ServerFlags" Option "AutoAddDevices" "False" EndSection Section "ServerLayout" Identifier "Desktop" InputDevice "Mouse0" "CorePointer" InputDevice "Keyboard0" "CoreKeyboard" EndSection Section "InputDevice" Identifier "Keyboard0" Driver "kbd" Option "XkbLayout" "gb" EndSection Section "InputDevice" Identifier "Mouse0" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/input/mice" Option "ZAxisMapping" "4 5 6 7" EndSection The above goes in the file /etc/X11/xorg.conf.d/10-input.conf Not sure if any of that will help, but good luck!
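As a side note, each lxc.cgroup.devices.allow rule and mknod call above identifies a character device by a (major, minor) pair. The correspondence can be checked with Python's os.makedev/os.major/os.minor (a small sketch using the device numbers from the config above):

```python
import os

# Character devices named in the container config above, as (major, minor).
devices = {
    "/dev/tty10":      (4, 10),    # vt for the X display
    "/dev/nvidia0":    (195, 0),   # graphics card
    "/dev/nvidiactl":  (195, 255),
    "/dev/input/mice": (13, 63),   # mice
}

for path, (major, minor) in devices.items():
    dev = os.makedev(major, minor)  # the dev_t value mknod would create
    # os.major/os.minor invert the packing
    assert os.major(dev) == major and os.minor(dev) == minor
    print(path, "->", dev)
```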
Linux - LXC; deploying images with tiniest possible X11
1,452,633,174,000
I'm exploring the LXC features in Ubuntu 12.04 and I really want to set up a network like this: client1: 192.168.56.101/24 lxc-host: 192.168.56.102/24 guest1 192.168.56.201/24 guest2 192.168.56.202/24 guest3 192.168.56.203/24 I just want a "flat" network where the guests have full access to the LAN and are visible from the clients. I'm used to bridged networking with libvirt/KVM, as described here: http://libvirt.org/formatdomain.html#elementsNICSBridge On the host: # /etc/network/interfaces auto br0 iface br0 inet static address 192.168.56.102 netmask 255.255.255.0 broadcast 192.168.56.255 bridge_ports eth1 lxc.conf for the first guest: # /var/lib/lxc/guest1/config: lxc.network.type=veth lxc.network.link=br0 lxc.network.flags=up lxc.network.hwaddr=00:16:3e:13:48:4e lxc.network.ipv4=192.168.56.201/24 It looks like 192.168.56.201 is invisible to the outside world, which isn't what I want. Seems like I have to do one of these things: 1) Manually set up routing on the host and guest 2) Do something hokey... create virtual interfaces on the host ahead of time, and configure the guests to use them with lxc.network.type=phys. I don't know if that would actually work. I'm focused on Ubuntu, but answers for RHEL/Fedora would be useful too.
This is pretty much right, though you're missing a line like this: lxc.network.ipv4.gateway = X.X.X.X I have an LXC guest running on Debian. First, you set up the host bridge (the easy way), in /etc/network/interfaces: auto wan iface wan inet static address 72.X.X.X netmask 255.255.255.0 gateway 72.X.X.1 bridge_ports wan_phy # this line is important. bridge_stp off bridge_fd 2 bridge_maxwait 20 In your case, you've called it br0, and I've called it wan. The bridge can be called anything you want. Get this working first; if it fails, investigate with e.g. brctl. Then your LXC config is set up to join that bridge: lxc.utsname = FOO lxc.network.type = veth lxc.network.link = wan # remember, this is what I call my bridge lxc.network.flags = up lxc.network.name = v-wan # optional, I believe lxc.network.ipv4 = 72.X.X.Y/24 # different IP than the host lxc.network.ipv4.gateway = 72.X.X.1 # same as on the host As HoverHell notes, someone with root in the container can change the IP address. Yep. It's a bridge (aka an Ethernet switch). If you want to prevent that, you can use firewall rules on the host; at least in my case, the packets need to go through the host's iptables.
How to configure external IP addresses for LXC guests?
1,452,633,174,000
The technical explanation of what is unprivileged container is quite good. However, it is not for ordinary PC user. Is there a simple answer when and why people should use unpriviliged containers, and what are their benefits and downsides?
Running unprivileged containers is the safest way to run containers in a production environment. Containers get bad publicity when it comes to security, and one of the reasons is that some users have found that if a user gets root in a container then there is a possibility of gaining root on the host as well. Basically what an unprivileged container does is mask the userid from the host. With unprivileged containers, non-root users can create containers; processes appear as root inside the container but appear as, for example, userid 100000 on the host (whatever you map the userids to). I recently wrote a blog post on this based on Stephane Graber's blog series on LXC (one of the brilliant minds/lead developers of LXC and someone to definitely follow). I say again, extremely brilliant. From my blog: From the container: lxc-attach -n ubuntu-unprived root@ubuntu-unprived:/# ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 04:48 ? 00:00:00 /sbin/init root 157 1 0 04:48 ? 00:00:00 upstart-udev-bridge --daemon root 189 1 0 04:48 ? 00:00:00 /lib/systemd/systemd-udevd --daemon root 244 1 0 04:48 ? 00:00:00 dhclient -1 -v -pf /run/dhclient.eth0.pid syslog 290 1 0 04:48 ? 00:00:00 rsyslogd root 343 1 0 04:48 tty4 00:00:00 /sbin/getty -8 38400 tty4 root 345 1 0 04:48 tty2 00:00:00 /sbin/getty -8 38400 tty2 root 346 1 0 04:48 tty3 00:00:00 /sbin/getty -8 38400 tty3 root 359 1 0 04:48 ? 00:00:00 cron root 386 1 0 04:48 console 00:00:00 /sbin/getty -8 38400 console root 389 1 0 04:48 tty1 00:00:00 /sbin/getty -8 38400 tty1 root 408 1 0 04:48 ? 00:00:00 upstart-socket-bridge --daemon root 409 1 0 04:48 ? 00:00:00 upstart-file-bridge --daemon root 431 0 0 05:06 ? 00:00:00 /bin/bash root 434 431 0 05:06 ?
00:00:00 ps -ef From the host: lxc-info -Ssip --name ubuntu-unprived State: RUNNING PID: 3104 IP: 10.1.0.107 CPU use: 2.27 seconds BlkIO use: 680.00 KiB Memory use: 7.24 MiB Link: vethJ1Y7TG TX bytes: 7.30 KiB RX bytes: 46.21 KiB Total bytes: 53.51 KiB ps -ef | grep 3104 100000 3104 3067 0 Nov11 ? 00:00:00 /sbin/init 100000 3330 3104 0 Nov11 ? 00:00:00 upstart-udev-bridge --daemon 100000 3362 3104 0 Nov11 ? 00:00:00 /lib/systemd/systemd-udevd --daemon 100000 3417 3104 0 Nov11 ? 00:00:00 dhclient -1 -v -pf /run/dhclient.eth0.pid -lf /var/lib/dhcp/dhclient.eth0.leases eth0 100102 3463 3104 0 Nov11 ? 00:00:00 rsyslogd 100000 3516 3104 0 Nov11 pts/8 00:00:00 /sbin/getty -8 38400 tty4 100000 3518 3104 0 Nov11 pts/6 00:00:00 /sbin/getty -8 38400 tty2 100000 3519 3104 0 Nov11 pts/7 00:00:00 /sbin/getty -8 38400 tty3 100000 3532 3104 0 Nov11 ? 00:00:00 cron 100000 3559 3104 0 Nov11 pts/9 00:00:00 /sbin/getty -8 38400 console 100000 3562 3104 0 Nov11 pts/5 00:00:00 /sbin/getty -8 38400 tty1 100000 3581 3104 0 Nov11 ? 00:00:00 upstart-socket-bridge --daemon 100000 3582 3104 0 Nov11 ? 00:00:00 upstart-file-bridge --daemon lxc 3780 1518 0 00:10 pts/4 00:00:00 grep --color=auto 3104 As you can see, processes run as root inside the container but appear on the host not as root but as UID 100000. So to sum up: Benefits - added security and added isolation. Downside - a little confusing to wrap your head around at first, and not for the novice user.
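The 100000 offset visible in the host's ps output is just an arithmetic shift defined by the container's UID map (e.g. an idmap line of the form "0 100000 65536"). A hedged sketch of that mapping follows; the helper is hypothetical, and the real mapping lives in /etc/subuid and the container config:

```python
def host_uid(container_uid, base=100000, count=65536):
    """Host UID that a container UID appears as, for a single
    contiguous idmap of the form: 0 <base> <count>."""
    if not 0 <= container_uid < count:
        raise ValueError("UID not covered by the map")
    return base + container_uid

print(host_uid(0))    # container root appears as 100000 on the host
print(host_uid(102))  # the container's syslog user (102) appears as 100102
```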
What are benefits and downsides of unprivileged containers?
1,452,633,174,000
I am trying to hunt down information on why a network interface name would have an at sign, but there's too much noise in the results I am getting so far (I lack the correct terminology to search on). I have an LXC container on an Ubuntu host. Inside the container I run and get: # ip a 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 9: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 00:16:3e:37:a0:7a brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.0.3.195/24 brd 10.0.3.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::216:3eff:fe37:a07a/64 scope link valid_lft forever preferred_lft forever Note the eth0@if10. What is this @ portion called / referring to? On the host there is no such if10; another container I have has an eth0@if8. I must assume this is part of LXC's/containers' handling of network translations somehow, but I had not noticed this existing previously, and wonder if it's a complement to bridging that might exist in other scenarios?
eth0@if10 means: your network interface is still named simply eth0, and network tools and apps can only refer to this name, without the @ appendix. (As a sidenote, this is most probably a veth peer, but the name does not need to reflect this.) @if10 means: eth0 is connected to some (other) network interface with index 10 (decimal). Since there is also a link-netnsid 0 shown, this other network interface is in another network namespace (a kind of virtual IP stack), presumably the root (a.k.a. host) network namespace. If you use ip link show in your host, and not in your container, then one of the network interfaces listed there should have an @if9 appendix; the interface name will probably start with veth.... This interface is the peer to the eth0@if10 interface you asked about. Veth interfaces come in pairs connected to each other, like a virtual cable. So, the @... is an appendix created by the ip tool, and it is not part of Linux' network interface names. The number after the @ refers to another network interface with the index number that is shown after the @. The index numbers are printed before the network interface names, such as in 9: eth0@if10. The peer network interface can be in a different network namespace. Unfortunately, finding the correct network namespace for the link-netnsid .. is rather involved; see how to find the network namespace of a veth peer ifindex. UPDATE 2023: Siemens has now released Edgeshark as OSS that provides a nice graphical web UI rendering the relationships of network interfaces in containers, the host, et cetera.
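Splitting such a label mechanically makes the convention clear. A small sketch (the regex is illustrative, not taken from iproute2):

```python
import re

def split_label(label):
    """Split an 'ip link' label such as 'eth0@if10' into the real
    interface name and the peer interface index (None if absent)."""
    m = re.fullmatch(r"(?P<name>[^@]+)(?:@if(?P<peer>\d+))?", label)
    if m is None:
        raise ValueError("unrecognised label: %r" % label)
    peer = m.group("peer")
    return m.group("name"), int(peer) if peer is not None else None

print(split_label("eth0@if10"))  # ('eth0', 10)
print(split_label("lo"))         # ('lo', None)
```

Only the part before the @ is a real interface name; the if10 part points at the peer's ifindex in its own namespace.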
Network interface name with at sign - what is it?
1,452,633,174,000
I use unprivileged lxc containers in Arch Linux. Here are the basic system infos: [chb@conventiont ~]$ uname -a Linux conventiont 3.17.4-Chb #1 SMP PREEMPT Fri Nov 28 12:39:54 UTC 2014 x86_64 GNU/Linux It's a custom/compiled kernel with user namespace enabled: [chb@conventiont ~]$ lxc-checkconfig --- Namespaces --- Namespaces: enabled Utsname namespace: enabled Ipc namespace: enabled Pid namespace: enabled User namespace: enabled Network namespace: enabled Multiple /dev/pts instances: enabled --- Control groups --- Cgroup: enabled Cgroup clone_children flag: enabled Cgroup device: enabled Cgroup sched: enabled Cgroup cpu account: enabled Cgroup memory controller: enabled Cgroup cpuset: enabled --- Misc --- Veth pair device: enabled Macvlan: enabled Vlan: enabled File capabilities: enabled Note : Before booting a new kernel, you can check its configuration usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig [chb@conventiont ~]$ systemctl --version systemd 217 +PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID -ELFUTILS +KMOD +IDN Unfortunately, systemd does not play well with lxc currently. Especially setting up cgroups for a non-root user seems to be working not well or I am just too unfamiliar how to do this. lxc will only start a container in unprivileged mode when it can create the necessary cgroups in /sys/fs/cgroup/XXX/*. This however is not possible for lxc because systemd mounts the root cgroup hierarchy in /sys/fs/cgroup/*. A workaround seems to be to do the following: for d in /sys/fs/cgroup/*; do f=$(basename $d) echo "looking at $f" if [ "$f" = "cpuset" ]; then echo 1 | sudo tee -a $d/cgroup.clone_children; elif [ "$f" = "memory" ]; then echo 1 | sudo tee -a $d/memory.use_hierarchy; fi sudo mkdir -p $d/$USER sudo chown -R $USER $d/$USER echo $$ > $d/$USER/tasks done This code creates the corresponding cgroup directories in the cgroup hierarchy for an unprivileged user. 
However, something which I don't understand happens. Before executing the aforementioned I will see this: [chb@conventiont ~]$ cat /proc/self/cgroup 8:blkio:/ 7:net_cls:/ 6:freezer:/ 5:devices:/ 4:memory:/ 3:cpu,cpuacct:/ 2:cpuset:/ 1:name=systemd:/user.slice/user-1000.slice/session-c1.scope After executing the aforementioned code I see in the shell I ran it in: [chb@conventiont ~]$ cat /proc/self/cgroup 8:blkio:/chb 7:net_cls:/chb 6:freezer:/chb 5:devices:/chb 4:memory:/chb 3:cpu,cpuacct:/chb 2:cpuset:/chb 1:name=systemd:/chb But in any other shell I still see: [chb@conventiont ~]$ cat /proc/self/cgroup 8:blkio:/ 7:net_cls:/ 6:freezer:/ 5:devices:/ 4:memory:/ 3:cpu,cpuacct:/ 2:cpuset:/ 1:name=systemd:/user.slice/user-1000.slice/session-c1.scope Hence, I can start my unprivileged lxc container in the shell I executed the code mentioned above but not in any other. Can someone explain this behaviour? Has someone found a better way to set up the required cgroups with a current version of systemd (>= 217)?
A better and safer solution is to install cgmanager and run it with systemctl start cgmanager (on a systemd-based distro). You can then have your root user, or yourself if you have sudo rights on the host, create cgroups for your unprivileged user in all controllers with: sudo cgm create all $USER sudo cgm chown all $USER $(id -u $USER) $(id -g $USER) Once they have been created for your unprivileged user, she/he can move processes she/he has access to into her/his cgroup for every controller by using: cgm movepid all $USER $PPID Safer, faster, more reliable than the shell script I posted. Manual solution: To answer 1. for d in /sys/fs/cgroup/*; do f=$(basename $d) echo "looking at $f" if [ "$f" = "cpuset" ]; then echo 1 | sudo tee -a $d/cgroup.clone_children; elif [ "$f" = "memory" ]; then echo 1 | sudo tee -a $d/memory.use_hierarchy; fi sudo mkdir -p $d/$USER sudo chown -R $USER $d/$USER echo $$ > $d/$USER/tasks done I was ignorant about what was going on exactly when I wrote that script, but reading the cgroups documentation and experimenting a bit helped me to understand what is going on. What I am basically doing in this script is to create a new cgroup session for the current user, which is what I already stated above. When I run these commands in the current shell, or run them in a script and make it so that it gets evaluated in the current shell and not in a subshell (via . script; the . is important for this to work!), I not only open a new session for the user but also add the current shell as a process that runs in this new cgroup. I can achieve the same effect by running the script in a subshell, then descending into the chb subcgroup of the cgroup hierarchy and using echo $$ > tasks to add the current shell to every member of the chb cgroup hierarchy. Hence, when I run lxc in that current shell my container will also become a member of all the chb subcgroups that the current shell is a member of. That is to say, my container inherits the cgroup status of my shell.
This also explains why it doesn't work in any other shell that is not part of the current chb subcgroups. I still pass on 2. We'll probably need to wait either for a systemd update or further kernel developments to make systemd adopt a consistent behaviour, but I prefer the manual setup anyway as it forces you to understand what you're doing.
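The before/after difference in the /proc/self/cgroup outputs above is easier to see once parsed. A short sketch of a parser for the cgroup-v1 format shown there (one id:controllers:path record per line):

```python
def parse_cgroup(text):
    """Parse cgroup-v1 /proc/self/cgroup content into {controller: path}."""
    mapping = {}
    for line in text.strip().splitlines():
        _, controllers, path = line.split(":", 2)  # the path may contain colons
        for ctrl in controllers.split(","):
            mapping[ctrl] = path
    return mapping

# Condensed versions of the two outputs quoted above.
before = "4:memory:/\n3:cpu,cpuacct:/\n2:cpuset:/"
after  = "4:memory:/chb\n3:cpu,cpuacct:/chb\n2:cpuset:/chb"

b, a = parse_cgroup(before), parse_cgroup(after)
moved = sorted(c for c in a if b[c] != a[c])
print(moved)  # every controller now points at the user's subcgroup
```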
How to create user cgroups with systemd
1,452,633,174,000
Using instructions for Docker auto-start on Linode VPS running Ubuntu 12.04 and Docker 0.8.1, the specified container does not start on reboot. Once booted, I am able to ~$ sudo start [service-name] and everything goes as planned, but I would also like the container to restart after a reboot. Is the script in the tutorial not designed to handle reboots? /etc/default/docker file contains one line: DOCKER_OPTS="-r=false" /etc/init/service-name.conf is straight from the docker page: description "service description" author "me" start on filesystem and started docker stop on runlevel [!2345] respawn script # Wait for docker to finish starting up first. FILE=/var/run/docker.sock while [ ! -e $FILE ] ; do inotifywait -t 2 -e create $(dirname $FILE) done /usr/bin/docker start -a db5e61a9afa8 end script
At some point over the past couple of months, the upstart script in the tutorial was changed to remove the loop to wait for docker to start. I removed the loop from my upstart scripts and my containers now restart correctly after a reboot. My /etc/init/service-name.conf script now looks like this: description "service description" author "me" start on filesystem and started docker stop on runlevel [!2345] respawn script /usr/bin/docker start -a db5e61a9afa8 end script I'm not sure what was wrong with that loop. Maybe it was pointing to the wrong file on my system, although I didn't make any changes to the default docker install. For now, I'm just happy the fix involved code removal instead of some complicated work-around.
Why doesn't docker container start at boot w/ upstart script on Ubuntu 12.04?
1,452,633,174,000
I'm trying to set up my Linux machine to run multiple guest OSes, one of those being a Windows VM, and another a Linux container. The goal here is to prevent me from messing up the host system, while being free to operate the base operating system and play with the host hardware. Eventually, on top of running my desktop in the container, I hope to run graphics-accelerated simulations, etc. Since Docker has such nice git-like versioning of containers built-in, it seemed like a good idea to use it. Perhaps libvirt would do just as good with LXC, but docker's privileged mode makes it easier to not have to configure devices to the container. I've done a little research and come up with a few answers already, but I'm having trouble putting it all together. Background in LXC Running X from LXC helped me to see how I can configure a container with (i.e.): lxc.cgroup.devices.allow = c 226:0 rwm and using mknod -m 666 dri/card0 c 226 0 inside the container to connect to the host device. Docker From cuda - Using GPU from a docker container, I saw that I can get the same setup to work in Docker with the LXC backend. It appeared to me that if a docker container is run in privileged mode, then it can access the GPU normally without this extra configuration. So, I fired up a base system, installed graphics drivers, xorg-server, xorg-xinit, and a window manager to test it out. First try # startx Cannot run from a console (or some message like that) Okay, I thought I was on tty2. # tty /dev/console That's not what I expected. # chvt 2 # tty /dev/tty2 Well, it appears as if that worked. Let's try # startx again. It started the window manager, with the cursor in the center. No mouse response. No keyboard response. Let's try to change the tty with Ctrl-Alt+F3. No response. Well, it looks like I'll have to reboot cold. Second try # tty /dev/console # chvt 2 # tty /dev/console What? I can't change it now? 
Continued After trying another time, I got it to change tty, and startx froze the computer again. What now? So, I'm now at an impasse. I really want to be able to use a container - Docker preferred, LXC with libvirt is also acceptable - to run as my daily operating system while keeping a lean host OS. Is it best to use Docker with privileged mode here, or to use the explicit LXC backend and try the options listed above? I am already planning on using libvirt (possibly under vagrant-libvirt) to manage my Windows VM, so would it be about the same for me to use libvirt or vagrant-LXC in this case? Edit: reading LXC vs. Docker, I get the feeling that since Docker and Docker containers are meant for single-application environments, perhaps it would be best to use LXC instead of Docker to run as my daily operating system. Thoughts? Edit: I've discovered that, like docker, there is an lxc-device command which allows me to bypass the cgroups and mknod steps. Whereas before I was able to get X to start and freeze my system, it just errors out now. Perhaps I can figure this out eventually, since no one seems to be out there. Update: I have the mouse working. On the guest, I installed xf86-input-mouse and xf86-input-keyboard. On the host, I ran the following: # lxc-device -n g1 add /dev/input/mice # lxc-device -n g1 add /dev/dri/card0 # lxc-device -n g1 add /dev/dri/controlD64 # lxc-device -n g1 add /dev/dri/renderD128 # lxc-device -n g1 add /dev/fb0 # lxc-device -n g1 add /dev/tty2 Works!
This question had the answer that I needed. Of course, I used lxc-device instead of cgroup definitions in the config file. However, in my case, I have only gotten the keyboard to work in X if I start it on a different tty.
docker - how to run x desktop in a container?
1,452,633,174,000
In FreeBSD 4.9 it was very easy to accomplish with just a single command like jail [-u username] path hostname ip-number command If path was /, you were running just the same program as usual, but all its network communication was restricted to use only the given IP address as the source. Sometimes it's very handy. Now in Linux there's LXC, which does look very similar to FreeBSD's jail (or Solaris' zones): can you think of a similar way to execute a program?
Starting the process inside a network namespace that can only see the desired IP address can accomplish something similar. For instance, suppose I only wanted localhost available to a particular program. First, I create the network namespace: ip netns add limitednet Namespaces have a loopback interface by default, so next I just need to bring it up: sudo ip netns exec limitednet ip link set lo up Now, I can run a program using ip netns exec limitednet and it will only be able to see the loopback interface: sudo ip netns exec limitednet ip addr show 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever If I wanted to limit it to an address other than localhost, I could add other interfaces into the namespace using: ip link set DEVICE_NAME netns NAMESPACE I'd have to experiment a bit more to figure out how to add a single IP address into a namespace in the case where an interface might have more than one IP address. The LWN article on namespaces is also helpful.
Linux: Is there handy way to exec a program binding it to IP-address of choice?
1,452,633,174,000
I have a Docker container (LXC) which runs MySQL. Since the idea behind Docker is generally "one running process per container," if I define AppArmor profiles targeting the MySQL binary, will they be enforced? Is there a way for me to test for this?
First, cgroups are not used to isolate an application from others on a system. They are used to manage resource usage and device access. It's the various namespaces (PID, UTS, mount, user...) that provide some (limited) isolation. Moreover, a process launched inside a Docker container will probably not be able to manage the AppArmor profile it is running under. The approach currently taken is to setup a specific AppArmor profile before launching the container. It looks like the libcontainer execution driver in Docker supports setting AppArmor profiles for containers, but I can't find any example or reference in the doc. Apparently AppArmor is also supported with LXC in Ubuntu. You should write an AppArmor profile for your application and make sure LXC/libcontainer/Docker/... loads it before starting the processes inside the container. Profiles used this way should be enforced, and to test it you should try an illegal access and make sure it fails. There is no link between the binary and the actually enforced profile in this case. You have to explicitly tell Docker/LXC to use this profile for your container. Writing a profile for the MySQL binary will only enforce it on the host, not in the container.
AppArmor profiles in Docker/LXC
1,452,633,174,000
I've set up a new Debian 9 (stretch) LXC container on a machine running Proxmox VE, and installed the cifs-utils package. I quickly tested the connection to the SMB server by running smbclient //192.168.0.2/share -U myusername which worked fine. However, the command mount.cifs //192.168.0.2/share /mnt -o user=myusername failed, printing the following error message: mount error(1): Operation not permitted Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) I've made sure that… the owner and group of the shared directory (on the SMB server, which is a FreeBSD machine) are both existent on the client, i.e., inside the container. the owner of the shared directory is a member of the group, both on the server and the client. (id myusername) the mountpoint (/mnt) exists on the client. What could be the cause of the above-mentioned error?
You're probably running an unprivileged LXC container. The easiest solution is to use a privileged container instead. However, there might be other solutions; take a look e.g. at this thread/post in the proxmox forums.
Why do I get ”mount error(1): Operation not permitted“ on ”mount.cifs“ in a LXC container on a Proxmox VE machine?
1,452,633,174,000
I looked at the Stack Exchange site but couldn't find anything. I looked at the Wikipedia entry on Linux containers https://en.wikipedia.org/wiki/LXC as well as the one on hypervisors https://en.wikipedia.org/wiki/Hypervisor but the explanation of both is beyond what a person who has not worked on either will understand. I also saw http://www.linux.com/news/enterprise/cloud-computing/785769-containers-vs-hypervisors-the-battle-has-just-begun but that also doesn't explain it. I have played with VMs such as VirtualBox. To my limited understanding, one of the starting ideas for virtual machines was perhaps to test software in a sandbox environment (having a Solaris box when you cannot buy/afford the machine and still having some idea how the software you are developing for that target hardware works). While limited, it had its uses. This is probably one of the ways it made the jump to cloud computing as well. The questions are broad, so this is how I distill them - Can some people explain what a hypervisor and a *nix container are (with analogies if possible)? Is a *nix hypervisor the same as a virtual machine or is there a difference?
A Virtual Machine (VM) is quite a generic term for many virtualisation technologies. There are many variations on virtualisation technologies, but the main ones are: Hardware Level Virtualisation Operating System Level Virtualisation qemu-kvm and VMWare are examples of the first. They employ a hypervisor to manage the virtual environments in which a full operating system runs. For example, on a qemu-kvm system you can have one VM running FreeBSD, another running Windows, and another running Linux. The virtual machines created by these technologies behave like isolated individual computers to the guest. These have a virtual CPU, RAM, NIC, graphics etc which the guest believes are the genuine article. Because of this, many different operating systems can be installed on the VMs and they work "out of the box" with no modification needed. While this is very convenient, in that many OSes will install without much effort, it has a drawback in that the hypervisor has to simulate all the hardware, which can slow things down. An alternative is para-virtualised hardware, in which a new virtual device and driver is developed for the guest that is designed for performance in a virtual environment. qemu-kvm provides the virtio range of devices and drivers for this. A downside to this is that the guest OS must be supported; but if supported, the performance benefits are great. lxc is an example of Operating System Level Virtualisation, or containers. Under this system, there is only one kernel installed - the host kernel. Each container is simply an isolation of the userland processes. For example, a web server (for instance apache) is installed in a container. As far as that web server is concerned, the only installed server is itself. Another container may be running an FTP server. That FTP server isn't aware of the web server installation - only its own.
Another container can contain the full userland installation of a Linux distro (as long as that distro is capable of running with the host system's kernel). However, there are no separate operating system installations when using containers - only isolated instances of userland services. Because of this, you cannot install different platforms in a container - no Windows on Linux. Containers are usually created by using a chroot. This creates a separate private root (/) for a process to work with. By creating many individual private roots, processes (web servers, or a Linux distro, etc) run in their own isolated filesystems. More advanced techniques, such as namespaces and cgroups, can isolate or limit other resources such as network and RAM. There are pros and cons to both and many long-running debates as to which is best. Containers are lighter, in that a full OS isn't installed for each, as is the case with hypervisors. They can therefore run on lower-spec'd hardware. However, they can only run Linux guests (on Linux hosts). Also, because they share the kernel, there is the possibility that a compromised container may affect another.
What is a Linux container and a Linux hypervisor?
1,452,633,174,000
I am currently starting a project for evaluating untrusted programs (student assignments) in a secure sandbox environment. The main idea is to create a web app for GlassFish and a Java wrapper around lxc-utils to manage LXC containers. It'll have a queue of waiting programs, and the Java wrapper will maintain a fixed number (pool) of LXC containers, assigning each program one (unused) container. Each container should be secured with SELinux to protect the host system. My question is: Is it a good idea to create such a mechanism for a sandbox environment, or are there better-fitting solutions to this problem? It should be light and secure against student creativity.
You didn't write why you chose LXC, as it's not the most secure virtualization solution. I'm a heavy user of KVM/Xen and also LXC, and I can say this one thing: when it comes to security I never go with Linux containers (no matter if LXC / OpenVZ / VServer). It's just easier (and more reliable) with KVM/Xen. If it's about performance or hardware requirements then OK - you can try LXC, but there are some rules you should follow:

libvirt ensures strict confinement of containers when using SELinux (thanks to the LXC driver) - not sure though if that's only the RHEL/CentOS/Fedora case (I don't use Ubuntu/Debian that much): https://www.redhat.com/archives/libvir-list/2012-January/msg01006.html - so going with SELinux is a good idea (in my opinion it's a "must have" in such circumstances)
Set strict cgroups rules so your guests can't freeze your host or affect other containers
I'd rather go with LVM-based containers - it's always one more layer of "security"
Think about the network solution and architecture. Do those containers have to communicate with each other?
Start with reading this - it's quite old, but still - there's much knowledge there. And also - meet user namespaces

And after all of that, think again - do you really have that much time to play with LXC security? KVM is just so much simpler...
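For the cgroups rules mentioned above, limits can go straight into the container's configuration file. A sketch with arbitrary example values (legacy lxc.cgroup.* key names from LXC 1.x/2.x; tune them to your workload):

```
# Hypothetical excerpt from /var/lib/lxc/<name>/config
lxc.cgroup.memory.limit_in_bytes = 512M
lxc.cgroup.memory.memsw.limit_in_bytes = 1G
lxc.cgroup.cpu.shares = 256
lxc.cgroup.cpuset.cpus = 0-1
```

This caps the container's RAM and RAM+swap, reduces its CPU weight relative to the default of 1024, and pins it to two cores, so one runaway guest can't starve the host.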
LXC containers as a sandbox environment
1,452,633,174,000
Typically on a server, automatic updates of security-related patches are configured. Therefore, if I'm running MySQL 5.5 and a new security patch comes out, Ubuntu Server will apply the upgrade and restart MySQL to keep me protected in an automated way. Obviously, this can be disabled, but it's helpful for those of us who are a bit lazy ;) Does such a concept exist inside of a Docker container? If I'm running MySQL in a Docker container, do I need to constantly stop the container, open a shell in it, then update and upgrade MySQL?
TL;DR: If you don't build it in yourself, it's not going to happen. The effective way to do this is to simply write a custom start script for your container, specified by CMD in your Dockerfile. In this script, run an apt-get update && apt-get upgrade -qqy before starting whatever you're running. You then have a couple of ways of ensuring updates get to the container:

Define a cron job in your host OS to restart the container on a schedule, thus having it update and upgrade on a schedule.
Subscribe to security updates for the pieces of software, then on update of an affected package, restart the container.

It's not the easiest thing to optimize and automate, but it's possible.
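A minimal sketch of that pattern (the base image, start.sh name, and mysqld_safe command are illustrative placeholders, not a tested setup). The start script:

```
#!/bin/sh
# start.sh -- upgrade packages, then hand over to the real service
apt-get update && apt-get upgrade -qqy
exec mysqld_safe
```

and the Dockerfile wiring it in:

```
FROM ubuntu
COPY start.sh /start.sh
CMD ["/bin/sh", "/start.sh"]
```

A host cron entry along the lines of @daily docker restart <container> then gives you scheduled upgrades at service start.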
Application updates inside of Docker containers?
1,452,633,174,000
Environment: I am using CentOS-7 as a hypervisor for running several LXCs under libvirt. Each container runs a minimal installation of CentOS-7 with a cut-down FreePBX (Asterisk, Apache, MySQL + bits).

Symptoms: There are 16 containers running without any problems. When I start one more it does start, but after the 17th container starts I cannot do systemctl start/restart/stop <anything> in ANY of the containers:

[root@test-lxc ~]# systemctl restart dnsmasq
Error: Too many open files

Diagnostics: The following diagnostics and counts were done while the 17th LXC was running and systemctl restart blabla was failing. I can ssh into any LXC and run most basic commands, e.g. ls, etc. I suspect the limit somehow affects only systemd. I'm trying to understand where/why I hit the limit.

[root@lxc-hypervisor]# sysctl fs.file-nr
fs.file-nr = 29616 0 12988463

That was not tweaked; this is just what happened to be there from the default install. The same maximum (last) value = 12988463 is reported by the hypervisor and also inside each LXC. A very similar first value, just under 30000, is also reported in each LXC. When I try to count file descriptors across all processes inside each LXC I get on the order of 400-500 in each LXC:

for pid in $( ls /proc/ | grep -E -e "^[0-9][0-9]*\$" ); do
    ls -l /proc/${pid}/fd/ 2> /dev/null | wc -l
done

The sum total is around 9000 (9k) without the hypervisor itself. When I run that on the hypervisor I usually get suspiciously close values just over 10000, e.g. 10005.

Questions:
Q1. Where is the limit set or inherited from?
Q2. Why does the limit affect systemctl start/stop/restart commands, while I can still ssh into LXCs and run commands such as bash scripts with loops that fork a lot, albeit as root?
Q3. How do I tweak the limits to allow running more LXCs? To the best of my understanding RAM and other resources are not the limit.

I did read many articles and answers on the subject of file descriptor limits, but I fail to see where my system hits the limits.
Any other relevant information is also welcome.
I believe you are not hitting a global limit, but an inotify limit. This would be seen on containers running systemd, because systemd uses the inotify facility for its bookkeeping, but the host would also be affected. Containers not using systemd (nor inotify) would probably be unaffected.

/proc/sys/fs/inotify/max_user_instances: This specifies an upper limit on the number of inotify instances that can be created per real user ID.

If only non-rootless (i.e.: root in the container is the real root) containers are in use, then the root user becomes the bottleneck. Having multiple containers using the same rootless user mapping would also create such a bottleneck for those containers' root user (but not affect the host). The default is 128, far too little for container use.

CentOS7 (or Rocky9) doesn't include any default setup for this with LXC. Debian-based distributions include this file on the host:

/etc/sysctl.d/30-lxc-inotify.conf:

# Defines the maximum number of inotify listeners.
# By default, this value is 128, which is quickly exhausted when using
# systemd-based LXC containers (15 containers are enough).
# When the limit is reached, systemd becomes mostly unusable, throwing
# "Too many open files" all around (both on the host and in containers).
# See https://kdecherf.com/blog/2015/09/12/systemd-and-the-fd-exhaustion/
# Increase the user inotify instance limit to allow for about
# 100 containers to run before the limit is hit again
fs.inotify.max_user_instances = 1024

So you should do the same by creating this file on the host. For immediate effect (on the host):

sysctl -w fs.inotify.max_user_instances=1024
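To check whether this is in fact the limit you're hitting, compare it with the number of inotify instances currently open; each instance shows up as an anon_inode:inotify link in some process's fd directory (a sketch; without root you only see your own processes):

```shell
# Per-real-user-ID limit on inotify instances
limit=$(cat /proc/sys/fs/inotify/max_user_instances)

# Count the inotify instances visible in /proc
used=$(find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l)

echo "inotify instances: $used visible, per-user limit $limit"
```

If the count (run as root, so all processes are visible) sits near the limit, raising fs.inotify.max_user_instances as described is the fix.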
"Error: Too many open files" while starting service in environment with several LXCs
1,452,633,174,000
I will be using Ubuntu Linux for this project. For training of a particular application at a conference I need: To have each student be able to ssh into the same user account on a server Upon each login automatically put the user in separate isolated environments Each isolated environment includes the application, example config files, and the standard unix toolset (e.g. grep, awk, sort, uniq, etc.) However, access to an entire linux filesystem is fine too as long as the user can only damage his own isolated environment and not those of others. The virtual environments should be destroyed when the users SSH session ends For #1 we would like to do the single user account so we don't have to deal with creating an account for each student and handing out the user names and passwords. Does anyone know how I can meet these goals? Which technology e.g. LXC, Chroot, etc. is best for this? I've been toying with the idea of using .bash_profile and .bash_logout to handle the creation and destruction of these environments but not sure which technology is capable of creating the environments I need.
With Docker you can do this very easily.

docker pull ubuntu
docker run -t -i ubuntu /bin/bash
# make your changes and then log out
docker commit $(docker ps -a -q | head -n 1) sandbox
cat > /usr/local/bin/sandbox <<EOF
#!/bin/sh
exec docker run -t -i --rm=true sandbox /bin/bash
EOF
chmod a+x /usr/local/bin/sandbox
echo /usr/local/bin/sandbox >> /etc/shells
useradd testuser -g docker -s /usr/local/bin/sandbox
passwd testuser

Whenever testuser logs in, they will be placed into an isolated container where they can't see anything outside it, not even the containers of other users. The container will then be automatically removed when they log out.

Note: This can be circumvented by the user specifying a command. For example: ssh foo.example.com /bin/bash. If security is a concern, you can use the ForceCommand option in /etc/ssh/sshd_config.

Explanation:

docker pull ubuntu

Here we fetch the base image that we're going to work with. Docker provides standard images, and ubuntu is one of them.

docker run -t -i ubuntu /bin/bash
# make your changes and then log out

Here we launch a shell from the ubuntu image. Any changes you make will be preserved for your users. You could also use a Dockerfile to build the image, but for a one-time thing, I think this is simpler.

docker commit $(docker ps -a -q | head -n 1) sandbox

Here we convert the last container that was run into a new image called sandbox.

cat > /usr/local/bin/sandbox <<EOF
#!/bin/sh
exec docker run -t -i --rm=true sandbox /bin/bash
EOF

This will be a fake shell that the user is forced to run on login. The script will launch them into a docker container which will automatically be cleaned up as soon as they log out.

chmod a+x /usr/local/bin/sandbox

I hope this is obvious :-)

echo /usr/local/bin/sandbox >> /etc/shells

This may not be required on your system, but on mine a shell cannot be a login shell unless it exists in /etc/shells.
useradd testuser -g docker -s /usr/local/bin/sandbox

We create a new user with their shell set to the sandbox script. The script will force them to launch into the sandbox container. They are a member of the docker group so that the user can launch a new container. An alternative to putting the user in the docker group would be to grant them sudo permissions to a single command.

passwd testuser

I hope this is also obvious.
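To harden against the "ssh host /bin/bash" bypass mentioned in the note, the sshd side can force the wrapper regardless of the requested command. A sketch using standard OpenSSH Match/ForceCommand syntax (the file lives at /etc/ssh/sshd_config on most distributions):

```
Match User testuser
    ForceCommand /usr/local/bin/sandbox
```

After reloading sshd, even ssh testuser@host /bin/bash lands in the sandbox container instead of the requested command.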
replicate and isolating user environments on the fly
1,452,633,174,000
I'm trying to decide between "jailing" certain applications and I know the trade-offs of KVM versus LXC and how I can use them both. Lately I came across UML (User-Mode Linux) again and was wondering how it compares with respect to security and resource consumption (or overhead, if you will). Where can I find a comparison like that, or does anyone here know how they compare? Basically: what is the disk I/O and CPU overhead? how strict is the separation and how secure is the host from what's going on in the guest?
Best Disk I/O: LXC > KVM > UML. No overhead to speak of with LXC, KVM adds a layer of indirection so it will be slower (but you could also use it with raw disks), UML will be much slower. Least CPU overhead: LXC > KVM > UML. No overhead to speak of with LXC, small overhead with KVM, bigger overhead with UML. Strict separation and security: UML > KVM > LXC. Contrary to the statements above by krowe, if you want security above all else, UML is the way to go. You can run the UML kernel process as a totally unprivileged user, in a restricted chrooted environment, with any hardening you want on top. Escaping from the VM would require finding a kernel bug first, and even then, at best you end up with the privileges of a normal user process on the host. Now, if you care about performance... KVM is a much better option. LXC will give you the best performance, but is also the least secure of the 3.
Which one is lighter security- and CPU-wise: LXC versus UML
1,452,633,174,000
I know that you can display interfaces by doing ip a show. That only displays the interfaces that the host can see; virtual interfaces configured by containers don't appear in this list. I've tried using ip netns as well, and they don't show up either. Should I recompile another version of iproute2? In /proc/net/fib_trie, you can see the local/broadcast addresses, used, I assume, for the forwarding database. Where can I find any of this information, or a command to list all interfaces including those of containers? To test this out, start up a container. In my case, it is an lxc container on snap. Do an ip a or ip l. It will show the host machine's view, but not the container-configured interface. I'm grepping through procfs, since containers are just cgrouped processes, but I don't get anything other than the fib_trie and the arp entry. I thought it could be due to netns namespace obfuscation, but ip netns also shows nothing. You can use conntrack -L to display all incoming and outgoing connections that are established, because lxd needs to connection-track the forwarding of the packets, but I'd like to list all ip addresses that are configured on the system, like how I'd be able to tell using netstat or lsof.
An interface, at a given time, belongs to one network namespace and only one. The init (initial) network namespace, except for inheriting the physical interfaces of destroyed network namespaces, has no special ability over other network namespaces: it can't see their interfaces directly. As long as you are still in init's pid and mount namespaces, you can still find the network namespaces by using different information available from /proc, and finally display their interfaces by entering those network namespaces. I'll provide examples in shell.

Enumerate the network namespaces

For this you have to know how those namespaces exist: they stay up as long as some resource keeps them up. A resource here can be a process (actually a process' thread), a mount point or an open file descriptor (fd). Those resources are all referenced in /proc/ and point to an abstract pseudo-file in the nsfs pseudo-filesystem enumerating all namespaces. This file's only meaningful information is its inode, representing the network namespace, but the inode can't be manipulated alone; it has to be the file. That's why later we can't just keep only the inode value (given by stat -c %i /proc/some/file): we'll keep the inode to be able to remove duplicates and a filename to still have a usable reference for nsenter later.

Process (actually thread)

The most common case: for usual containers. Each thread's network namespace can be known via the reference /proc/pid/ns/net: just stat them and enumerate all unique namespaces. The 2>/dev/null is to hide when stat can't find ephemeral processes anymore.
find /proc/ -mindepth 1 -maxdepth 1 -name '[1-9]*' | while read -r procpid; do
    stat -L -c '%20i %n' $procpid/ns/net
done 2>/dev/null

This can be done faster with the specialized lsns command, which deals with namespaces but appears to handle only processes (not mount points nor open fds, as seen later): lsns -n -u -t net -o NS,PATH (which would have to be reformatted for later use as lsns -n -u -t net -o NS,PATH | while read inode path; do printf '%20u %s\n' $inode "$path"; done)

Mount point

Those are mostly used by the ip netns add command, which creates permanent network namespaces by mounting them, thus avoiding them disappearing when there is no process nor fd resource keeping them up, and also allowing, for example, running a router, firewall or bridge in a network namespace without any linked process. Mounted namespaces (handling of mount and perhaps pid namespaces is probably more complex, but we're only interested in network namespaces anyway) appear like any other mount point in /proc/mounts, with the filesystem type nsfs. There's no easy way in shell to distinguish a network namespace from another type of namespace, but since two pseudo-files from the same filesystem (here nsfs) won't share the same inode, just select them all and ignore errors later in the interface step when trying to use a non-network namespace reference as a network namespace. Sorry, below I won't handle correctly mount points with special characters in them, including spaces, because they are already escaped in /proc/mounts's output (it would be easier in any other language), so I won't bother either to use null-terminated lines.

awk '$3 == "nsfs" { print $2 }' /proc/mounts | while read -r mount; do
    stat -c '%20i %n' "$mount"
done

Open file descriptor

Those are probably even more rare than mount points, except temporarily at namespace creation, but might be held and used by some specialized application handling multiple namespaces, including possibly some containerization technology.
I couldn't devise a better method than searching all the fds available in every /proc/pid/fd/, using stat to verify each points to an nsfs namespace, and again not caring for now if it's really a network namespace. I'm sure there's a more optimized loop, but this one at least won't wander everywhere nor assume any maximum process limit.

find /proc/ -mindepth 1 -maxdepth 1 -name '[1-9]*' | while read -r procpid; do
    find $procpid/fd -mindepth 1 | while read -r procfd; do
        if [ "$(stat -f -c %T $procfd)" = nsfs ]; then
            stat -L -c '%20i %n' $procfd
        fi
    done
done 2>/dev/null

Now remove all duplicate network namespace references from the previous results, e.g. by using this filter on the combined output of the 3 previous steps (especially important for the open file descriptor part):

sort -k 1n | uniq -w 20

In each namespace, enumerate the interfaces

Now we have references to all the existing network namespaces (and also some non-network namespaces, which we'll just ignore); simply enter each of them using the reference and display the interfaces. Take the previous commands' output as input to this loop to enumerate interfaces (and, as per OP's question, choose to display their addresses), while ignoring errors caused by non-network namespaces as previously explained:

while read -r inode reference; do
    if nsenter --net="$reference" ip -br address show 2>/dev/null; then
        printf 'end of network %d\n\n' $inode
    fi
done

The init network's inode can be printed with pid 1 as reference:

echo -n 'INIT NETWORK: ' ; stat -L -c %i /proc/1/ns/net

Example (real but redacted) output with a running LXC container, an empty "mounted" network namespace created with ip netns add ...
having an unconnected bridge interface, a network namespace with another dummy0 interface, kept alive by a process not in this network namespace but keeping an open fd on it, created with:

unshare --net sh -c 'ip link add dummy0 type dummy; ip address add dev dummy0 10.11.12.13/24; sleep 3' & sleep 1; sleep 999 < /proc/$!/ns/net &

and a running Firefox which isolates each of its "Web Content" threads in an unconnected network namespace (all those down lo interfaces):

lo               UNKNOWN  127.0.0.1/8 ::1/128
eth0             UP       192.0.2.2/24 2001:db8:0:1:bc5c:95c7:4ea6:f94f/64 fe80::b4f0:7aff:fe76:76a8/64
wlan0            DOWN
dummy0           UNKNOWN  198.51.100.2/24 fe80::108a:83ff:fe05:e0da/64
lxcbr0           UP       10.0.3.1/24 2001:db8:0:4::1/64 fe80::216:3eff:fe00:0/64
virbr0           DOWN     192.168.122.1/24
virbr0-nic       DOWN
vethSOEPSH@if9   UP       fe80::fc8e:ff:fe85:476f/64
end of network 4026531992

lo               DOWN
end of network 4026532418

lo               DOWN
end of network 4026532518

lo               DOWN
end of network 4026532618

lo               DOWN
end of network 4026532718

lo               UNKNOWN  127.0.0.1/8 ::1/128
eth0@if10        UP       10.0.3.66/24 fe80::216:3eff:fe6a:c1e9/64
end of network 4026532822

lo               DOWN
bridge0          UNKNOWN  fe80::b884:44ff:feaf:dca3/64
end of network 4026532923

lo               DOWN
dummy0           DOWN     10.11.12.13/24
end of network 4026533021

INIT NETWORK: 4026531992
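Glued together, the process-based enumeration and the per-namespace listing from the steps above look like this (a sketch: the mount point and fd sources are omitted for brevity, and nsenter needs root to actually enter foreign namespaces):

```shell
# Enumerate network namespaces via /proc/<pid>/ns/net and de-duplicate
# by inode (the first 20 characters of each line).
namespaces=$(
    find /proc/ -mindepth 1 -maxdepth 1 -name '[1-9]*' |
    while read -r procpid; do
        stat -L -c '%20i %n' "$procpid/ns/net"
    done 2>/dev/null | sort -k 1n | uniq -w 20
)

# Enter each namespace and list its interfaces (needs root).
echo "$namespaces" | while read -r inode reference; do
    nsenter --net="$reference" ip -br address show 2>/dev/null &&
        printf 'end of network %d\n\n' "$inode" || true
done
```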
How do I find all interfaces that have been configured in Linux, including those of containers?
1,452,633,174,000
When creating a userns (unprivileged) LXC container on Ubuntu 14.04 with the following command line: lxc-create -n test1 -t download -- -d $(lsb_release -si|tr 'A-Z' 'a-z') -r $(lsb_release -sc) -a $(dpkg --print-architecture) and (without touching the created configuration file) then attempting to start it with: lxc-start -n test1 -l DEBUG it fails. The log file shows me: lxc-start 1420149317.700 INFO lxc_start_ui - using rcfile /home/user/.local/share/lxc/test1/config lxc-start 1420149317.700 INFO lxc_utils - XDG_RUNTIME_DIR isn't set in the environment. lxc-start 1420149317.701 INFO lxc_confile - read uid map: type u nsid 0 hostid 100000 range 65536 lxc-start 1420149317.701 INFO lxc_confile - read uid map: type g nsid 0 hostid 100000 range 65536 lxc-start 1420149317.701 WARN lxc_log - lxc_log_init called with log already initialized lxc-start 1420149317.701 INFO lxc_lsm - LSM security driver AppArmor lxc-start 1420149317.701 INFO lxc_utils - XDG_RUNTIME_DIR isn't set in the environment. 
lxc-start 1420149317.702 DEBUG lxc_conf - allocated pty '/dev/pts/2' (5/6) lxc-start 1420149317.702 DEBUG lxc_conf - allocated pty '/dev/pts/7' (7/8) lxc-start 1420149317.702 DEBUG lxc_conf - allocated pty '/dev/pts/8' (9/10) lxc-start 1420149317.702 DEBUG lxc_conf - allocated pty '/dev/pts/10' (11/12) lxc-start 1420149317.702 INFO lxc_conf - tty's configured lxc-start 1420149317.702 DEBUG lxc_start - sigchild handler set lxc-start 1420149317.702 DEBUG lxc_console - opening /dev/tty for console peer lxc-start 1420149317.702 DEBUG lxc_console - using '/dev/tty' as console lxc-start 1420149317.702 DEBUG lxc_console - 14946 got SIGWINCH fd 17 lxc-start 1420149317.702 DEBUG lxc_console - set winsz dstfd:14 cols:118 rows:61 lxc-start 1420149317.905 INFO lxc_start - 'test1' is initialized lxc-start 1420149317.906 DEBUG lxc_start - Not dropping cap_sys_boot or watching utmp lxc-start 1420149317.906 INFO lxc_start - Cloning a new user namespace lxc-start 1420149317.906 INFO lxc_cgroup - cgroup driver cgmanager initing for test1 lxc-start 1420149317.907 ERROR lxc_cgmanager - call to cgmanager_create_sync failed: invalid request lxc-start 1420149317.907 ERROR lxc_cgmanager - Failed to create hugetlb:test1 lxc-start 1420149317.907 ERROR lxc_cgmanager - Error creating cgroup hugetlb:test1 lxc-start 1420149317.907 INFO lxc_cgmanager - cgroup removal attempt: hugetlb:test1 did not exist lxc-start 1420149317.908 INFO lxc_cgmanager - cgroup removal attempt: perf_event:test1 did not exist lxc-start 1420149317.908 INFO lxc_cgmanager - cgroup removal attempt: blkio:test1 did not exist lxc-start 1420149317.908 INFO lxc_cgmanager - cgroup removal attempt: freezer:test1 did not exist lxc-start 1420149317.909 INFO lxc_cgmanager - cgroup removal attempt: devices:test1 did not exist lxc-start 1420149317.909 INFO lxc_cgmanager - cgroup removal attempt: memory:test1 did not exist lxc-start 1420149317.909 INFO lxc_cgmanager - cgroup removal attempt: cpuacct:test1 did not exist lxc-start 
1420149317.909 INFO lxc_cgmanager - cgroup removal attempt: cpu:test1 did not exist
lxc-start 1420149317.910 INFO lxc_cgmanager - cgroup removal attempt: cpuset:test1 did not exist
lxc-start 1420149317.910 INFO lxc_cgmanager - cgroup removal attempt: name=systemd:test1 did not exist
lxc-start 1420149317.910 ERROR lxc_start - failed creating cgroups
lxc-start 1420149317.910 INFO lxc_utils - XDG_RUNTIME_DIR isn't set in the environment.
lxc-start 1420149317.910 ERROR lxc_start - failed to spawn 'test1'
lxc-start 1420149317.910 INFO lxc_utils - XDG_RUNTIME_DIR isn't set in the environment.
lxc-start 1420149317.910 INFO lxc_utils - XDG_RUNTIME_DIR isn't set in the environment.
lxc-start 1420149317.910 ERROR lxc_start_ui - The container failed to start.
lxc-start 1420149317.910 ERROR lxc_start_ui - Additional information can be obtained by setting the --logfile and --logpriority options.

Now I see two errors here, the latter probably being a result of the former, which is:

lxc_start - failed creating cgroups

However, I see /sys/fs/cgroup mounted:

$ mount|grep cgr
none on /sys/fs/cgroup type tmpfs (rw)

and cgmanager is installed:

$ dpkg -l|awk '$1 ~ /^ii$/ && /cgmanager/ {print $2 " " $3 " " $4}'
cgmanager 0.24-0ubuntu7 amd64
libcgmanager0:amd64 0.24-0ubuntu7 amd64

Note: My host defaults still to upstart. In case there's any doubt, the kernel supports cgroups:

$ grep CGROUP /boot/config-$(uname -r)
CONFIG_CGROUPS=y
# CONFIG_CGROUP_DEBUG is not set
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
CONFIG_CGROUP_HUGETLB=y
CONFIG_CGROUP_PERF=y
CONFIG_CGROUP_SCHED=y
CONFIG_BLK_CGROUP=y
# CONFIG_DEBUG_BLK_CGROUP is not set
CONFIG_NET_CLS_CGROUP=m
CONFIG_NETPRIO_CGROUP=m
Turns out, surprise surprise, this is a Ubuntu-specific thing. The cause The problem: although the kernel has cgroups enabled (check with grep CGROUP /boot/config-$(uname -r)) and cgmanager is running, there is no cgroup specific to my user. You can check that with: $ cat /proc/self/cgroup 11:hugetlb:/ 10:perf_event:/ 9:blkio:/ 8:freezer:/ 7:devices:/ 6:memory:/ 5:cpuacct:/ 4:cpu:/ 3:name=systemd:/ 2:cpuset:/ if your UID is given in each of the relevant lines, it's alright, but if no cgroups have been defined there will only be a slash after the second colon on each line. My problem was specific to starting an unprivileged container. I could start privileged containers just fine. It turned out that my problem was closely related to this thread on the lxc-users mailing list. Remedy On Ubuntu 14.04 upstart is the default, as opposed to systemd. Hence certain components that would be installed on a systemd-based distro do not get installed by default. There were two packages in addition to cgmanager which I had to install in order to get beyond the error shown in my question: cgroup-bin and libpam-systemd. Quite frankly I am not 100% certain that the former is strictly needed, so you could try to leave it out and comment here. 
After the installation of the packages and a reboot, you should then see your UID (id -u, here 1000) in the output:

$ cat /proc/self/cgroup
11:hugetlb:/user/1000.user/1.session
10:perf_event:/user/1000.user/1.session
9:blkio:/user/1000.user/1.session
8:freezer:/user/1000.user/1.session
7:devices:/user/1000.user/1.session
6:memory:/user/1000.user/1.session
5:cpuacct:/user/1000.user/1.session
4:cpu:/user/1000.user/1.session
3:name=systemd:/user/1000.user/1.session
2:cpuset:/user/1000.user/1.session

After that, the error upon attempting to start the guest container becomes (trimmed for brevity):

lxc-start 1420160065.383 INFO lxc_cgroup - cgroup driver cgmanager initing for test1
lxc-start 1420160065.419 ERROR lxc_start - failed to create the configured network
lxc-start 1420160065.446 ERROR lxc_start - failed to spawn 'test1'
lxc-start 1420160065.451 ERROR lxc_start_ui - The container failed to start.

So still no success, but we're one step closer. The above linked lxc-users thread points to /etc/systemd/logind.conf not mentioning three controllers: net_cls, net_prio and debug. For me only the last one was missing. After the change you'll have to re-login, though, as the changes take effect upon creation of your login session. This blog post by one of the authors of LXC gives the next step: Your user, while it can create new user namespaces in which it'll be uid 0 and will have some of root's privileges against resources tied to that namespace will obviously not be granted any extra privilege on the host. One such thing is creating new network devices on the host or changing bridge configuration. To workaround that, we wrote a tool called "lxc-user-nic" which is the only SETUID binary part of LXC 1.0 and which performs one simple task. It parses a configuration file and based on its content will create network devices for the user and bridge them. To prevent abuse, you can restrict the number of devices a user can request and to what bridge they may be added.
An example is my own /etc/lxc/lxc-usernet file: stgraber veth lxcbr0 10 This declares that the user “stgraber” is allowed up to 10 veth type devices to be created and added to the bridge called lxcbr0. Between what’s offered by the user namespace in the kernel and that setuid tool, we’ve got all that’s needed to run most distributions unprivileged. If your user has sudo rights and you're using Bash, use this: echo "$(whoami) veth lxcbr0 10"|sudo tee -a /etc/lxc/lxc-usernet and make sure the type (veth) matches the one in the container config and the bridge (lxcbr0) is configured and up. And now we get another set of errors: lxc-start 1420192192.775 INFO lxc_start - Cloning a new user namespace lxc-start 1420192192.775 INFO lxc_cgroup - cgroup driver cgmanager initing for test1 lxc-start 1420192192.923 NOTICE lxc_start - switching to gid/uid 0 in new user namespace lxc-start 1420192192.923 ERROR lxc_start - Permission denied - could not access /home/user. Please grant it 'x' access, or add an ACL for the container root. lxc-start 1420192192.923 ERROR lxc_sync - invalid sequence number 1. expected 2 lxc-start 1420192192.954 ERROR lxc_start - failed to spawn 'test1' lxc-start 1420192192.959 ERROR lxc_start_ui - The container failed to start. Brilliant, that can be fixed. Another lxc-users thread by the same protagonists as in the first thread paves the way. For now a quick test sudo chmod -R o+X $HOME will have to do, but ACLs are a viable option here as well. YMMV.
userns container fails to start, how to track down the reason?
1,452,633,174,000
I'm following an Ansible tutorial I got from Packt. I reached the part where I've created 3 Ubuntu containers (lxc) and got them up and running. I'm also able to log in to each of them. I've downloaded Ansible by doing: git clone ansible-git-url and then sourced it. My working setup is as follows: /home/myuser/code - in here I have 2 folders: ansible (the whole git repo) and ansible_course, where I have 2 files: ansible.cfg and inventory.

inventory contains the following:

[allservers]
192.168.122.117
192.168.122.146
192.168.122.14

[web]
192.168.122.146
192.168.122.14

[database]
192.168.122.117

And ansible.cfg contains:

[root@localhost ansible_course]# cat ansible.cfg
[defaults]
host_key_checking = False

Then from this path: /home/myuser/code/ansible_course I try to execute the following:

$ ansible 192.168.122.117 -m ping -u root

The guy from the tutorial does exactly this, and he gets a success response from the ping, but I get the following error messages:

[WARNING]: Unable to parse /etc/ansible/hosts as an inventory source
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
[WARNING]: Could not match supplied host pattern, ignoring: 192.168.122.117

In the tutorial, he never says that I need to do something special in order to give an inventory source; he just says that we need to create an inventory file with the IP addresses of the Linux containers that we have. I mean, he doesn't say that we need to execute a command to set this up.
You'll probably want to tell ansible where the inventory file is in ansible.cfg, e.g.

[defaults]
inventory=inventory

assuming inventory is actually your inventory file.
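Putting the question's layout together, the complete ansible.cfg would then be (the inventory path is relative to the directory you run ansible from):

```
[defaults]
host_key_checking = False
inventory = inventory
```

A one-off alternative without touching ansible.cfg is to pass the file explicitly: ansible -i inventory 192.168.122.117 -m ping -u root.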
Ansible: How to parse an inventory source
1,452,633,174,000
Is it possible to use LXC on a desktop system to confine browsers and other pieces of software that have in the past been shown to be prone to certain kinds of exploits? So what I want to achieve is to jail, say, Firefox, still be able to view its windows etc., and yet be sure it only has read and write access to anything "inside the bubble", but not the host system. The example lxc-sshd container in LXC suggests something like this should be possible (app-level containers), but I have only seen this for programs that require a TTY at most. Can this work also under KDE, GNOME, Unity ...?
Firejail is a Linux namespaces sandbox program that can jail Firefox or any other GUI software. It should work on any Linux computer.
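For illustration, here's a hedged sketch of restricting Firefox with a local Firejail profile (written to a scratch path so the sketch is self-contained; on a real system user overrides go in ~/.config/firejail/firefox.local, and launching requires the firejail package):

```shell
# Minimal local profile tightening the sandbox for Firefox.
mkdir -p /tmp/firejail-demo
cat > /tmp/firejail-demo/firefox.local <<'EOF'
# keep the browser's writes inside a private, throw-away home
private
# drop all capabilities and forbid gaining new privileges
caps.drop all
nonewprivs
EOF

# Launch under the sandbox (GUI windows still display normally):
#   firejail firefox
```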
Can LXC be used to jail instances of an installed browser?
1,452,633,174,000
We have a Linux server running Debian 4.0.5 (Kernel 4.0.0-2) with 32G RAM installed and 16G swap configured. The system uses lxc containers for compartmentalisation, but that shouldn't matter here; the issue exists both inside and outside of different containers.

Here's a typical free -h:

                  total        used        free      shared  buff/cache   available
    Mem:            28G        2.1G         25G         15M        936M         26G
    Swap:           15G        1.4G         14G

/proc/meminfo has:

    Committed_AS:   12951172 kB

So there's plenty of free memory, even if everything allocated was actually used at once. However, the system is instantly paging out even running processes. This is most notable with GitLab, a Rails application using Unicorn: newly forked Unicorn workers are instantly swapped out, and when a request comes in they need to be read back from disk at ~1400kB/s (data from iotop) and run into timeouts (30s for now, to get them restarted in time; no normal request should take more than 5s) before being loaded into memory completely, thus getting instantly killed. Note that this is just an example; I have seen this happen to redis, amavis, postgres, mysql, java (openjdk) and others.

The system is otherwise in a low-load situation with about 5% CPU utilization and a loadavg around 2 (on 8 cores).

What we tried (in no particular order):

- swapoff -a: fails with about 800M still swapped
- Reducing swappiness (in steps) using sysctl vm.swappiness=NN. This seems to have no impact at all; we went down to 0 and exactly the same behaviour persists
- Stopping non-essential services (GitLab, a Jetty-based webapp...), freeing ca. 8G of committed-but-not-mapped memory and bringing Committed_AS down to about 5G. No change at all.
- Clearing system caches using sync && echo 3 > /proc/sys/vm/drop_caches. This frees up memory, but does nothing to the swap situation.
- Combinations of the above

Restarting the machine to completely disable swap via fstab as a test is not really an option, as some services have availability issues and need planned downtimes, not "poking around"... and we also don't really want to disable swap as a fallback.

I don't see why there is any swapping occurring here. Any ideas what may be going on? This problem has existed for a while now, but it showed up first during a period of high IO load (a long background data processing task), so I can't pinpoint a specific event. That task finished some days ago and the problem persists, hence this question.
Remember how I said:

    The system uses lxc containers for compartmentalisation, but that shouldn't matter here.

Well, turns out it did matter. Or rather, the cgroups at the heart of lxc matter.

The host machine only sees reboots for kernel upgrades. So, what were the last kernels used? 3.19, replaced by 4.0.5 two months ago, and yesterday by 4.1.3. And what happened yesterday? Processes getting memkilled left, right and center. Checking /var/log/kern.log, the affected processes were in cgroups with a 512M memory limit. Wait, 512M? That can't be right (when the expected requirement is around 4G!). As it turns out, this is exactly what we configured in the lxc configs when setting all this up months ago.

So what happened is: 3.19 completely ignored the memory limit for cgroups; 4.0.5 always paged if the cgroup required more than allowed (this is the core issue of this question); and only 4.1.3 does a full memkiller sweep. The swappiness of the host system had no influence on this, since the host never was anywhere near being out of physical memory.

The solution: for a temporary change, you can directly modify the cgroup. For example, for an lxc container named box1 the cgroup is called lxc/box1 and you may execute (as root on the host machine):

    $ echo 8G > /sys/fs/cgroup/memory/lxc/box1/memory.limit_in_bytes

The permanent solution is to correctly configure the container in /var/lib/lxc/...:

    lxc.cgroup.memory.limit_in_bytes = 8G

Moral of the story: always check your configuration. Even if you think it can't possibly be the issue (and it takes a different bug/inconsistency in the kernel to actually fail).
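The cgroup file reports its value in bytes, while the lxc config accepts suffixed sizes, so comparing the two takes a small conversion. A sketch of a helper for that (not part of any tool; the cgroup v1 path in the comment is an assumption):

```shell
# Convert an lxc-style size ("8G", "512M", ...) to bytes for comparison
# with /sys/fs/cgroup/memory/<group>/memory.limit_in_bytes.
to_bytes() {
  case "$1" in
    *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
    *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
    *K) echo $(( ${1%K} * 1024 )) ;;
    *)  echo "$1" ;;
  esac
}

# On a live host one could then sanity-check a container's limit:
#   [ "$(cat /sys/fs/cgroup/memory/lxc/box1/memory.limit_in_bytes)" \
#       -ge "$(to_bytes 4G)" ] || echo "box1 limit too small"
to_bytes 512M   # prints 536870912
```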
Permanent swapping with lots of free memory
1,452,633,174,000
Is it possible to have a vanilla installation of Ubuntu 14.04 (Trusty) and run inside it containerized older Ubuntu versions that originally came with older kernels? For example, for 12.04 I'd assume the answer is yes, as it has linux-image packages for subsequent Ubuntu releases, such as linux-image-generic-lts-saucy and linux-image-generic-lts-quantal. For 10.04 that isn't the case, though, so I'm unsure. Is there documentation available that I can use to deduce what's okay to run? The reason I am asking is that the kernel interface undergoes updates every now and then. However, it's sometimes beneficial to run newer versions of the distro while keeping a build environment based on a predecessor.
You can run older Linux programs on newer kernels. Linux maintains backward compatibility (at least for all documented interfaces), for the benefit of people who are running old binaries for one reason or another (because they don't want to bother recompiling, because they've lost the source, because this is commercial software for which they don't have the source, etc.). If you want to have a build environment with older development tools, or even a test environment for anything that doesn't dive deeply into kernel interfaces, then you don't need to run an older kernel, just an older userland environment. For this, you don't need anything complex: a chroot will do. Something more advanced like LXC, Docker, … can be useful if you want the older (or newer, for that matter) distribution to have its own network configuration. If you don't want that, you can use what Debian uses precisely to build software in a known environment (e.g. build software for Debian stable on a machine with a testing installation): schroot. See How do I run 32-bit programs on a 64-bit Debian/Ubuntu? for a guide on setting up an alternate installation of Debian or a derivative in a chroot. If you want to run the older distribution's kernel, you'll need an actual virtual machine for that, such as KVM or VirtualBox. Linux-on-Linux virtualization with LXC or the like runs the same kernel throughout.
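To make the schroot route concrete, here's a hedged sketch of a 10.04 build environment (the chroot name, paths, and user name are illustrative; populating the chroot needs debootstrap and root, and 10.04 packages live on old-releases.ubuntu.com):

```shell
# Illustrative schroot config, written to a scratch location here;
# on a real system it belongs in /etc/schroot/chroot.d/.
mkdir -p /tmp/schroot-demo
cat > /tmp/schroot-demo/lucid.conf <<'EOF'
[lucid]
description=Ubuntu 10.04 build environment
type=directory
directory=/srv/chroot/lucid
users=myuser
root-users=myuser
EOF

# Populate the chroot:
#   sudo debootstrap lucid /srv/chroot/lucid \
#       http://old-releases.ubuntu.com/ubuntu/
# Then enter it with:
#   schroot -c lucid
```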
Is it possible to run a 10.04 or 12.04 or earlier LTS containerized under LXC or Docker on Trusty?
1,452,633,174,000
I'm attempting to set up a system that automatically creates a new sandbox on an ssh login, to use as a temporary jump box into my server. To do this I was wondering how to set up lxc to spin up a new shell in the container once there is an ssh connection, then destroy that container after the session is closed. What would be the best way to go about this?

EDIT: Thanks to everyone that has given their input. I have been able to devise a method to do this same setup as follows, in /etc/ssh/sshd_config:

    Match Group ssh
        ForceCommand sudo docker run --rm -t -i busybox /bin/sh

What this does is force the user session to instantly go into a busybox container and then delete the changes on exit. One could change or create their own image and specify it in place of busybox, though busybox is included here since the only things the default busybox container offers are wget and telnet, which is good enough for most OOB testing/jump-boxes, and this was the use-case goal of this design.
Okay, your main problem doesn't appear to be how you execute this; that's relatively easy with ForceCommand inside a Match block of sshd_config. In order to do what you want to achieve, i.e. a throw-away container that "self-destructs" after use, you can use lxc-start-ephemeral. That's to say, your use case has already been considered in the set of LXC userspace tools. Only catch: your LXC version needs to be recent enough. There's one more thing: you need a container which lxc-start-ephemeral uses as the basis for the ephemeral clone to start. More details can be found on the man page for lxc-start-ephemeral.
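Putting the two pieces together, a hedged sketch might look like this (the group name jail and base container name base are assumptions, and lxc-start-ephemeral's options vary by version, so check your man page before relying on the exact invocation):

```shell
# Fragment for /etc/ssh/sshd_config, written to a scratch file here so
# the sketch is self-contained:
cat > /tmp/sshd_jail_fragment <<'EOF'
Match Group jail
    # -o names the base container to clone; the ephemeral copy is
    # discarded when the forced command exits.
    ForceCommand sudo lxc-start-ephemeral -o base -- /bin/sh
EOF

# On a real system, append the fragment and reload sshd:
#   cat /tmp/sshd_jail_fragment >> /etc/ssh/sshd_config
#   systemctl reload sshd
```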
How to automatically launch a lxc container on ssh connection?
1,452,633,174,000
I've followed the Waydroid arch-wiki page and have installed waydroid, binder_linux-dkms and waydroid-image-gapps. When I run waydroid it works perfectly except for the network part.

I do have new interfaces on the host machine:

    30: waydroid0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 00:16:3e:00:00:02 brd ff:ff:ff:ff:ff:ff
        inet 192.168.240.1/24 brd 192.168.240.255 scope global waydroid0
           valid_lft forever preferred_lft forever
        inet6 fe80::216:3eff:fe00:1/64 scope link
           valid_lft forever preferred_lft forever
    31: vethbrQLNw@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master waydroid0 state UP group default qlen 1000
        link/ether fe:3e:17:46:42:95 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet6 fe80::fc3e:17ff:fe47:4295/64 scope link
           valid_lft forever preferred_lft forever

but I don't have properly configured network interfaces in the waydroid shell:

    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0@if31: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether 00:16:3e:f9:d3:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
        inet6 fe80::fba7:3c32:8e2f:857/64 scope link stable-privacy
           valid_lft forever preferred_lft forever

So when I try to ping 1.1.1.1 I get an error: connect: Network is unreachable, and I have no network inside Waydroid. I don't have ufw or firewalld installed.

I've tried:

- Restarting waydroid-container.service
- Stopping nftables.service
- Restarting iptables.service

P.S.
I have the following in journalctl:

    окт 27 15:16:53 nous dnsmasq[139035]: started, version 2.87 cachesize 150
    окт 27 15:16:53 nous dnsmasq[139035]: compile time options: IPv6 GNU-getopt DBus no-UBus i18n IDN2 DHCP DHCPv6 no-Lua TFTP con>
    окт 27 15:16:53 nous dnsmasq-dhcp[139035]: DHCP, IP range 192.168.240.2 -- 192.168.240.254, lease time 1h
    окт 27 15:16:53 nous dnsmasq-dhcp[139035]: DHCP, sockets bound exclusively to interface waydroid0
    окт 27 15:16:53 nous dnsmasq[139035]: reading /etc/resolv.conf
    окт 27 15:16:53 nous dnsmasq[139035]: using nameserver 127.0.0.1#53
    окт 27 15:16:53 nous dnsmasq[139035]: read /etc/hosts - 148944 addresses

(127.0.0.1 is because I have unbound installed)

waydroid log output:

    (138954) [15:16:45] % chmod 666 -R /dev/binder
    (138954) [15:16:45] % chmod 666 -R /dev/anbox-vndbinder
    (138954) [15:16:45] % chmod 666 -R /dev/anbox-hwbinder
    (138954) [15:16:45] Container manager is waiting for session to load
    (139008) [15:16:52] Save session config: /var/lib/waydroid/session.cfg
    (139008) [15:16:52] UserMonitor service is not even started
    (139008) [15:16:52] Clipboard service is not even started
    (138954) [15:16:52] % /usr/lib/waydroid/data/scripts/waydroid-net.sh start
    vnic is waydroid0
    (138954) [15:16:53] % mount /usr/share/waydroid-extra/images/system.img /var/lib/waydroid/rootfs
    (138954) [15:16:53] % mount -o remount,ro /usr/share/waydroid-extra/images/system.img /var/lib/waydroid/rootfs
    (138954) [15:16:53] % mount /usr/share/waydroid-extra/images/vendor.img /var/lib/waydroid/rootfs/vendor
    (138954) [15:16:53] % mount -o remount,ro /usr/share/waydroid-extra/images/vendor.img /var/lib/waydroid/rootfs/vendor
    (138954) [15:16:53] % mount -o bind /var/lib/waydroid/waydroid.prop /var/lib/waydroid/rootfs/vendor/waydroid.prop
    (138954) [15:16:53] Save config: /var/lib/waydroid/waydroid.cfg
    (138954) [15:16:53] % mount -o bind /home/user/.local/share/waydroid/data /var/lib/waydroid/data
    (138954) [15:16:53] % chmod 777 -R /dev/ashmem
    (138954) [15:16:53] % chmod 777 -R /dev/dri
    (138954) [15:16:53] % chmod 777 -R /dev/fb0
    (138954) [15:16:53] % chmod 777 -R /dev/video3
    (138954) [15:16:53] % chmod 777 -R /dev/video2
    (138954) [15:16:53] % chmod 777 -R /dev/video1
    (138954) [15:16:53] % chmod 777 -R /dev/video0
    (138954) [15:16:53] % lxc-start -P /var/lib/waydroid/lxc -F -n waydroid -- /init
    (138954) [15:16:53] New background process: pid=139115, output=background
    (138954) [15:16:53] Save session config: /var/lib/waydroid/session.cfg
    (139008) [15:17:02] waydroidusermonitor: Received transaction: 1
    (139008) [15:17:02] Android with user 0 is ready
    (146313) [15:40:48] % tail -n 60 -F /var/lib/waydroid/waydroid.log
    (146313) [15:40:48] *** output passed to waydroid stdout, not to this log ***
As a workaround one can disable NFT usage (otherwise the network will be unusable), which for some reason is enabled by default (even though there is a commit disabling it). In the case of Arch Linux, one should modify /usr/lib/waydroid/data/scripts/waydroid-net.sh and ensure there is a line:

    LXC_USE_NFT=false
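A sketch of applying the workaround, demonstrated on a stand-in copy of the script so it runs anywhere (on Arch the real file is /usr/lib/waydroid/data/scripts/waydroid-net.sh and editing it needs root):

```shell
# Stand-in for the real script, so the edit can be shown end to end:
script=/tmp/waydroid-net.sh
printf 'LXC_USE_NFT=true\n' > "$script"

# Force nft off; add the line if the variable is missing entirely:
if grep -q '^LXC_USE_NFT=' "$script"; then
    sed -i 's/^LXC_USE_NFT=.*/LXC_USE_NFT=false/' "$script"
else
    sed -i '1i LXC_USE_NFT=false' "$script"
fi

grep LXC_USE_NFT "$script"   # prints LXC_USE_NFT=false
# Afterwards, restart the container service:
#   systemctl restart waydroid-container.service
```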
No network in Waydroid: network is unreachable
1,452,633,174,000
I have a bunch of LXC containers running on a machine. All of them have their rootfs in the default location /var/lib/lxc/*/rootfs. This directory lives on a rather small partition on the host. I have a much, much bigger partition mounted on /home. Is there an option to move the backing storage to /home? Preferably per container. I know I could have done that before I had a couple of running containers (lxc-create -P PATH). But now they're up and I don't want to lose them.
rootfs is the configuration option. If the container is stopped, you can move the backing directory wherever you want and specify that in the config file:

    lxc.rootfs = /home/utsname

This is probably better than using a symlink. LXC also allows backing files: you can use block devices and raw images.

Source: https://linuxcontainers.org/lxc/manpages/man5/lxc.container.conf.5.html
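A hedged sketch of the whole move, demonstrated on scratch directories so it runs anywhere; substitute the real /var/lib/lxc/NAME and /home paths, and stop the container first with lxc-stop -n NAME:

```shell
# Scratch stand-ins for the real locations:
old=/tmp/lxc-move-demo/var/lib/lxc/box1
new=/tmp/lxc-move-demo/home/lxc/box1
mkdir -p "$old/rootfs" "$new"
touch "$old/rootfs/marker"   # pretend container contents
touch "$old/config"

# 1. Copy the tree, preserving ownership and permissions
#    (rsync -a works equally well):
cp -a "$old/rootfs" "$new/"

# 2. Point the container config at the new location:
echo "lxc.rootfs = $new/rootfs" >> "$old/config"

# 3. On a real system, start the container and, once verified, remove
#    the old rootfs:
#      lxc-start -n box1 && rm -rf /var/lib/lxc/box1/rootfs
```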
Moving LXC container's backing storage
1,364,456,540,000
I have a bridge set up between eth0 and br0. The bridge works fine, but sometimes, for unknown reasons and circumstances, I keep getting these odd vethXXXXXX interfaces added to the bridge. When this happens my LXC instances can't talk to the internet. When I run brctl delif br0 vethNbUtXk && brctl delif br0 vethYqTf0F, all is well again. Any idea where these odd-looking interfaces are coming from?

    root@ubuntuserver:/var/lib/lxc# brctl show
    bridge name     bridge id               STP enabled     interfaces
    br0             8000.080027ca5f7a       no              eth0
                                                            vethNbUtXk
                                                            vethYqTf0F
    lxcbr0          8000.000000000000       no
    virbr0          8000.000000000000       yes

Example ifconfig output when one of these odd vethXXXXXX adapters got created:

    vethPBkvAC Link encap:Ethernet  HWaddr fe:14:5c:cb:62:d6
              inet6 addr: fe80::fc14:5cff:fecb:62d6/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:3194 errors:0 dropped:0 overruns:0 frame:0
              TX packets:3214 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:309019 (309.0 KB)  TX bytes:311213 (311.2 KB)
This might shed some light: Virtual Ethernet device. Are you sure you don't have some configuration under /var/lib/lxc/ with lxc.network.type = veth? Try:

    grep -r 'veth' /var/lib/lxc/
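For illustration, this is the kind of stanza the grep would turn up (older, pre-2.0 LXC key names; written to a scratch directory so the sketch is runnable):

```shell
# A typical per-container network config that makes LXC create a veth
# pair on start and attach the host end to br0:
mkdir -p /tmp/lxc-grep-demo/box1
cat > /tmp/lxc-grep-demo/box1/config <<'EOF'
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
EOF

# Same search as above, run against the demo tree:
grep -r 'veth' /tmp/lxc-grep-demo/
```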
Extra bridge interfaces get added automatically