1,589,317,106,000
I was able to encrypt a directory so that it cannot be deleted, and I encrypted the file in the folder. But then I was able to delete the encrypted file in the encrypted folder, in which case encrypting them was pointless because it did not save my file from being deleted. To encrypt the folder, I used "sudo mount -t ecryptfs ~/file ~/file". During the process it asked me if I wanted plaintext passthrough and if I wanted to encrypt filenames (I think that's what it was), but the program would only work if I answered yes to the first and no to the second. To encrypt the file I used "gpg -c filename". There must be a way to prevent the file from being deleted, or to prevent even reaching the file, since I would think an encrypted folder would protect its contents; otherwise, what's the point? I looked for another way to encrypt and found VeraCrypt, but that is apparently for the entire drive. Is there a simple solution here, or should I look for a completely different method of encrypting the directory? Thank you.
I found the answer in a video. To lock a folder: "sudo chown root:root foldername" then "sudo chmod 700 foldername". To unlock it again: "sudo chown your_user:your_user foldername" (and adjust the mode back with chmod if needed). It's been working using these: after locking I cannot delete, move to trash, open, or copy the folder. Thanks for the help!
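For anyone wondering why this works: the commands above rely on ownership and permission bits, not on encryption. A minimal sketch of the mechanism, using a hypothetical /tmp/lockdemo directory (note that root, or anyone with sudo, bypasses permission bits entirely; only encryption protects against that):

```shell
#!/bin/sh
# Hypothetical demo directory; permission bits do the "locking" here.
mkdir -p /tmp/lockdemo/secret
echo "hello" > /tmp/lockdemo/secret/note.txt

chmod 000 /tmp/lockdemo/secret        # "lock": no read/write/traverse bits at all
cat /tmp/lockdemo/secret/note.txt 2>/dev/null || echo "locked"  # denied for non-root users

chmod 700 /tmp/lockdemo/secret        # "unlock": restore access for the owner
cat /tmp/lockdemo/secret/note.txt     # hello
```

The chown root:root step in the answer additionally makes the folder root-owned, so undoing the chmod requires sudo; without that step, the owner could simply chmod the folder back.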
How can I encrypt my folder so there is no access to its contents?
1,589,317,106,000
I am using Ubuntu and I have an encrypted flash drive which synchronizes with a folder on my desktop whenever I plug it into the computer (using rsync and a systemctl service that runs the bash script doing rsync whenever this exact device is plugged in). How could I create a passwordless encrypted flash drive that automatically unlocks once I connect it to my own desktop, so I don't have to enter the password every time?

EDIT: This is how I did it after the information I got from the accepted answer:

1) I created a random keyfile:

sudo dd if=/dev/urandom of=/etc/luks/keyfile bs=1024 count=4

2) Then I used the cryptsetup luksAddKey command to add it to my already encrypted flash drive (where I found the relevant UUID by using the cryptsetup luksUUID command):

sudo cryptsetup -v luksAddKey /dev/disk/by-uuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX /etc/luks/keyfile

3) To ensure the device uses the keyfile instead of the password on my machine, I had to create a mapper by adding the following line to /etc/crypttab, using the UUID of the device:

my_crypt_mapper /dev/disk/by-uuid/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX /etc/luks/keyfile luks
You're using cryptsetup, and it has an option to use a key file instead of a password, so just use --key-file. I don't know how your rsync & systemctl setup is arranged to do the other things automatically, but just add the cryptsetup --key-file ... decrypt command in there. You might want some other method to ensure the keyfile itself is secure, for example keeping it on an encrypted home or system drive.
Passwordless encryption of flash drive
1,589,317,106,000
Is it possible to encrypt an LVM partition after having installed an OS? I just finally managed to set up LVM, but I couldn't figure out how to get it encrypted; the installer didn't seem to provide for that to happen. To be more specific, I installed Lubuntu 18.10, which uses the Calamares installer. I've installed Lubuntu successfully, but it's not encrypted. Is it possible to use LUKS to do something at this point, or is there an alternative? EDIT: I am aiming for FDE. I want all partitions encrypted (/, home, swap).
@ThatRandomGuy, since you have already finished the installation, encrypting the / filesystem this way is no longer possible. During installation you need to tick the checkbox "Encrypt the new Lubuntu installation for security", as shown in the figure below. If you need to encrypt home, that is possible by following this article: Creating Encrypted filesystem in RedHat Enterprise Linux 7 and Variants. The guide is for RHEL-based operating systems, but the steps are exactly the same and work for Ubuntu-based operating systems as well.
How to encrypt LVM partition with LUKS after installation?
1,589,317,106,000
I have the second half of my hard drive encrypted. Now I want to encrypt the first half as well. Since it is the second partition, I can't extend the LUKS partition. Can I add the first partition to the existing LUKS container, or do I have to set up another LUKS container for this one partition?
You'll need to create another LUKS container. Your LVM volume group can span both LUKS containers. You'd simply create an LVM physical volume in the LUKS container and add it to the LVM volume group. However, that means both LUKS containers would need to be unlocked before you can bring up the LVM volume group.
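In command form, the answer's steps might look roughly like the following sketch. The device path /dev/sda1, the mapping name crypt2 and the volume group name vg0 are placeholders for your own setup, and luksFormat destroys any existing data on the partition:

```shell
# Create and unlock the second LUKS container (placeholder device/names).
sudo cryptsetup luksFormat /dev/sda1
sudo cryptsetup open /dev/sda1 crypt2

# Turn it into an LVM physical volume and add it to the existing volume group.
sudo pvcreate /dev/mapper/crypt2
sudo vgextend vg0 /dev/mapper/crypt2
```

To have both containers unlocked at boot, give each one its own entry in /etc/crypttab.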
Add new partition to LUKS+LVM
1,589,317,106,000
I have a common setup of two OpenVPN clients (A and B) that are both connected to the server S. According to this question, each client has an encrypted channel with the server, but the server strips off the encryption in order to route the traffic to another client. To my understanding, if the server gets compromised, an attacker can see the unencrypted traffic between A and B from within S. Am I right? If yes, how can I enforce some kind of end-to-end encryption between A and B, so it will be safe to use telnet (for instance) in this case?
It is in principle possible to use OpenVPN in onion-like layers. Let's assume that you are forced to keep layer 1 (L1) unchanged: S stays the L1 OpenVPN server, and A and B stay L1 clients of L1 server S. Inside L1, we add L2 as follows: A acts as the L2 OpenVPN server, and B acts as an L2 OpenVPN client of L2 server A. With this setup, assuming that A and B tunnel all the sensitive traffic through L2, a compromised L1 server S cannot eavesdrop on the L2 traffic.
OpenVPN multiple client communication with end-to-end encryption
1,589,317,106,000
I'm on Debian 9 and I want to boot from an encrypted root, using a USB key. I edited /etc/crypttab:

cifr /dev/md0 /dev/disk/by-uuid/88D9-A79B:/FILE luks,keyscript=/lib/cryptsetup/scripts/passdev

After a reboot it works fine. Only one problem: systemd gets stuck with an error and then boots, but I have to wait 1:30 minutes! journalctl returns this error:

dev-disk-by\x2duuid-88D9\x2dA79B:-FILE.device: Job dev-disk-by\x2duuid-88D9\x2dA79B:-FILE.device/start failed with result 'timeout'.

The USB key is formatted with vfat.
Solution found. The generator /lib/systemd/system-generators/systemd-cryptsetup-generator converts /etc/crypttab into systemd services, but it adds these lines, which cause the error:

After=dev-disk-by\x2duuid-88D9\x2dA79B
Requires=dev-disk-by\x2duuid-88D9\x2dA79B

So the solution is: after booting, run /lib/systemd/system-generators/systemd-cryptsetup-generator. This creates the service under /tmp; in my case it is /tmp/systemd-cryptsetup@luksmd0.service (the name depends on your crypttab entry, so in your case it can differ). Edit the file and remove the two lines, then copy it to /etc/systemd/system and reboot. After the reboot, systemctl status systemd-cryptsetup@luksmd0.service must report active.
Systemd and encrypted root on Debian
1,589,317,106,000
Security is increased if only one machine can do the decryption. How would you suggest allowing only one computer to be able to decrypt a LUKS partition? I would simply need to get a set of variables specific to my machine and add them to the passphrase, but I don't know which ones to choose. Which variables would you choose to act as a "machine ID"?
Security is increased if only one machine can do the decryption. Availability can take a serious hit if that machine goes bust, though. How would you suggest allowing only one computer to be able to decrypt a LUKS partition? I would simply need to get a set of variables specific to my machine and add them to the passphrase [...] Well, you could base it on some hardware serial numbers (run sudo dmidecode to see some), but this is less useful than you think. If the bad guys have physical access to the computer, they can make it show them the hardware serial numbers and defeat the scheme. If the bad guys don't have physical access to the computer, you can just use a key file stored on a non-encrypted partition of an internal disk, or on a thumb drive, or on an SD card, or on an optical disk, etc.
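As a sketch of the idea only (keeping in mind the caveats about physical access above), one could fold a machine identifier into the key material and hash it rather than using it raw. Here systemd's /etc/machine-id stands in for a "machine ID" (a dmidecode serial would work the same way), and the salt string is a made-up example:

```shell
#!/bin/sh
# Pick up a per-machine identifier; fall back to the hostname if absent.
machine_id=$(cat /etc/machine-id 2>/dev/null || hostname)

# Never use the raw ID directly: hash it together with a salt of your choosing.
key=$(printf '%s' "example-salt:$machine_id" | sha256sum | cut -d' ' -f1)

echo "$key"   # 64 hex characters, usable as part of a passphrase
```

Combine this with a passphrase the user actually knows; on its own, anything readable from the machine can also be read by an attacker with the same access.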
How to set up machine-specific encryption to allow only one machine to decrypt data
1,589,317,106,000
I installed FreeBSD 10.1 and chose to encrypt the ZFS root volume and Swap during install. I'm now realizing that on the remote KVM vserver I'm on the encryption makes rebooting harder and probably doesn't have the security effect I had in mind when I installed FreeBSD (all disks are decrypted as soon as I enter the passphrase during bootup). So I'm now thinking of leaving the root volume unencrypted and leave encryption to specific folders / jails and the swap. Does this make sense? However I found lots of information about how to enable encryption on FreeBSD but none on disabling it. Is this even possible? If so: How?
Unfortunately, there isn't going to be an easy way to disable the encryption because the bits are already encrypted on disk. So a re-install with the encryption disabled may be your only option.
FreeBSD 10: Disable ZFS full disk encryption
1,424,518,278,000
I cannot change my login screen background to the desktop background in my elementary OS, which is based on Ubuntu. I have decrypted my home folder according to this link: http://www.howtogeek.com/116179/how-to-disable-home-folder-encryption-after-installing-ubuntu/ and when I ran the commands to check for encryption I got the following result:

~$ ls -A /home
lost+found ramiz [note: no .ecryptfs folder there!!]
~$ sudo blkid | grep swap
/dev/sda5: UUID="00ec9e13-4cfe-4f73-905a-fef05d73caa2" TYPE="swap"

I have come to understand that this means neither my home nor my swap partition is encrypted. Still I cannot change my login screen wallpaper. I have tried the simple procedure given on the official website, i.e.:

1) Open "Files"
2) In Plank, right click on "Files" and click "Open new window as admin"
3) Insert your password
4) In the new admin window, go to /usr/share/backgrounds
5) Paste your new image file there (make sure it's a JPEG file with at least 1920x1080 resolution)
6) Go to "System Settings", "Desktop", and your new image file should be there
7) Click on it
8) Log out

This wasn't helpful, so I went for the decryption, as I had read that it was because my home folder is encrypted that I cannot change my login screen background to the wallpaper I downloaded. (I read that here: http://elementaryos.org/answers/login-screen-isnt-the-wallpaper-i-want-i-also-want-to-delete-all-the-default-wallpapers-1 ) Please help me. I installed elementary OS for a friend of mine recently and she doesn't have this problem. I could change my login screen before (have tried only once!) but cannot anymore. I always thought that the lock icon beside my partition in the GParted app means that it is encrypted; the lock icon is there. Please help!
Got it!

1) Copy all the pictures to /usr/share/backgrounds
2) Open a terminal and change directory to that folder: cd /usr/share/backgrounds
3) For every image_name.jpg that you just added, run: sudo chmod a+rw image_name.jpg
4) Exit the terminal and check System Settings -> Desktop; your custom wallpapers will be available there.

Selecting one of them also changed the login screen background in my LightDM.
cannot change my login screen background
1,424,518,278,000
I will install Ubuntu 11.04 to an encrypted volume group with the installer: https://i.sstatic.net/Skil9.png AES, Blowfish, Serpent. Which one is the fastest of these three? Or does using AES with only a 128-bit key give the best performance/speed? Does someone have a graph/statistics about them? (Or, e.g., what is the performance/speed difference between using an encrypted VG and not using encryption at all?) If there aren't any statistics about this, then how could I produce some? What benchmarks are there on the "market"? Thanks.
This table shows that AES with 128 bit keys would be faster. In fact, even with 256 bit keys it'd be faster. (of course, a comparison chart for the exact implementation would be better). That being said, since it's such a popular algorithm I'd be tempted to pick it based on popularity because it's something you're going to want around and well supported. I'm sure the other two will be too, but I'm more sure that AES will be.
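If you want numbers from your own machine rather than a published table, note that modern cryptsetup ships a benchmark subcommand (it may not exist in the cryptsetup of an 11.04-era release), and openssl can give a rough userspace comparison:

```shell
cryptsetup benchmark                     # throughput of all built-in ciphers and KDFs
cryptsetup benchmark -c aes-xts-plain64  # or a single cipher/mode
openssl speed -evp aes-128-cbc           # rough userspace AES comparison
```

On CPUs with AES-NI, the AES numbers from these tools are usually far ahead of the other ciphers, which is another practical argument for picking AES.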
What encryption to use for good performance?
1,424,518,278,000
I have a USB stick encrypted with LUKS + Ext4. I have forgotten the password... However, I know which words will be included in the password and have a list of all permutations of those words. About 10,000 permutations. Instead of me trying each and every permutation 1 by 1 manually (which will be a long, slow, and painfully tedious process), is it possible to automate this process? I know this sounds like some sort of malicious brute force attack, but it's not. If I wanted something like that, I could have easily downloaded some dodgy software from the internet. Instead, I want to use something which is safe on my computer, a script (or any safe solution) which is custom built for me specifically. Is this possible?
Well, in the most naive case you can roughly do something like:

for a in 'fo' 'foo' 'fooo'; do
  for b in 'ba' 'bar' 'baar'; do
    for c in 'bz' 'baz' 'bazz'; do
      echo -n "$a$b$c" | cryptsetup open /dev/luks luks \
        && echo "'$a$b$c' is the winner!" \
        && break 3
    done
  done
done

and it goes through all the puzzle pieces (foobarbz, foobarbaz, foobarbazz, ... etc.) in order. (If you have optional pieces, add the '' empty string. If your pieces are in random order, well, think about that yourself.)

To optimize performance, you can:

- patch cryptsetup to keep reading passphrases from stdin (lukscrackplus on GitHub is one such example, but it's dated)
- generate the complete list of words, split it into separate files, and run multiple such loops (one per core, perhaps even across multiple machines)
- compile cryptsetup with a different/faster crypto backend (e.g. nettle instead of gcrypt); the difference was huge last time I benchmarked it
- find a different implementation meant to brute-force LUKS

But it's probably pointless to optimize if you have either too few possibilities (you can go through them in a day without optimizing) or way too many (no amount of optimizing will succeed). At the same time, check:

- are you using the wrong keyboard layout?
- is the LUKS header intact? (With LUKS1 there is no way to know for sure, but if you hexdump -C it and there is no random data where it should be, there is no need to waste time.)

There's also a similar question here: https://security.stackexchange.com/q/128539 But if you're really able to narrow it down by a lot, the naive approach works too.
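A variant of the same idea first generates the candidate list and then uses --test-passphrase, so a hit does not actually map the device. The words and /dev/sdb1 are placeholders, and the cryptsetup part must run as root:

```shell
#!/bin/sh
words="alpha bravo charlie"   # your candidate words (placeholders)

# Generate every ordering of three distinct words, one candidate per line.
candidates=$(for a in $words; do for b in $words; do for c in $words; do
    [ "$a" != "$b" ] && [ "$b" != "$c" ] && [ "$a" != "$c" ] && echo "$a$b$c"
done; done; done)

echo "$candidates" | wc -l    # 6 orderings for 3 distinct words

# Try each candidate against the LUKS header (placeholder device, run as root).
for p in $candidates; do
    if printf '%s' "$p" | cryptsetup open --test-passphrase /dev/sdb1 2>/dev/null; then
        echo "match: $p"
        break
    fi
done
```

Writing the candidate list to a file first also makes it easy to split across cores or machines, as the answer suggests.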
Automate multiple password entries to decrypt a LUKS + Ext4 USB stick
1,424,518,278,000
I'd like to know if there are other tools like gpg for encrypting a file using AES encryption. I'd like the encryption to be in a standardized format so I can use a programming language to decrypt the file on the other end. I am aware of the zip file format, but thought there might be more options than this.
It may be impossible to do better than GPG's decades of secure, tested encryption, but there are some other encryption tools available; the ArchWiki has good info on them here: https://wiki.archlinux.org/index.php/Disk_encryption Though they focus on disk & folder encryption, you could encrypt a folder at a time, or treat each file as a "disk" if you wanted. Block device options are: dm-crypt (including LUKS), loop-AES, and a TrueCrypt fork like VeraCrypt. Stacked filesystem (folder) options are: eCryptfs (currently used for user/home folder encryption on Android and many Linux distributions) and EncFS.
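openssl itself is one such standalone option: its enc command writes a salted header that other languages and libraries can reproduce, as long as they use the same key-derivation settings. A minimal round trip (the -pbkdf2 flag needs OpenSSL 1.1.1 or newer; the password is inline only for demonstration):

```shell
#!/bin/sh
echo "some secret text" > /tmp/demo.txt

# Encrypt: AES-256-CBC with a random salt and PBKDF2 key derivation.
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in /tmp/demo.txt -out /tmp/demo.enc -pass pass:correcthorse

# Decrypt on the other end; any library that speaks PBKDF2 + AES-CBC can do this.
openssl enc -d -aes-256-cbc -pbkdf2 \
    -in /tmp/demo.enc -out /tmp/demo.dec -pass pass:correcthorse

cmp /tmp/demo.txt /tmp/demo.dec && echo "round trip OK"
```

For real use, pass the password via -pass file:... or an environment variable rather than on the command line, where other users can see it in the process list.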
Is there a common tool besides gpg for encrypting files in AES?
1,424,518,278,000
When a daemon is executed, is the executable copied to memory? If so, can it be copied encrypted? If not, is there a way to prevent the executable from being copied to memory? The executable is stored on an encrypted tmpfs.
When a program is executed, the necessary code pages are loaded into memory on demand. This is transparent: the kernel loads the pages when it needs them, and tries to be smart by preloading pages that are likely to be needed soon. The code has to be decrypted before it can be executed. If the code is stored on an encrypted filesystem, it is decrypted inside the filesystem driver stack, just like any other piece of data stored in a file.

It is pointless to encrypt a RAM filesystem. The key exists on the live system anyway (to decrypt the file). A subject can access the files if and only if he can access the key, so you need to do access control on the key. You might as well cut out the middleman and control access to the files.

Access control on a live system relies on permissions. Cryptography is not involved. If you don't want certain users to access a particular file, change the file's permissions accordingly. If someone has physical access to the machine, they have all the permissions they want. No amount of cryptography can change that. Cryptography protects access to offline data, which is stored separately from the key.
are daemon tmpfs executables copied unencrypted to memory upon execution? (prevent if so?)
1,424,518,278,000
If I use openssl to generate some random data (for a keyfile, for example): openssl rand -hex 2048 >/tmp/file Is this 4097 bits (or bytes?) of entropy? -rw-rw-r-- 1 username username 4097 Oct 30 20:01 /tmp/file
Is this 4097 bits (or bytes?) of entropy? Neither. Entropy is a property of how the random data was generated (see, e.g., this Crypto.SE post), not how much of it was generated. If openssl rand could generate data with x bits of entropy, that would still be x bits of entropy irrespective of whether you told it to output 1 bit or 1 TB. A detailed discussion of entropy would likely be off-topic here. Maybe ask on Cryptography SE.
Entropy: what's the difference between bits and bytes?
1,424,518,278,000
I encrypted a disk using cryptsetup. I want to be able to visualize that a known text before encrypting the disk became gibberish after encrypting. How do I do such comparison? Here's an example for the best scenario: In a decrypted disk, assume I make a text file that has the word "test string" inside it. I will somehow be able to visualize "test string" before encryption, and then after encryption, visualize that the "test string" became gibberish. I would want to use the same methods to visualize "test string" and the gibberish so that I can be sure that it's "test string" that became gibberish. If it means I have to find "test string" in hex, then so be it. I just need to be able to see that there's "test string" and then "test string" is nowhere to be found (and instead there are other gibberish). Any idea what kind of methods I should use to probe the disk to find "test string"?
For example, consider the server I work on. The hard disk has a small /boot partition, /dev/sda1, which is by necessity not encrypted, and a large encrypted partition, /dev/sda2, which hosts a LUKS container, which, when opened by cryptsetup automatically at boot after entering the passphrase, appears as /dev/mapper/Serverax. In the container there is an LVM physical volume, on which lives an LVM volume group; the volume group contains the logical volumes Root, Home, Srv and Swap.

$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 294M 0 part /boot
└─sda2 8:2 0 19.7G 0 part
  └─Serverax 252:0 0 19.7G 0 crypt
    ├─Serverax-Root 252:1 0 10.7G 0 lvm /
    ├─Serverax-Swap 252:2 0 1G 0 lvm [SWAP]
    ├─Serverax-Srv 252:3 0 6G 0 lvm /srv
    └─Serverax-Home 252:4 0 2G 0 lvm /home

To see the raw data on the disk, read some blocks directly from /dev/sda2. In the example, the skip=$((2*1024)) skips over the 2 MiB LUKS header, and lands in the LVM header:

$ sudo dd if=/dev/sda2 bs=1K count=1 skip=$((2*1024)) 2>/dev/null | hd
00000000 33 b2 f7 1b 03 ce a6 3a 87 b4 03 98 7d a7 b1 cc |3......:....}...|
00000010 1a c9 99 80 01 19 c0 db f0 54 a7 4c 1c 2b 9c ea |.........T.L.+..|
00000020 f3 84 b0 d8 0c 54 c0 fe ec c0 06 a8 8c c0 6b 10 |.....T........k.|
...
00000200 d4 0b 67 3b ba d1 21 06 58 ce 84 b4 3b 3b e0 f2 |..g;..!.X...;;..|
00000210 4d eb 99 d3 15 63 81 f3 92 b7 ff c2 17 95 ed b3 |M....c..........|
00000220 92 51 ab dc 29 84 9b 6f 68 cc a9 fe 35 cd e0 08 |.Q..)..oh...5...|
00000230 1f d1 e0 52 34 46 13 90 38 c4 3d 18 30 1a 1d c8 |...R4F..8.=.0...|
00000240 1c 05 2f 17 0b ad 39 6f 56 9c 28 71 e3 f7 78 10 |../...9oV.(q..x.|
00000250 97 09 cb 49 50 f5 b1 06 a1 8a e0 4d 7a 0e 39 94 |...IP......Mz.9.|
00000260 15 2d 05 b5 94 75 c0 a2 d1 bf 78 3d ba 30 06 61 |.-...u....x=.0.a|
00000270 e6 82 8d 4a 60 90 81 e7 0a 34 5a f8 03 fc a6 89 |...J`....4Z.....|
00000280 12 11 19 b2 2b 44 9b 0a 07 c1 40 d9 4b df bd 54 |....+D....@.K..T|
00000290 0a 40 2b 4f 1f 55 f5 e2 fa 10 41 3b f9 58 5a 2f |.@+O.U....A;.XZ/|
...

The same data, decrypted, can be read from /dev/mapper/Serverax; note that this time there is no skip=:

$ sudo dd if=/dev/mapper/Serverax bs=1K count=1 2>/dev/null | hd
00000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000200 4c 41 42 45 4c 4f 4e 45 01 00 00 00 00 00 00 00 |LABELONE........|
00000210 be af fb 35 20 00 00 00 4c 56 4d 32 20 30 30 31 |...5 ...LVM2 001|
00000220 47 41 70 58 43 62 74 55 65 6b 33 41 6b 53 54 73 |GApXCbtUek3AkSTs|
00000230 4f 6b 6a 49 49 72 6e 53 66 54 41 77 6e 31 53 6e |OkjIIrnSfTAwn1Sn|
00000240 00 00 60 ed 04 00 00 00 00 00 20 00 00 00 00 00 |..`....... .....|
00000250 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000260 00 00 00 00 00 00 00 00 00 10 00 00 00 00 00 00 |................|
00000270 00 f0 1f 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
00000280 00 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 |................|
00000290 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
*
00000400
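The same before/after check can be rehearsed at file level without touching a disk, using openssl as a stand-in for the disk encryption (file names and password are placeholders). On a real device you would instead run the grep against the decrypted mapping and then against the raw partition, e.g. grep -a 'test string' /dev/sdX2:

```shell
#!/bin/sh
printf 'prefix test string suffix\n' > /tmp/probe.txt

# Stand-in for the disk encryption step: any cipher makes the marker unfindable.
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in /tmp/probe.txt -out /tmp/probe.enc -pass pass:demo

grep -aq 'test string' /tmp/probe.txt && echo "plaintext:  marker found"
grep -aq 'test string' /tmp/probe.enc || echo "ciphertext: marker gone"
```

grep -a forces grep to treat binary data as text, so the same search works on both the readable and the encrypted copy.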
After encrypting a disk, how do you check that a known plain-text inside the disk became gibberish? [closed]
1,424,518,278,000
My system has full disk encryption except for /boot. I've set a GRUB password by following this post, but then I was able to disable it by booting into Kali Live and running:

mkdir /mnt/dev/sda2
sudo mount /dev/sda2 /mnt/dev/sda2
sudo vim /mnt/dev/sda2/grub/grub.cfg

3 commands. Easy peasy. A user will only see the GRUB menu or be asked to enter a GRUB password if they have physical access to the machine (right?). If they have physical access, they can disable GRUB. If a user doesn't have access to GRUB (i.e. they connect via ssh, or this is a VM, because ssh only starts after GRUB finishes, right?), then they'll never be stopped by it. So it seems like in the only situation where a GRUB password is used, it is also easily bypassed. If a GRUB password ever stops someone, they can disable it. So, what's the point? Are GRUB passwords as useless as I think? Do they increase security in any other way? What if we throw /boot encryption into the mix? Does that improve things? Thank you!
By that logic, why use any passwords at all? After all, one can simply move the disk from one computer to another and access anything as root. User or bootloader passwords in general are not designed to stop data access, encryption is designed to do that. GRUB passwords are not about restricting data access or modification from a booted machine. Instead, they are useful as part of a wider system of locking down a computer. For example, you can set up BIOS/UEFI passwords to prevent booting from unauthorised devices and restrict physical access to the machine. GRUB passwords add an extra layer of protection by preventing unauthorised users from modifying the boot parameters or selecting different boot entries. They are not designed to act as a wholesale method of securing a machine. In a real, locked down scenario, you would not only have a GRUB password, but would have restrictions on (for example) booting untrusted or unattested devices. In these scenarios bootloader passwords can present an actual benefit.
GRUB password seems useless, so why even bother?
1,424,518,278,000
I aim to hide data in .txt and .xml files from Linux, and keep them readable on any OS. On Apple's OS X, I used to write 'secret' data into files (actually into the resource fork of the file), outside of the content of the file. Example: a file bob.txt, when you open it with Text.app it displays "hello you", but I placed undisplayed text into bob.txt (e.g. "my_hat_is_redwine"). How do I hide text data with a Unix/Linux terminal, outside of the content? Conditions are: the file can still be opened, and zero content alteration. I have been looking at EOF (end of file), but it does not exist as a marker on Linux (EOD is an old thing, very old). I am thinking about setfattr; that's quite OK, but I feel sure there is a deeper/stronger way, i.e. editing the full bit stream of the file and adding bits between content and metadata, for instance.
What you describe on Mac OS X is just storing regular data in the data fork of a file, and "secret" data in the resource fork. Major Linux filesystems provide a more general mechanism, called extended attributes, which can be written using the setfattr command and read back using getfattr. For example:

$ echo "Hello, world" > test
$ setfattr -n user.secret -v "Not-easily viewable content goes here" test
$ cat test
Hello, world
$ getfattr -n user.secret test
# file: test
user.secret="Not-easily viewable content goes here"

Note that:
- Extended attributes are namespaced; user-defined attribute names must begin with user.
- You can store several extended attributes in parallel, e.g. user.secret1 and user.secret2.
- Not all filesystems support extended attributes: ext2/3/4, xfs and btrfs do (but they may require a mount option which might not be the default on your Linux distro); some others don't (e.g. tmpfs).
Add data to a file, but outside of its content? (steganography)
1,424,518,278,000
A system is installed with an encrypted VG. How can I find out if they used dm_crypt or luks? It's running Fedora 14, so I think it's one of those. Or are they the same thing?
I am creating a new answer because the currently accepted answer is incorrect in calling them the same. LUKS adds key management to dm-crypt; it's the Linux Unified Key Setup. Without LUKS, you can only have a single master password. LUKS allows you to have multiple keys that can decrypt the single master key that the disk is encrypted with. This allows you to rotate passwords or to give multiple administrators a key that can be revoked later if needed. Also, passwords are better protected against dictionary attacks through the use of PBKDF2, making LUKS + dm-crypt much harder to crack. @stribika is correct about using cryptsetup luksDump /dev/sdXX to correctly detect the presence of the LUKS header. Note that plain dm-crypt, by contrast, has no on-disk header recording the type of encryption used; the parameters must be supplied every time the device is opened.
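On a current system the check is a one-liner; /dev/sdXX is a placeholder and these need root:

```shell
# Exit status tells you whether a LUKS header is present.
sudo cryptsetup isLuks /dev/sdXX && echo "LUKS header present"

# Full header details: LUKS version, cipher, hash, key slots.
sudo cryptsetup luksDump /dev/sdXX

# blkid reports the type as well: prints "crypto_LUKS" for a LUKS device.
sudo blkid -o value -s TYPE /dev/sdXX
```

If none of these report LUKS but the device is demonstrably encrypted, you are looking at plain dm-crypt (or another headerless scheme).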
Is the VG encrypted with dm_crypt OR luks? How to find out?
1,424,518,278,000
I just did a fresh install of Ubuntu Server with encryption. However, from the looks of it, I am unable to access the majority of the space. I believe it's not mounted to a directory, but the device shows as in use because of the root filesystem. How can I either mount the 3.6T to /home or extend root to use the whole disk space?

$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 73M 1 loop /snap/core22/607
loop1 7:1 0 73.9M 1 loop /snap/core22/817
loop2 7:2 0 163M 1 loop /snap/lxd/24643
loop3 7:3 0 173.5M 1 loop /snap/lxd/25112
loop4 7:4 0 49.8M 1 loop /snap/snapd/18596
loop5 7:5 0 53.3M 1 loop /snap/snapd/19457
nvme0n1 259:0 0 3.6T 0 disk
├─nvme0n1p1 259:1 0 1G 0 part /boot/efi
├─nvme0n1p2 259:2 0 2G 0 part /boot
└─nvme0n1p3 259:3 0 3.6T 0 part
  └─dm_crypt-0 253:0 0 3.6T 0 crypt
    └─ubuntu--vg-ubuntu--lv 253:1 0 100G 0 lvm /

Result of trying to mount:

$ sudo mount /dev/mapper/dm_crypt-0 /mnt/Cloud/
mount: /mnt/Cloud: unknown filesystem type 'LVM2_member'.
dmesg(1) may have more information after failed mount system call.
$ sudo fdisk -l
Disk /dev/loop0: 72.99 MiB, 76537856 bytes, 149488 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop1: 73.86 MiB, 77443072 bytes, 151256 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop2: 163 MiB, 170917888 bytes, 333824 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop3: 173.46 MiB, 181882880 bytes, 355240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop4: 49.84 MiB, 52260864 bytes, 102072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop5: 53.26 MiB, 55844864 bytes, 109072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/nvme0n1: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: CT4000P3PSSD8
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 57C945BD-62AC-47B7-B0B3-2481E5CB4230

Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 2203647 2201600 1G EFI System
/dev/nvme0n1p2 2203648 6397951 4194304 2G Linux filesystem
/dev/nvme0n1p3 6397952 7814033407 7807635456 3.6T Linux filesystem

Disk /dev/mapper/dm_crypt-0: 3.64 TiB, 3997492576256 bytes, 7807602688 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
The whole 3.6T encrypted partition has been assigned to be managed by LVM, but only 100G of it has been actually allocated for use as your root partition. You can't mount the /dev/mapper/dm_crypt-0 directly, because it has been initialized as a LVM physical volume (PV for short) which acts as a container for one or more LVM logical volumes (LVs for short), each of which can contain a filesystem (or something else, like a swap partition or a raw database, if you wish). The advantage of LVM over traditional partitions for you is that it allows you to create multiple logical volumes, and still extend them without having to care about whether there is free space to the "right" of them, like with traditional partitions. You can even extend those logical volumes on-line, while the system is running. If you add another disk later, you can add it to the same LVM volume group (VG for short) with the existing one, and then use the disk space of both disks as one large pool. You could allow a filesystem to be extended beyond the limits of any single disk, if you need to. There are three commands that are useful to display LVM status and available space: sudo vgs displays your LVM volume group(s), their attributes, total size and the amount of free (unallocated) capacity, which you can use for extending existing LVs or creating new ones as you wish. sudo lvs displays the attributes and size of each LVM logical volume. sudo pvs displays information on your LVM physical volume(s): the device path, the volume group it belongs to, its attributes, size and how much of it has been allocated. (This can be important if you are planning an on-line data migration: using pvmove you can move entire filesystems from one PV to another while the filesystem is in use.) Note: by default, Ubuntu has named your initial volume group as ubuntu-vg and the logical volume that contains your root filesystem ubuntu-lv. 
There are two ways to refer to the logical volumes as devices:

- the old way (from before the 2.6.x kernels): /dev/ubuntu-vg/ubuntu-lv
- the unified Device-Mapper way: /dev/mapper/ubuntu--vg-ubuntu--lv

When using the Device-Mapper style paths, any dashes in the VG and LV names need to be doubled, as a single dash is used as a separator between the VG and LV names. This only applies when using the names as part of a /dev/mapper/... device path: when using a VG or LV name alone as a parameter to some LVM command or its option, the dash-doubling is not required.

For example, to extend your current root filesystem to 200G, you could run just one command:

    sudo lvextend -r -L 200G /dev/mapper/ubuntu--vg-ubuntu--lv

If you want to create a separate /home filesystem, you could make a new logical volume for it with:

    sudo lvcreate -n home-lv -L 1T ubuntu-vg
    sudo mkfs.ext4 /dev/mapper/ubuntu--vg-home--lv   # or whatever filesystem type you wish

Note that mounting your new filesystem to /home will be rather difficult unless you can log in directly as root, since your existing home directory will be at /home/<username>, and if you are logged in as a regular user, it will be in use. Mounting another filesystem to /home would hide your home directory under the new filesystem, temporarily preventing access to it by any new logins.

Note that while you can extend many filesystem types while the filesystem is mounted and in use, shrinking a filesystem can be more difficult, or practically impossible as in the case of XFS filesystems (so far). So, when using LVM, it can be useful to err on the low side when allocating disk space, since you can easily extend filesystems as long as you have unallocated LVM space to do so.
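The dash-doubling rule is easy to get wrong by hand. As an illustration, here is a small helper function (hypothetical, not part of LVM itself) that builds the Device-Mapper path from a VG name and an LV name:

```shell
# Hypothetical helper: build the /dev/mapper path for a VG/LV pair.
# Device-Mapper joins the two names with a single '-', so any literal
# dash inside either name has to be doubled.
dm_path() {
    vg=$(printf '%s' "$1" | sed 's/-/--/g')
    lv=$(printf '%s' "$2" | sed 's/-/--/g')
    printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

dm_path ubuntu-vg ubuntu-lv   # prints /dev/mapper/ubuntu--vg-ubuntu--lv
```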
Not understanding why I can't access dm_crypt-0 -> LVM2_member
1,424,518,278,000
I am not new to Linux, but I am new to Linux Mint, since I switched from Ubuntu for reasons out of the scope of this question (snapd; an update borked my computer).

I selected FDE (Full Disk Encryption) during the graphical installation process. I then saw the option to encrypt the home folder and I clicked that as well. I then remembered from the Ubuntu FDE documentation that only the /home partition is encrypted. However, the Mint documentation is much less clear in that regard:

    If you are new to Linux use home directory encryption instead (you can select it later during the installation). (source)

When I checked the Known Issues page, the wording seemed to imply that both were separate:

    Benchmarks have demonstrated that, in most cases, home directory encryption is slower than full disk encryption. (source)

Can they both be enabled at once, or will encrypting the home folder remove all other FDE on the system?
I have answered the question myself. I emailed Clement (project leader of Mint), and this was his response:

    You can have one or the other, or both, or none at all.

    FDE is faster and safer (it doesn't just encrypt your home, but also the entire HDD including swap, temporary files which might be left on the HDD..etc.). HDE is more convenient since it's tied to your login password and doesn't require entering an extra password. It provided additional security in the past since it unmounted the decrypted home on logout, but this is no longer the case, so if you're using FDE, you don't really need HDE anymore.

    In terms of performance, on modern specs, both are pretty good and not noticeable.

    Regards,
    Clement Lefebvre
    Linux Mint
(Linux Mint) Does selecting "Encrypt Home Folder" after you chose Full Disk Encryption only encrypt the home folder?
1,424,518,278,000
I'm running Arch Linux. I want to clone a 2-disk encrypted logical volume in a single volume group (LUKS on LVM). There is a slight catch: I want to swap some of the drives.

I have:

    VG1: LV: PV(OldDrive1) + PV(OldDrive2)

    sda (OldDrive1)
      -vg1-luks_encrypted_lv
    sdb (OldDrive2)
      -vg1-luks_encrypted_lv

I have two other drives (NewDrive1 and NewDrive2). I want to create a VG2 that is a clone of VG1. However, I want to swap some drives around. So I want:

    VG1: LV: PV(OldDrive1) + PV(NewDrive1)
    VG2: LV: PV(OldDrive2) + PV(NewDrive2)

    sda (OldDrive1)
      -vg1-luks_encrypted_lv
    sdb (OldDrive2)
      -vg2-cloned_luks_encrypted_lv
    sdc (NewDrive1)
      -vg1-luks_encrypted_lv
    sdd (NewDrive2)
      -vg2-cloned_luks_encrypted_lv

My current plan is to clone each drive. I was previously thinking about using dd, but after some reading maybe I need to use pvmove?

    OldDrive1 -> NewDrive1
    OldDrive2 -> NewDrive2

Could I then just swap the physical drives in LVM, because the drives are bit-by-bit clones? I'm worried I'm missing something. How would I incorporate the new drives into the LV? I would appreciate some advice, because I don't want to lose any data. Thanks.

Edit: @telcoM's answer worked very well. Thank you very much. I used the on-line method. If anyone wants to do something similar, there are a few things worth noting.

On step 7: the lvconvert -m default is now raid1, not LVM's own mirror type. Read man lvconvert for more details. Since I wanted to immediately split the mirror, it was much easier to just use LVM's legacy mirror type with the mirror log stored in memory:

    lvconvert --type mirror -m +1 --mirrorlog core vg1/luks_encrypted_lv OldDrive2 NewDrive2

Just remember that --mirrorlog core puts the mirror log in memory, so don't turn off your computer before running lvconvert --splitmirrors or you will lose your mirror log.

On step 9: before you do vgsplit, you need to unmount the filesystem and deactivate the logical volume.
On step 11: most people probably realize this, but you need to assign a value to $uuid (e.g. uuid=$(uuidgen)) before you run:

    cryptsetup luksUUID --uuid "$uuid" /dev/mapper/VG2-LVx
"Need" is a strong word - there is more than one way to achieve what you want. With pvmove, you could do it on-line, while the encrypted LV is in use.

1. pvcreate NewDrive1
2. vgextend VG1 NewDrive1
3. pvmove OldDrive2 (means effectively: "move any LVM-allocated extents from OldDrive2 to any other drive(s) in VG1 so that OldDrive2 becomes completely unallocated, if possible". This will take some time: you might want to run it within a screen/tmux session with the --verbose option.)
4. Use pvs or pvdisplay OldDrive2 to make sure OldDrive2 is now completely unallocated.
5. pvcreate NewDrive2
6. vgextend VG1 NewDrive2
7. For every LV in VG1: lvconvert -m +1 VG1/LVx OldDrive2 NewDrive2 ("create a mirror from VG1/LVx, allocating space for the mirror from OldDrive2 and NewDrive2"). If there is no space for an on-disk mirror log, you might need to use the --mirrorlog core option here.
8. Once the mirrors are in sync, for every LV in VG1: lvconvert --splitmirrors 1 --name LVcopyx VG1/LVx OldDrive2 NewDrive2 ("split off one mirror of LVx located on OldDrive2 and/or NewDrive2 and name it LVcopyx to avoid a name conflict").
9. vgsplit VG1 VG2 OldDrive2 NewDrive2 ("split OldDrive2 and NewDrive2 off VG1, taking their LVs with them, and name the resulting new VG as VG2").
10. For every LV in VG2: lvrename VG2 LVcopyx LVx to restore the original LV name(s), now that the copies have been separated into their own VG and there is no more conflict.

You now have a new VG2 that contains copies of VG1's LVs as they existed at the point of splitting the LV mirrors at step 8.

11. Before actually using VG2, you'll need vgchange -ay VG2, and then cryptsetup luksUUID --uuid "$(uuidgen)" /dev/mapper/VG2-LVx to give it a unique UUID distinct from its VG1 counterpart. Once you have unlocked the encryption, you should also give the filesystem inside it a new UUID.
For BtrFS, this is vital (btrfstune -u /dev/mapper/VG2-LVx-crypt); for other filesystems it is essentially just a convenience so that UUID-based mounting will work.

If you can have the VG off-line and can unplug/re-plug disks, you could also:

1. Unmount, cryptsetup luksClose, and de-activate the VG (vgchange -an VG1). To avoid unwanted auto-activation at boot or hot-plug time, also mark it as exported (vgexport VG1).
2. Clone the drives as you planned.
3. Unplug drives so that the system will only see OldDrive2 and NewDrive2. If your hardware allows doing it hot, use echo 1 > /sys/block/<device name>/device/delete for a graceful hot-unplug.
4. Boot the system, or run vgscan after a graceful hot-unplug. Then import and rename the VG: vgimport VG1, then vgrename VG1 VG2. Use vgchange --uuid VG2 to give the new VG2 a new identity distinct from the old VG1, and use pvchange --uuid OldDrive2 and pvchange --uuid NewDrive2 to do the same at the PV level. After importing and renaming, remember that you'll need to activate the VG before you can mount it or do any other operations on it: vgchange -ay VG2.
5. After activating the VG, use cryptsetup luksUUID --uuid="$(uuidgen)" /dev/mapper/VG2-LVx to give the LUKS container a distinct new identity, and after unlocking the encryption, use a filesystem-specific tool to do the same at the filesystem level too (this is especially important for BtrFS: btrfstune -u /dev/mapper/VG2-LVx-crypt).
6. Now you can plug OldDrive1 and NewDrive1 back in (run vgscan if you hot-plug), vgimport VG1 "again", and activate it with vgchange -ay VG1.

You now have two fully separate VGs you can use as you see fit.
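The re-identification of a cloned LV (step 11 of the on-line variant, step 5 of the off-line one) can be collected in one place. This is only a sketch under assumptions: root access, an ext4 filesystem inside the container, and the function name relabel_clone plus the use of uuidgen and tune2fs are mine, not from the steps above.

```shell
# Sketch: give a cloned LUKS container and the filesystem inside it
# fresh UUIDs, so they can coexist with the originals. Requires root.
relabel_clone() {
    dev="$1"                                  # e.g. /dev/mapper/VG2-LVx
    name="${dev##*/}-crypt"
    cryptsetup -q luksUUID --uuid "$(uuidgen)" "$dev"
    cryptsetup open "$dev" "$name"            # unlock to reach the filesystem
    tune2fs -U random "/dev/mapper/$name"     # ext4; use btrfstune -u for BtrFS
    cryptsetup close "$name"
}
```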
LVM: How to clone multi-disk encrypted logical volume?
1,424,518,278,000
I've set up a server with LUKS devices (not used for the root partition); they are listed in /etc/crypttab this way:

    # <target name> <source device> <key file> <options>
    luks_device_1 /dev/mapper/vg-lv_1 none luks
    ...

I've also set up a tang server and bound the devices to tang using the command:

    clevis luks bind -d /dev/mapper/vg-lv_1 tang '{"url":"http://svr"}'

Finally, I enabled the units clevis-luks-askpass.path and clevis-luks-askpass.service to have the automatic unlocking mechanism working at boot.

However, the devices are not unlocked at boot; the password is asked for on the console, unless I add the string _netdev in the options section of /etc/crypttab. But I'm not really fond of that, because _netdev is supposed to be used for network devices.

Did I miss something?
Actually, according to the manpage clevis-luks-unlockers(7), having the option _netdev in /etc/crypttab is necessary to trigger the automatic unlocking:

    After a reboot, Clevis will attempt to unlock all _netdev devices listed in /etc/crypttab when systemd prompts for their passwords. This implies that systemd support for _netdev is required.
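So, with the setup from the question, the working crypttab entry looks like this (device name as in the question; the key point is the added _netdev option):

```
# <target name>   <source device>        <key file>   <options>
luks_device_1     /dev/mapper/vg-lv_1    none         luks,_netdev
```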
tang / clevis: automatic unlocking for luks device not triggered unless defined as _netdev
1,424,518,278,000
openssl this way can only encrypt small files:

    openssl rsautl -encrypt -pubin -inkey public_key.pem -in secret.txt -out secret.enc

openssl as I found suggested here throws an error:

    openssl smime -encrypt -aes-256-cbc -binary -in secret.txt -outform DER -out secret.txt.der public_key.pem

Not that you're supposed to be using smime, because that's for mail, but still, see the error:

    unable to load certificate
    140222726453056:error:0909006C:PEM routines:get_name:no start line:crypto/pem/pem_lib.c:745:Expecting: TRUSTED CERTIFICATE

Why is openssl complaining about a trusted certificate? That's my business. I know it means that it wants the PEM to be of a different format, rather than taking it personally.

What do I want? I want to use bash to encrypt any file with strong encryption using a public PEM file (or other public key) so that my project counterpart can use their private key to decrypt it. It would be awesome if I could use PowerShell-native tools as well, but that's a big ask and I just thought I'd throw it in, in the hope of avoiding Git Bash for Windows-homed recipients.

I could use gpg, but we do not want to introduce generating password-generated keys.
Well, the first problem is that you really don't want to encrypt a large file with an asymmetric cipher. Nobody does this, as asymmetric ciphers are slow, and limited in message size in any case. What you do instead is create a session key for a symmetric cipher, encrypt the session key with the asymmetric cipher, encrypt the large file with the symmetric cipher, and then package the two together. This is what both SSL and PGP/GPG do.

The rsautl command you found is a low-level tool for doing the asymmetric encoding; you would need several other commands to do the full process. There are two basic solutions:

- Sign your public key with your private key to create a certificate. This should let openssl smime work.
- Use GPG. This is what it is designed for, and I recommend it for what you've described. Probably gpg -se, or gpg -sea if you are emailing. (Don't use gpg -c, which I think is your "password generated keys".)

Using either SSL or GPG, both parties must generate public and private keys, sign the public key with the private key, and send the signed public key to the other party. Generally, the only passwords involved are for encrypting the private key, so that somebody who can see your files can't decrypt your data.

The major difference between SSL and PGP/GPG is the certification model. SSL uses "Certificate Authorities". PGP/GPG uses a chain of certificates, where Anne signs Bob's key, and Bob signs Carol's key, and thus Anne can trust that Carol's key belongs to Carol.

Finally, if you are emailing the files, you might just look to see if you can get either S/MIME or PGP integration in your email client.
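If GPG really is off the table, the session-key approach described above can be scripted with openssl alone. This is a hedged sketch, not a vetted protocol: the file names are illustrative, and the recipient's key pair is generated in place only to make the round trip self-contained (normally you would hold just their public key).

```shell
workdir=$(mktemp -d) && cd "$workdir"

# Stand-in for the recipient's key pair (you would only have public.pem)
openssl genrsa -out private.pem 2048 2>/dev/null
openssl rsa -in private.pem -pubout -out public.pem 2>/dev/null

printf 'a large secret\n' > secret.txt

# Sender: random session key, AES bulk encryption, RSA-wrap the key
openssl rand -hex 32 > session.key
openssl enc -aes-256-cbc -pbkdf2 -pass file:session.key \
        -in secret.txt -out secret.enc
openssl pkeyutl -encrypt -pubin -inkey public.pem \
        -in session.key -out session.key.enc

# Recipient: unwrap the session key, then decrypt the bulk data
openssl pkeyutl -decrypt -inkey private.pem \
        -in session.key.enc -out session.key.dec
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:session.key.dec \
        -in secret.enc -out secret.dec

cmp -s secret.txt secret.dec && echo 'round trip OK'
```

You would send secret.enc together with session.key.enc; only the holder of private.pem can recover the session key and hence the file.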
How can I use an unsigned public key to encrypt a large file using openssl?
1,424,518,278,000
I'm new to GPG and I've learned some basic GPG commands recently. I just found that using the same GPG command to encrypt the same data always generates a different result. Below is an example. Is this normal?

I took a quick look at the GPG man page and found there is a parameter --faked-system-time, which seems to indicate that GPG encryption uses the system time as an input. Is this true?

    root@EPC:~# echo xxx | gpg -ear [email protected]
    -----BEGIN PGP MESSAGE-----

    hQIMAzha1PwyPp1PAQ//byuSt9hlwYAZAjXxC/eychTXbEvA8HnuCDaclczOVd3r
    FSrKfMPAcyWE+XLWZfQrKJ0gKQIF2lJNFHCXieDYi2AA0pOUKzatUvlccJV7BKk2
    mY2LmH6R7rNh5Us+Es/xut03TWmjVFzXtsHRQBazVpn19PyB7ybusWTv05POmLqD
    nZe2l0uhjfQxtBG/a5leNF8cg5vhun7i0tHL/6y4yYCjEBO8zCl4lmwTaLwfVAPM
    VoLCtX+3UBmolRsA3zod/Fbo9bFAH7lT1w1nTd0Oq6jnXNQZib81pWtfgCZ1WI9S
    XfEptr8TtOZenEgY8azXIzORhiV1rXJkmqS1ofIhAn4FhvEljQN3h0buml3pfVyn
    Q9b/toEBLNS/Vin3+NQP/wzp3iB0ykRrTSVT0BZsfE52do5tqtbSPFPkZoF1ncof
    u8kRH5ccDXAT0tUqZnyfkvasadtr05yjV+W0A6rwjQ4TE7AdTpICiKrcrLLDd47s
    42a4bbm0BVd63uHG7fwBXZ7lsdG+3Mjs+WwEDURVAGUc0qv3dGQf3m7+P3vivTsv
    dT2I65c0tlyOMjOqSvUzBia153gRz6aNuf0YlvD1l6ULiR7pqkG9Zu+EWXdDWsXE
    xnhZXxX9Y9kxz3GLtaWmTOFJeWlfzJ07vtE5I2qSZXT8krsc06WthdzHPOOnRWbS
    QAH+iINECeVZCS88z41su7kHeDaPDHSTS+YRToLL9K+1Y4jSrQ0aCK7qx1reHKC6
    NqPDBzVeMHzXtWIylJwgM3E=
    =d1Ye
    -----END PGP MESSAGE-----
    root@EPC:~# echo xxx | gpg -ear [email protected]
    -----BEGIN PGP MESSAGE-----

    hQIMAzha1PwyPp1PARAAg+cR4+vLw2uFKWUUuf7j8Yf3WZU7v3Xxw+gT5F/yo3fo
    dViI7dwW/Q5mq32HSiUxDqDsEULObcyQFr6/B2by+9t6/4SHf2UIYMFd5nZvprIt
    gcQswXEVw6BLpmutDgg0/letxlFtSON70d8aB/OqoaL3OxQX3b6prw3ZCv/UcoYa
    BLXm8W24F2donPHos4BYoSWNFKzNq/Z/6LnTkVaF1o8Z7OSIKCnhV3t4vwnaC/os
    q34AA65f/lrDTgudMo7UoznLlDLK+VeBJaU9772z76uZnm+LVEDPt695kKBlpmBt
    qQNzBY8ZL2sUQ2aq2RqjWA12dh06r3P8k45RyMKvn6Iubf0rfUK9kHUknQRM7q3A
    gcQBAO2yR97ZcCAffPLjk3ZGcJ7eh92PRET8d8l2/lAKDQ24GdJENcgHBXhGu8VV
    FivA9AkGThzRxtghSV1ZuX3yi7MbOSjdrs0hx82ZH73kj34+GYSkz8Q4sHK3KnM8
    AZ4MUeTdM9eQET8fQVziLQU+PRxFqNo3DKb0uoqI41/klntc4VgiUOMYsqiESJBs
    LF0AspsRQFWd6hO9y314tsetoIcK3xPkLq9kX2T0qYuIdgvLoe9i+kE2QxdsamBH
    0Rlrm1AcRmIKBkjClGSbMsHsa6uJZD+zfDOpmWa/zgtxWe9tdjH3Er5NlR6VRDLS
    QAFqDVk3yWTnq2OZotpggz+OhUz0aGKiPWhpvpdQS4yQGfiE0a94drOBEnEpuPLJ
    kb0j1SLja8v09v4rLmZhhBk=
    =0COd
    -----END PGP MESSAGE-----
    root@EPC:~#
Yes, GnuPG stores the time in the encrypted data. To see this, run

    echo xxx | gpg -ear [email protected] > encryptedexample

(using an id for which you have a private key); then

    gpg --list-packets --verbose encryptedexample

will end with something like

    :literal data packet:
        mode b (62), created 1599715776, name="",
        raw data: 4 bytes

where 1599715776 is the timestamp of the packet, in seconds since the epoch.
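GnuPG also generates a fresh random session key for every run, so identical input and recipient still give a new ciphertext each time. The same behaviour is easy to demonstrate with openssl enc, used here purely as an analogy (its per-run randomness comes from the salt rather than a session key):

```shell
a=$(echo xxx | openssl enc -aes-256-cbc -pbkdf2 -pass pass:secret -base64)
b=$(echo xxx | openssl enc -aes-256-cbc -pbkdf2 -pass pass:secret -base64)

# Same plaintext, same password -- a different ciphertext every time
[ "$a" != "$b" ] && echo 'ciphertexts differ'

# Both still decrypt back to the original input
echo "$a" | openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:secret -base64
```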
Same gpg command applied on the same string but got different result
1,424,518,278,000
I have an encrypted external disk on a Linux server. On the server, I can decrypt it locally with something like:

    cryptsetup -d keyfile luksOpen /dev/sdx1 /mnt/decrypted

but I prefer to avoid doing that on the server side. I want to access the server (via ssh/sshfs) and only decrypt the data remotely, on my client machine.

To access and decrypt the data remotely, I have to:

1. mount the encrypted /dev/sdx1 locally on the server (without decrypting it!!) to /mnt/encrypted
2. mount /mnt/encrypted via sshfs on a client machine (then use luksOpen to decrypt)

How can I do step 1 without decrypting the data?

Thanks, Chris

PS: maybe I should just use an encrypted container (a file on the server's file system) and not a whole partition? This way I could mount the folder containing the encrypted container/file remotely via sshfs (and only decrypt it on the client machine).
I can mount and decrypt LUKS remotely (via sshfs) if I use a LUKS container (and not a LUKS partition) to hold the encrypted data.

I just had to create a LUKS container (a file that internally holds the encrypted filesystem). This file is a normal file on a mounted partition, so it can be mounted remotely via sshfs and decrypted later (via loop device -> mapper device -> mount).

I have tested this and I can confirm it works.
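For anyone wanting to reproduce this, the container creation can be sketched roughly as follows (requires root and cryptsetup; the file and mapping names are illustrative, so treat this as an outline rather than a tested recipe):

```shell
# Sketch: create a file-backed LUKS container on the server.
make_luks_container() {
    img="$1"; size_mb="$2"; name="$3"
    dd if=/dev/zero of="$img" bs=1M count="$size_mb"
    cryptsetup luksFormat "$img"       # prompts for the passphrase
    cryptsetup open "$img" "$name"     # sets up a loop device automatically
    mkfs.ext4 "/dev/mapper/$name"
    cryptsetup close "$name"
}

# Client side, after `sshfs server:/srv/vault /mnt/sshfs`:
#   cryptsetup open /mnt/sshfs/vault.img vault
#   mount /dev/mapper/vault /mnt/decrypted
```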
mount crypto_LUKS partition without decrypting (locally)
1,424,518,278,000
I have a file that is decrypted with a command similar to this:

    gpg --batch --yes -q -d --passphrase-fd 0 -o "/var/file.out" "/var/file.gpg" < /var/secret.key

I want to change the content of /var/file.gpg, but the decryption should continue to work as before. Any idea how to encrypt it? (I was able to find some examples with passphrases (which I suppose is what the key file is used for) and with sender and receiver (which I suppose I don't need), and they were not working so far.)
OK, I've found what I needed:

    passphrase=$(head -n 1 /var/secret.key)
    gpg --symmetric --batch --yes --passphrase "$passphrase" --output some.gpg toEncrypt.txt

(Quoting "$passphrase" keeps the command working if the passphrase contains spaces.)
How to encrypt file with gpg and a passphrase only?
1,424,518,278,000
I apologise if this is not the correct community. I have a string that I believe was encrypted using an RC2 cipher. I know the secret key and the IV, but I am struggling to decrypt it using OpenSSL. I know what the plain text should be.

    $ echo MY_CIPHER_TEXT | openssl enc -d -base64 -rc2 -iv MY_IV

I am prompted for the decryption password, which I enter, but I always receive the response:

    bad magic number

I believe this means openssl does not recognise MY_CIPHER_TEXT as ciphered text, but I am struggling to understand why. Can someone help explain why I am getting the "bad magic number" response?

    MY_CIPHER_TEXT = nKZQD6RKk9ozeGV5WOMVL9TDZTgg9mOZjDpBDqIocR8OGC+WcB4xAwDx7XTaJNv9v+Y3sEzNphtET6sXxBd0e/0Oh6g2d0LrKls2BFHGbaMynEVW2xy4xLP40se55zdawVLGImSxgiBtf9unfIJYN4EpdPlMiiB2TuvyEoUUtqQ=
    MY_IV = jqn76XOl4To=
Knowing the algorithm RC2 isn't enough; you also need to match the mode of operation, and for some modes the padding scheme. The OpenSSL command line (and for the most part the EVP API as well) defaults to CBC mode and 'PKCS5' (technically PKCS7) padding, which may or may not be correct.

openssl enc by default does password-based encryption and decryption, which means the actual key and IV (except for ECB, which has no IV) used for the cipher are derived by a hashing process called a Password-Based Key Derivation Function (PBKDF) -- and a nonstandard one to boot; any argument you give as -iv is ignored -- which is good, because the argument you gave is invalid anyway, see below. The OpenSSL PBKDF (like other, better ones) uses a random 'salt' which must be stored in an OpenSSL-specific format at the beginning of the ciphertext, and the lack of that salt is causing your error message bad magic number. For more details see https://crypto.stackexchange.com/questions/3298/is-there-a-standard-for-openssl-interoperable-aes-encryption/35614#35614 .

Since you have the key, NOT a password, and the IV, convert them both to hex (not base64) and use:

    openssl enc -base64 -d -rc2[-mode] -K $key_in_hex -iv $iv_in_hex
    # note that's -K uppercase not -k lowercase
    # you can use -a as a synonym for -base64
    # For a block mode like CBC if standard PKCS5/7 padding wasn't used
    # add -nopad and handle the last few bytes manually as needed.
    # If your input is more than 76 chars per line (as your Q showed)
    # and OpenSSL version before 1.1.0 you also need -A (uppercase).

There are many ways to convert base64 to hex, but a convenient one is:

    somevar=$( echo some_base64 | openssl base64 -d | xxd -p )
    # xxd -p outputs only the hex with no labels or ASCII etc
    # and thus is suitable as an argument to openssl enc
    # without any processing by tools like sed, tr, awk
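The -K/-iv invocation can be checked end to end. Many current OpenSSL builds ship with RC2 disabled, so this sketch substitutes AES-128-CBC — the command shape is identical apart from the cipher name — and also shows the question's base64 IV converted to hex (od is used instead of xxd, which is not always installed):

```shell
# The question's IV, converted from base64 to hex (8 bytes)
iv_hex=$(echo 'jqn76XOl4To=' | openssl base64 -d | od -An -tx1 | tr -d ' \n')
echo "$iv_hex"    # prints 8ea9fbe973a5e13a

# Round trip with an explicit key and IV (AES-128 as a stand-in for RC2)
key=000102030405060708090a0b0c0d0e0f
iv=0f0e0d0c0b0a09080706050403020100
ct=$(echo 'hello' | openssl enc -aes-128-cbc -K "$key" -iv "$iv" -base64)
echo "$ct" | openssl enc -d -aes-128-cbc -K "$key" -iv "$iv" -base64   # prints hello
```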
OpenSSL Usage to decrypt a string
1,424,518,278,000
I wanted to create a bootable USB drive (dd bs=4M if=input.iso of=/dev/sdc), but sdc is my hard disk, with two partitions: 1) plain ext4 and 2) encrypted LUKS. After this action, I got Windows installed on my hard disk. How can I recover the encrypted partition after running the dd command?
It depends on whether you have overwritten the beginning of the encrypted partition. The beginning of the encrypted partition is where the key is stored¹. If you've lost that, the data is undecipherable, and the only solution is to restore from a backup.

If you only overwrote the first 4MB of the disk, and if the non-encrypted partition was before the encrypted partition, then you've lost the non-encrypted partition but not the encrypted partition. (You may be able to recover some files from the non-encrypted partition even if the beginning has been overwritten, but don't get your hopes up: it's unreliable.)

If the encrypted partition is intact, all you need to do is find where it started. When you overwrote the beginning of the disk, that overwrote the partition table, which indicates where partitions are located. Get Testdisk and ask it to locate partitions — it looks for magic values that indicate the beginning of a filesystem or other volume types, including LUKS volumes. Provided that the partition is intact, Testdisk should find it and you should be able to recover it.

However, you mention installing Windows; that is likely to have overwritten the encrypted partition as well. If you've installed Windows, just write off the data on the disk and restore from backup.

¹ The encryption key is not directly derived from your password. Rather, the encryption key is stored at the beginning of the volume, itself encrypted with a key derived from the password. This allows having multiple passwords (keep multiple copies of the encrypted key, each encrypted with a different password) and changing the password without reencrypting the whole partition (just reencrypt the key slot).
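As an aside on how tools like Testdisk locate a LUKS volume: every LUKS header starts with the magic bytes LUKS\xba\xbe, so even a raw byte search can reveal the lost partition's offset. A self-contained illustration with a synthetic image (no real disk involved; the offset is arbitrary):

```shell
# Build a fake 2 MiB "disk" with the LUKS magic planted at offset 1 MiB
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=2 2>/dev/null
printf 'LUKS\272\276' | dd of="$img" bs=1 seek=$((1024*1024)) conv=notrunc 2>/dev/null

# grep -abo reports the byte offset of every match
# ('\272\276' is octal for the 0xba 0xbe magic bytes)
LC_ALL=C grep -abo "$(printf 'LUKS\272\276')" "$img" | cut -d: -f1   # prints 1048576
```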
How to recover encrypted partition after dd command [closed]
1,369,798,071,000
I have found a few old posts claiming that tmpfs can execute in place. Is this true? If so, how? If not, is there a ram drive alternative? Can this be done with a ram drive that is encrypted? If so, how?
You're going to need some very specialized hardware to do what you're trying to do. Here are the constraints:

- The program must be in RAM, because that's where the CPU can find it. It doesn't matter how it got there.
- The program must not be in RAM unencrypted.

I don't know where you want to store the encryption key. Let's assume it's stored in a TPM module, because I'm sure you don't want to store it in RAM. Therefore, to execute your program, the CPU must ask the TPM module to decrypt every single instruction it reads. This is not something you can do purely in software, unless maybe you have explicit control over your CPU's cache... which on most CPUs, you don't.

For all practical purposes, you're going to need an unencrypted copy in RAM, even if an encrypted copy is in RAM alongside it.
Execute in Place an encrypted ram drive
1,369,798,071,000
If I encrypt a ZFS disk that is in a mirrored or RAID-Z (raidz, raidz1, raidz2, raidz3) configuration, what happens if a disk fails? Can I still access the data if I replace an HDD?
I can't answer this with specific regard to Linux, but assuming you're talking about either full-disk or full-partition encryption, my experience with ZFS on FreeBSD suggests that it won't matter.

As long as ZFS can import the pool, you'll be able to remove a failed drive, provision a new drive with or without full-(drive|partition) encryption, and resilver the existing functional devices (encrypted or not) onto the newly-provisioned device (encrypted or not). All that matters is that the redundancy of the pool is sufficient that you can do a zpool import -N and successfully import the (degraded) pool, so that you can remove and replace the failed device.

    # zpool status tank
      pool: tank
     state: ONLINE
      scan: scrub repaired 0B in 01:51:20 with 0 errors on Mon Jul 26 05:11:07 2021
    config:

            NAME            STATE     READ WRITE CKSUM
            tank            ONLINE       0     0     0
              mirror-0      ONLINE       0     0     0
                ada1p3.eli  ONLINE       0     0     0
                ada0p3      ONLINE       0     0     0

    errors: No known data errors

Here device /dev/ada0p3 is an unencrypted partition, while ada1p3 is an encrypted partition (and ada1p3.eli is the decrypted handle for it). There's no real reason for this setup on my machine; it's just the end result of when I was futzing around playing with booting from a full-disk-encrypted pool.
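The replacement workflow itself is short. A rough sketch (the function and device names are hypothetical; on Linux the provisioning step would use cryptsetup instead of FreeBSD's geli):

```shell
# Sketch: swap a failed member of an encrypted mirror for a freshly
# provisioned encrypted device, then let ZFS resilver. Requires root.
replace_failed_member() {
    pool="$1"; failed="$2"; newdev="$3"
    zpool offline "$pool" "$failed"
    # ... partition and encrypt "$newdev" here (geli init + geli attach
    #     on FreeBSD, cryptsetup luksFormat + open on Linux) ...
    zpool replace "$pool" "$failed" "$newdev"
    zpool status "$pool"    # watch the resilver progress
}
```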
encrypted ZFS disk failure
1,369,798,071,000
I am testing adding a user to my database with a bash script. The code for my bash script is:

    mysql -u root <<MYSQL_SCRIPT
    USE mail_server;
    INSERT INTO mail_server.virtual_users (`id`, `domain_id`, `password` , `email`) VALUES ('1', '1', ENCRYPT('test@test', CONCAT('$6$', SUBSTRING(SHA(RAND()), -16))), '[email protected]');
    MYSQL_SCRIPT

When I run the script, the user is saved, but the password is stored in the database as '0 *'! All my users get the password '0 *'.

I hope someone can help me.
Your problem comes from the fact that within a here-document (the <<SOMETHING thing), $, \ and ` have a special meaning. To avoid this, you should switch to a version of the here-document where these characters are no longer special, by quoting any part of the first delimiter. Examples are any of these:

    mysql -u root <<\MYSQL_SCRIPT
    MYSQL_SCRIPT

    mysql -u root <<'M'YSQL_SCRIPT
    MYSQL_SCRIPT

    mysql -u root <<MYSQL_SCRIP\T
    MYSQL_SCRIPT

    mysql -u root <<MY"SQL_SC"RIPT
    MYSQL_SCRIPT

    mysql -u root <<'MYSQL_SCRIPT'
    MYSQL_SCRIPT
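A quick, self-contained way to see the difference the quoting makes (cat stands in for mysql here):

```shell
# Unquoted delimiter: $, ` and \ are processed by the shell
cat <<DEMO
expanded: $((6 * 7))
DEMO

# Quoted delimiter: everything is passed through literally,
# which is what the $6$ in the SQL needs
cat <<'DEMO'
literal: $((6 * 7))
DEMO
```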
encryption password by script failed
1,369,798,071,000
I have read "`cryptsetup luksOpen <device> <name>` fails to set up the specified name mapping" and https://www.saout.de/pipermail/dm-crypt/2014-August/004272.html, and tried:

    cryptsetup open --type luks <device> <dmname> --key-file /root/luks.key

Still getting error 22.

    cryptsetup luksFormat <device> --key-file /root/luks.key -q

outputs "command successful". I followed the steps here: https://gist.github.com/huyanhvn/1109822a989914ecb730383fa0f9cfad

Created the key with:

    openssl genrsa -out /root/luks.key 4096
    chmod 400 /root/luks.key

    $ sudo dmsetup targets
    striped          v1.6.1
    linear           v1.3.1
    error            v1.5.1

Edit 1

Realised dm_crypt is not loaded, so did:

    $ modprobe dm_crypt

To check:

    $ lsmod | grep -i dm_mod
    $ which cryptsetup

Also checked:

    $ blkid /dev/data
    /dev/data: UUID="xxxxxxxxxxxx" TYPE="crypto_LUKS"

Edit 2

More missing modules:

    modprobe aes_generic
    modprobe xts

Kernel:

    $ uname -r
    4.9.0-12-amd64

OS is Debian Stretch, and it's an Azure-provided image; I'm not sure if they have patched anything related to this.
It was a naming conflict: I already had /dev/mapper/data from the previous testing, so I had to test with another name.

    cryptsetup open --type luks /dev/data new_name   # 1st time: success
    cryptsetup open --type luks /dev/data new_name   # 2nd time: fails
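A hypothetical pre-flight check along these lines avoids running into the confusing EINVAL (error 22) from a name collision in the first place:

```shell
# Returns success if no device-mapper node of that name exists yet
mapper_free() { [ ! -e "/dev/mapper/$1" ]; }

name=data
if mapper_free "$name"; then
    echo "name '$name' is free"
else
    echo "mapping '$name' already exists - pick another name"
fi
# `sudo dmsetup ls` lists all currently active mappings
```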
cryptsetup failed with code 22 invalid argument
1,369,798,071,000
On the Fedora wiki it is mentioned that LUKS offers this protection. LUKS does provide passphrase strengthening but it is still a good idea to choose a good (meaning "difficult to guess") passphrase. What is it exactly and how is it accomplished?
A similar phrase appears in other places (e.g., this Red Hat 5 page), where a bit more detail is given:

    LUKS provides passphrase strengthening. This protects against dictionary attacks.

Just from that I would expect it to mean that the password is being salted and probably has other improvements applied to the process (e.g., hashing it N times to increase the cost).

Googling around, this phrase seems to have first appeared in conjunction with LUKS around 2006 in the Wikipedia article "Comparison of disk encryption software". There the description of "passphrase strengthening" links to the article on "Key stretching", which is about various techniques to make passwords more resilient to brute-force attacks, including using PBKDF2. And indeed, LUKS1 did use PBKDF2 (LUKS2 switched to Argon2), according to the LUKS FAQ. So that's what passphrase strengthening means in this context: using PBKDF2 and similar techniques to make passwords more difficult to crack.

The FAQ also has a short description:

    If the password has lower entropy, you want to make this process cost some effort, so that each try takes time and resources and slows the attacker down. LUKS1 uses PBKDF2 for that, adding an iteration count and a salt. The iteration count is per default set so that it takes 1 second per try on the CPU of the device where the respective passphrase was set. The salt is there to prevent precomputation.

For specifics, LUKS used SHA1 as the hashing mechanism in PBKDF2 (since 1.7.0 it's SHA256), with the iteration count set so that it takes about 1 second. See also section 5.1 of the FAQ, "How long is a secure passphrase?", for a comparison of how using PBKDF2 in LUKS1 made for a considerable improvement over plain dm-crypt:

    For plain dm-crypt (no hash iteration) this is it.
This gives (with SHA1; the plain dm-crypt default is ripemd160, which seems to be slightly slower than SHA1):

    Passphrase entropy   Cost to break
    60 bit               EUR/USD 6k
    65 bit               EUR/USD 200K
    70 bit               EUR/USD 6M
    75 bit               EUR/USD 200M
    80 bit               EUR/USD 6B
    85 bit               EUR/USD 200B
    ...                  ...

For LUKS1, you have to take into account the hash iteration in PBKDF2. For a current CPU, there are about 100k iterations (as can be queried with cryptsetup luksDump). The table above then becomes:

    Passphrase entropy   Cost to break
    50 bit               EUR/USD 600k
    55 bit               EUR/USD 20M
    60 bit               EUR/USD 600M
    65 bit               EUR/USD 20B
    70 bit               EUR/USD 600B
    75 bit               EUR/USD 20T
    ...                  ...
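The LUKS KDF itself isn't exposed as a one-line shell command, but its two ingredients — a salt and an expensive iterated hash — can be illustrated with openssl passwd (the -6 SHA-512-crypt scheme, available in OpenSSL 1.1.1 and later; this is an analogy, not what LUKS actually runs):

```shell
# Same password and same salt: a fully reproducible (verifiable) hash
a=$(openssl passwd -6 -salt saltsalt 'correct horse')
b=$(openssl passwd -6 -salt saltsalt 'correct horse')
[ "$a" = "$b" ] && echo 'deterministic with a fixed salt'

# Same password, fresh random salt each time: every hash differs,
# so precomputed dictionaries are useless against it
c=$(openssl passwd -6 'correct horse')
d=$(openssl passwd -6 'correct horse')
[ "$c" != "$d" ] && echo 'random salt defeats precomputation'
```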
In regards to dm-crypt with LUKS, what is meant by "passphrase strengthening"?
1,369,798,071,000
I'm trying to encrypt some files using openssl, and it shows me the following error:

    openssl: relocation error: openssl: symbol EVP_mdc2 version OPENSSL_1_1_0 not defined in file libcrypto.so.1.1 with link time reference

Do I have to reinstall it? Or some dependencies?
From Wikipedia:

    Because of patent concerns support for MDC-2 has been disabled in OpenSSL on most Linux distributions and is not implemented by many other cryptographic libraries.

The algorithm itself is available in OpenSSL, but it is not compiled in. If you really want to use it, download the source package and modify the debian/rules file so that this line:

    CONFARGS = --prefix=/usr --openssldir=/usr/lib/ssl --libdir=lib/$(DEB_HOST_MULTIARCH)\
     no-idea no-mdc2 no-rc5 no-zlib no-ssl3 enable-unit-test no-ssl3-method enable-rfc3779\
     enable-cms

does not include no-mdc2, and compile it (it might be as simple as dpkg-buildpackage -us -uc). Otherwise, use aes-256 with something like sha256.
Openssl Relocation error
1,369,798,071,000
I imported an [E] subkey into a different folder from ~/.gnupg, and exported the subkey's public key with the --homedir option. I can see that the subkey's public key has fewer lines than the master's public key. Using diff shows that they share some of the same lines at the start, but then differ at the bottom, so in the end they are still different public keys.

My questions:

1. Are they different public keys? (I still need to double-check here.)
2. If they're different, is encryption/decryption with the subkey entirely on its own, unrelated to the master key and the other subkeys?
In asymmetric cryptography you always deal with key-pairs. For each secret key there is a corresponding public key. So to answer your first question: yes, the public key of a primary key pair is different from the public key of its subordinate key pair. I tried to reproduce your experiment and created a GnuPG test key with a primary key (ID 0xA6271DD4) and a subordinate key (ID 0x5336E1DC). I then exported the subordinate key to a file and checked, which packets it contains. $ gpg --export-secret-subkey 5336E1DC! > subkey.gpg $ gpg --list-packets subkey.gpg | grep "\(packet\|keyid\)" :secret key packet: keyid: 877AA505A6271DD4 :user ID packet: "testtest <test@test>" :signature packet: algo 1, keyid 877AA505A6271DD4 :secret sub key packet: keyid: B0389BEB5336E1DC :signature packet: algo 1, keyid 877AA505A6271DD4 $ Please note that both the user ID and the secret subordinate key are signed by the primary key. On the first look it seems that both the primary and subordinate secret key were exported. Show more info about the first secret packet. $ gpg --list-packets subkey.gpg | head # off=0 ctb=95 tag=5 hlen=3 plen=277 :secret key packet: version 4, algo 1, created 1546169910, expires 0 pkey[0]: [2048 bits] pkey[1]: [17 bits] gnu-dummy S2K, algo: 0, simple checksum, hash: 0 protect IV: keyid: 877AA505A6271DD4 # off=280 ctb=b4 tag=13 hlen=2 plen=20 :user ID packet: "testtest <test@test>" $ When exporting a secret key in GnuPG, the corresponding public key is always exported with it. So this secret key packet contains a public key of 2048 bits plus probably its 17 bits hash. But the secret key itself is missing, only a stub was exported: gnu-dummy S2K, algo: 0, simple checksum, hash: 0. To wrap it up: When exporting a secret sub key, you always export the public sub key and the public primary key (necessary to verify the signatures) with it. You write that your public sub key has fewer lines than your public master key. I was not able to reproduce that. 
With GnuPG you can export a public key without any of its subkeys, in the example above by the command gpg --export A6271DD4! > pubkey.gpg (please note the exclamation mark). On the other hand, it is not possible to export just a public sub key. But if comparing a master key with a master key plus its sub key, the latter one naturally has more lines. So to better understand your observation it would be good to know the exact commands you used.
GPG: [E] subkey's public key is the same as master's public key?
1,369,798,071,000
So I have LVM on LUKS in Arch Linux. My whole disk is encrypted and my boot partition is on an external disk. So do I need to encrypt the swap partition if the disk is encrypted when powered off?
If you’re swapping to a logical volume in your encrypted LVM on top of LUKS, then no, you don’t need additional encryption for your swap, it’s already encrypted.
If I have full disk encryption do I need to encrypt the swap partition?
1,369,798,071,000
I want to maximize the number of folders encrypted with ecryptfs and decrypted at login with the module pam_ecryptfs.so. Which folders cannot possibly be encrypted before logging in? I guess an lsof run by pam_exec.so should give me the answer. Do you have a better strategy? Example of whitelisted folders: /boot, /etc/pam.d PS: I am using Ubuntu Desktop 16.10. Please don't mention full disk LUKS encryption (which is done).
Ecryptfs is designed to encrypt a user's home directory. Although it can be used otherwise, it wasn't designed for that and won't be easy to set up. Ecryptfs normally gets mounted at login time, so it can only encrypt the user's data. A user's data is normally under the user's home directory, that's what the home directory is for. System-wide files cannot be encrypted with ecryptfs unless you mount it before logging in. But if that's what you want, then there's no point in using ecryptfs: it would be harder to set up and would be slower than using LUKS/dmcrypt to encrypt the whole filesystem, and there would be no security benefit whatsoever. Using ecryptfs to protect a home directory has an advantage over whole disk encryption when you want the machine to boot unattended, but want the user's file to be protected until the user logs in. If the decryption happens before logging in then there's no point in using ecryptfs. Looking for dependencies of pam_exec.so is futile. What you need before logging in is a whole lot of system services. The pam_exec module is just one part of the login process, the rest of the login process also needs to be available, as do the login program, the logging subsystem, all the programs used to initialize devices, and many, many other system services. If you want all of these to be encrypted, then you need something that requests the decryption key very early in the boot process, and the way to do that is whole-disk encryption, with LUKS.
What are the folder that I cannot set as encrypted folders decrypted at login? [closed]
1,369,798,071,000
I would like to know about the safety of RSA-based encryption, especially when used to encrypt files with gpg or to encrypt connections over ssh. Is it possible to reconstruct a working private key, given some public key? Is it easier to decrypt one small file? Or is the complexity of decrypting a small file equal to the complexity needed to reconstruct the whole private key? How does the bit strength affect the security/complexity of RSA encryption? As far as I know, RSA (like every asymmetric encryption) needs high entropy to generate a matching private/public key pair, in contrast to, for example, AES, because symmetric encryption really "uses" each bit of entropy directly for encryption?! Does the complexity of decrypting a file rise exponentially as the bit strength increases? Why are most RSA encryption tools limited to 4096-bit strength?
You should read the manual, it explains a lot; I'll mostly comment. The public key in RSA contains the product of 2 large, carefully chosen primes. To attack RSA, the attacker needs to find these factors. So yes, when you publish your public key the attacker knows what number to factorize. But public keys are meant to be public and are safe to publish, assuming the mathematics around factorisation of large numbers stays hard. RSA in PGP is not used to encrypt data directly: it encrypts a symmetric key. That is the other opportunity for the attacker, but the symmetric key is of course a fresh random number for every message in PGP. This way you don't need to encrypt the data once per recipient (which would multiply the amount of data); you encrypt only the symmetric key for each recipient. Whether files are small or large is not relevant on the RSA side, because the symmetric cipher handles the data. A cipher is strong when the best approach to directly decipher it is brute force. Assuming this is the best approach the attacker has, the bit length of the key makes the problem exponentially harder. I am not sure why 4096 bits is the maximum, to be honest, but I can imagine that you need multiple algorithms that are proven secure, work properly and are efficient/usable for the user.
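The hybrid scheme described above (RSA protecting a random symmetric key, the symmetric cipher protecting the data) can be sketched with openssl. Key size, filenames and the message are all illustrative, and this is a demonstration of the principle, not of PGP's actual wire format:

```shell
# 1. Recipient's RSA key pair (2048 bits here for speed)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out priv.pem
openssl pkey -in priv.pem -pubout -out pub.pem

# 2. A fresh random symmetric session key, as PGP does per message
openssl rand -hex 32 > session.key

# 3. The bulk data goes through the symmetric cipher...
printf 'large message body' > data.txt
openssl enc -aes-256-cbc -pbkdf2 -pass file:session.key -in data.txt -out data.enc

# 4. ...and only the small session key is RSA-encrypted, once per recipient
openssl pkeyutl -encrypt -pubin -inkey pub.pem -in session.key -out session.key.rsa

# The recipient reverses the steps:
openssl pkeyutl -decrypt -inkey priv.pem -in session.key.rsa -out session.key.dec
openssl enc -d -aes-256-cbc -pbkdf2 -pass file:session.key.dec -in data.enc -out data.dec
cmp data.txt data.dec && echo OK
```

This also shows why file size doesn't matter on the RSA side: RSA only ever sees the fixed-size session key.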
RSA: private key safety? (ssh/gpg) [closed]
1,369,798,071,000
Our company needs to exchange data with a vendor using 128 bit AES encryption. Everything I've read suggests that AES can encrypt files using a passphrase not a pre-shared key. Is there a way to create a shared key between us and the vendor to encrypt/decrypt AES encrypted files? I could use any tool but I'm partial to using openssl.
I'm not sure what you mean by pass phrase vs PSK. A PSK could be a pass phrase. If OpenSSL is not a requirement, a very good command-line tool for file encryption is mcrypt. Mcrypt supports AES 128-, 192- and 256-bit encryption and has all the options one would expect to find in a standard encryption system.
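Since the question mentions being partial to openssl: a shared key file can serve as the pre-shared secret for AES-128, assuming both sides hold the same keyfile and a reasonably recent openssl (1.1.1+ for -pbkdf2). Filenames and payload are illustrative:

```shell
# Generate a shared secret once and exchange it out of band
openssl rand -hex 16 > shared.key

# Sender encrypts with AES-128-CBC, reading the key file as the password
printf 'payload for the vendor' > report.csv
openssl enc -aes-128-cbc -pbkdf2 -pass file:shared.key -in report.csv -out report.csv.enc

# Receiver decrypts with the same key file
openssl enc -d -aes-128-cbc -pbkdf2 -pass file:shared.key -in report.csv.enc -out report.csv.dec
cmp report.csv report.csv.dec && echo OK
```

Note that -pass file: treats the key file contents as a passphrase fed through PBKDF2, rather than as raw key bytes (which would be the -K/-iv route).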
OpenSSL - how to encrypt files with AES key
1,369,798,071,000
I want to set up my Linux laptop with LUKS encryption. OS: Mint 17. Is it better to make the disk LUKS-encrypted during the installation process, or after it?
During the installation, the installer will show you an option to encrypt the disk with LUKS.
Mint, install and LUKS/LVM
1,369,798,071,000
I have a kind-of-raspberries park, all connected via VPN to a single server. I'd like to encrypt each rasp' in order to avoid intrusions if one is stolen. Each rasp' already has a system installed (Mint, Debian-based). What would be the best encryption tool to use? The rasp is connected by VPN to the server. Each rasp stores some data, obtained through the VPN from the server. Once these data are on the rasp's disk, they are encrypted. Once a user connects to the rasp (wifi), s/he can download decrypted media. But such media are unreadable if you steal and read the HDD. Oh, and the wifi is password-free; yes, I know it's weird! Little schema:
In your application, you want to protect at least: The actual media files obtained from the server The VPN keys that allow the Raspberry to connect to the VPN (otherwise an intruder need only extract that key, connect to the VPN themselves, and get the files) You can encrypt only those things, or you can encrypt the entire root filesystem. Perhaps you might as well do the latter while you are at it. You can use luks to encrypt the Raspberry's root filesystem. You can find instructions for setting up a Debian (and presumably Mint) system with encrypted root from many sources. Basically, you can do it from the installer before the system is installed. Converting a system after the fact is much more trouble because you have to: boot an alternate system mount the target root filesystem and copy its contents elsewhere (implies having spare storage) reformat the target root filesystem with luks copy everything back Add the new encrypted partition to /etc/crypttab, adjust /etc/fstab With chroot or similar, regenerate the target's initramfs so that it will have the ability to decrypt the root at boot. A separate root filesystem and /boot filesystem is a must, so you should start with a system that already has that if you go with the conversion option. Converting an existing system to encrypted root is a bit of an expert procedure. I quickly found a tutorial but I haven't read it so I cannot vouch for it. If the Raspberry must boot unattended on its own, then you are faced with the problem that whatever you do is insecure: the unit must be able to unlock its own decryption, which means that the keys will be accessible to a thief. Still, the mitigation you propose in your comment, which is to physically separate the key on different media, is not a bad compromise if you must go that route. This other question covers how to make the system access its own decryption key at boot time in an automated fashion. 
Instead of a keyscript that just echoes the passphrase as proposed in that solution, you would create a short script that: mounts the physically separate media; loads and outputs the passphrase from a file over there; unmounts the separate media. That script would run from inside the initramfs. Untested: such a script could look something like this:

mkdir -p /mnt/key
mount /dev/disk/by-id/sd-card-whatever-the-device-name-is-part1 /mnt/key
cat /mnt/key/root-filesystem-key
umount /mnt/key

...and add a keyscript option in /etc/crypttab that names that script.
Encrypt Debian-based system
1,369,798,071,000
I have an encfs container in my home directory. I had been using Cryptkeeper to open the container, but since updating to Linux Mint 17 from 16 it's not working any more. Gnome Encfs manager doesn't work either. I've also tried mounting the container from the command line using these directions, but no luck. The name of my mountpoint is .rthg_encfs I tried the following commands: fusermount -u ~/.rthg_encfs It came back with: fusermount: entry for /home/fren/.rthg_encfs not found in /etc/mtab encfs ~/.rthg ~/rthg It came back with: The directory "/home/fren/.rthg/" does not exist. Should it be created? (y,n) Of course I don't want to create a new container, because I already have one that I want to mount.
Please try encfs ~/.rthg_encfs ~/rthg
Mounting Encfs Container
1,369,798,071,000
I have the server's webroot directory mounted on a partition which is being protected with LUKS encryption. I want to know what happens to the files within when that partition is being decrypted. Does a copy of the unencrypted version of these files goes to the RAM, or; a copy of the unencrypted version of these files goes to the temp directory, or; the server decrypts the files upon demand each time they are being accessed, or; other scenarios I have missed? The reason why I ask this is to have a better understanding on the decryption process and how it affects the server's resource in terms of CPU and RAM and whether disk encryption with LUKS is more efficient compared to file system encryption like eCryptfs. I tried looking at Wikipedia but could not find any such information. Not sure if this is the best place to ask this question. Feel free to migrate if you think otherwise. Thanks.
LUKS is a block device encryption layer which sits on top of a block device and encrypts/decrypts all accesses to that device. No unencrypted data ever touches the physical device. LUKS then provides a virtual block device which gets used by the system to access the files. It is thus transparent to applications, which have no idea that encryption is taking place. Consider a trivial system with a single block device and a root file system on /dev/sda1. If we encrypt this with LUKS, then it will handle all direct access to that device, and will provide its own device, for instance /dev/mapper/encrypted-root which will actually be used by the system. This might be in /etc/fstab: /dev/mapper/encrypted-root / ext4 noatime 0 0 So every access to a file on this filesystem will pass through LUKS. Also note that LUKS does not know or care what data it is processing. Thus you can layer anything on top of it, whether it be a straight filesystem, a software RAID array, an LVM volume, etc. LUKS can also be placed on top of any of these. Note well that data cached in RAM by the normal caching processes may be unencrypted.
What happens to the files when they are being decrypted?
1,369,798,071,000
How can I force pidgin to use only encrypted connections when using MSN chat? I'm on Ubuntu 10.04
It isn't encrypted. You can use the off-the-record plugin that's available as a third-party plugin for Pidgin. FYI, Off-the-Record is available for Gaim and Adium, and also as a compiled binary for Pidgin on Windows.
Is MSN chat encrypted or not? If not how can I make it encrypted?
1,710,439,366,000
Does anyone have a solution for using the Yubikey Security Key as a second factor for file-based crypto containers like VeraCrypt or something else? I know the Security Key doesn't allow PGP, but now I don't have another key.
I'll assume you're on Linux. Short answer, from the top of my head: should be no problem. LUKS can be used to create encrypted files, then you can put a file system in there, and mount the result. Something like the following (untested!):

CONTAINER=yans-encrypted-image-file
DEVICENAME=yans-volume
fallocate -l 10G "$CONTAINER"
cryptsetup luksFormat "$CONTAINER"
sudo cryptsetup luksOpen "$CONTAINER" "$DEVICENAME"
sudo mkfs.xfs "/dev/mapper/$DEVICENAME"

# Now ready to mount, e.g. via
udisksctl mount -b "/dev/mapper/$DEVICENAME"

# To close:
udisksctl unmount -b "/dev/mapper/$DEVICENAME"
sudo cryptsetup close "$DEVICENAME"

There's plenty of guides out there on how to enroll your Yubikey as LUKS secret provider. How packaging for such things works: sadly kind of depends on your Linux distro, so I'll have to let you research that on your own.
Yubikey security key for file based container
1,710,439,366,000
On Linux this is possible: with luks+ext4 or luks+btrfs there are a lot of how-tos on the web. It is also possible using ZFS. Is something like this possible on Solaris 11? Entering a passphrase on boot is OK only in interactive mode; on a headless PC without "remote control" like iLO 4 or similar, it is not nice.
Still open to a solution; the actual answer is NO.
Solaris11: is possible to create an encrypted root and unlock it via usb?
1,710,439,366,000
I checked: https://cateee.net/lkddb/web-lkddb/CRYPTO_RSA.html The help text is: Generic implementation of the RSA public key algorithm. Is this driver for an encryption HW device, ex: from Intel/AMD CPU?
No, CRYPTO_RSA provides a generic implementation of RSA, i.e. one which isn’t hardware-specific. It works on any system the kernel works on.
kernel config, CONFIG_CRYPTO_RSA, what is this config for?
1,710,439,366,000
I wanted to encrypt a flash drive but it didn't go well. Then I tried to remove the encryption but I am unable to do it. Here is some info:
You cannot "remove" LUKS encryption AFAIK; you need to reformat/recreate your partition. Steps to convert it back to a normal USB drive:

sudo umount /mount/point
sudo cryptsetup close /dev/mapper/name
cat /dev/zero > /dev/device1
sudo mkfs.ext4 /dev/device1 (or mkfs.exfat/mkfs.vfat/mkfs.ntfs)

The cat command is not strictly necessary but really desirable. If you don't run it you may discover files filled with random data. mkfs.ext4 may refuse to run, saying that there's data on the partition, in which case use the -F flag.
Remove Cryptsetup LUKS encryption [duplicate]
1,710,439,366,000
At reboot, with USB sticks inserted, the TPM will not allow passphraseless booting of the server. With a USB HDD inserted passphraseless booting of the server is possible. Our servers are running Centos 8.3 with Linux kernel version 4.18.0-240. TPM2 modules are used with LUKS version 1 encryption of an LVM group consisting of several partitions including / /home swap etc. LUKS header slot 0 is used for the passphrase, and slot 1 for the TPM. The LUKS header slot 1 is bound to PCR values 0,1,2 and 3. Thus the BIOS (0), BIOS configuration (1), Options ROM (2) and Options ROM configuration (3). From what I read the Options ROM consists of the firmware that is loaded during the POST boot process. If any changes occur from the state of the system when the TPM was signed, the TPM won't allow the system to boot without a passphrase. As USB sticks have firmware that might get loaded during the boot process, I initially thought that binding only to PCR values 0 and 1, ie without the Options ROM, would solve the problem. This did not work. Any advice on why it won't boot from the TPM with a USB stick attached will be appreciated.
We have figured out which PCR value changes after a USB stick gets inserted and the system reboots. By not binding to that PCR value we managed to boot off the TPM with a USB stick inserted. In our case PCR 1 (BIOS config) kept changing when a USB stick was inserted and the system rebooted. The command we used to query the PCR values was tpm2_pcrread. This lists the PCR values in sha1 and sha256. We redirected the stdout of the PCR values to a file before and after the changes and using diff command we monitored the changes between each file.
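The snapshot-and-diff workflow above is generic: redirect tpm2_pcrread to a file before the change, again after the reboot, then diff. Since that needs a real TPM, this sketch fakes the two snapshots with printf (made-up values) just to show the comparison step; on the real machine the printf lines would be `tpm2_pcrread > pcr-before.txt` and `tpm2_pcrread > pcr-after.txt`:

```shell
# Mock PCR snapshots; real ones come from `tpm2_pcrread`
printf '0 : 0xAAAA\n1 : 0xBBBB\n2 : 0xCCCC\n' > pcr-before.txt
printf '0 : 0xAAAA\n1 : 0xDDDD\n2 : 0xCCCC\n' > pcr-after.txt

# Only the changed register shows up; here PCR 1 (BIOS config) moved
diff pcr-before.txt pcr-after.txt || true
```

Whichever PCR appears in the diff output is the one to drop from the LUKS binding.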
Not booting off TPM with USB disk inserted
1,710,439,366,000
I have added a second disk to my LVM system. I created a physical volume there, added it to the volume group of ubuntu, 'vgubuntu', extended logical volume to fill the whole disk. How do I extend the LUKS system partition to fill the whole logical volume? Here's more info provided by pvdisplay, vgdisplay and lvdisplay: --- Physical volume --- PV Name /dev/mapper/nvme0n1p3_crypt VG Name vgubuntu PV Size <464.53 GiB / not usable 0 Allocatable NO PE Size 4.00 MiB Total PE 118919 Free PE 0 Allocated PE 118919 PV UUID DwO3R1-DeRo-c83D-qx5F-xjC5-icXG-x3j28i --- Physical volume --- PV Name /dev/nvme1n1p1 VG Name vgubuntu PV Size <476.94 GiB / not usable 0 Allocatable yes (but full) PE Size 4.00 MiB Total PE 122096 Free PE 0 Allocated PE 122096 PV UUID 9UyJR4-m0G9-sYPG-BBkW-2WEg-TBdR-DAj0u3 root@omen15:~# vgdisplay --- Volume group --- VG Name vgubuntu System ID Format lvm2 Metadata Areas 2 Metadata Sequence No 8 VG Access read/write VG Status resizable MAX LV 0 Cur LV 2 Open LV 2 Max PV 0 Cur PV 2 Act PV 2 VG Size 941.46 GiB PE Size 4.00 MiB Total PE 241015 Alloc PE / Size 241015 / 941.46 GiB Free PE / Size 0 / 0 VG UUID ANNTFf-p9hU-O4R3-jwDQ-bZhP-v8tm-hVL8Fn root@omen15:~# lvdisplay --- Logical volume --- LV Path /dev/vgubuntu/root LV Name root VG Name vgubuntu LV UUID rxnIOU-yNg2-ythJ-Dz5V-N3Sr-X7DQ-WzbUUF LV Write Access read/write LV Creation host, time ubuntu, 2021-07-24 17:25:39 +0300 LV Status available # open 1 LV Size <940.51 GiB Current LE 240770 Segments 2 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:1 --- Logical volume --- LV Path /dev/vgubuntu/swap_1 LV Name swap_1 VG Name vgubuntu LV UUID MOvhEP-64w3-wHHO-wmDh-YkSU-XARL-7hRQIf LV Write Access read/write LV Creation host, time ubuntu, 2021-07-24 17:25:39 +0300 LV Status available # open 2 LV Size 980.00 MiB Current LE 245 Segments 1 Allocation inherit Read ahead sectors auto - currently set to 256 Block device 253:2 And here's what df -h prints: root@omen15:~# df -h 
Filesystem Size Used Avail Use% Mounted on tmpfs 1.6G 2.1M 1.6G 1% /run /dev/mapper/vgubuntu-root 925G 7.3G 871G 1% / tmpfs 7.6G 12M 7.6G 1% /dev/shm tmpfs 5.0M 4.0K 5.0M 1% /run/lock tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup /dev/nvme0n1p2 705M 251M 403M 39% /boot /dev/nvme0n1p1 511M 5.3M 506M 2% /boot/efi tmpfs 1.6G 2.0M 1.6G 1% /run/user/1000
You have LUKS configured at the PV level, so "under" your LVM setup. Unfortunately you need to start over: your PV must be encrypted, and you can't "extend" the existing LUKS/dm-crypt device onto the second disk. The structure should look like disk -> partition -> LUKS -> PV -> VG -> LV (it is possible to configure encryption at the LV level, but your existing configuration is encrypted at the PV level). So you need to shrink your root LV back, remove your newly created PV from vgubuntu and then create LUKS on nvme1n1p1 (cryptsetup luksFormat /dev/nvme1n1p1), unlock it (cryptsetup luksOpen /dev/nvme1n1p1 nvme1n1p1_crypt) and use /dev/mapper/nvme1n1p1_crypt as the second PV. You'll also need to add the new LUKS device to /etc/crypttab.
How do I extend LUKS partition to fill the whole logical volume of 2 disks on LVM?
1,710,439,366,000
I am trying to set up SELinux and an encrypted additional partition that I mount at startup using a systemd service. If I run SELinux in permissive mode, everything runs ok (partition is correctly mounted, data can be accessed and service runs properly). If I run SELinux in enforcing mode (enforcing=1), I am not able to mount such partition with the error: /dev/mapper/temporary-cryptsetup-1808: chown failed: Permission denied sh[1777]: Failed to open temporary keystore device. sh[1777]: Command failed with code 5: Input/output error Any ideas to fix that? Audit2allow does not return any additional rules to be added Edit 1 after @A.B comment: I used cat instead of tail. Audit2allow suggest no additional allow rules, but analyzing the log file I find some denial of interest: type=AVC msg=audit(1624863678.748:72): avc: denied { getattr } for pid=1894 comm="cryptsetup" path="/dev/dm-0" dev="devtmpfs" ino=5388 scontext=system_u:system_r:sysadm_t:s0-s15:c0.c1023 tcontext=system_u:object_r:fixed_disk_device_t:s15:c0.c1023 tclass=blk_file permissive=1 type=AVC msg=audit(1624863678.748:73): avc: denied { read } for pid=1894 comm="cryptsetup" name="dm-0" dev="devtmpfs" ino=5388 scontext=system_u:system_r:sysadm_t:s0-s15:c0.c1023 tcontext=system_u:object_r:fixed_disk_device_t:s15:c0.c1023 tclass=blk_file permissive=1 Searching for every "cryptsetup" entry in the audit log I find this: ~# cat /var/log/audit/audit.log | grep "cryptsetup" type=AVC msg=audit(1624863678.748:72): avc: denied { getattr } for pid=1894 comm="cryptsetup" path="/dev/dm-0" dev="devtmpfs" ino=5388 scontext=system_u:system_r:sysadm_t:s0-s15:c0.c1023 tcontext=system_u:object_r:fixed_disk_device_t:s15:c0.c1023 tclass=blk_file permissive=1 type=SYSCALL msg=audit(1624863678.748:72): arch=14 syscall=195 success=yes exit=0 a0=bfebd34c a1=bfebd2e0 a2=bfebd2e0 a3=bfebd370 items=0 ppid=1891 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cryptsetup" 
exe="/usr/sbin/cryptsetup" subj=system_u:system_r:sysadm_t:s0-s15:c0.c1023 key=(null) type=AVC msg=audit(1624863678.748:73): avc: denied { read } for pid=1894 comm="cryptsetup" name="dm-0" dev="devtmpfs" ino=5388 scontext=system_u:system_r:sysadm_t:s0-s15:c0.c1023 tcontext=system_u:object_r:fixed_disk_device_t:s15:c0.c1023 tclass=blk_file permissive=1 type=SYSCALL msg=audit(1624863678.748:73): arch=14 syscall=5 success=yes exit=6 a0=bfebf7ac a1=131000 a2=0 a3=10022cc0 items=0 ppid=1891 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cryptsetup" exe="/usr/sbin/cryptsetup" subj=system_u:system_r:sysadm_t:s0-s15:c0.c1023 key=(null) Edit 2: Looking for any changes in the refpolicy repo, I found this Novembre 2020 commit and this February 2021 commit. I don't know if they may apply to the case in hand.
Solved assigning to cryptsetup the lvm_exec_t context. In the lvm.fc file cryptsetup was defined as /bin/cryptsetup but I had to change it to /usr/sbin/cryptsetup where it actually was.
SELinux and cryptsetup: chown failed and can't access temporary keystore
1,581,905,517,000
BACKGROUND: I have already created a passphrase list to use with bruteforce-luks. Unfortunately, that list was not enough to find the correct passphrase. I am assuming I missed out a word or 2. I am fairly sure some or all of the words are in the actual passphrase, but I am unsure of the remaining possible 1 to 3 words. REVISED QUESTION: I think between 1 to 3 words are missing. What's the best approach for me to use? I can't seem to remember what those words would be. ORIGINAL QUESTION: How would I go about attempting to use bruteforce-luks across 4 servers? The servers are in a single building, and are currently loaded using a Linux Live Disk. I have a usb stick which is luks encrypted and have forgotten the passphrase. Tried to bruteforce it at home, but my 2 core CPU was only giving me 4 passphrase attempts per second. I'm hoping to make use of the 8 cores per server (32 cores in total) at the same time to attempt to speed up the bruteforce processing.
Solution: Create a program which allows you to type in all the words you usually use in passwords. Run the program, which does a sort of permutation: it not only changes the order of the words but also builds up the number of words used per permutation. Save the generated passphrase text file (it ended up being over 5,000,000 passphrases). Use the 32 cores I had available and split the passphrase text file into 4, 1 file per server. Use dd to create an additional 3 copies of the USB stick which needs to be brute-forced. Plug 1 USB stick into each of the 4 servers, with one fourth of the text file per server. Install and run bruteforce-luks with all cores on all servers, and around 24 hours later the passphrase was found. Thank you for all the close votes instead of helping. Gotta love all the helpful people in this place. :)
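The "permutations with a growing word count" generator and the per-server split can be sketched in shell. This is a toy version: three made-up candidate words, candidates of 1 to 3 words joined with spaces (adjust the separator to whatever you actually use), and a GNU split into 4 line-based chunks:

```shell
# Toy candidate words; substitute your real ones
words='alpha bravo charlie'

{
  for a in $words; do
    echo "$a"                       # 1-word candidates
    for b in $words; do
      echo "$a $b"                  # 2-word candidates
      for c in $words; do
        echo "$a $b $c"             # 3-word candidates
      done
    done
  done
} > wordlist.txt

wc -l wordlist.txt                  # 3 + 9 + 27 = 39 candidates

# One chunk per server, split by whole lines (GNU split)
split -n l/4 wordlist.txt chunk-
ls chunk-*
```

With N candidate words this produces N + N^2 + N^3 lines, which is how a modest word set blows up into millions of passphrases.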
Using cores on multiple servers for bruteforce-luks [closed]
1,581,905,517,000
Wanted to create a list of all the benefits of always running from a live CD (with or without persistence/as long as personal data is saved on encrypted drive locally)?
Using a live ISO without persistence means the main filesystem is read-only, so nothing can be changed or "broken" permanently. It also means all changes (new files & data) are in ram and lost on reboot/shutdown. Saving your personal data manually could lead to better backup habits... If you have enough ram to use the toram option, all file reads & writes will be at ram speeds, maybe 2GB-5GB per second, much faster than a regular cd/dvd/hard drive or ssd. The "best encryption" part of the question really is too broad, but just use the defaults the big distros use: GPG, LUKS, eCryptfs
what are the benefits of always running from live cd for data protection? [closed]
1,581,905,517,000
Ok, so I've been debugging the last update of my system and found this warning running journalctl -b -p warning: systemd-cryptsetup: Key file <location> is world-readable. This is not a good idea! Generally I understand what is the matter and that it's bad. Though I don't know how to fix this properly without ruining my boot. Then comes the question: what permissions do I have to set for that file to make my system working correctly while removing the warning? Is 600 ok? Or do some system groups also need an access to this file?
OK, so indeed 600 permissions are considered the only good practice for securing a key file (source: ArchWiki). The tricky thing here is that when the kernel is updated, the initramfs generation procedure resets the key file permissions to 644, which basically allows anyone to dump its contents. That means that after each kernel update the right permission set should be restored by the user.
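Restoring the permissions is a single chmod. Sketch with a dummy file standing in for the real key file (GNU coreutils assumed for install/stat):

```shell
# Dummy key file standing in for the real LUKS key file;
# install -m 644 simulates the reset done by initramfs generation
install -m 644 /dev/null keyfile

chmod 600 keyfile            # owner read/write only
stat -c '%a %n' keyfile      # -> 600 keyfile
```

To avoid redoing this by hand, the chmod could be dropped into a kernel post-install hook, but that part depends on the distro.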
systemd-cryptsetup key file readable warning
1,581,905,517,000
I have an encrypted container containing a mountable EXT2 file system with no partition table. How can I mount that file using cryptsetup? My attempts fail with 'VFS: can't find ext4 filesystem'...
To open and mount an encrypted container using plain mode: sudo cryptsetup plainOpen Container DecryptedContainter sudo mount /dev/mapper/DecryptedContainer /mnt When you are done, unmount and close the container: sudo umount /mnt sudo cryptsetup plainClose DecryptedContainer Use the appropriate names for Container and DecryptedContainer, and the correct mount point instead of /mnt. From man cryptsetup: The following are valid plain device type actions: open --type plain <device> <name> create <name> <device> (OBSOLETE syntax) Opens (creates a mapping with) <name> backed by device <device>. <options> can be [--hash, --cipher, --verify-passphrase, --key- file, --keyfile-offset, --key-size, --offset, --skip, --size, --readonly, --shared, --allow-discards] Example: 'cryptsetup open --type plain /dev/sda10 e1' maps the raw encrypted device /dev/sda10 to the mapped (decrypted) device /dev/mapper/e1, which can then be mounted, fsck-ed or have a filesystem created on it.
Testing password against encrypted ext2 container
1,581,905,517,000
I need to create an Rc2Key variable and then convert it to hexadecimal. I have done this with two commands. The Rc2Key variable has to be 16 characters long, so in my test module I used "DummyRC2Key1" as the Rc2Key. Rc2Key="DummyRC2Key1" HexRc2Key=$(printf "${Rc2Key}" | xxd -p) With that done, I need to pad the CTF keys with eight 0s. I did this with the following commands, writing the padded list to CTFpadlist.csv: zeros=00000000 while read CTFlist; do echo $CTFlist$zeros; done < CTFlist.csv > CTFpadlist.csv With the padded CTFs and the HexRc2Key, I need to encrypt CTFpadlist.csv: while read CTFpadlist; do echo -n "$CTFpadlist" | xxd -r -p | openssl enc -rc2-cbc -nopad -K "${HexRc2Key}" -iv 0000000000000000 | xxd -plain | tr d '/n'; done < CTFpadlist.csv > EncCTFlist.csv Here is the problem: my hex comes out with "/" like so: 24a8/be115/59a9/c62bbfe6249fbc/44af127fcf97a0a43 This is not acceptable hex. What am I doing wrong here?
Rather than tr d '/n'; you probably meant tr -d '\n'; The -d flag deletes characters, and the newline escape is \n, not /n. As written, tr takes d as the set to translate and /n as the replacement set, so every d in the hex output is turned into /, which is exactly where your stray slashes come from.
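The difference between the two invocations is easy to see on a small input (GNU tr shown; in the two-argument form the extra characters in the second set are simply ignored):

```shell
# the intended command: -d deletes every newline
printf 'ab\ncd\n' | tr -d '\n'    # → abcd

# the original command: translates 'd' into '/' instead of deleting anything
printf 'd' | tr d '/n'            # → /
```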
Hex has unknown character / in the output
1,581,905,517,000
I recently installed Ubuntu 14.04 on my laptop. I already had a single Windows 7 partition and two Linux partitions. The problem is that Ubuntu overwrote the GRUB bootloader, and now there is no option to boot to my encrypted Debian install. Here is my disk layout:

Windows partition: /dev/sda1
Extended partition: /dev/sda4
Ubuntu / and /boot: /dev/sda5
/boot for Debian (ext3): /dev/sda2
LUKS volume
  LVM
    ROOT-FS: / for Debian
    SWAP-FS: swap for Debian

I want to be able to boot to the encrypted Debian install, Ubuntu and Windows from the GRUB boot screen. How can I do that? I don't want to use paid or closed-source software. Bonus: move the Debian /boot and GRUB to a USB stick and boot from that.
You can boot a live CD and chroot into the encrypted disk. From there, you can run grub-install /dev/sda and then update-grub. This will rescan the disks for all OSes.
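Since the Debian root sits inside LUKS and LVM, the chroot step involves a few extra commands. The following is only a sketch: the LUKS partition number and the volume-group name are not given in the question, so /dev/sda3 and <vg> below are hypothetical placeholders to adjust to your layout.

```shell
# from the live CD; device and VG names are assumptions -- adjust to your layout
sudo cryptsetup luksOpen /dev/sda3 cryptroot   # unlock the LUKS partition
sudo vgchange -ay                              # activate the LVM volumes inside it
sudo mount /dev/mapper/<vg>-ROOT--FS /mnt      # Debian's / (replace <vg> with your VG name)
sudo mount /dev/sda2 /mnt/boot                 # Debian's unencrypted /boot
for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
sudo chroot /mnt
grub-install /dev/sda
update-grub
```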
Ubuntu overwrites grub, no boot option encrypted debian
1,581,905,517,000
While installing CentOS 7, I set a password for disk encryption. Now, when working remotely on that machine, every reboot requires the password to be entered on-site. Is this normal? Or is there a way to keep the disk encrypted but still make reboots work?
It is normal, and it is because the disk is in the list of disks to be mounted automatically. If you don't want to be asked for the password, you should remove the encrypted disk from /etc/fstab. After doing this, you will be prompted for the password only when you want to mount the encrypted disk. Good luck!
CentOS 7 - while installing i set disk encrypted with password, but on reboot always it asking password
1,581,905,517,000
I'm in the process of setting up a FreeBSD 10 server on a KVM vserver. I'm really new to FreeBSD in general (coming from Debian Linux). I installed the system choosing ZFS with encrypted root volume and encrypted swap. I chose this solution to protect my files (Emails, Filesharing, etc.) from outside access. I then realized that I have to enter the passphrase on every bootup and the files (of course) are decrypted afterwards and available to everyone with access. Is there a sane solution that I'm missing that would make it possible to encrypt only certain parts of the base system (to be able to boot without VNC and enter the passphrase via SSH)? Is the whole idea of encrypting on a server stupid (since the volumes need to be decrypted for the services to work)? Would encrypted jails be a solution or just increasing complexity?
Is there a sane solution that I'm missing that would make it possible to encrypt only certain parts of the base system (to be able to boot without VNC and enter the passphrase via SSH)? There are lots of ways to encrypt files. PEFS might be what you want. Is the whole idea of encrypting on a server stupid (since the volumes need to be decrypted for the services to work)? No, not really, the data is still encrypted at rest, which can be important, and it also means that it's harder for someone with access to the host (KVM server) to gain access to the guest (FreeBSD) data. Would encrypted jails be a solution or just increasing complexity? You're still going to need to enter the key for them at boot, you're just going to be able to do it via SSH instead of VNC, but I imagine you'd have different keys for each jail so multiple pass-phrases to enter. Personally, I'd stick with the encrypted disk. Rebooting shouldn't be so common that entering the pass-phrase at boot is a major issue, IMHO.
FreeBSD 10 and ZFS: Methods for encrypting files
1,401,572,875,000
I've been trying to string together the right flags in cryptsetup: cryptsetup -y -h sha512 -c twofish-xts-plain64 -s 512 luksFormat /dev/sdx But it hasn't been working. When I enter the command listed above, nothing happens; it doesn't even prompt me for a password. Trying this command with aes-xts-plain64 doesn't work either. Perhaps I'm doing something wrong? Or maybe there's a different program I could try. At this point I'd be willing to try anything.
With a little help from reddit, I've figured it out. tcplay did the trick. It encrypted my USB flash drive with the following command: tcplay -c -d /dev/sdx -a SHA512 -b TWOFISH-256-XTS tcplay is essentially a TrueCrypt clone. It's available in the Ubuntu repositories. -c tells tcplay to create a new volume. -d specifies device. -a specifies hash. -b specifies cipher. If tcplay gives you problems, you can use the eCryptfs. Here's a helpful tutorial.
How do you encrypt a USB drive partition with the Twofish cipher and SHA-512 hash without using TrueCrypt? [closed]
1,401,572,875,000
Last night there was a power outage and now my computer will not boot up. After pushing the power button and seeing the BIOS I only see a black screen with a white cursor at the top left of the screen. I plugged in an external hard drive and put in my live CD hoping I could use the live CD to copy my files from my home folder to my external drive. But when I opened the home folder each containing folder says I do not have access to them and they are all empty. I assume this is because I encrypted my home folder when I first installed Ubuntu. How can I get my files back?
Try to open a terminal. sudo passwd root Then su You are root now. You can use chmod 755 to modify the permission.
How to get my files from my encrypted home folder on Ubuntu 12.04?
1,401,572,875,000
On Debian LAMP with different PHP based CMSs I use the MTA sSMTP to send email via an email proxy (Gmail); the emails I send are only contact-form inputs transferred from any such CMS to my own email account: CMS contact-form input → Email proxy (Gmail) → Main email account I use (also Gmail) My sSMTP conf looks similar to this: #!/bin/bash set -eu read -p "Please paste your Gmail proxy email address: " \ gmail_proxy_email_address read -sp "Please paste your Gmail proxy email password:" \ gmail_proxy_email_password && echo cat <<-EOF > /etc/ssmtp/ssmtp.conf root=${gmail_proxy_email_address} AuthUser=${gmail_proxy_email_address} AuthPass=${gmail_proxy_email_password} hostname=${HOSTNAME:-$(hostname)} mailhub=smtp.gmail.com:587 rewriteDomain=gmail.com FromLineOverride=YES UseTLS=YES UseSTARTTLS=YES EOF As you can see, a file named /etc/ssmtp/ssmtp.conf will be created and will contain the email address and its account's password. If the unlikely happens and a hacker finds out the email address and password, I could be in a lot of trouble in cases where I store payment information (I don't, I never did, and I am not planning to, but still, it should be taken seriously). How could I protect the aforementioned file? Maybe by encrypting it somehow? For the moment I don't want to run an email server that requires configuring email DNS records, etc.
ssmtp has to use your login and password to send the mail. If it were encrypted, ssmtp would have to decrypt it, so a hacker could do the same. The file /etc/ssmtp/ssmtp.conf should have only the necessary permissions to allow ssmtp to access the file, and you should secure your system to prevent unauthorized access. See https://wiki.archlinux.org/index.php/SSMTP: Because your email password is stored as cleartext in /etc/ssmtp/ssmtp.conf, it is important that this file is secure. By default, the entire /etc/ssmtp directory is accessible only by root and the mail group. The /usr/bin/ssmtp binary runs as the mail group and can read this file. There is no reason to add yourself or other users to the mail group. If you use an app password (see also the web page referenced above) the credentials should not be usable for interactive login.
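You can reproduce the permission setup on a stand-in file to see exactly what it grants (the chown root:mail part needs root, so only the chmod half is shown; the file name here is just an example):

```shell
cd "$(mktemp -d)"
printf 'AuthPass=example-app-password\n' > ssmtp.conf  # stand-in for /etc/ssmtp/ssmtp.conf
chmod 640 ssmtp.conf        # owner: read/write, group: read, others: nothing
stat -c '%a' ssmtp.conf     # → 640
```

On the real system you would additionally run chown root:mail /etc/ssmtp/ssmtp.conf as root, so that only root and the mail group can read the credentials.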
Protecting an email-proxy (Gmail) account being used by a Mail Transfer Agent (MTA)
1,401,572,875,000
I have a specific question about OpenSSL's enc command, but I suppose it applies more generally to Unix/Linux file permissions. I have a bash script which has the following decrypt command: openssl enc -d -aes-256-cbc -in secret.enc -out secret -pass file:./pass.bin I understand if the file permission for pass.bin is set to 700, it's essentially full permission for the owner, and permission denied for everyone else. The bash script that contains the decrypt command (let's just call it "script") is also set to 700 such that it can only be executed by the owner. My understanding then, presuming that I'm not the owner of either files, is that if I attempt to read "script" or "pass.bin", I would get a "permission denied" response. However, what happens if I run the decrypt command in the command line? Will this still result in a decrypted "secret" file?
The fact that this happens to be an OpenSSL command is not important. The script is not readable by anyone but the owner (and root, let's not forget that). This means that a non-owner can't execute it as ./script.sh or run it with e.g. bash script.sh. Had the script been readable, a non-owner would be able to run it, but the decryption, requiring the pass.bin file, would not succeed (since it's not readable by the non-owner). This is also what would happen if you ran the decryption in the shell as a non-owner of the pass.bin file. To convince yourself about these things, set up a new user and try it to see what happens.
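The whole round trip is easy to try. This sketch adds -pbkdf2 (available in OpenSSL 1.1.1 and later) to avoid the weak-key-derivation warning that modern OpenSSL prints for a plain enc invocation; apart from that it mirrors the commands from the question, with throwaway file names:

```shell
cd "$(mktemp -d)"
head -c 32 /dev/urandom > pass.bin
chmod 600 pass.bin                 # only the owner (and root) can read the key file
printf 'top secret\n' > secret
openssl enc -aes-256-cbc -pbkdf2 -salt -in secret -out secret.enc -pass file:./pass.bin
openssl enc -d -aes-256-cbc -pbkdf2 -in secret.enc -out secret.out -pass file:./pass.bin
cmp -s secret secret.out && echo 'round trip OK'
```

A non-owner running the same decrypt line simply gets "permission denied" on pass.bin: the command itself can be public knowledge; the key file's mode is what protects the secret.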
Unix File Permissions and Decryption with OpenSSL's enc command
1,401,572,875,000
I ran across an example pair where there was an original and a 'flip' version: Original rsync -aHAXxv --numeric-ids --delete --progress -e "ssh -T -c arcfour -o Compression=no -x" user@<source>:<source_dir> <dest_dir> "Flip" rsync -aHAXxv --numeric-ids --delete --progress -e "ssh -T -c arcfour -o Compression=no -x" [source_dir] [dest_host:/dest_dir] The two examples seem to be identical right up to the final quotation mark. Why is the second example labeled 'flip', and how is it functionally different from the 'original'? Are the two commands functionally equivalent?
The original author did a sloppy write-up and then you omitted part of what he wrote. He wrote: rsync ... user@<source>:<source_dir> <dest_dir> rsync ... [source_dir] [dest_host:/dest_dir] but it would have been better to write: rsync ... user@<source_host>:<source_dir> <dest_dir> rsync ... <source_dir> user@<dest_host>:<dest_dir> And then the "flip" would have been much more obvious. Basically, the first form "pulls" the files from the remote "source" host to the local "dest" host, and the second form "pushes" the files from the local source to the remote dest. He doesn't mention which is faster. I also think he's wrong to omit the -z option. I use it all the time and I get fantastic speedup. And while we're being pedantic, you can also do these: rsync ... <source_dir> <dest_dir> rsync ... user@<source_host>:<source_dir> user@<dest_host>:<dest_dir> The first is just a local copy; you might want to use it in circumstances where cp, ditto, etc. don't have the options you want. The second form would let you copy files between two remote servers. Finally, if we're being really pedantic, the -AX arguments are Linux-only AFAICT.
rsync 'flip' characterization [closed]
1,401,572,875,000
Does PGPDesktop support OpenBSD or e.g. Fedora, or can I only encrypt my whole hard drive when I use Windows?
If you really want to encrypt a disk under OpenBSD, this can help.
Does PGPDesktop support OpenBSD or e.g. Fedora?
1,401,572,875,000
How can I install GnuPG on my CentOS 7 system? I want to use GnuPG alongside Thunderbird and Enigmail to manage PGP keys, as per the instructions in this link. The problem is that the download instructions I find for Linux all have to do with Debian. Here is an example. EDIT Typing which gpg resulted in /usr/bin/gpg; however, it is not clear that this is the same aspect of GnuPG that needs to be integrated with Thunderbird and Enigmail to manage GPG keys. Before this question can be considered answered, I need to know that GnuPG is installed in a way that can run with Thunderbird and Enigmail. Thus, the answer would give instructions for checking status, and instructions for downloading if it is not properly installed yet. I imagine this might only take several lines of actual methods.
1. Install Enigmail: Thunderbird > Add-ons Menu > Enigmail
2. If you do not already have a PGP key, generate one: gpg --gen-key (follow its prompts to complete the process; default values are generally fine)
3. Restart Thunderbird. Enigmail will probably auto-detect the presence of your GnuPG keychain and use it.
4. If it does not, point it to your GnuPG dir: Thunderbird > Enigmail > Key Management. In the Key Management window, select File > Import Keys from File and show Enigmail your /home/$USER/.gnupg directory. Import your key; ignore errors from Enigmail that it already knew about your key. You should now see your key listed in the Key Management window.
5. Email somebody!
installing GnuPG with Thunderbird on CentOS 7
1,401,572,875,000
I am trying to encrypt a test file I have and decrypt the file using a bash script. I searched online and found I can use openssl and salt to do this. I found the following code online: FNAME=$1 if [[ -z "$FNAME" ]]; then echo "cryptde <name of file>" echo " - cryptde is a script to decrypt des3 encrypted files" exit; fi openssl des3 -d -salt -in "$FNAME" -out "${FNAME%.[^.]*}" How does it work?
FNAME=$1

This assigns the first parameter to FNAME.

if [[ -z "$FNAME" ]]; then

If the string $FNAME is zero in length, then echo the help output and exit.

openssl des3 -d -salt -in "$FNAME" -out "${FNAME%.[^.]*}"

This line runs openssl's des3 command (triple-DES decryption; see man enc), takes $FNAME as the input file name, and writes the output to $FNAME with its extension stripped. The final argument is not a regular expression but shell parameter expansion: % removes the shortest suffix of $FNAME matching the glob pattern .[^.]* (a dot followed by any number of non-dot characters), i.e. the trailing .extension.
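The ${FNAME%.[^.]*} expansion is easy to check in isolation; % removes only the shortest matching suffix, so just the last extension goes (the file names here are arbitrary examples):

```shell
FNAME=backup.tar.enc
echo "${FNAME%.[^.]*}"    # → backup.tar   (only the last extension is removed)
FNAME=archive.des3
echo "${FNAME%.[^.]*}"    # → archive
```

Note that [^.] inside a glob is a bash extension; the strictly POSIX spelling is [!.].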
How to encrypt a file using Bash?
1,401,572,875,000
I've never encrypted files in my Linux distribution. Now I need to do that. I'm on Arch. I went to the documentation, it says that support of TrueCrypt is discontinued and after examining other libraries there I decided to use dm-crypt. But I can't figure out how to simply encrypt a file with it. It requires creating a partition or something like a container. I don't need that. How can I encrypt a file with dm-crypt?
dm-crypt is a transparent disk encryption subsystem. That being said, it's better suited to encrypt disks and partitions. It can encrypt files, but they have to be mapped as devices for this to work. If you want to encrypt only one file, GnuPG could be a better tool. Example: gpg -c filename See Also: nixCraft: Linux: HowTo Encrypt And Decrypt Files With A Password 7 Tools to Encrypt/Decrypt and Password Protect Files in Linux
How can I encrypt a file with dm-crypt? [closed]
1,328,417,304,000
I'm having trouble with escaping characters in bash. I'd like to escape single and double quotes while running a command under a different user. For the purposes of this question let's say I want to echo the following on the screen: 'single quote phrase' "double quote phrase" How can I escape all the special chars, if I also need to switch to a different user: sudo su USER -c "echo \"'single quote phrase' \"double quote phrase\"\"" Of course, this doesn't produce the right result.
You can use the following string literal syntax: > echo $'\'single quote phrase\' "double quote phrase"' 'single quote phrase' "double quote phrase" From man bash Words of the form $'string' are treated specially. The word expands to string, with backslash-escaped characters replaced as specified by the ANSI C standard. Backslash escape sequences, if present, are decoded as follows: \a alert (bell) \b backspace \e \E an escape character \f form feed \n new line \r carriage return \t horizontal tab \v vertical tab \\ backslash \' single quote \" double quote \nnn the eight-bit character whose value is the octal value nnn (one to three digits) \xHH the eight-bit character whose value is the hexadecimal value HH (one or two hex digits) \cx a control-x character
How to escape quotes in the bash shell?
1,328,417,304,000
Are you literally "ending a file" by inputting this escape sequence, i.e. is the interactive shell session is seen as a real file stream by the shell, like any other file stream? If so, which file? Or, is the Ctrl+D signal just a placeholder which means "the user has finished providing input and you may terminate"?
The ^D character (also known as \04 or 0x4, END OF TRANSMISSION in Unicode) is the default value for the eof special control character parameter of the terminal or pseudo-terminal driver in the kernel (more precisely of the tty line discipline attached to the serial or pseudo-tty device). That's the c_cc[VEOF] of the termios structure passed to the TCSETS/TCGETS ioctl one issues to the terminal device to affect the driver behaviour. The typical command that sends those ioctls is the stty command. To retrieve all the parameters: $ stty -a speed 38400 baud; rows 58; columns 191; line = 0; intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D; eol = <undef>; eol2 = <undef>; swtch = <undef>; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R; werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0; -parenb -parodd cs8 -hupcl -cstopb cread -clocal -crtscts -ignbrk -brkint -ignpar -parmrk -inpck -istrip -inlcr -igncr icrnl ixon -ixoff -iuclc -ixany -imaxbel iutf8 opost -olcuc -ocrnl onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0 isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke That eof parameter is only relevant when the terminal device is in icanon mode. In that mode, the terminal driver (not the terminal emulator) implements a very simple line editor, where you can type Backspace to erase a character, Ctrl-U to erase the whole line... When an application reads from the terminal device, it sees nothing until you press Return at which point the read() returns the full line including the last LF character (by default, the terminal driver also translates the CR sent by your terminal upon Return to LF). Now, if you want to send what you typed so far without pressing Enter, that's where you can enter the eof character. 
Upon receiving that character from the terminal emulator, the terminal driver submits the current content of the line, so that the application doing the read on it will receive it as is (and it won't include a trailing LF character). Now, if the current line was empty, and provided the application will have fully read the previously entered lines, the read will return 0 character. That signifies end of file to the application (when you read from a file, you read until there's nothing more to be read). That's why it's called the eof character, because sending it causes the application to see that no more input is available. Now, modern shells, at their prompt do not set the terminal in icanon mode because they implement their own line editor which is much more advanced than the terminal driver built-in one. However, in their own line editor, to avoid confusing the users, they give the ^D character (or whatever the terminal's eof setting is with some) the same meaning (to signify eof).
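You can confirm the byte value of the default eof character without touching any terminal settings (od is shown here; any hex dumper works):

```shell
printf '\004' | od -An -tx1    # → 04   (EOT, what stty displays as ^D)
```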
Why does Ctrl-D (EOF) exit the shell?
1,328,417,304,000
You should never paste from web to your terminal. Instead, you should paste to your text editor, check the command and then paste to the terminal. That's OK, but what if Vim is my text editor? Could one forge a content that switches Vim to command mode and executes the malicious command?
Short answer: In many situations, Vim is vulnerable to this kind of attack (when pasting text in Insert mode). Proof of concept Using the linked article as a starting point, I was able to quickly create a web page with the following code, using HTML span elements and CSS to hide the middle part of the text so that only ls -la is visible to the casual viewer (not viewing the source). Note: the ^[ is the Escape character and the ^M is the carriage return character. Stack Exchange sanitises user input and protects against hiding of content using CSS so I’ve uploaded the proof of concept. ls ^[:echom "This could be a silent command."^Mi -la If you were in Insert mode and pasted this text into terminal Vim (with some qualifiers, see below) you would see ls -la but if you run the :messages command, you can see the results of the hidden Vim command. Defence To defend against this attack it’s best to stay in Normal mode and to paste using "*p or "+p. In Normal mode, when putting text from a register, the full text (including the hidden part) is pasted. This same doesn’t happen in Insert mode (even if :set paste) has been set. Bracketed paste mode Recent versions of Vim support bracketed paste mode that mitigate this type of copy-paste attack. Sato Katsura has clarified that “Support for bracketed paste appeared in Vim 8.0.210, and was most recently fixed in version 8.0.303 (released on 2nd February 2017)”. Note: As I understand it, versions of Vim with support for bracketed paste mode should protect you when pasting using Ctrl-Shift-V (most GNU/Linux desktop environments), Ctrl-V (MS Windows), Command-V (Mac OS X), Shift-Insert or a mouse middle-click. Testing I did some testing from a Lubuntu 16.04 desktop machine later but my results were confusing and inconclusive. 
I’ve since realised that this is because I always use GNU screen but it turns out that screen filters the escape sequence used to enable/disable the bracketed paste mode (there is a patch but it looks like it was submitted at a time when the project was not being actively maintained). In my testing, the proof of concept always works when running Vim via GNU screen, regardless of whether Vim or the terminal emulator support bracketed paste mode. Further testing would be useful but, so far, I found that support for bracketed paste mode by the terminal emulator block my Proof of Concept – as long as GNU screen isn’t blocking the relevant escape sequences. However, user nneonneo reports that careful crafting of escape sequences may be used to exit bracketed paste mode. Note that even with an up-to-date version of Vim, the Proof of Concept always works if the user pastes from the * register while in Insert mode by typing (Ctrl-R*). This also applies to GVim which can differentiate between typed and pasted input. In this case, Vim leaves it to the user to trust the contents of their register contents. So don’t ever use this method when pasting from an untrusted source (it’s something I often do – but I’ve now started training myself not to). Related links What you see is not what you copy (from 2009, first mention of this kind of exploit that I found) How can I protect myself from this kind of clipboard abuse? Recent discussion on vim_dev mailing list (Jan 2017) Conclusion Use Normal mode when pasting text (from the + or * registers). … or use Emacs. I hear it’s a decent operating system. :)
Is Vim immune to copy-paste attack?
1,328,417,304,000
I can use the "script" command to record an interactive session at the command line. However, this includes all control characters and colour codes. I can remove control characters (like backspace) with "col -b", but I can't find a simple way to remove the colour codes. Note that I want to use the command line in the normal way, so don't want to disable colours there - I just want to remove them from the script output. Also, I know can play around and try find a regexp to fix things up, but I am hoping there is a simpler (and more reliable - what if there's a code I don't know about when I develop the regexp?) solution. To show the problem: spl62 tmp: script Script started, file is typescript spl62 lepl: ls add-licence.sed build-example.sh commit-test push-docs.sh add-licence.sh build.sh delete-licence.sed setup.py asn build-test.sh delete-licence.sh src build-doc.sh clean doc-src test.ini spl62 lepl: exit Script done, file is typescript spl62 tmp: cat -v typescript Script started on Thu 09 Jun 2011 09:47:27 AM CLT spl62 lepl: ls^M ^[[0m^[[00madd-licence.sed^[[0m ^[[00;32mbuild-example.sh^[[0m ^[[00mcommit-test^[[0m ^[[00;32mpush-docs.sh^[[0m^M ^[[00;32madd-licence.sh^[[0m ^[[00;32mbuild.sh^[[0m ^[[00mdelete-licence.sed^[[0m ^[[00msetup.py^[[0m^M ^[[01;34masn^[[0m ^[[00;32mbuild-test.sh^[[0m ^[[00;32mdelete-licence.sh^[[0m ^[[01;34msrc^[[0m^M ^[[00;32mbuild-doc.sh^[[0m ^[[00;32mclean^[[0m ^[[01;34mdoc-src^[[0m ^[[00mtest.ini^[[0m^M spl62 lepl: exit^M Script done on Thu 09 Jun 2011 09:47:29 AM CLT spl62 tmp: col -b < typescript Script started on Thu 09 Jun 2011 09:47:27 AM CLT spl62 lepl: ls 0m00madd-licence.sed0m 00;32mbuild-example.sh0m 00mcommit-test0m 00;32mpush-docs.sh0m 00;32madd-licence.sh0m 00;32mbuild.sh0m 00mdelete-licence.sed0m 00msetup.py0m 01;34masn0m 00;32mbuild-test.sh0m 00;32mdelete-licence.sh0m 01;34msrc0m 00;32mbuild-doc.sh0m 00;32mclean0m 01;34mdoc-src0m 00mtest.ini0m spl62 lepl: exit Script done on Thu 09 Jun 2011 09:47:29 AM CLT
The following script should filter out all ANSI/VT100/xterm control sequences (based on ctlseqs). Minimally tested, please report any under- or over-match. #!/usr/bin/env perl ## uncolor — remove terminal escape sequences such as color changes while (<>) { s/ \e[ #%()*+\-.\/]. | \e\[ [ -?]* [@-~] | # CSI ... Cmd \e\] .*? (?:\e\\|[\a\x9c]) | # OSC ... (ST|BEL) \e[P^_] .*? (?:\e\\|\x9c) | # (DCS|PM|APC) ... ST \e. //xg; print; } Known issues: Doesn't complain about malformed sequences. That's not what this script is for. Multi-line string arguments to DCS/PM/APC/OSC are not supported. Bytes in the range 128–159 may be parsed as control characters, though this is rarely used. Here's a version which parses non-ASCII control characters (this will mangle non-ASCII text in some encodings including UTF-8). #!/usr/bin/env perl ## uncolor — remove terminal escape sequences such as color changes while (<>) { s/ \e[ #%()*+\-.\/]. | (?:\e\[|\x9b) [ -?]* [@-~] | # CSI ... Cmd (?:\e\]|\x9d) .*? (?:\e\\|[\a\x9c]) | # OSC ... (ST|BEL) (?:\e[P^_]|[\x90\x9e\x9f]) .*? (?:\e\\|\x9c) | # (DCS|PM|APC) ... ST \e.|[\x80-\x9f] //xg; print; }
Removing control chars (including console codes / colours) from script output
1,328,417,304,000
For example: "\e[1;5C" "\e[Z" "\e-1\C-i" I only know bits and pieces, like \e stands for escape and C- for Ctrl, but what are these numbers (1) and letters (Z)? What are the ;, [ and - signs for? Is there only trial and error, or is there a complete list of bash key codes and an explanation of their syntax?
Those are sequences of characters sent by your terminal when you press a given key. Nothing to do with bash or readline per se, but you'll want to know what sequence of characters a given key or key combination sends if you want to configure readline to do something upon a given key press. When you press the A key, generally terminals send the a (0x61) character. If you press <Ctrl-I> or <Tab>, then generally send the ^I character also known as TAB or \t (0x9). Most of the function and navigation keys generally send a sequence of characters that starts with the ^[ (control-[), also known as ESC or \e (0x1b, 033 octal), but the exact sequence varies from terminal to terminal. The best way to find out what a key or key combination sends for your terminal, is run sed -n l and to type it followed by Enter on the keyboard. Then you'll see something like: $ sed -n l ^[[1;5A \033[1;5A$ The first line is caused by the local terminal echo done by the terminal device (it may not be reliable as terminal device settings would affect it). The second line is output by sed. The $ is not to be included, it's only to show you where the end of the line is. Above that means that Ctrl-Up (which I've pressed) send the 6 characters ESC, [, 1, ;, 5 and A (0x1b 0x5b 0x31 0x3b 0x35 0x41) The terminfo database records a number of sequences for a number of common keys for a number of terminals (based on $TERM value). For instance: TERM=rxvt tput kdch1 | sed -n l Would tell you what escape sequence is send by rxvt upon pressing the Delete key. You can look up what key corresponds to a given sequence with your current terminal with infocmp (here assuming ncurses infocmp): $ infocmp -L1 | grep -F '=\E[Z' back_tab=\E[Z, key_btab=\E[Z, Key combinations like Ctrl-Up don't have corresponding entries in the terminfo database, so to find out what they send, either read the source or documentation for the corresponding terminal or try it out with the sed -n l method described above.
Where do I find a list of terminal key codes to remap shortcuts in bash?
1,328,417,304,000
Whenever I'm at the console login, I press the up arrow intentionally to see the previously typed commands. But I see this: ^[[A. Yet when I press Ctrl, Alt, Print Screen, Scroll Lock, Pause/Break, Page Up, Page Down or the Win key, nothing is echoed. What might be the reason behind this? Do characters like ^[[A imply anything?
Keyboards send events to the computer. An event says “scan code nnn down” or “scan code nnn up”. At the other end of the chain, applications running in a terminal expect input in the form of a sequence of characters. (Unless they've requested raw access, like the X server does.) When you press A, the keyboard sends the information “scan code 38 down”. The console driver looks up its keymap and transforms this into “character a” (if no modifier key is pressed). When you press a key or key combination that doesn't result in a character, the information needs to be encoded in terms of characters. A few keys and key combinations have corresponding control characters, e.g. Ctrl+A sends the character ␁ (byte value 1), Return sends the character ␍ (Ctrl+M, byte value 13), etc. Most function keys don't have a corresponding character and instead send a sequence of characters that starts with the ␛ (escape, byte value 27) character. For example, the key Up is translated into the escape sequence ␛[A (three characters: escape, open bracket, capital A). The user name prompt on the console is dumb and doesn't understand most escape sequences. It doesn't have the line edition and history features that you're used to: those are provided by the shell, and until you log in, you don't have a shell. So it simply displays the escape sequence. There is no glyph for the ␛ character, so it's displayed as ^[. The ^ sign is traditionally used as a prefix for control characters, and escape is ^[ because of its byte value: it's the byte value of [, minus 64. If you press Up at a shell prompt, this sends the same 3-character sequence to your shell. The shell interprets this as a command sequence (typically to recall the previous history item). 
If you press Ctrl+V then Up at a shell prompt, this inserts the escape sequence at the prompt: Ctrl+V is a command to insert the next character literally instead of interpreting it as a command, so the ␛ character is not interpreted as the start of an escape sequence. Some keys are only modifiers and are not transmitted to terminal applications. For example, when you press Shift, this information stays in the terminal driver, and is taken into account if you then press A, so the driver sends A to the application instead of a. Additionally some function keys may not be mapped in your console. For a similar view in the GUI, see What is bash's meta key?
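You can reproduce the login prompt's rendering with cat -v, which displays the escape character as ^[ just like the dumb prompt does:

```shell
printf '\033[A\n' | cat -v    # → ^[[A   (the Up-arrow sequence, made visible)
```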
Is there any reason why I get ^[[A when I press up arrow at the console login screen?
1,328,417,304,000
When we use clear command or Ctrl+L in terminal, it clears terminal but we can still scroll back to view the last used commands. Is there a way to completely clear the terminal?
You can use tput reset. Besides reset and tput reset you can use the following shell script. #!/bin/sh printf '\033c' This sends the control sequence Esc c to the console, which resets the terminal. Google Keywords: Linux Console Control Sequences man console_codes says: The sequence ESC c causes a terminal reset, which is what you want if the screen is all garbled. The oft-advised "echo ^V^O" will only make G0 current, but there is no guarantee that G0 points at table a). In some distributions there is a program reset(1) that just does "echo ^[c". If your terminfo entry for the console is correct (and has an entry rs1=\Ec), then "tput reset" will also work.
How do I clear the Gnome terminal history?
1,328,417,304,000
I sometimes want to pipe the color-coded output from a process, e.g. grep... but when I pipe it to another process, e.g. sed, the color codes are lost... Is there some way to keep these codes intact? Here is an example which loses the colored output: echo barney | grep barney | sed -n 1,$\ p
Many programs that generate colored output detect if they're writing to a TTY, and switch off colors if they aren't. This is because color codes are annoying when you only want to capture the text, so they try to "do the right thing" automatically. The simplest way to capture color output from a program like that is to tell it to write color even though it's not connected to a TTY. You'll have to read the program's documentation to find out if it has that option. (e.g., grep has the --color=always option.) You could also use the expect script unbuffer to create a pseudo-tty like this: echo barney | unbuffer grep barney | sed -n 1,$\ p
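With GNU grep you can verify that --color=always keeps the SGR escape sequences alive through the pipe by making them visible with cat -v (the exact color numbers depend on your GREP_COLORS setting):

```shell
# Force color even though stdout is a pipe, then render the escapes visibly.
echo barney | grep --color=always barney | cat -v
# The output contains sequences like ^[[01;31m wrapped around the match.
```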
Where do my ANSI escape codes go when I pipe to another process? Can I keep them?
1,328,417,304,000
Whenever I use a pager like less or an editor like nano in the shell (my shell is GNU bash), I see a behaviour I cannot explain completely and which differs from the behaviour I can observe with other tools like cat or ls. I would like to ask how this behaviour comes about. The —not easy to explain— behaviour is that normally all output to stdout/stderr ends up being recorded in the terminal emulator's backbuffer, so I can scroll back, while (not so normally to me) in the case of using less or nano, output is displayed by the terminal emulator, yet upon exiting the programs, the content "magically disappears". I would like to give those two examples: seq 1 200 (produces 200 lines in the backbuffer) seq 1 200 | less (lets me page 200 lines, yet eventually "cleans up" and nothing is recorded in the backbuffer) My suspicion is that some sort of escape codes are in play and I would appreciate someone pointing me to an explanation of the observed behavioural differences. Since some comments and answers are phrased as if it were my desire to change the behaviour, this "would be nice to know", but indeed the desired answer should be the description of the mechanism, not as much the ways to change it.
There are two worldviews here: As far as programs using termcap/terminfo are concerned, your terminal potentially has two modes: cursor addressing mode and scrolling mode. The latter is the normal mode, and a program switches to cursor addressing mode when it needs to move the cursor around the screen by row and column addresses, treating the screen as a two-dimensional entity. termcap and terminfo handle translating this worldview, which is what programs see, into the worldview as seen by terminals. As far as a terminal (emulated or real) is concerned, there are two screen buffers, only one of which is displayed at any time. There's a primary screen buffer and an alternate screen buffer. Control sequences emitted by programs switch the terminal between the two. For some terminals, usually emulated ones, the alternate screen buffer is tailored to the usage of termcap/terminfo. They are designed with the knowledge that part of switching to cursor addressing mode is switching to the alternate screen buffer and part of switching to scrolling mode is switching to the primary screen buffer. This is how termcap/terminfo translate things. So these terminals don't show scrolling user interface widgets when the alternate screen buffer is being displayed, and simply have no scrollback mechanism for that screen buffer. For other terminals, usually real ones, the alternate screen buffer is pretty much like the primary. Both are largely identical in terms of what they support. A few emulated terminals fall into this class, note. Unicode rxvt, for example, has scrollback for both the primary and alternate screen buffers. Programs that present full-screen textual user interfaces (such as vim, nano, less, mc, and so forth) use termcap/terminfo to switch to cursor-addressing mode at start-up and back to scrolling mode when they suspend, or shell out, or exit. The ncurses library does this, but so too do non-ncurses-using programs that build more directly on top of termcap/terminfo.
The scrolling within TUIs presented by less or vim has nothing to do with scrollback. That is implemented inside those programs, which are just redrawing their full-screen textual user interface as appropriate as things scroll around. Note that these programs do not "leave no content" in the alternate screen buffer. The terminal simply is no longer displaying what they leave behind. This is particularly noticeable with Unicode rxvt on some platforms, where the termcap/terminfo sequences for switching to cursor addressing mode do not implicitly clear the alternate screen buffer. So using multiple such full-screen TUI programs in succession can end up displaying the old contents of the alternate screen buffer as left there by the last program, at least for a little while until the new program writes its output (most noticeable when less is at the end of a pipeline). With xterm, one can switch to displaying the alternate screen buffer from the GUI menu of the terminal emulator, and see the content still there. The actual control sequences are what the relevant standards call set private mode control sequences. The relevant private mode numbers are 47, 1047, 1048, and 1049. Note the differences in what extra actions are implied by each, on top of switching to/from the alternate screen buffer. Further reading How to save/restore terminal output Entering/exiting alternate screen buffer OpenSSH, FreeBSD screen overwrite when closing application What exactly is scrollback and scrollback buffer? Building blocks of interactive terminal applications
How do some tools (e.g. nano, less) manage to leave no content in terminals after exit? [duplicate]
1,328,417,304,000
For example, I have a output: Hello, this is the output. (let's say that for example hello is colored red, and the is colored green, and output is colored purple). Now, let's say that this is the output of a command named x. If I use this command, the output becomes white: x | grep hello I've read that one could use grep --color=always. However, this changes the color to highlight the result I searched for instead of keeping the original line colors. I want to keep the original line colors. How do I use grep while keeping them?
You could do this, x | grep --color=never hello To quickly test it, you can do, ls -l /etc/ --color=always | grep --color=never .
How do I use grep while keeping the original text color? [duplicate]
1,328,417,304,000
I've been given this one-liner to know how much memory my processes consume. $ ps -u $USER -o pid,rss,command | \ awk '{print $0}{sum+=$2} END {print "Total", sum/1024, "MB"}' Now I'd like to create an alias for that command, but have an issue escaping ' characters: $ alias mm=ps -u $USER -o pid,rss,command | \ awk '{print $0}{sum+=$2} END {print "Total", sum/1024, "MB"}' bash: alias: -u: not found bash: alias: myuser: not found bash: alias: -o: not found bash: alias: pid,rss,command: not found Total 0 MB I tried to escape the single quotes, but still it doesn't work. $ alias mm='ps -u $USER -o pid,rss,command | \ awk \'{print $0}{sum+=$2} END {print "Total", sum/1024, "MB"}\'' > Can you help me understand how to create this alias?
To make this an alias, which is possible, you need to use double quotes around the entire value for the alias. You'll also need to escape a few things within the alias as a result, and you need to escape any of the field arguments to awk since these will get interpreted as arguments by Bash as well when you're setting the alias. This worked for me: $ alias mm="ps -u $USER -o pid,rss,command | \ awk '{print \$0}{sum+=\$2} END {print \"Total\", sum/1024, \"MB\"}'" In the above I've done the following: Double quotes around alias' value alias mm="ps -u ...." Escaped awk's double quotes awk '{print \$0}{sum+=\$2} END {print \"Total\", sum/1024, \"MB\"} Escaped awk's fields awk '{print \$0}{sum+=\$2} END Would I use this? Probably not, I'd switch this to a Bash function instead, since it'll be easier to maintain and understand what's going on, but here's the alias if you still want it.
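As suggested above, the same thing written as a shell function needs none of the escaping gymnastics. This is a sketch (the name mm and the ${USER:-$(id -un)} fallback for environments where USER is unset are my additions):

```shell
# Per-user memory summary as a function: single quotes work normally,
# because the body is only parsed when the function runs.
mm() {
  ps -u "${USER:-$(id -un)}" -o pid,rss,command |
    awk '{print $0} {sum+=$2} END {print "Total", sum/1024, "MB"}'
}
```

Functions can also take positional arguments naturally, which aliases cannot.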
How to escape single quotes correctly creating an alias [duplicate]
1,328,417,304,000
I have seen in some screenshots (can't remember where on the web) that the terminal can display the [username@machine /]$ in bold letters. I would like to get this too because I always find myself scrolling through long outputs, struggling to find the first line after my command. How can I make the user name etc. bold or coloured?
You should be able to do this by setting the PS1 prompt variable in your ~/.bashrc file like this: PS1='[\u@\h \w]\$ ' To make it colored (and possibly bold - this depends on whether your terminal emulator has enabled it) you need to add escape color codes: PS1='\[\e[1;91m\][\u@\h \w]\$\[\e[0m\] ' Here, everything not being escaped between the 1;91m and 0m parts will be colored in the 1;91 color (bold red). Put these escape codes around different parts of the prompt to use different colors, but remember to reset the colors with 0m or else you will have colored terminal output as well. Remember to source the file afterwards to update the current shell: source ~/.bashrc
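As an aside, the \[ and \] in that PS1 mark the escape codes as non-printing so bash can compute the prompt width correctly (without them, line wrapping misbehaves). You can preview a color combination outside of PS1 with printf; 1;91 is bold bright red and 0m resets:

```shell
# Bold bright-red text followed by normal text, then reset attributes.
printf '\033[1;91m%s\033[0m %s\n' 'user@host' 'normal text'
```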
How to make the terminal display user@machine in bold letters?
1,328,417,304,000
I want to use printf to print a variable. It might be possible that this variable contains a % percent sign. Minimal example: $ TEST="contains % percent" $ echo "${TEST}" contains % percent $ printf "${TEST}\n" bash: printf: `p': invalid format character contains $ (echo provides the desired output.)
Use printf in its normal form: printf '%s\n' "${TEST}" From man printf: SYNOPSIS printf FORMAT [ARGUMENT]... You should never pass a variable to the FORMAT string as it may lead to errors and security vulnerabilities. Btw: if you want to have % sign as part of the FORMAT, you need to enter %%, e.g.: $ printf '%d%%\n' 100 100%
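The difference is easy to demonstrate: the variable is safe as an argument but not as the format string, because in the format position `% p` is parsed as a conversion specifier:

```shell
TEST='contains % percent'
printf '%s\n' "$TEST"    # prints: contains % percent
# printf "$TEST\n"       # would fail: `p': invalid format character
```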
Using `printf` to print variable containing `%` percent sign results in "bash: printf: `p': invalid format character"
1,328,417,304,000
Reading the details of CVE-2009-4487 (which is about the danger of escape sequences in log files) I am a bit surprised. To quote CVE-2009-4487: nginx 0.7.64 writes data to a log file without sanitizing non-printable characters, which might allow remote attackers to modify a window's title, or possibly execute arbitrary commands or overwrite files, via an HTTP request containing an escape sequence for a terminal emulator. Clearly, this is not really about a security hole in nginx, but in the terminal emulators. Sure, perhaps cating a log file to the terminal happens only by accident, but greping a logfile is quite common. less perhaps sanitizes escape sequences, but who knows what shell commands don't change escape sequences ... I tend to agree with the Varnish response: The wisdom of terminal-response-escapes in general have been questioned at regular intervals, but still none of the major terminal emulation programs have seen fit to discard these sequences, probably in a misguided attempt at compatibility with no longer used 1970'es technology. [..] Instead of blaming any and all programs which writes logfiles, it would be much more productive, from a security point of view, to get the terminal emulation programs to stop doing stupid things, and thus fix this and other security problems once and for all. Thus my questions: How can I secure my xterm, such that it is not possible anymore to execute commands or overwrite files via escape sequences? What terminal emulators for X are secured against this attack?
VT100 terminals (which all modern terminal emulators emulate to some extent) supported a number of problematic commands, but modern emulators or distributions disable the more problematic and less useful ones. Here's a non-exhaustive list of potentially risky escape sequences (not including the ones that merely make the display unreadable in some way): The arbitrary log file commands in rxvt and Eterm reported by H.D. Moore. These are indeed major bugs, fortunately long fixed. The answerback command, also known as Return Terminal Status, invoked by ENQ (Ctrl+E). This inserts text into the terminal as if the user had typed it. However, this text is not under control of the attacker: it is the terminal's own name, typically something like xterm or screen. On my system (Debian squeeze), xterm returns the empty string by default (this is controlled by the answerbackString resource). The Send Device Attributes commands, ESC [ c and friends. The terminal responds with ESC [ … c (where the … can contain digits and ASCII punctuation marks only). This is a way of querying some terminal capabilities, mostly obsolete but perhaps used by old applications. Again, the terminal's response is indistinguishable from user input, but it is not under control of the attacker. The control sequence might look like a function key, but only if the user has an unusual configuration (none of the usual settings I've encountered has a valid function key escape sequence that's a prefix of the terminal response). The various device control functions (DCS escapes, beginning with ESC P). I don't know what harm can be done through DECUDK (set user-defined keys) on a typical terminal emulator. DECRQSS (Request Status String) is yet another command to which the terminal responds with an escape sequence, this time beginning with \eP; this can be problematic since \eP is a valid key (Alt+Shift+P). Xterm has two more experimental features: ESC P + p … and ESC P + q …, to get and set termcap strings. 
From the description, this might be used at least to modify the effect of function keys. Several status report commands: ESC [ … n (Device Status Report). The terminal responds with an escape sequence. Most of these escape sequences don't correspond to function key escape sequences. One looks problematic: the report to ESC [ 6 n is of the form ESC [ x ; y R where x and y are digit sequences, and this could look like F3 with some modifiers. Window manipulation commands ESC [ … t. Some of these allow the xterm window to be resized, iconified, etc., which is disruptive. Some of these cause the terminal to respond with an escape sequence. Most of these escape sequences look low-risk, however there are two dangerous commands: the answers to ESC [ 2 0 t and ESC [ 2 1 t include the terminal window's icon label and title respectively, and the attacker can choose these. At least under Debian squeeze, xterm ignores these commands by default; they can be enabled by setting the allowWindowOps resource, or selectively through the disallowedWindowOps resource. Gnome-terminal under Ubuntu 10.04 implements even the title answerbacks by default. I haven't checked other terminals or versions. Commands to set the terminal title or icon name. Under xterm and most other X terminals, they are ESC ] digit ; title ESC \. Under Screen, the escape sequence is ESC k title ESC \. I find the concern over these commands overrated. While they do allow some amount of mischief, any web page has the same issue. Acting on a window based solely on its title and not on its class is akin to opening a file whose name was given to you by an untrusted party, or not quoting a variable expansion in a shell script, or patting a rabid dog on the nose — don't complain if you get bitten. I find Varnish's response disingenuous. It feels like it's either trying to shift the blame, or in security nazi mode (any security concern, genuine or not, justifies blackballing a feature).
The wisdom of terminal-response-escapes in general have been questioned at regular intervals, but still none of the major terminal emulation programs have seen fit to discard these sequences, probably in a misguided attempt at compatibility with no longer used 1970'es technology. (…) Instead of blaming any and all programs which writes logfiles, it would be much more productive, from a security point of view, to get the terminal emulation programs to stop doing stupid things, and thus fix this and other security problems once and for all. Many of the answerbacks are useful features: an application does need to know things like the cursor position and the window size. Setting the window title is also very useful. It would be possible to rely entirely on ioctl calls for these, however this would have required additional code and utilities to make these ioctl calls and translate them into unix-style text passing on file descriptors. Changing these interfaces now would be a lot of work, for little benefit. Text files are not supposed to contain non-printing characters such as control characters. Log files are generally expected to be text files. Log files should not contain control characters. If you're worried that a file might contain escape sequences, open it in an editor, or view it with less without the -r or -R option, or view it through cat -v.
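Following that last piece of advice, cat -v renders an embedded title-setting attack inert and visible. Here is a contrived log line carrying an OSC title-change sequence (ESC ] 0 ; … BEL):

```shell
# The raw bytes would retitle many terminals; cat -v shows them as text.
printf 'GET /\033]0;pwned\007 HTTP/1.1\n' | cat -v
# prints: GET /^[]0;pwned^G HTTP/1.1
```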
How to avoid escape sequence attacks in terminals?
1,328,417,304,000
To my understanding man uses less as a pager, and when searching for keywords using less it "highlights" keywords with italics. I find that really inconvenient, so I'd like to change this to something like vim's set hlsearch where the found pattern has a different background. I attempted to run man -P vim systemd but that quit with error status 1, so it looks like I'm stuck with less. There was nothing that I was able to find in man less that helped (instead I found out that option -G will turn off highlighting altogether, which is even worse than italics). That being said, does anyone know how to achieve search highlighting (change background color) in man pages? FYI I run Ubuntu 14.10. I came across this question which seems to ask about the same thing, but I am not sure I follow how it works (LESS_TERMCAP_so). The less man page does not mention this. (I get strange results with this solution)
Found an answer over on the superuser: https://superuser.com/questions/566082/less-doesnt-highlight-search Looks like it has to do with your TERM setting. For example, less highlighting acts normally (white background highlight) when in a normal gnome-terminal window, but when I'm in tmux, italics happens. The difference for me is that TERM is being set to "screen" when in tmux, but "xterm-256color" when not. When I set "TERM=xterm-256color" in the tmux window, highlighting in less goes back to background highlighting.
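If changing TERM is not practical, the LESS_TERMCAP_so route from the question is another lever: less uses the standout enter/exit sequences for search matches and honors environment overrides for them. Whether this takes effect can depend on your less version and terminal, which may explain the "strange results". A sketch:

```shell
# Override standout with reverse video: ESC[7m to enter, ESC[0m to leave.
LESS_TERMCAP_so=$(printf '\033[7m') \
LESS_TERMCAP_se=$(printf '\033[0m') \
less /etc/hosts
```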
Make Less highlight search patterns instead of italicizing them
1,328,417,304,000
When I use less at the command line the output is like but when I use less from within a bash script I get: How can I use less in a bash script and not get all the escape characters and have it work like it does at the interactive command line?
Use -R flag: -r or --raw-control-chars Causes "raw" control characters to be displayed. The default is to display control characters using the caret notation; for example, a control-A (octal 001) is displayed as "^A". Warning: when the -r option is used, less cannot keep track of the actual appearance of the screen (since this depends on how the screen responds to each type of control character). Thus, various display problems may result, such as long lines being split in the wrong place. -R or --RAW-CONTROL-CHARS Like -r, but only ANSI "color" escape sequences are output in "raw" form. Unlike -r, the screen appearance is maintained correctly in most cases. ANSI "color" escape sequences are sequences of the form: ESC [ ... m where the "..." is zero or more color specification characters For the purpose of keeping track of screen appearance, ANSI color escape sequences are assumed to not move the cursor. You can make less think that characters other than "m" can end ANSI color escape sequences by setting the environment variable LESSANSIENDCHARS to the list of characters which can end a color escape sequence. And you can make less think that characters other than the standard ones may appear between the ESC and the m by setting the environment variable LESSANSIMIDCHARS to the list of characters which can appear. From less man page.
How to use less in a script without getting ESC escape characters?
1,328,417,304,000
Can't figure out how to escape everything while using awk. I need to enclose each input string with single quotes, e.g. input string1 string2 string3 output 'string1' 'string2' 'string3' Been fighting with escaping ' " $0 and everything else and I just cannot make it work. Either $0 is passed to bash directly, or something else happens.
Here are a couple of ways: use octal escape sequences. On ASCII-based systems where ' is encoded as byte 39 (octal 047), that would be: awk '{print "\047" $0 "\047"}' input 'string1' 'string2' 'string3' pass the quote as a variable $ awk -v q="'" '{print q $0 q}' input 'string1' 'string2' 'string3'
awk print apostrophe/single quote
1,328,417,304,000
I just noticed that it seems like the flag -e does not exist for the echo command in my shell on Linux. Is this just a messed up setting or is it "normal"? Some code as an example: #!/bin/sh echo -e "\e[3;12r\e[3H" Prints: -e \e[3;12r\e[3H This worked before! I guess some stty commands went terribly wrong and now it does not work anymore. Somebody suggested that my sh was actually just bash.
Because you used sh, not bash, the echo command in sh doesn't have the -e option. From the sh manpage: echo [-n] args... Print the arguments on the standard output, separated by spaces. Unless the -n option is present, a newline is output following the arguments. And it doesn't have \e, either: If any of the following sequences of characters is encountered during output, the sequence is not output. Instead, the specified action is performed: \b A backspace character is output. \c Subsequent output is suppressed. This is normally used at the end of the last argument to suppress the trailing new‐ line that echo would otherwise output. \f Output a form feed. \n Output a newline character. \r Output a carriage return. \t Output a (horizontal) tab character. \v Output a vertical tab. \0digits Output the character whose value is given by zero to three octal digits. If there are zero digits, a nul character is output. \\ Output a backslash. All other backslash sequences elicit undefined behaviour.
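A portable way to emit the same bytes from plain sh is printf, since POSIX requires it to understand octal escapes like \033, unlike the various echo dialects:

```shell
#!/bin/sh
# Portable equivalent of the bash-ism `echo -e "\e[3;12r\e[3H"`:
printf '\033[3;12r\033[3H'
```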
Escape sequences with "echo -e" in different shells
1,328,417,304,000
I was reading this message from the zsh mailing list about key bindings and I'd like to know which key I need to press: ^X^I (I think Ctrl-X Ctrl-I, the capital X and I) ^[^@ (I think Ctrl-Esc-@ ??) ^X^[q(I think Ctrl-X Esc-q ??) ^XQ (I think Ctrl-X and Q ??) From the Archlinux wiki page on zsh ^[[1;3A ^[[1;3D From bindkey ^[[1;5C ^[[A I know that ^[ means Esc, but I'm not sure how to find others. Is there any official reference or website that lists these?
^c is a common notation for Ctrl+c where c is a (uppercase) letter or one of @[\]^_. It designates the corresponding control character. The correspondence is that the numeric code of the control character is the numeric code of the printable character (letter or punctuation symbol) minus 64, which corresponds to setting a bit to 0 in base 2. In addition, ^? often means character 127. Some keys send a control character: Escape = Ctrl+[ Tab = Ctrl+I Return (or Enter or ⏎) = Ctrl+M Backspace = Ctrl+? or Ctrl+H (depending on the terminal configuration) Alt (often called Meta because that was the name of the key at that position on historical Unix machines) plus a printable character sends ^[ (escape) followed by that character. Most function and cursor keys send an escape sequence, i.e. the character ^[ followed by some printable characters. The details depend on the terminal and its configuration. For xterm, the defaults are documented in the manual. The manual is not beginner-friendly. Here are some tips to help: CSI means ^[[, i.e. escape followed by open-bracket. SS3 means ^[O, i.e. escape followed by uppercase-O. "application mode" is something that full-screen programs usually turn on. Some keys send a different escape sequence in this mode, for historical reasons. (There are actually multiple modes but I won't go into a detailed discussion because in practice, if it matters, you can just bind the escape sequences of both modes, since there are no conflicts.) Modifiers (Shift, Ctrl, Alt/Meta) are indicated by a numerical code. Insert a semicolon and that number just before the last character of the escape sequence. Taking the example in the documentation: F5 sends ^[[15~, and Shift+F5 sends ^[[15;2~. For cursor keys that send ^[[ and one letter X, to indicate a modifier M, the escape sequence is ^[[1;MX. Xterm follows an ANSI standard which itself is based on historical usage dating back from physical terminals. 
Most modern terminal emulators follow that ANSI standard and implement some but not all of xterm's extensions. Do expect minor variations between terminals though. Thus: ^X^I = Ctrl+X Ctrl+I = Ctrl+X Tab ^[^@ = Ctrl+Alt+@ = Escape Ctrl+@. On most terminals, Ctrl+Space also sends ^@ so ^[^@ = Ctrl+Alt+Space = Escape Ctrl+Space. ^X^[q = Ctrl+X Alt+q = Ctrl+X Escape q ^XQ = Ctrl+X Shift+q ^[[A = Up ^[[1;3A = Alt+Up (Up, with 1;M to indicate the modifier M). Note that many terminals don't actually send these escape sequences for Alt+cursor key. ^[[1;3D = Alt+Left ^[[1;5C = Ctrl+Right There's no general, convenient way to look up the key corresponding to an escape sequence. The other way round, pressing Ctrl+V followed by a key chord at a shell prompt (or in many terminal-based editors) inserts the escape sequence literally. See also How do keyboard input and text output work? and key bindings table?
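The "minus 64" rule behind this notation can be checked with printf, whose leading-quote argument syntax yields a character's numeric code:

```shell
# '[' has code 91, so ESC = 91 - 64 = 27; 'M' has code 77, so CR = 13.
printf '%d %d\n' "'[" "'M"
# prints: 91 77
```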
What does the ^ character mean in sequences like ^X^I?
1,328,417,304,000
I would like to output this on completion of my bash script. /\_/\ ( o.o ) > ^ < I have tried the following but all return errors. echo /\\_/\\\n\( o.o \)\n > ^ < echo \/\\_\/\\\r\n( o.o )\r\n > ^ < echo /\\_/\\\n\( o.o \)\n \> ^ < How do I escape these characters so that bash renders them as a string?
In this case, I'd use cat with a (quoted) here-document: cat <<'END_CAT' /\_/\ ( o.o ) > ^ < END_CAT This is the best way of ensuring the ASCII art is outputted the way it is intended without the shell "getting in the way" (expanding variables etc., interpreting backslash escape sequences, or doing redirections, piping etc.) You could also use a multi-line string with printf: printf '%s\n' ' /\_/\ ( o.o ) > ^ <' Note the use of single quotes around the static string that we want to output. We use single quotes to ensure that the ASCII art is not interpreted in any way by the shell. Also note that the string that we output is the second argument to printf. The first argument to printf is always a single quoted formatting string, where backslashes are far from inactive. Or multiple strings with printf (one per line): printf '%s\n' ' /\_/\' '( o.o )' ' > ^ <' printf '%s\n' \ ' /\_/\' \ '( o.o )' \ ' > ^ <' Or, with echo (but see Why is printf better than echo? ; basically, depending on the shell and its current settings, there are possible issues with certain escape sequences that may not play nice with ASCII drawings), echo ' /\_/\ ( o.o ) > ^ <' But again, just outputting it from a here-document with cat would be most convenient and straight-forward I think.
How do you output a multi-line string that includes slashes and other special characters?
1,328,417,304,000
Apologies if this has already been asked, but I have no idea how I can find this out myself - when I search for "^[[A" in any search engine it ignores the "^[[" part altogether. Anyway, my question: Sometimes in terminal (on a Mac) when I hit the arrow keys, the characters "^[[A", "^[[B", "^[[C" or "^[[D" appear. I seem to remember encountering this years ago when using (most likely) DOS as well and I think it occured a lot more frequently too. Why does this happen and what do they mean?
These are ANSI escape codes. The ^[ represents an ESC (escape) character, the next [ is an actual left square bracket, and the letter indicates the function of the escape code. The Esc[ part is called the CSI (Control Sequence Introducer). So the sequence CSI A means arrow up, or CUU (CUrsor Up). Anyway, this scheme dates back to the time of the VT100 display terminal (introduced in 1978). Some of the escape sequences used by the VT100 were standardized by ANSI in the early 1980s and have remained in common use since then. Normally, when you hit the arrow keys, some program (say the shell) is listening and can act upon them. So when you hit the up arrow, it scrolls back in your command history. However, if a program is running that doesn't understand the escape sequences for arrow keys, then they usually end up getting echoed back to the terminal just like any other key you might hit. So that is why you sometimes see ^[[A if you hit the up arrow key.
Caret square bracket square bracket A ^[[A - What does it mean?
1,328,417,304,000
Gilles wrote: character 27 = 033 = 0x1b = ^[ = \e Demizey wrote: ^[ is just a representation of ESCAPE and \e is interpreted as an actual ESCAPE character Then I also found this line from a TechRepublic article Make sure you write the key sequence as \e[24~ rather than ^[[24~. This is because the ^[ sequence is equivalent to the [Esc] key, which is represented by \e in the shell. So, for instance, if the key sequence was ^[[OP the resulting bind code to use would be \e[OP. But I have been using mappings that use ^[ instead of \e. So are they interchangeable? When do I need use one instead of the other?
If you take a look at the ANSI ASCII standard, the lower part of the character set (the first 32) are reserved "control characters" (sometimes referred to as "escape sequences"). These are things like the NUL character, Line Feed, Carriage Return, Tab, Bell, etc. The vast majority can be emulated by pressing the Ctrl key in combination with another key. The 27th (decimal) or \033 octal sequence, or 0x1b hex sequence is the Escape sequence. They are all representations of the same control sequence. Different shells, languages and tools refer to this sequence in different ways. Its Ctrl sequence is Ctrl-[, hence sometimes being represented as ^[, ^ being a short hand for Ctrl. You can enter control character sequences as raw sequences on your command line by preceding them with Ctrl-v. Ctrl-v to most shells and programs stops the interpretation of the following key sequence and instead inserts it in its raw form. If you do this with either the Escape key or Ctrl-[ it will display on most shells as ^[. However although this sequence will get interpreted, it will not cut and paste easily, and may get reduced to a non control character sequence when encountered by certain protocols or programs. To get around this to make it easier to use, certain utilities represent the "raw" sequence either with \033 (by octal reference), hex reference \x1b or by special character reference \e . This is much the same in the way that \t is interpreted as a Tab - which by the way can also be input via Ctrl-i, or \n as newline or the Enter key, which can also be input via Ctrl-m. So when Gilles says: 27 = 033 = 0x1b = ^[ = \e He is saying decimal ASCII 27, octal 33, hex 1b, Ctrl-[ and \e are all equal he means they all refer to the same thing (semantically).
When Demizey says ^[ is just a representation of ESCAPE and \e is interpreted as an actual ESCAPE character He means semantically, but if you press Ctrl-v Ctrl-[ this is exactly the same as \e, the raw inserted sequence will most likely be treated the same way, but this is not always guaranteed, and so it recommended to use the programmatically more portable \e or 0x1b or \033 depending on the language/shell/utility being used.
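The equivalence can be confirmed byte-for-byte with od. Note that \033 is the portable spelling, while \x1b and \e are common extensions (bash's printf and GNU printf support them; a strict POSIX sh may not):

```shell
# All three spellings should produce the single byte 0x1b in bash:
printf '\033' | od -An -tx1   # octal, portable
printf '\x1b' | od -An -tx1   # hex, common extension
printf '\e'   | od -An -tx1   # \e, common extension
```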
The difference between \e and ^[
1,328,417,304,000
There are many questions on SE that show how to recover from a terminal broken by cat /dev/urandom. For those who are unfamiliar with this issue, here's what it's about:

You execute cat /dev/urandom or an equivalent (for example, cat binary_file.dat). Garbage is printed. That would be okay... except your terminal continues to print garbage even after the command has finished! Here's a screenshot of misrendered text that is in fact g++ output. I guess people were right about C++ errors sometimes being too cryptic!

The usual solution is to run stty sane && reset, although it's kind of annoying to run it every time this happens. Because of that, what I want to focus on in this question is the original reason why this happens, and how to prevent the terminal from breaking after such a command is issued. I'm not looking for solutions such as piping the offending commands through tr or xxd, because this requires you to know that the program/file outputs binary before you actually run/print it, and needs to be remembered each time you happen to output such data.

I noticed the same behavior in URxvt, PuTTY and the Linux framebuffer, so I don't think this is a terminal-specific problem. My primary suspect is that the random output contains some ANSI escape code that flips the character encoding (in fact, if you run cat /dev/urandom again, chances are it will unbreak the terminal, which seems to confirm this theory). If this is right, what is this escape code? Are there any standard ways to disable it?
No: there is no standard way to "disable it", and the details of breakage are actually terminal-specific, but there are some commonly-implemented features for which you can get misbehavior. For commonly-implemented features, look to the VT100-style alternate character set, which is activated by ^N and ^O (enable/disable). That may be suppressed in some terminals when using UTF-8 mode, but the same terminals have ample opportunity for trashing your screen (talking about GNU screen, Linux console, PuTTY here) with the escape sequences they do recognize. Some of the other escape sequences for instance rely upon responses from the terminal to a query (escape sequence) by the host. If the host does not expect it, the result is trash on the screen. In other cases (seen for instance in network devices with hardcoded escape sequences for the Linux console), other terminals will see that as miscoded, and seem to freeze. So... you could focus on just one terminal, prune out whatever looks like a nuisance (as for instance, some suggest removing the ability to use the mouse for positioning in editors), and you might get something which has no apparent holes. But that's only one terminal.
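The ^N/^O alternate-character-set switch mentioned above is easy to demonstrate with raw bytes. The rendering behaviour assumes a VT100-compatible terminal; the byte values themselves are just ASCII SO and SI:

```shell
# SO (0x0e) switches to the VT100 alternate character set; after it,
# ordinary letters render as line-drawing glyphs on many terminals.
# SI (0x0f) switches back. This is one of the common ways binary
# output "breaks" a terminal.
printf 'before \016 lqqk \017 after\n'
# If a terminal is stuck in the alternate charset, emitting SI alone is
# often enough to restore it -- much cheaper than a full `reset`:
printf '\017'
```

On a non-VT100 terminal (or when output is redirected to a file) the two control bytes simply pass through unchanged.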
How to prevent random console output from breaking the terminal?
1,328,417,304,000
tput civis successfully hides the cursor. tput cvvis should unhide it, but it doesn't. Any idea what the problem might be?
In the ncurses terminal database, cvvis is used as documented in the terminfo manual page: cursor_visible cvvis vs — make cursor very visible; and if there is no difference between normal and very visible, the cvvis capability is usually omitted. The feature is used in curs_set: "The curs_set routine sets the cursor state to invisible, normal, or very visible for visibility equal to 0, 1, or 2 respectively. If the terminal supports the visibility requested, the previous cursor state is returned; otherwise, ERR is returned." The terminfo(5) manual page also says: "If the cursor needs to be made more visible than normal when it is not on the bottom line (to make, for example, a non-blinking underline into an easier to find block or blinking underline) give this sequence as cvvis. If there is a way to make the cursor completely invisible, give that as civis. The capability cnorm should be given which undoes the effects of both of these modes." Some terminal descriptions may (incorrectly) equate cvvis and cnorm, since some emacs configurations assume that cvvis is always set.
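For terminals where cvvis is omitted or equated with cnorm, the underlying sequences can be inspected directly. The values below are what xterm-compatible terminals typically use (DEC private mode 25); they are shown for illustration and are not a substitute for tput:

```shell
# The sequences tput emits depend on $TERM. For xterm-compatible
# terminals, cursor visibility is DEC private mode 25:
hide=$(printf '\033[?25l')   # what `tput civis` typically emits
show=$(printf '\033[?25h')   # part of what `tput cnorm` typically emits
# If `tput cvvis` appears to do nothing (capability omitted), restore
# the cursor with cnorm rather than cvvis:
printf '%s' "$show"
```

`tput cnorm` remains the portable way to unhide the cursor, since it looks the sequence up in the terminfo entry for the current terminal.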
Hide and unhide cursor with tput
1,328,417,304,000
I know that, if a coloured terminal is available, one can colourise its output using escape characters. But is there a way to find out which colour the output is currently being displayed in? Or better, what colour the text would be if I output it right now? I'm asking so as not to break any previous colour settings when using these escape characters. The 'default foreground colour' escape character gets its information from the colour scheme, rather than from the text colour in effect before I changed it.
In general, obtaining the current colours is impossible. The control sequence processing of a terminal happens "inside" the terminal, wherever that happens to be. With a terminal emulator such as xterm or the one built into an operating system kernel that provides the kernel virtual terminals, the internal state of the emulator, including its notion of the current "graphic rendition" (i.e. colour and attributes), is on the machine itself and is theoretically accessible. But for a real terminal this information is in some RAM location on a physically separate machine connected via a serial link. That said, some terminals include a mechanism for reading out such information as part of their terminal protocol, that is sent over that serial link. They provide control sequences that a program can send to the terminal, that cause it to send back information about its internal state, as terminal input. mikeserv has shown you the control sequences that the xterm terminal emulator responds to. But these are specific to xterm. The built-in terminal emulators in the Linux kernel and the various BSD kernels are different terminal types, for example, and don't implement any such control sequences at all. The same goes for whole families of real terminals. DEC VT525 terminals implement a read-out mechanism, but have a set of control sequences that bears no relationship to those used by xterm. One sends the DECRQSS (Request Selection or Setting) sequence to request the current graphic rendition, and the terminal responds by sending the DECRPSS (Report Selection or Setting). 
Specifically: Host sends: DCS $ q m ST (DECRQSS with the control function part of SGR as the setting) Terminal responds: DCS 0 $ r 0 ; 3 3 ; 4 4 m ST (DECRPSS with the parameters and control function part of an SGR control sequence that sets the current foreground and background colours) Of course, a careful reading of your question reveals that you are waving a chocolate-covered banana at those European currency systems again. What you're actually trying to do, for which you've selected a solution and then asked how to do part of that solution, is preserve the previous state whilst you write some colourized output. Not only is there a DEC VT control sequence for doing this, there's a SCO console terminal sequence for it that is recognized by xterm and various kernel built-in terminal emulators, and a termcap/terminfo entry that tells you what they are for your terminal. The termcap entries are sc and rc. The terminfo entries are save_cursor and restore_cursor. The names are somewhat misleading as to effect (although they do act as a warning that you are relying upon something that is de facto rather than de jure). The actual DECSC, DECRC, SCOSC, and SCORC control sequences save and restore the current graphic rendition as well. Given that the article that you pointed to is all about generating control sequences from shell scripts, the command that you are now looking for is tput. Further reading Jonathan de Boyne Pollard. 2007. Put down the chocolate-covered banana and step away from the European currency systems.. Frequently Given Answers. VT420 Programmer Reference Manual. EK-VT420-RM-002. February 1992. Digital. VT520/VT525 Video Terminal Programmer Information. EK-VT520-RM. July 1994. Digital.
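As a sketch of the save/restore approach described above: the raw DECSC/DECRC bytes shown here are what many terminals use behind the sc/rc capabilities, but as the answer notes this is de facto behaviour, so real scripts should prefer `tput sc` / `tput rc`:

```shell
# DECSC (ESC 7) saves the cursor position AND the current graphic
# rendition; DECRC (ESC 8) restores both. Wrapping colourised output
# in the pair leaves the previous colour state untouched:
save=$(printf '\0337')
restore=$(printf '\0338')
printf '%s\033[33mcolourised%s back to previous rendition\n' "$save" "$restore"
```

With `tput`, the same pattern is `tput sc; ...colour output...; tput rc`, and terminfo picks the right bytes for the terminal in use.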
How to determine the current color of the console output?
1,328,417,304,000
Why bother? Clearing the scrollback buffer is handy in many ways. For example, when I run some command with long output and want to quickly scroll to the start of that output: if the scrollback buffer was cleared first, I can just scroll to the top and be done.

Some considerations:

There is the clear command. According to its man page, clear clears your screen if this is possible, including its scrollback buffer (if the extended "E3" capability is defined). In gnome-terminal, clear does not clear the scrollback buffer. (What is the "E3" capability, though?)

There is also reset, which clears, but it does a little bit more than that, and it is really slow (on my system it takes more than a second, which is a delay long enough for a human to notice).

And there is echo -ne '\ec' or echo -ne '\033c', which does the job, and indeed is much faster than reset. The question is: what is the \ec sequence, how does it differ from what clear and reset do, and why is there no separate command for it?

There is also readline's C-l key sequence, which by default is bound to the clear-screen command (I mean the readline command, not a shell command). What is this command? Which escape sequence does it emit? How does it actually work? Does it run a shell command? In gnome-terminal, it seems to work just by spitting out blank lines until the prompt appears on the top line of the terminal. I'm not sure about other terminal emulators. This is very cumbersome behavior: it pollutes the scrollback with chunks of emptiness, so you must scroll up more and more. It is like a hack rather than a clean solution.

Another question: is there a readline command for the mentioned \ec sequence? I want to bind it to C-l instead, because I always want to clear the scrollback buffer when I clear the screen.

And another question is how to just type such an escape sequence into the terminal, to perform the desired action, so that I don't have to think about binding C-l to another readline command (if such a command exists). I tried typing Esc, then c, but this does not work.
UPDATE: This question is mostly answered here: https://unix.stackexchange.com/a/375784/257159. It is a very good answer that explains almost all of the questions asked here.
From the readline section of man bash:

clear-display (M-C-l)
    Clear the screen and, if possible, the terminal's scrollback buffer, then redraw the current line, leaving the current line at the top of the screen.
clear-screen (C-l)
    Clear the screen, then redraw the current line, leaving the current line at the top of the screen. With an argument, refresh the current line without clearing the screen.

So press Ctrl+Alt+L.
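For completeness, the escape sequences involved can also be emitted by hand. The combination below is what recent clear implementations send when the E3 capability is defined; ESC [ 3 J is the xterm extension that clears scrollback, and support varies by terminal:

```shell
# Home the cursor, clear the visible screen, then clear the scrollback
# buffer (E3 extension; supported by xterm, recent VTE terminals such
# as gnome-terminal, and the Linux console):
printf '\033[H\033[2J\033[3J'
# "\ec" (RIS, ESC c) is the full hard reset that echo -ne '\ec' sends;
# it resets modes and the charset too, not just the display:
# printf '\033c'
```

Unlike `reset`, neither sequence re-initializes the tty line settings, which is why `stty sane` is still needed for a truly broken terminal.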
The easiest way to clear scrollback buffer of terminal + some deeper explanation?
1,328,417,304,000
In people's '.*rc' files I see online or in various code, I tend to see a lot of people who manually use ANSI escape sequences instead of using tput. I had the understanding that tput is more universal/safe, so this makes me wonder: Is there any objective reason one should use escape sequences in place of tput? (Portability, robustness on errors, unusual terminals...?)
tput can handle expressions (for instance in sgr and setaf) which the typical shell-scripter would find less than usable. To get an idea of what is involved, see the output from infocmp with the -f (formatting) option applied. Here is one example, using those strings from xterm's terminfo descriptions:

xterm-16color|xterm with 16 colors,
        colors#16, pairs#256,
        setab=\E[ %? %p1%{8}%< %t%p1%{40}%+ %e %p1%{92}%+ %;%dm,
        setaf=\E[ %? %p1%{8}%< %t%p1%{30}%+ %e %p1%{82}%+ %;%dm,
        setb= %p1%{8}%/%{6}%*%{4}%+\E[%d%p1%{8}%m%Pa %?%ga%{1}%= %t4 %e%ga%{3}%= %t6 %e%ga%{4}%= %t1 %e%ga%{6}%= %t3 %e%ga%d %; m,
        setf= %p1%{8}%/%{6}%*%{3}%+\E[%d%p1%{8}%m%Pa %?%ga%{1}%= %t4 %e%ga%{3}%= %t6 %e%ga%{4}%= %t1 %e%ga%{6}%= %t3 %e%ga%d %; m,
        use=xterm+256color, use=xterm-new,

The formatting splits things up - a script or program to do the same would have to follow those twists and turns. Most people give up and just use the easiest strings.

The 16-color feature is borrowed from IBM aixterm, which maps 16 codes each for foreground and background onto two ranges: foreground onto 30-37 and 90-97, background onto 40-47 and 100-107. A simple script

#!/bin/sh
TERM=xterm-16color
export TERM
printf '   %12s %12s\n' Foreground Background
for n in $(seq 0 15)
do
    F=$(tput setaf $n | cat -v)
    B=$(tput setab $n | cat -v)
    printf '%2d %12s %12s\n' $n "$F" "$B"
done

and its output show how it works:

     Foreground   Background
 0       ^[[30m       ^[[40m
 1       ^[[31m       ^[[41m
 2       ^[[32m       ^[[42m
 3       ^[[33m       ^[[43m
 4       ^[[34m       ^[[44m
 5       ^[[35m       ^[[45m
 6       ^[[36m       ^[[46m
 7       ^[[37m       ^[[47m
 8       ^[[90m      ^[[100m
 9       ^[[91m      ^[[101m
10       ^[[92m      ^[[102m
11       ^[[93m      ^[[103m
12       ^[[94m      ^[[104m
13       ^[[95m      ^[[105m
14       ^[[96m      ^[[106m
15       ^[[97m      ^[[107m

The numbers are split up because aixterm uses the 30-37 and 40-47 ranges to match ECMA-48 (also known as "ANSI") colors, and uses the 90-107 range for codes not defined in the standard. Here is a screenshot with xterm using TERM=xterm-16color, where you can see the effect.
Further reading: infocmp - compare or print out terminfo descriptions Parameterized strings, in the terminfo manual. tput, reset - initialize a terminal or query terminfo database ECMA-48: Control Functions for Coded Character Sets aixterm Command Aren't bright colors the same as bold? (XTerm FAQ)
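The aixterm split described above can also be sketched as a tiny shell function. setaf16 below is a hypothetical stand-in written only to make the mapping visible; real scripts should keep using tput so the terminfo expressions do this work:

```shell
# colours 0-7  -> ECMA-48 range (SGR 30-37 for foreground),
# colours 8-15 -> aixterm extension (SGR 90-97 for foreground).
setaf16() {                        # hypothetical stand-in for `tput setaf`
    if [ "$1" -lt 8 ]; then
        printf '\033[%dm' $((30 + $1))
    else
        printf '\033[%dm' $((82 + $1))    # 82 + 8 = 90 ... 82 + 15 = 97
    fi
}
setaf16 1 | cat -v                 # ^[[31m, matching the table above
```

This is exactly the branch encoded in the terminfo string `setaf=\E[ %? %p1%{8}%< %t%p1%{30}%+ %e %p1%{82}%+ %;%dm`: if the parameter is below 8, add 30; otherwise add 82.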
Is there any objective benefit to escape sequences over tput?
1,328,417,304,000
I want to count how many times a certain sequence of bytes occurs inside a file that I have. For example, I want to find out how many times the number 0xdeadbeef occurs inside an executable file. Right now I am doing that using grep:

#!/usr/bin/fish
grep -c \Xef\Xbe\Xad\Xde my_executable_file

(The bytes are written in reverse order because my CPU is little-endian.) However, I have two problems with my approach:

Those \Xnn escape sequences only work in the fish shell.
grep is actually counting the number of lines that contain my magic number. If the pattern occurs twice in the same line it will only count once.

Is there a way to fix these problems? How can I make this one-liner run in the Bash shell and accurately count the number of times the pattern occurs inside the file?
This is the one-liner solution requested (for recent shells that have "process substitution"):

grep -o "ef be ad de" <(hexdump -v -e '/1 "%02x "' infile.bin) | wc -l

If no "process substitution" <(…) is available, just use grep as a filter:

hexdump -v -e '/1 "%02x "' infile.bin | grep -o "ef be ad de" | wc -l

Below is a detailed description of each part of the solution.

Byte values from hex numbers. Your first problem is easy to resolve: "Those \Xnn escape sequences only work in the fish shell." Change the upper-case X to a lower-case x and use printf (for most shells):

$ printf -- '\xef\xbe\xad\xde'

Or use:

$ /usr/bin/printf -- '\xef\xbe\xad\xde'

for those shells that choose not to implement the \x representation. Of course, translating hex to octal will work in (almost) any shell:

$ "$sh" -c 'printf '\''%b'\'' "$(printf '\''\\0%o'\'' $((0xef)) $((0xbe)) $((0xad)) $((0xde)) )"'

where "$sh" is any (reasonable) shell. But it is quite difficult to keep it correctly quoted.

Binary files. The most robust solution is to transform both the file and the byte sequence to some encoding that has no issues with odd character values like newline (0x0a) or the null byte (0x00). Both are quite difficult to manage correctly with tools designed and adapted to process "text files". A transformation like base64 may seem valid, but it has the problem that every input byte may have up to three output representations, depending on whether it is the first, second or third byte of the mod 24 (bits) position:

$ echo "abc" | base64
YWJjCg==
$ echo "-abc" | base64
LWFiYwo=
$ echo "--abc" | base64
LS1hYmMK
$ echo "---abc" | base64      # Note that YWJj repeats.
LS0tYWJjCg==

Hex transform. That's why the most robust transformation should be one that starts anew on each byte boundary, like the simple hex representation.
We can get a file with the hex representation of the file with any of these tools:

$ od -vAn -tx1 infile.bin | tr -d '\n' > infile.hex
$ hexdump -v -e '/1 "%02x "' infile.bin > infile.hex
$ xxd -c1 -p infile.bin | tr '\n' ' ' > infile.hex

The byte sequence to search is already in hex in this case:

$ var="ef be ad de"

But it could also be transformed. An example of a round trip hex-bin-hex follows:

$ echo "ef be ad de" | xxd -p -r | od -vAn -tx1
 ef be ad de

The search string may also be set from the binary representation. Any of the three options presented above (od, hexdump, or xxd) are equivalent. Just make sure to include the spaces, to ensure the match is on byte boundaries (no nibble shift allowed):

$ a="$(printf "\xef\xbe\xad\xde" | hexdump -v -e '/1 "%02x "')"
$ echo "$a"
ef be ad de

If the binary file looks like this:

$ cat infile.bin | xxd
00000000: 5468 6973 2069 7320 efbe adde 2061 2074  This is .... a t
00000010: 6573 7420 0aef bead de0a 6f66 2069 6e70  est ......of inp
00000020: 7574 200a dead beef 0a66 726f 6d20 6120  ut ......from a
00000030: 6269 0a6e 6172 7920 6669 6c65 2e0a 3131  bi.nary file..11
00000040: 3232 3131 3232 3131 3232 3131 3232 3131  2211221122112211
00000050: 3232 3131 3232 3131 3232 3131 3232 3131  2211221122112211
00000060: 3232 0a

then a simple grep search will give the list of matched sequences:

$ grep -o "$a" infile.hex | wc -l
2

One line?
It all may be performed in one line:

$ grep -o "ef be ad de" <(xxd -c 1 -p infile.bin | tr '\n' ' ') | wc -l

For example, searching for 11221122 in the same file needs these two steps:

$ a="$(printf '11221122' | hexdump -v -e '/1 "%02x "')"
$ grep -o "$a" <(xxd -c1 -p infile.bin | tr '\n' ' ') | wc -l
4

To "see" the matches:

$ grep -o "$a" <(xxd -c1 -p infile.bin | tr '\n' ' ')
3131323231313232
3131323231313232
3131323231313232
3131323231313232

$ grep "$a" <(xxd -c1 -p infile.bin | tr '\n' ' ')
… 0a 3131323231313232313132323131323231313232313132323131323231313232 313132320a

Buffering. There is a concern that grep will buffer the whole file and, if the file is big, create a heavy load for the computer. For that, we may use an unbuffered sed solution:

a='ef be ad de'
hexdump -v -e '/1 "%02x "' infile.bin |
sed -ue 's/\('"$a"'\)/\n\1\n/g' |
sed -n '/^'"$a"'$/p' |
wc -l

The first sed is unbuffered (-u) and is used only to inject two newlines into the stream per matching string. The second sed prints only the (short) matching lines, and wc -l counts them. This will buffer only some short lines (the matching strings in the second sed), so it should be quite low in resources used. Or, somewhat more complex to understand, but the same idea in one sed:

a='ef be ad de'
hexdump -v -e '/1 "%02x "' infile.bin |
sed -u '/\n/P;//!s/'"$a"'/\n&\n/;D' |
wc -l
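A quick self-contained check of the counting pipeline (the file name and contents here are illustrative; octal escapes are used so the printf works in any POSIX shell, and hexdump is assumed to be available):

```shell
# Build a small binary file containing the bytes de ad be ef twice,
# then count occurrences with the hex-transform pipeline:
printf 'AB\336\255\276\357CD\336\255\276\357EF' > /tmp/demo.bin
hexdump -v -e '/1 "%02x "' /tmp/demo.bin | grep -o "de ad be ef" | wc -l
```

The count is 2, even though both occurrences would land on the same "line" for a plain grep -c over the raw file.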
How can I count the number of times a byte sequence occurs in a file?
1,328,417,304,000
I know that we can escape a special character like *(){}$ with \ so as to be considered literals. For example \* or \$ But in case of . I have to do it twice, like \\. otherwise it is considered special character. Example: man gcc | grep \\. Why is it so?
Generally, you only have to escape once to make a special character be treated as a literal. Sometimes you have to do it twice, because your pattern is processed by more than one program.

Let's discuss your example: man gcc | grep \\.

This command line is interpreted by two programs: the bash shell and grep. The first escape tells bash that the \ is literal, so the second \ is passed on to grep. If you escape only once, \., bash takes the dot as literal and passes a bare . to grep. When grep sees that ., it treats the dot as a special character (match any character), not a literal. If you escape twice, bash passes the pattern \. to grep. Now grep knows it is a literal dot.
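The two layers of interpretation are easy to see with a made-up sample input:

```shell
# Layer 1: the shell. Layer 2: grep's regex engine.
printf 'a.b\nacb\n' | grep \\.    # shell turns \\. into \. -> grep sees a literal dot
printf 'a.b\nacb\n' | grep '\.'   # single quotes achieve the same, more readably
printf 'a.b\nacb\n' | grep .      # a bare dot reaches grep unescaped -> matches any char
```

The first two commands print only `a.b`; the third prints both lines, because an unescaped `.` matches any character.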
Why do I have to escape a "dot" twice?
1,328,417,304,000
In .bashrc case "$TERM" in xterm*|rxvt*) PS1="\[\e]0;${debian_chroot:+($debian_chroot)}\u@\h: \w\a\]$PS1" ;; *) ;; esac I understand ${debian_chroot:+($debian_chroot)}\u@\h: \w, but not \[\e]0;. What does it make?
The \e]0; is the start of an escape sequence; \e is replaced with ASCII 27 (ESC), so the terminal receives the four characters ESC ] 0 ;. This tells xterm to set the icon name and window title, and the sequence ends in BEL (\a). So the sequence \e]0;STUFFGOESHERE\a will set the title of the terminal to STUFFGOESHERE. In your example it sets the title to user@host: path. FWIW, xterm escape sequences are documented at: https://www.x.org/docs/xterm/ctlseqs.pdf
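The same sequence can be emitted outside of a prompt. Note that the \[ and \] markers are bash-prompt-only hints (they mark zero-width text for readline) and must be dropped when printing directly; set_title below is a hypothetical helper for illustration:

```shell
# ESC ] 0 ; <title> BEL  -- sets both icon name and window title
# on xterm-compatible terminals.
set_title() { printf '\033]0;%s\a' "$1"; }   # hypothetical helper
set_title "user@host: /some/path"
```

Inside PS1, the \[ ... \] wrapper around this sequence tells readline the bytes take up no screen columns, so cursor positioning stays correct.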
Meaning of \[\e]0; in PS1 in .bashrc
1,328,417,304,000
I have a shell script that uses the following to print a green checkmark in its output: col_green="\e[32;01m" col_reset="\e[39;49;00m" echo -e "Done ${col_green}✓${col_reset}" After reading about Bash's ANSI-C Quoting, I realized I could use it when setting my color variables and remove the -e flag from my echo. col_green=$'\e[32;01m' col_reset=$'\e[39;49;00m' echo "Done ${col_green}✓${col_reset}" This seems appealing, since it means the message prints correctly whether it's passed to Bash's builtin echo or the external util /bin/echo (I'm on macOS). But does this make the script less portable? I know Bash and Zsh support this style of quoting, but I'm not sure about others.
$'…' is a ksh93 feature that is also present in zsh, bash, mksh, FreeBSD sh and in some builds of BusyBox sh (BusyBox ash built with ENABLE_ASH_BASH_COMPAT). It isn't present in the POSIX sh language yet. Common Bourne-like shells that don't have it include dash (which is /bin/sh by default on Ubuntu among others), ksh88, the Bourne shell, NetBSD sh, yash, derivatives of pdksh other than mksh and some builds of BusyBox. A portable way to get backslash-letter and backslash-octal parsed as control characters is to use printf. It's present on all POSIX-compliant systems. esc=$(printf '\033') # assuming an ASCII (as opposed to EBCDIC) system col_green="${esc}[32;01m" Note that \e is not portable. It's supported by many implementations of printf but not by the one in dash¹. Use the octal code instead. ¹ It is supported in Debian and derivatives that ship at least 0.5.8-2.4, e.g. since Debian stretch and Ubuntu 17.04.
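A small check that the two constructions agree, assuming a shell such as bash, zsh, ksh93 or mksh that supports $'...':

```shell
# The portable printf construction and the ksh93-style ANSI-C quoting
# yield the same single ESC byte:
esc_portable=$(printf '\033')
esc_ansi_c=$'\e'
[ "$esc_portable" = "$esc_ansi_c" ] && echo same
```

Under dash or another shell without $'...' support, the second assignment would instead store the two literal characters `$` and `\e`-expansion would not happen, which is exactly why the printf form is the portable choice.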
Which shells support ANSI-C quoting? e.g. $'string'
1,328,417,304,000
I'm printing a message in a Bash script, and I want to colourise a portion of it; for example, #!/bin/bash normal='\e[0m' yellow='\e[33m' cat <<- EOF ${yellow}Warning:${normal} This script repo is currently located in: [ more messages... ] EOF But when I run in the terminal (tmux inside gnome-terminal) the ANSI escape characters are just printed in \ form; for example, \e[33mWarning\e[0m This scr.... If I move the portion I want to colourise into a printf command outside the here-doc, it works.  For example, this works: printf "${yellow}Warning:${normal}" cat <<- EOF This script repo is currently located in: [ more messages... ] EOF From man bash – Here Documents: No parameter and variable expansion, command substitution, arithmetic expansion, or pathname expansion is performed on word. If any characters in word are quoted, the delimiter is the result of quote removal on word, and the lines in the here-document are not expanded. If word is unquoted, all lines of the here-document are subjected to parameter expansion, command substitution, and arithmetic expansion.  In the latter case, the character sequence \<newline> is ignored, and \ must be used to quote the characters \, $, and `. I can't work out how this would affect ANSI escape codes. Is it possible to use ANSI escape codes in a Bash here document that is catted out?
In your script, these assignments normal='\e[0m' yellow='\e[33m' put those characters literally into the variables, i.e., \e[0m, rather than the escape sequence. You can construct an escape character using printf (or some versions of echo), e.g., normal=$(printf '\033[0m') yellow=$(printf '\033[33m') but you would do much better to use tput, as this will work for any correctly set up terminal: normal=$(tput sgr0) yellow=$(tput setaf 3) Looking at your example, it seems that the version of printf you are using treats \e as the escape character (which may work on your system, but is not generally portable to other systems). To see this, try yellow='\e[33m' printf 'Yellow:%s\n' $yellow and you would see the literal characters: Yellow:\e[33m rather than the escape sequence. Putting those in the printf format tells printf to interpret them (if it can). Further reading: tput, reset - initialize a terminal or query terminfo database printf - write formatted output (POSIX)
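Putting the answer's advice together in a runnable sketch (the colour values are illustrative; on a configured terminal, tput is the better source for them):

```shell
# Variables built with printf contain the real ESC byte, so an
# unquoted here-document delimiter lets them expand into working
# colour codes:
yellow=$(printf '\033[33m')
normal=$(printf '\033[0m')
cat << EOF
${yellow}Warning:${normal} this script repo has moved
EOF
```

The key difference from the original script is that the escape is resolved at assignment time, so the here-document only has to perform ordinary parameter expansion.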
Is it possible to use ANSI color escape codes in Bash here-documents?