date int64 1,220B 1,719B | question_description stringlengths 28 29.9k | accepted_answer stringlengths 12 26.4k | question_title stringlengths 14 159 |
|---|---|---|---|
1,346,289,248,000 |
While reading the manual for the swapon command, the priority option is described as
-p, --priority priority
Specify the priority of the swap device. priority is a value
between -1 and 32767. Higher numbers indicate higher
priority. See swapon(2) for a full description of swap
priorities. Add pri=value to the option field of /etc/fstab
for use with swapon -a. When no priority is defined, it
defaults to -1.
Can someone explain what the priority of a swap area means? How do higher and lower values of this setting affect the system, and what would be the optimal value on a home computer?
Edit:
The man page for swapon(2) shows
They may have any non-negative value chosen by the caller
But on my system (Debian 10 testing) the default priority value is -1
|
man 2 swapon describes priorities thus:
Each swap area has a priority, either high or low. The default priority is low. Within the low-priority areas, newer areas are even lower priority than older areas.
All priorities set with swapflags are high-priority, higher than default. They may have any nonnegative value chosen by the caller. Higher numbers mean higher priority.
Swap pages are allocated from areas in priority order, highest priority first. For areas with different priorities, a higher-priority area is exhausted before using a lower-priority area. If two or more areas have the same priority, and it is the highest priority available, pages are allocated on a round-robin basis between them.
The sentence you highlighted can’t be taken out of its context; it concerns high priorities, which the default priorities aren’t.
Swap priority only matters if you have multiple swap devices and a reason to prefer some of them to others. If you have a single swap device, it won’t make any difference. If you have multiple swap devices on separate disks, it can be worth changing the priorities so that they are used equally; otherwise, the first device added will be used, then the second device, and so on.
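As a concrete illustration of the equal-priority case, two swap devices on separate disks can be given the same pri= value in /etc/fstab so pages are allocated round-robin between them (the device names here are hypothetical):

```
/dev/sda2  none  swap  sw,pri=10  0  0
/dev/sdb2  none  swap  sw,pri=10  0  0
```

With equal priorities, swap pages are spread across both devices, which on separate disks behaves much like striping. After running swapon -a, the active priorities can be checked with swapon --show.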
| What is swap priority and why does it matter |
1,346,289,248,000 |
On my Ubuntu system, I noticed that some file managers, when open, can mount any drive connected via one of my USB ports (as non-root). To prevent this from happening I configured my /etc/fstab like so:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/vgubuntu-root / ext4 errors=remount-ro 0 1
# /boot was on /dev/nvme0n1p3 during installation
UUID=485794d0-6773-4136-9df9-c8f97fc3c3bc /boot ext4 defaults 0 2
# /boot/efi was on /dev/nvme0n1p2 during installation
UUID=5E62-20EC /boot/efi vfat umask=0077 0 1
/dev/mapper/vgubuntu-swap_1 none swap sw 0 0
#/media/j/sandisk-32GB is my primary USB drive for backups
PARTUUID=d199a40a-b5cc-724b-b70b-1b90e4274ea9 /media/user_xyz/sandisk-32GB ext4 defaults,nofail 0 3
1. How do I prevent automounting, or mounting by non-root users, of drives/partitions that are not specified in my /etc/fstab?
2. Is it possible to go even further and restrict root from mounting drives other than those whitelisted? E.g., root tries to do mount PARTUUID=this-partition-is-not-whitelisted /media/user_xyz/not-whitelisted and fails, unless they change the configuration that I am trying to set up.
P.S. The particular PARTUUID is just used to convey what I am getting at - I am aware that it is not of a proper format and that root would fail to mount it because of that.
I am on Ubuntu 22.04 LTS.
|
The solution to question 1) is found here: https://askubuntu.com/questions/1062719/how-do-i-disable-the-auto-mounting-of-internal-drives-in-ubuntu-or-kubuntu-18-04
Basically, turn off udisks2.
systemctl stop udisks2.service
and then test, and do it permanently:
systemctl mask udisks2
This will prevent "normal" users from mounting drives automatically... You should also make sure they are not in the adm or sudo group, as then they could still mount drives.
As Marcus Müller noted in the comments, the solution to 2) would be non-trivial, I think. I can't think of a good answer right off (e.g. if I am root, how do I prevent myself from mounting any drive?). root has to be able to mount drives; it is how the kernel boots and loads RAM disks and such.
Further, as Guntram Blohm noted in the comments, adding:
systemctl mask udisks2
will prevent it from being pulled back into the system in the future, which is a "good thing".
| How do I whitelist drives / partitions that can be mounted to only those that have entries in /etc/fstab? |
1,346,289,248,000 |
Adding /dev/sdb1 /home/[user]/external_drive ntfs defaults,noatime 0 2 to /etc/fstab auto-mounts external drive after machine start/reboot.
However, if an additional USB drive was plugged in during the reboot, sometimes that drive is the one that gets /dev/sdb1 and becomes accessible at /home/[user]/external_drive after the reboot.
Is there a way to consistently auto-mount each device so that each device will be accessible via expected folder?
|
Don't use /dev/sdb1, which is not a unique identifier: sdb1 is simply the first partition on whichever disk happens to be detected second, and with multiple external drives plugged in the detection order at boot is random. Use the UUID instead; a UUID is unique to each filesystem, so only the "right" device will be mounted at /home/[user]/external_drive. You can find the UUID of your device in the lsblk -f output and then put UUID=<uuid> in your fstab instead of /dev/sdb1.
From lsblk -f you'll get something like this
$ lsblk -f /dev/sdb1
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
sdb1 ntfs 0274801A52799A9F
so your /etc/fstab entry will look like
UUID=0274801A52799A9F /home/[user]/external_drive ntfs defaults,noatime 0 2
Note that with this entry, device with this specific UUID must be present during boot, I'd suggest adding nofail option to skip the entry (boot won't stop with an error) if the device is not present:
UUID=0274801A52799A9F /home/[user]/external_drive ntfs defaults,noatime,nofail 0 2
| Consistent auto-mount of external hard-drive |
1,346,289,248,000 |
Is there something wrong with my mounting?
//192.168.1.150/Drew /media/Cloud cifs auto,credentials=/home/drew/.credentials/smb,_netdev,uid=drew,gid=drew,rw 0 0
When I run sudo mount -a the drive is mounted, no issues. But the drive will not automatically mount when the system boots up.
uname -a
Linux drew-desktop 4.14.24-1-MANJARO #1 SMP PREEMPT Sun Mar 4 21:28:02 UTC 2018 x86_64 GNU/Linux
pacman -Q | grep cifs
cifs-utils 6.7-2
ls -l /media | grep Cloud
drwxr-xr-x 2 drew drew 0 Mar 9 17:32 Cloud
|
I had to enable the service that automounts network drives as @Ignacio Vazquez-Abrams said. The details are here.
sudo systemctl enable systemd-networkd-wait-online
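Another option, if the root cause is the network not being up when fstab is processed, is to mark the question's fstab entry as a systemd automount so it is only mounted on first access (a sketch; auto is replaced with noauto,x-systemd.automount, the rest is unchanged):

```
//192.168.1.150/Drew /media/Cloud cifs noauto,x-systemd.automount,credentials=/home/drew/.credentials/smb,_netdev,uid=drew,gid=drew,rw 0 0
```

After editing, systemctl daemon-reload followed by a reboot (or systemctl restart remote-fs.target) picks up the change.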
| Fstab not automatically mounting SMB storage? |
1,346,289,248,000 |
In my PC running Ubuntu 12.04 LTS I have installed three SATA hard drives. Two of them are installed near a cooler. I want to physically switch two drives (one that is not near the fan should be moved near the fan). How would Ubuntu deal with the switch? Do the device names in fstab rely in any way on the SATA port they are connected to on my motherboard?
EDIT:
This is my /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda1 during installation
UUID=f04e6038-4412-46c4-b58d-67bfe3f8eddd / ext4 errors=remount-ro 0 1
# /Volumes/Backup was on /dev/sdb1 during installation
UUID=dc25bafb-adbc-4a65-845c-02c9253a795e /Volumes/Backup ext4 defaults 0 2
# /Volumes/Storage was on /dev/sdc1 during installation
UUID=74867f3e-acda-4efc-a6aa-7d21484d64a4 /Volumes/Storage ext4 defaults 0 2
/dev/sdc2 /Volumes/Storage ext4 defaults 0 0
#/dev/sdc1 /media/sdc1 swap sw 0 0
/dev/sdc1 none swap defaults 0 0
EDIT (changed /etc/fstab according to comments):
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
UUID=f04e6038-4412-46c4-b58d-67bfe3f8eddd / ext4 errors=remount-ro 0 1
UUID=dc25bafb-adbc-4a65-845c-02c9253a795e /Volumes/Backup ext4 defaults 0 2
UUID=74867f3e-acda-4efc-a6aa-7d21484d64a4 /Volumes/Storage ext4 defaults 0 2
UUID=88ee73e8-7556-40fa-b696-fbc15161036b none swap defaults 0 0
|
The UUIDs don't change when you reorder the drives. However, your
sdc? entries might change. It's best practice not to rely on the
sd? numbering. Better use UUIDs or LABELs to address your
partitions.
Find the UUID or LABEL
as root:
blkid -o list -c /dev/null
Change the entries
Change the entries that use the /dev/sd? syntax (in your case /dev/sdc1) to use either the UUID or the LABEL, if the file system has one. Use the values from the blkid output.
UUID=24467f3e-bcda-5efc-a6aa-7d21384d64a4
LABEL=swap
| HDD allocation fstab |
1,346,289,248,000 |
I'm looking to map my /home folder to a different location/drive on the machine. When I view the fstab file I see the following:
/dev/mapper/cl-home /home xfs defaults 0 0
/dev/mapper/cl-swap swap swap defaults 0 0
/dev/sda1 /mnt/store/hd2 ntfs defaults,auto 0 0
My question is: what is the cl in /dev/mapper/cl-home referring to ?
Am I ok to enter it like this:
/mnt/store/hd2/home/ /home ntfs defaults 0 0
|
In /etc/fstab, the first column is a volume location and the second column is a directory. The directory is the mount point, i.e. where the files will be accessible. The volume location indicates where the files are stored; there are different types of locations depending on the filesystem type. For a “normal” filesystem, the files are stored on a disk and the volume location is a disk partition. For a network filesystem such as nfs or cifs, the location indicates a host name and an exported path on the host, and so on.
What you currently have, /dev/mapper/cl-home, designates a partition managed by Linux's logical volume manager (LVM). The volume name is in two parts: cl is a volume group (which covers a section of one or more disks), and home is a logical volume inside this volume group. The system doesn't care that the logical volume home and the directory /home have the same name, but it's convenient for humans to use the same name.
If you want to put your home directory on an existing Windows partition, then you can't just change the volume name here: /home would not be the place where a disk filesystem is mounted. There are several ways you can do this:
You can use a bind mount to make /mnt/store/hd2/home also accessible through /home. The fstab entries would be
/dev/sda1 /mnt/store/hd2 ntfs defaults 0 0
/mnt/store/hd2/home /home none bind 0 0
Note that you are not mounting an NTFS filesystem on /home: it's already mounted on /mnt/store/hd2. You're making a directory tree available at another location; the fact that this other directory tree is entirely located on an NTFS partition is not relevant.
You can make /home a symbolic link to /mnt/store/hd2/home. In this case /home would not appear in /etc/fstab at all.
You can use either a bind mount or a symbolic link for your home directory, and leave the other directories alone.
You can change your home directory to be /mnt/store/hd2/home. Either use a GUI to manage use accounts, or use a command like
sudo usermod --home /mnt/store/hd2/linux-home --move-home joe
I don't recommend any of these options, because NTFS can't store all the Linux file names, types and attributes. All of these options have further gotchas:
Bind mounts are a very useful tool, but they do have downsides. Files are listed at all the locations in enumerations, which has consequences on locate, etc, etc.
A symbolic link doesn't have these downsides, but occasionally some software will record the location of your home directory with the symbolic links expanded. Having a symbolic link for /home can also cause problems due to AppArmor policies.
Even having a home directory outside /home can cause gotchas with security policies, though it should be ok with any major distribution nowadays.
Rather than put your home directory on an NTFS filesystem, I recommend keeping it on a Linux filesystem. To access your Windows files from Linux, access them under /mnt/store/hd2. Create symbolic links in your home directory to places under /mnt/store/hd2 for convenience.
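The symbolic-link variant can be rehearsed safely in a scratch directory before touching the real /home. A minimal sketch (all paths are throwaway stand-ins):

```shell
set -e
scratch=$(mktemp -d)
mkdir -p "$scratch/mnt/store/hd2/home"   # stands in for the NTFS mount
mkdir "$scratch/home"                    # stands in for the real /home
rmdir "$scratch/home"                    # the directory must be empty (or moved away) first
ln -s "$scratch/mnt/store/hd2/home" "$scratch/home"
ls -ld "$scratch/home"                   # shows the link and its target
```

On the real system the same steps would be done as root with /home, after moving its contents to the new location.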
| Mapping the home folder to a different location in fstab |
1,346,289,248,000 |
In my current setup, I have a RAID0 array of 2x3TB HDDs with btrfs, two partitions:
/
/home
Under the /home directory, there are two users, both admin, one of which is myself.
So far, this setup is working out pretty nicely, although btrfs is fairly slow.
I recently acquired a pretty nice 500 GB SATA HDD. I'm going to format it w/ ext4 or XFS for increased performance for ephemeral things like my testing VMs and such. I would like to mount it under my home directory at boot, ie:
/home/haneefmubarak/extradrive
The first thing that came to my mind was to use /etc/fstab, but AFAICT then the permissions won't be set correctly for me to normally use it.
Essentially, I want to mount the drive so that it is mounted at ~/extradrive with permissions set like any other directory, so that I "own" the entire drive. How can I go about doing this?
|
Method #1
Try a line like this in /etc/fstab:
UUID=XX /home/user/extradrive ext3 rw,noauto,user,sync 0 2
Method #2
Examples are also shown using UID/GID too:
UUID=XX /home/user/extradrive ext3 rw,exec,uid=userX,gid=grpX 0 2
NOTE
You can also override when doing the actual manual mounting like this using mount + options:
$ sudo mount <device> <mount-point> -o uid=foo -o gid=foo
Method #3
Lastly, you can avoid the whole business by making the top level of the mounted extra drive owned by userX/groupX like so, after manually mounting the HDD:
$ sudo chown -R userX:groupX <directory>
Then in /etc/fstab do
<device> <directory> ext3 user,defaults 0 2
The userX should now be able to access the drive upon reboots.
NOTE: This assumes that /home/userX has already been mounted with several of the options above, so take care that it's mounted first.
References
How to change owner of mount point
| How can I mount a drive under my home directory at boot? |
1,346,289,248,000 |
I have an NFS mount in fstab:
10.0.12.10:/share1 /net/share1 nfs rw 0 0
which defaults to root as owner and group and 777 permissions. How do I specify another owner and different permissions? I can use chown and chmod, but it certainly should be possible straight from the mount command?
The system OS is Ubuntu Server 14.04.
|
It isn't possible from the mount command, because mount has to handle a variety of different filesystem types - including ones that might not support 'classic' ugo unix style permissions.
You are "stuck with" chown/chgrp/chmod. (Where applicable).
Bear in mind the server has permissions on its own filesystem. It may well be doing some manner of mapping - more commonly you'll see root -> nobody, but NFSv4 and idmap opens a whole new can of worms there. (It doesn't apply direct uid/gid ownership, but rather maps userids against a common directory.)
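If you also administer the NFS server, note that the apparent ownership can be controlled on the server side, by squashing client users in /etc/exports. A sketch (the network range and uid/gid values are made up):

```
/share1 10.0.12.0/24(rw,all_squash,anonuid=1000,anongid=1000)
```

With all_squash, every client access is mapped to uid 1000 / gid 1000; exportfs -ra reloads the exports after editing.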
| How to specify owner and permissions for an NFS mount? |
1,346,289,248,000 |
I have a backup script that mounts and unmounts a USB drive.
I just noticed that its warning me:
EXT3-fs warning: maximal mount count reached, running e2fsck is recommended
My question:
How can I get it to run e2fsck automatically when the mount command is run?
This is how it looks in /etc/fstab
UUID=c870ccb3-e472-4a3e-8e82-65f4fdb73b38 /media/backup_disk_1 auto defaults,rw,noauto 0 3
So <pass> is 3, so I was expecting fsck to be run when required.
EDIT
This is how I ended up doing it, based on the given answer:
(In a Bash script)
function fsck_disk {
UUID=$1
echo "Checking if we need to fsck $UUID"
MCOUNT=`tune2fs -l "UUID=$UUID" 2> /dev/null | sed -n '/Mount count:\s\+/s///p'`
if [ "$MCOUNT" -eq "$MCOUNT" ] 2> /dev/null
then
echo "Mount count = $MCOUNT"
if (( $MCOUNT > 30 ))
then
echo "Time to fsck"
fsck -a UUID=$UUID \
1>> output.log \
2>> error.log
else
echo "Not yet time to fsck"
fi
fi
}
fsck_disk a60b1234-c123-123e-b4d1-a4a111ab2222
|
According to man fstab:
The sixth field (fs_passno). This field is used by the fsck(8) program to determine the order in which filesystem checks are done at reboot time. The root filesystem should be specified with a fs_passno of 1, and other filesystems should have a fs_passno of 2. Filesystems within a drive will be checked sequentially, but filesystems on different drives will be checked at the same time to utilize parallelism available in the hardware. If the sixth field is not present or zero, a value of zero is returned and fsck will assume that the filesystem does not need to be checked.
So 3 has no special meaning. Moreover, fstab only influences the boot-time check, not every time a device is mounted. To have the filesystem checked during boot, change the 6th field to 2. If you want a check on every mount, you can do it with a simple script or even an alias, for example
alias bk_mount='fsck -a UUID=c870ccb3-e472-4a3e-8e82-65f4fdb73b38 && \
mount /media/backup_disk_1'
| Run fsck automatically when calling mount from command line |
1,460,513,809,000 |
I am encountering a problem in which mounting a remote CIFS server without an fstab entry works, but mounting through fstab does not.
The following command works:
$ sudo mount -t cifs //w.x.y.z/Home$ /mnt/dir -o domain=A,username=B,password='C',sec=ntlmssp,file_mode=0700,dir_mode=0700
However, if I instead add the following line to /etc/fstab and try to mount by the mount command (e.g., mount -a or mount /mnt/dir), I receive the error listed below:
$ tail -n 1 /etc/fstab
//w.x.y.z/Home$ /mnt/dir cifs domain=A,username=B,password='C',sec=ntlmssp,file_mode=0700,dir_mode=0700
error:
$ sudo mount /mnt/dir
mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Explicitly setting the dump and fsck pass order to 0 does not help. Both commands seem to do the same thing.
|
When you type the mount command, the part password='C' is first handled by the shell and becomes password=C before it gets to the mount command. This is not done with fstab entries, so you must remove the single quotes. If your password contains special characters you can replace them by their octal code, in particular \040 for space.
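If you need the octal code for some other character, od can print every byte of a string as the three-digit codes used in such escapes. A small sketch (the helper name is made up for illustration):

```shell
# print every byte of $1 as a \NNN octal escape (e.g. space becomes \040)
to_octal() {
  printf '%s' "$1" | od -An -t o1 | tr ' ' '\n' \
    | sed -n 's/^\([0-7]\{3\}\)$/\\\1/p' | tr -d '\n'
  echo
}
to_octal 'a b'
```

Each byte is printed by od in octal, then prefixed with a backslash, so a space in the middle of a value comes out as \040.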
| mounting a CIFS filesystem directly or via fstab |
1,460,513,809,000 |
I have a debian server where I need to auto-mount a Samba share during startup.
I did the following:
Added the following line to /etc/fstab:
//192.168.1.1/FRITZ.NAS/WD-1600BEVExternal-01/share /srv/nas cifs credentials=/home/rlommers/.smbcredentials,rw,uid=rlommers,gid=rlommers 0 0
This works with a sudo mount --all
However, I would like this mount to be mounted automatically at boot time, and that doesn't happen.
Any clue on this issue? So the mount works fine, but it's not mounted automatically during boot of the server.
|
You are hitting a known systemd "feature"; on top of that, the system might be trying to mount the remote Samba share before networking is operational.
Modify your fstab to add the mount options ,noauto,x-systemd.automount,_netdev
//192.168.1.1/FRITZ.NAS/WD-1600BEVExternal-01/share /srv/nas cifs credentials=/home/rlommers/.smbcredentials,rw,uid=rlommers,gid=rlommers,noauto,x-systemd.automount,_netdev 0 0
For the explanation (which I corrected to the new syntax), see Cute systemd trick of the day: auto-mounting remote shares
If you have remote drives – cifs, nfs, whatever – in /etc/fstab with
typical options, then you’ll probably find that the system will sit
there and wait for the network to come up on boot, then mount them,
before boot completes. That’s not terrible, but it’s not awesome
either.
...
to make it super awesome, add two options: noauto and x-systemd.automount.
Then what happens is the share gets mounted as soon as something tries to access it…but not before.
So boot runs as fast as possible, and as soon as you actually try to access the share, it gets mounted. Thanks, systemd!
Also from Arch Wiki to explain this feature - fstab
Automount with systemd
Remote filesystem
The same applies to remote filesystem mounts. If you want them to be
mounted only upon access, you will need to use the
noauto,x-systemd.automount parameters. In addition, you can use the
x-systemd.device-timeout= option to specify how long systemd should
wait for the filesystem to show up. Also, the _netdev option ensures
systemd understands that the mount is network dependent and order it
after the network is online.
noauto,x-systemd.automount,x-systemd.device-timeout=30,_netdev
Warning: Be sure to test the fstab before rebooting with a sudo mount -o remount -a and sudo mount -o rw,remount /srv/nas, as an erroneous fstab can give you problems at boot.
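In addition to the remount test, util-linux's findmnt --verify can sanity-check an fstab without mounting anything. A sketch against a throwaway file (the sample entry is arbitrary):

```shell
set -e
tmp=$(mktemp)
printf '%s\n' 'proc /proc proc defaults 0 0' > "$tmp"
# parse and sanity-check the file without mounting anything
findmnt --verify --tab-file "$tmp"
```

Run against the real file with findmnt --verify (it reads /etc/fstab by default); it flags unknown filesystem types, missing mount points, and malformed lines.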
See also, related, CIFS randomly losing connection to Windows share
| Debian server, auto-mount Samba share |
1,460,513,809,000 |
The man page for fstab has this to say about the pass value:
Pass (fsck order) Fsck order is to tell fsck what order to check the
file systems, if set to "0" file system is ignored.
Often a source of confusion, there are only 3 options :
0 == do not check. 1 == check this partition first. 2 == check this
partition(s) next In practice, use "1" for your root partition, / and
2 for the rest. All partitions marked with a "2" are checked in
sequence and you do not need to specify an order.
Use "0" to disable checking the file system at boot or for network
shares.
It doesn't explicitly mention values higher than 2, but implies that 0, 1 and 2 are the only useable values.
Other sources (such as the fsck man page) imply that values above 0 will be treated in ascending order ("passno value of greater than zero will be checked in order")
Can values higher than 2 be used, or not?
|
The answer is.. it depends, but probably not.
TL;DR if you use systemd, non-zero pass numbers will be checked in the order in which they appear in fstab. If not systemd, pass numbers will be checked sequentially in ascending order and values higher than 2 can be used.
On most distributions of linux, the fsck binary is provided by util-linux. This fsck accepts pass numbers higher than 2, and these will be treated in order.
Any system which calls fsck directly will understand "pass number" values higher than 2 in fstab.
It turns out that util-linux's fsck is not always used to check fstab. systemd maintains its own fsck wrapper called systemd-fsck, which treats entries with any non-zero pass number in the order in which they appear (specifically, it will not scan your pass number 1 entries before others).
On Linux distributions that use systemd, systemd-fsck is used for automated file system checks, and in those cases the pass number is treated as a boolean (0 means "false", or "don't verify", and non-zero means "true", or "verify").
Also, don't forget that the root drive (the / mount) is sometimes checked separately.
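To preview the order a classic (non-systemd) fsck would use, you can extract and sort the non-zero pass numbers yourself. A sketch over a sample fstab (the helper name and entries are made up):

```shell
# print mount points in classic fsck order: ascending pass number, zeros skipped
sort_by_passno() {
  awk '$1 !~ /^#/ && NF >= 6 && $6 > 0 {print $6, $2}' "$1" | sort -n
}

tmp=$(mktemp)
cat > "$tmp" <<'EOF'
UUID=aaa /      ext4 defaults 0 1
UUID=bbb /data  ext4 defaults 0 3
UUID=ccc /home  ext4 defaults 0 2
UUID=ddd none   swap sw       0 0
EOF
sort_by_passno "$tmp"
```

The swap line (pass 0) is skipped, and the pass-3 entry sorts after pass 2, which is the behavior the util-linux fsck documents.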
Many thanks to Ned64, who did much research in their answer.
| Can I use a pass value higher than 2 in fstab? |
1,460,513,809,000 |
I have this fstab entry :
LABEL=cloudimg-rootfs / ext4 defaults,noatime,nobarrier,data=writeback,rw 0 0
I added rw to see if would fix my issue but it wont. After boot I get a read-only file system that I can't fix either using common results found on google.
Useful output (there are no errors from dmesg | grep error):
root@w2:~# dmesg | grep EXT4
[ 8.372564] EXT4-fs (sda1): mounted filesystem with ordered data mode. Opts: (null)
[ 8.892244] EXT4-fs (sda1): Cannot change data mode on remount
|
Instead of setting it at mount time in fstab (the dmesg output shows the data mode cannot be changed on remount), why not use tune2fs to make it the default for that filesystem:
tune2fs -o journal_data_writeback /dev/sdXY
Do this once then reboot.
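The tune2fs change can be rehearsed on a throwaway filesystem image before touching a real partition. A sketch (the image file is a stand-in for /dev/sdXY):

```shell
set -e
# rehearse on a throwaway image instead of a real partition
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 2>/dev/null
mkfs.ext4 -q -F "$img"
tune2fs -o journal_data_writeback "$img" >/dev/null
# the option should now appear among the filesystem defaults
tune2fs -l "$img" | grep -i 'default mount options'
```

Once the option is stored in the superblock, the kernel applies it on every mount, so the fstab entry no longer needs data=writeback at all.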
| Adding data=writeback to a ext4 fstab entry, results in read-only filesystem |
1,460,513,809,000 |
I'm using Pop!_OS (Based on Ubuntu/Elementary OS).
I've got swap partitions being activated at boot that aren't present in fstab. Because I've deleted those partitions, the entire boot process has to wait an unnecessary minute and a half while it looks for swaps with those UUIDs. Any tips on removing this?
EDIT: As per comment request, here is the /etc/fstab file
PARTUUID=fa857f57-b4d8-4bf5-b659-de05f87e8288 /boot/efi vfat umask=0077 0 0
UUID=1e23af14-f8ec-485f-8b23-1c63099206f2 / ext4 noatime,errors=remount-ro 0 0
UUID=568bc5f2-8a35-4f51-ba0f-d07f53e09091 /home ext4 noatime,errors=remount-ro 0 0
#UUID=0c8e22a9-7fd2-420d-8b20-7bb1ed099ab5 swap swap 0 0
#UUID=27fe5717-921b-48f1-9840-2273a3074d9e swap swap 0 0
UUID=3419B3F505351D84 /SSD ntfs uid=1000,gid=1000,rw,user,exec,umask=000,x-gvfs-show 0 0
The relevant part of boot.log is listed below. As I've got it set to show me the boot process (all the messages flying by etc) I notice it has a timer to wait 1:30 for a swap that doesn't exist
swapon: /dev/sdb2: swapon failed: Invalid argument
Sep 13 10:18:23 vegpop systemd[1]: dev-disk-by\x2duuid-4043f55a\x2dd6e4\x2d4557\x2db3b9\x2d4322bcc0dfd8.swap: Swap process exited, code=exited, status=255/EXCEPTION
Sep 13 10:18:23 vegpop systemd[1]: dev-disk-by\x2duuid-4043f55a\x2dd6e4\x2d4557\x2db3b9\x2d4322bcc0dfd8.swap: Failed with result 'exit-code'.
Sep 13 10:18:23 vegpop systemd[1]: Failed to activate swap /dev/disk/by-uuid/4043f55a-d6e4-4557-b3b9-4322bcc0dfd8.
|
Swap activation usually happens early in the boot process, while the system is still running on initramfs.
If you haven't updated your initramfs after removing the swap partitions from your /etc/fstab, there might still be a copy of the old fstab embedded within the initramfs, and that probably triggers the unnecessary waits.
Also, there might be a reference to the swap partition as a potential hibernate/resume location in /etc/initramfs-tools/conf.d/resume, which also gets embedded within initramfs.
So, first check /etc/initramfs-tools/conf.d/resume and comment out any references to removed swaps, then run sudo update-initramfs -u to update your initramfs to match the current state of your /etc directory tree.
The error messages mention /dev/sdb2 and a UUID 4043f55a-d6e4-4557-b3b9-4322bcc0dfd8, which matches neither of the /etc/fstab lines you've commented out, so try:
grep -r "/dev/sdb2" /etc
grep -r 4043f55a-d6e4-4557-b3b9-4322bcc0dfd8 /etc
If those commands find any files, take a look at the files.
| Why do swaps that aren't in fstab attempt to mount at boot |
1,460,513,809,000 |
I have a Debian 11 system where I was using LVM for data only; I ran the following commands:
mount /dev/srv-vg/lv-data /mnt/data
vi /etc/fstab
/dev/srv-vg/lv-data /mnt/data ext4 defaults 0 0
The mount command worked pretty well and the folder works just fine, but after adding the fstab line, when I reboot I get the following errors:
[FAILED] Failed to mount /mnt/data
[DEPEND] Dependency failed for Local File Systems
Cannot open access to console, the root account is locked
Press ENTER to continue
When i press enter, it just says the same message again (Root account is locked.)
Can someone help me?
Edit: Fixed typo.
|
Boot from an external hard disk or USB stick and revert /etc/fstab. Alternatively, take out the hard disk, attach it to another computer, mount it, and edit /etc/fstab.
And I second the comment to add nofail for non-essential mounts.
| Cannot open access to console, the root account is locked |
1,460,513,809,000 |
Currently using Debian 9.5 with this fstab file:
# /etc/fstab: static file system information.
#
/dev/mmcblk1p1 / ext4 noatime,errors=remount-ro 0 1
tmpfs /var/volatile tmpfs defaults,x-mount.mkdir 0 0
Now, if the folder /var/volatile doesn't exist, it will be created (x-mount.mkdir).
What would be the correct way to have a subfolder, e.g. /var/volatile/subfolder, created right after the mount succeeds?
I want this subfolder to be created before systemd continues with its tasks until finalizing startup.
|
After exploring systemd, I stumbled upon a great discovery. It turns out there is no need to create a custom service for this, as systemd already provides a solution for this purpose: systemd-tmpfiles. It is a structured and configurable method for managing temporary directories and files.
https://www.freedesktop.org/software/systemd/man/systemd-tmpfiles.html
https://www.freedesktop.org/software/systemd/man/tmpfiles.d.html
Just create a file /etc/tmpfiles.d/volatile-subfolder.conf with this content:
d /var/volatile/subfolder 0755 root root - -
Then apply it immediately with systemd-tmpfiles --create (systemd also processes tmpfiles.d automatically at boot).
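For reference, the fields in that tmpfiles.d line are type, path, mode, user, group, age, and argument; d means "create a directory if it does not exist", and the trailing dashes disable age-based cleanup. A variation with cleanup enabled (the path and owner are hypothetical):

```
# Type  Path                 Mode  User    Group   Age  Argument
d       /var/volatile/spool  0750  daemon  daemon  1d   -
```

With an Age of 1d, systemd-tmpfiles --clean removes entries in that directory that are older than one day.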
| Mount a tmpfs folder on startup (volatile) with a created subfolder |
1,460,513,809,000 |
I made a partition mounted at /part on my machine with some important data...
But I can't stand its name...
I want a clean solution to change it to, for example, /test...
As you see this is my /etc/fstab information:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda5 during installation
UUID=a21a99c4-e5b4-4197-ac5e-80d0fab1f30c / ext4 errors=remount-ro 0 1
# /home was on /dev/sda6 during installation
UUID=2e37d833-ca34-4fa7-a5d8-a4423a5af9bc /home ext4 defaults 0 2
# /part was on /dev/sda7 during installation
UUID=47e6e0b1-0e57-4184-a378-375b9ce315c5 /part ext4 defaults 0 2
# swap was on /dev/sda1 during installation
UUID=485e9f78-4dce-4404-af4e-f43985525264 none swap sw 0 0
The point is: my data is important and I'm scared to manipulate it without being sure...
I want a safe solution...
How is this possible?
|
Unmount the partition:
# umount /part
Rename the directory after making sure it's not mounted:
# mountpoint /part &>/dev/null || mv /part /best_name_ever
Edit /etc/fstab to replace /part with /best_name_ever
Remount the partition:
# mount /best_name_ever
The # is of course meant to represent your root prompt, not actual input to be typed in.
To test the safety of this solution or any other one on dummy data
The following instructions are (in part) stolen from Virtual Filesystem: Building A Linux Filesystem From An Ordinary File.
Create an ordinary file with a size of 20 MB (for example):
$ dd if=/dev/zero of=dummy_fs bs=1k count=20480 # 20480 = 20 * 1024
Create an ext4 filesystem on your file:
$ /sbin/mkfs -t ext4 dummy_fs
mke2fs 1.42.5 (29-Jul-2012)
dummy_fs is not a block special device.
Proceed anyway? (y,n) y
... # Output of mkfs
Mount the filesystem image, create some dummy data on it and test the solution:
# mkdir /tmp/testmount
# mount -o loop dummy_fs /tmp/testmount
# touch /tmp/testmount/{blah,bleh} # Create dummy data
# ls /tmp/testmount
blah bleh lost+found
# umount /tmp/testmount
# mountpoint /tmp/testmount &>/dev/null || mv /tmp/testmount /tmp/sexy_name
# mount -o loop dummy_fs /tmp/sexy_name
# ls /tmp/sexy_name # to ensure your data is intact:
blah bleh lost+found
| How to rename /dev/sdax(partitions) in Linux |
1,460,513,809,000 |
Here you can see two devices are mounted as root:
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
rootfs 29221788 18995764 8761244 69% /
udev 10240 0 10240 0% /dev
tmpfs 203260 2192 201068 2% /run
/dev/disk/by-uuid/1d8879f2-9c47-4a72-9ef4-a6ecdd7a8735 29221788 18995764 8761244 69% /
tmpfs 5120 0 5120 0% /run/lock
tmpfs 406516 376 406140 1% /tmp
tmpfs 406516 72 406444 1% /run/shm
/dev/sda2 29225884 15019636 12741264 55% /home
/dev/sda3 226881528 191247596 24275680 89% /opt
...
However, I didn't specify UUID in /etc/fstab:
proc /proc proc defaults 0 0
LABEL=debian / ext4 errors=remount-ro 0 1
LABEL=istore /mnt/istore ext4 defaults 0 0
LABEL=home /home ext4 defaults 0 2
...
I'd like to see the mount info as "/dev/xxx" rather than "/dev/disk/by-uuid/...". Though mounting by UUID has many advantages, I prefer the old style... It's also weird that there are two rootfs mounts.
|
This is a side effect of how the debian initramfs operates. Initially the kernel creates a tmpfs for the root, and unpacks the initramfs, which is a compressed cpio archive, there. The programs and scripts in the initramfs mount the real root device and then chroot there. Simply ignore the first entry that lists the filesystem as rootfs, as that is just the initramfs. It is the other one that is your real root filesystem.
Since /etc/fstab is in your root fs, it cannot be consulted to mount your root fs, so this is done via kernel command line arguments passed by the boot loader. If you are using grub, it uses the UUID by default to avoid problems if the drives happen to be enumerated in a different order. You can edit /etc/default/grub to change this behavior, but it is not a good idea.
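You can check exactly which root device the boot loader passed by looking at the kernel command line; the root= parameter is easy to pull out with grep. A small sketch, run here against a sample line instead of the real /proc/cmdline:

```shell
# On a live system you would use: cmdline=$(cat /proc/cmdline)
cmdline='BOOT_IMAGE=/vmlinuz-3.2.0-4-amd64 root=UUID=1d8879f2-9c47-4a72-9ef4-a6ecdd7a8735 ro quiet'

# Extract the root= argument the boot loader passed to the kernel.
root_param=$(printf '%s\n' "$cmdline" | grep -o 'root=[^ ]*')
echo "$root_param"    # root=UUID=1d8879f2-9c47-4a72-9ef4-a6ecdd7a8735
```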
| Why rootfs is mounted multiple times? |
1,460,513,809,000 |
This is a followup to another question.
I figured out something is unmounting my device right after I mount it.
This device is being used by a database (Vertica), which is down and not using the directory while I'm running the mount command.
I'm trying to figure out:
Is systemd the one which unmounts the device?
How can I debug why is that happening?
How do I fix it?
Here's an example of what is happening:
[root@mymachine systemd]# mount -t ext4 /dev/xvdx /vols/data5; ls -la /vols/data5; sleep 5; ls -la /vols/data5
total 36
drwxr-xr-x 5 dbadmin verticadba 4096 Jul 23 2017 .
drwxr-xr-x 9 root root 96 Jul 16 18:52 ..
drwxrwx--- 503 dbadmin verticadba 12288 Jul 23 13:51 somedb
drwx------ 2 root root 16384 Nov 30 2016 lost+found
drwxrwxrwx 2 dbadmin verticadba 4096 Jun 20 08:32 tmp
total 0
drwxr-xr-x 2 root root 6 Jun 8 2017 .
drwxr-xr-x 9 root root 96 Jul 16 18:52 ..
[root@mymachine ~]#
fstab:
#
# /etc/fstab
# Created by anaconda on Mon May 1 18:59:01 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=29342a0b-e20f-4676-9ecf-dfdf02ef6683 / xfs defaults 0 0
/dev/xvdb swap swap defaults,nofail 0 0
/dev/xvdy /vols/data ext4 defaults 0 0
/dev/xvdx /vols/data5 ext4 defaults 0 0
Some more logs as per Filipe Brandenburger's suggestion:
Aug 01 16:55:19 mymachine kernel: EXT4-fs (xvdx): mounted filesystem with ordered data mode. Opts: (null)
Aug 01 16:55:19 mymachine systemd[1]: Unit vols-data5.mount is bound to inactive unit dev-xvdl.device. Stopping, too.
Aug 01 16:55:19 mymachine systemd[1]: Unmounting /vols/data5...
Aug 01 16:55:19 mymachine umount[353194]: umount: /vols/data5: target is busy.
Aug 01 16:55:19 mymachine umount[353194]: (In some cases useful info about processes that use
Aug 01 16:55:19 mymachine umount[353194]: the device is found by lsof(8) or fuser(1))
Aug 01 16:55:19 mymachine systemd[1]: vols-data5.mount mount process exited, code=exited status=32
Aug 01 16:55:19 mymachine systemd[1]: Failed unmounting /vols/data5.
|
Ok, that was an interesting debugging experience... Thanks Filipe Brandenburger for leading me to it!
Is systemd the one which unmounts the device?
Yes. journalctl -e shows a related message:
Aug 01 16:55:19 mymachine systemd[1]: Unit vols-data5.mount is bound to inactive unit dev-xvdl.device. Stopping, too.
Apparently I'm not the first one to encounter it. See this systemd issue:
systemd umounts manual mounts when it has a failed unit for that mount point #1741
How can I debug why is that happening?
Run journalctl -e for debugging.
How do I fix it?
This workaround worked for me: run the command below, then try mounting again.
systemctl daemon-reload
That's all, folks!
| How to check whether systemd is unmounting my device? (and why?) |
1,460,513,809,000 |
I have a desktop PC running Arch Linux that during initial installation only used a 120GB SSD for / and no other partitions. I have just recently added a 500GB HDD that I want to mount as /home to give me added storage, avoid future issues with compiling on an SSD, and help with easier upgrades in the future if I ever change anything but want to retain the same /home.
Prior to this my fstab read:
# <file system> <dir> <type> <options> <dump> <pass>
/dev/sda1 / ext4 rw,data=ordered,noatime,nodiratime,discard,erros=remount-ro 0 1
When preparing for the upgrade I copied all of /home to the new partition then renamed /home to /home_old and created a new, empty /home then modified /etc/fstab to read:
# <file system> <dir> <type> <options> <dump> <pass>
/dev/sda1 / ext4 rw,data=ordered,noatime,nodiratime,discard,erros=remount-ro 0 1
/dev/sdb1 /home ext4 rw,nodev,nosuid,erros=remount-ro 0 2
... Which at the time were the correct partition names.
However, I rebooted and it mounted the SSD as / and /home. I tried it with UUIDs and received the same result.
Just for the sake of trying, I switched the two and it fell back to an emergency console at boot time. Again tried with UUIDs with the same result.
If I go back to the old version of /etc/fstab now, it shows the SSD as /dev/sdb1 and the HDD as /dev/sda1 but still mounts the SSD as /, which I find VERY strange.
My question, given the backstory now, is how do I fix this issue and why is it behaving this way so I can understand what's causing this?
EDIT:
As Timothy Martin pointed out in the comments I made a typo in fstab and it turns out that's what caused it. More proof that weird things occur when you make a mistake in your configuration files.
sheepish grin
|
Create a temporary Home folder
blkid
This will display the UUID of all the partitions. Record the UUID of the new drive's partition.
Open a terminal and type the following:
vi /etc/fstab
and add the following line to the end of the file.
UUID=xxx-xxxxx-xxxxx /media/home ext4 nodev,nosuid 0 2
Save and exit.
Next, create a mount point:
mkdir /media/home
and reload the updated fstab.
mount -a
We need to remove the existing Home folder to make way for the new Home folder on the 500 GB partition. To do that, type the following commands in the terminal:
cd /
sudo mv /home /home_backup
sudo mkdir /home
Mount the new Home folder
vi /etc/fstab
All you have to do is change /media/home to /home. Save and exit the file.
Reload the fstab file:
mount -a
Finally, remove the /home_backup folder:
rm -rf /home_backup
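One step worth making explicit: nothing above actually copies the old data onto the new partition, so before removing the old Home folder, copy its contents into the temporary mount point, e.g. cp -a /home/. /media/home/ (or rsync, if you prefer). A runnable sketch of that copy on throw-away directories:

```shell
# Stand-ins for /home and the freshly mounted /media/home.
old_home=$(mktemp -d)
new_home=$(mktemp -d)

mkdir -p "$old_home/dan"
echo 'export EDITOR=vi' > "$old_home/dan/.profile"

# -a preserves modes, ownership and timestamps; "/." copies the contents
# of the directory (including dotfiles) rather than the directory itself.
cp -a "$old_home/." "$new_home/"

ls -A "$new_home/dan"    # .profile
```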
| Adding new hard drive as /home after installation |
1,460,513,809,000 |
I’m running jessie/sid with systemd 208 and try to convert the following wildcard autofs configuration to either an /etc/fstab or .mount/.automount definition.
$ cat /etc/auto.master
/home/* -fstype=nfs homeserver:/exp/home/&
(homeserver runs a Solaris with each subdirectory in /exp/home/ being a separate share.)
Is there a way to emulate wildcard maps with systemd?
|
I suppose not. The .mount/.automount unit name has to be equal to the mount path, escaped with systemd-escape --path. And the only way in systemd to instantiate units is the "template syntax" of the form name@instance.service. Hence it is at least not possible to have a dynamically instantiated mount unit.
Just use autofs. systemd is not a replacement for everything.
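If the set of home directories is known in advance, you can at least get on-demand mounting from systemd without autofs, using one x-systemd.automount entry per directory in /etc/fstab — no wildcard, and the user names below are of course just examples:

```
homeserver:/exp/home/alice  /home/alice  nfs  noauto,x-systemd.automount  0  0
homeserver:/exp/home/bob    /home/bob    nfs  noauto,x-systemd.automount  0  0
```

systemd's fstab generator turns each line into a matching .automount/.mount unit pair, so the share is only mounted on first access.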
| Wildcard automounts with systemd |
1,460,513,809,000 |
Due to some complex requirements, I had to put the following two lines in /etc/fstab:
/dev/xvdg1 /srv/storage ext4 $OPTIONS1 0 2
/srv/storage/dir /var/opt/dir none bind,$OPTIONS2 0 0
Now my question is: Do I have to re-list all mount options $OPTIONS1 in $OPTIONS2, or will the second line (the bindmount line) inherit the options of $OPTIONS1?
FYI, here is the actual options used in $OPTIONS1:
rw,auto,async,noatime,nodiratime,barrier=0,delalloc
ETA: actually I use UUID=... instead of /dev/xvdg1, but that's beside the point.
|
The short answer is "maybe", because it depends on which option you're passing, and what it is enforced by. If the options you are passing are strictly superblock flags, you don't need to relist the options as part of the bind mount. If the options you are passing contain a vfsmount flag, then yes, you need to relist the vfsmount flags. You can think of "superblock flag" as meaning that it's part of the underlying filesystem, and "vfsmount flag" as meaning that it's part of the kernel (although this is not technically true, since the kernel is the one enforcing both in reality).
You need to do this with arguments like noexec, nodev, or nosuid, because they apply per-filesystem (see this thread on the kernel mailing list for some good information).
$ truncate -s 10M container
$ mkfs.ext4 container
$ mkdir mountpoint binded
$ sudo mount -o loop container mountpoint
$ sudo chown "$EUID" mountpoint
$ sudo mount -o bind mountpoint binded
$ cat > mountpoint/script << 'EOF'
> #!/bin/bash
> echo "This works."
> EOF
$ chmod +x mountpoint/script
$ binded/script
This works.
$ sudo mount -o remount,noexec mountpoint
$ binded/script
This works.
$ mountpoint/script
bash: mountpoint/script: Permission denied
Note that despite being the same script, noexec is only being enforced per-filesystem. This is because it's a vfsmount flag, not a superblock flag -- that is, it's a functionality of the kernel, not the filesystem.
Note how the output of mount for the two mountpoints looks like after this, and that noexec did not carry over:
$ mount
[...]
/tmp/tmp.hoiHQYPEFX/container on /tmp/tmp.hoiHQYPEFX/mountpoint type ext4 (noexec,relatime,data=ordered)
/tmp/tmp.hoiHQYPEFX/container on /tmp/tmp.hoiHQYPEFX/binded type ext4 (relatime,data=ordered)
If we remount the bind itself with noexec, however, this works as expected:
$ sudo mount -o remount,noexec binded
$ mount
[...]
/tmp/tmp.hoiHQYPEFX/container on /tmp/tmp.hoiHQYPEFX/mountpoint type ext4 (noexec,relatime,data=ordered)
/tmp/tmp.hoiHQYPEFX/container on /tmp/tmp.hoiHQYPEFX/binded type ext4 (noexec,relatime,data=ordered)
Options which are underlying filesystem attributes, however, generally do not need to be done again on the bind mount (and they will probably raise an error, since many of the supported options are defined by the filesystem). A simple one to demonstrate is ro, the read-only option, but this applies to other superblock flags as well.
$ sudo mount -o remount,ro mountpoint
$ > mountpoint/test
bash: mountpoint/test: Read-only file system
$ > binded/test
bash: binded/test: Read-only file system
Note that this time, the flag carries over automatically:
$ mount
[...]
/tmp/tmp.hoiHQYPEFX/container on /tmp/tmp.hoiHQYPEFX/mountpoint type ext4 (ro,noexec,relatime,data=ordered)
/tmp/tmp.hoiHQYPEFX/container on /tmp/tmp.hoiHQYPEFX/binded type ext4 (ro,noexec,relatime,data=ordered)
| mount options for bindmount |
1,460,513,809,000 |
I am using archivemount to mount one of several tar.bz files in Ubuntu. It works very nicely for me.
I mount and umount frequently (I'm in a testing phase). I want a solution that will allow me to umount without typing in my password. Here's my current script:
ARCHIVE=$(zenity --file-selection --filename=/share/);
archivemount $ARCHIVE /media/Archive/
echo "when finished use: sudo umount /media/Archive/"
My goal is to simply be able to umount quickly and easily without typing my password. I understand that adding an fstab entry will do the trick, but I cannot find the correct format for the fstab entry when using archivemount.
|
archivemount is a fuse-based system (like sshfs, among others), so you can unmount it as a regular user (at least the one who mounted it) using
fusermount -u /media/Archive
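As for the fstab entry itself: mount dispatches filesystem types of the form fuse.PROGRAM to the mount.fuse helper, which runs that program, so — assuming archivemount behaves like other FUSE tools here — an entry along these lines should work, with the user option allowing a non-root user to mount and unmount it (paths are examples):

```
/share/backup.tar.bz2  /media/Archive  fuse.archivemount  user,noauto  0  0
```

With user set, the mounting user can also unmount with a plain umount /media/Archive, no password required.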
| I need an fstab example for archivemount (want to umount without my password) |
1,460,513,809,000 |
on the following example fstab file, we want to delete all lines that start with UUID , but except the UUID line with boot word
/dev/mapper/VG100-lv_root / xfs defaults 0 0
UUID=735cb76a-51b5-4e06-b6fb-3b9577e38dc5 /boot xfs defaults 0 0
/dev/mapper/VG100-lv_var /var xfs defaults 0 0
UUID=0b14011d-f69d-4c4c-8ce0-6240bb0a574a /var/kafka/mp1 xfs defaults,noatime 0 0
UUID=2d7872f2-96d4-4ba9-8a17-a1115542645c /var/kafka/mp2 xfs defaults,noatime 0 0
UUID=79bdbf56-9a09-4505-ab8e-41ce9432cf0f /var/kafka/mp3 xfs defaults,noatime 0 0
UUID=ca42a388-83d4-4f8b-aff7-3450d836eef7 /var/kafka/mp4 xfs defaults,noatime 0 0
UUID=62d356bb-c393-4a74-bbf9-984b60d3b5c4 /var/kafka/mp5 xfs defaults,noatime 0 0
UUID=d4071a83-204f-475f-8917-cdd77ef6b1ed /var/kafka/mp6 xfs defaults,noatime 0 0
so the expected result should be as follows
/dev/mapper/VG100-lv_root / xfs defaults 0 0
UUID=735cb76a-51b5-4e06-b6fb-3b9577e38dc5 /boot xfs defaults 0 0
/dev/mapper/VG100-lv_var /var xfs defaults 0 0
so far we have the following sed
sed -i '/^UUID/d' /etc/fstab
but the above approach deletes all UUID lines
|
First tell sed to print lines which contain boot, then tell it to delete lines which contain UUID; separate the expressions with ;
sed -i '/boot/p;/UUID/d' /etc/fstab
Since the print expression comes before the delete expression, the line which contains boot is printed before the lines with UUID are deleted; if you swapped the positions of the expressions, this would not work.
Given the example you gave, you should get something like this
/dev/mapper/VG100-lv_root / xfs defaults 0 0
UUID=735cb76a-51b5-4e06-b6fb-3b9577e38dc5 /boot xfs defaults 0 0
/dev/mapper/VG100-lv_var /var xfs defaults 0 0
You can also add a part to delete empty lines: ^ marks the start of a line and $ marks the end of a line, so add an expression that deletes lines where the end of line ($) comes right after the beginning of the line (^).
sed -i '/boot/p;/UUID/d;/^$/d' /etc/fstab
EDIT
As noted in the comments by Stéphane Chazelas this will print the line which contains boot twice, if it doesn't contain UUID in the same line, so something like this would be better.
sed -i '/^UUID/{/boot/!d};/^$/d' /etc/fstab
It will delete lines which start with UUID, unless they contain /boot, and then delete all empty lines.
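The same filter can also be written with awk, which sidesteps the double-print pitfall entirely because it makes a single keep-or-drop decision per line. A sketch against a throw-away copy of the example file:

```shell
# Build a small sample fstab to run against.
fstab=$(mktemp)
cat > "$fstab" <<'EOF'
/dev/mapper/VG100-lv_root / xfs defaults 0 0
UUID=735cb76a-51b5-4e06-b6fb-3b9577e38dc5 /boot xfs defaults 0 0
UUID=ca42a388-83d4-4f8b-aff7-3450d836eef7 /var/kafka/mp4 xfs defaults,noatime 0 0
EOF

# Keep a line unless it starts with UUID and does not mention /boot.
awk '!/^UUID/ || /\/boot/' "$fstab"
```

Drop the quotes around the filename and add -i-style redirection (awk has no in-place mode; write to a temp file and move it back) if you want to modify /etc/fstab itself.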
| How to delete all UUID from fstab but not the UUID of boot filesystem |
1,460,513,809,000 |
I have Windows and Arch Linux in dual boot with UEFI. I want to mount another NTFS partition in Arch. I mounted the partition with the mount /dev/sda5 /mnt/Apps command. Then I added the output of genfstab -U /mnt/Apps to the /etc/fstab file. The output is as follows:
UUID=01D158CC7C2A61A0 /mnt/Apps ntfs rw,nosuid,nodev,user_id=0,group_id=0,allow_other 0 0
But when I unmount and remount that partition, all files and folders show up with root uid and gid and 0777 permissions. I also tried changing the uid and gid to 1000, but got the same result. So, my question: what is the correct way to add NTFS partitions in fstab so that I can read & write all files and folders both as a normal user (1000:1000) and as root (0:0)?
|
As commented by muru from this answer, I have added the fmask and dmask permissions in /etc/fstab and now it shows correct permissions. I have change that line as follows:
UUID=01D158CC7C2A61A0 /mnt/Apps ntfs rw,auto,user,fmask=133,dmask=022,uid=1000,gid=1000 0 0
This sets all files 0644 and directories 0755 permissions.
| What is the correct permission in /etc/fstab to mount NTFS? |
1,460,513,809,000 |
I am using Arch Linux. I have three functioning RAID arrays via MDADM:
~ cat /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=beast:0 UUID=564fbbac:07f9bbeb:07ef9229:1d8fd77e
ARRAY /dev/md1 metadata=1.2 name=beast:1 UUID=7559b085:3b4715cc:59205fdd:12c0db08
ARRAY /dev/md2 metadata=1.2 name=beast:2 UUID=2dddbf33:26249617:ef8f8b65:c9670bdb
I have three directories in /run/media that I try to automount these mdadm arrays via fstab:
#THE FOLLOWING SHOULD WORK BUT AUTOMOUNT FAILS!!!!!
#UUID=564fbbac:07f9bbeb:07ef9229:1d8fd77e /run/media/tcarpent/MDADM_SYSRAID ntfs-3g auto,user,rw,exec,nofail 0 0
/dev/md0 /run/media/tcarpent/MDADM_SYSRAID ntfs-3g auto,user,rw,exec,nofail 0 0
#THE FOLLOWING SHOULD WORK BUT AUTOMOUNT FAILS!!!!!
#UUID=7559b085:3b4715cc:59205fdd:12c0db08 /run/media/tcarpent/MDADM_MISCRAID ext4 auto,user,rw,exec,nofail 0 0
/dev/md1 /run/media/tcarpent/MDADM_MISCRAID ext4 auto,user,rw,exec,nofail 0 0
#THE FOLLOWING SHOULD WORK BUT AUTOMOUNT FAILS!!!!!
#UUID=2dddbf33:26249617:ef8f8b65:c9670bdb /run/media/tcarpent/MDADM_MEDIARAID ext4 auto,user,rw,exec,nofail 0 0
/dev/md2 /run/media/tcarpent/MDADM_MEDIARAID ext4 auto,user,rw,exec,nofail 0 0
Using the commented out UUID lines, automount does not work. I see the drive as 'active but not mounted' in webmin, but am required to mount it, and enter my password, then the drive mounts. However, with the /dev/,,, lines, automount works, no password required.
What gives? I've been told to ALWAYS use UUIDs in fstab and never device names, so I want to fix this.
|
The UUIDs seen in mdadm.conf identify the MD arrays themselves.
The UUIDs used in fstab identify filesystems.
What you need are the filesystem UUIDs. You can get them with a command like
sudo dumpe2fs /dev/md0 | grep UUID
So in my case:
$ grep md/0 /etc/mdadm/mdadm.conf
ARRAY /dev/md/0 metadata=1.2 UUID=d634adc8:69deedd8:d491a79e:69aeff78
$ sudo dumpe2fs /dev/md0 | grep UUID
dumpe2fs 1.42.12 (29-Aug-2014)
Filesystem UUID: 195237da-8825-45fb-abf7-a62895bd0967
$ grep boot /etc/fstab
UUID=195237da-8825-45fb-abf7-a62895bd0967 /boot ext4 defaults 0 2
So we can see the UUID used is the filesystem UUID and not the MD UUID.
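To script this, the UUID field can be cut out of the dumpe2fs header. The sketch below runs on a captured sample so it needs no block device; on a real system you would pipe sudo dumpe2fs -h /dev/md0 in instead (and sudo blkid /dev/md0 reports the same filesystem UUID even more directly):

```shell
# Captured sample of a dumpe2fs header; a real run would be:
#   sudo dumpe2fs -h /dev/md0 | sed -n 's/^Filesystem UUID:[[:space:]]*//p'
sample='Filesystem volume name:   <none>
Filesystem UUID:          195237da-8825-45fb-abf7-a62895bd0967
Filesystem magic number:  0xEF53'

# Print only the UUID value.
printf '%s\n' "$sample" | sed -n 's/^Filesystem UUID:[[:space:]]*//p'
```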
| MDADM: automount only works with dev, not UUID |
1,460,513,809,000 |
I have Linux Mint installed on and booting from an LVM drive with two physical disks in the volume group (1TB each). I have purchased a new hard drive (4TB) and I would like to clone the whole thing and get it booting from the new disk.
I'm really struggling to find instructions for this procedure when the root file system is on an LVM drive.
I followed these instructions and have successfully managed to mirror the mint-vg/root and mint-vg/swap_1 logical volumes on to the new disk, I then split the mirror with lvconvert --splitmirror and split the Volume Group with vgsplit. This made a nice clone of all my files; I just can't for the life of me work out how to boot from the new copy!
First I tried renaming all the LVs and VGs so the old ones had "OLD_" prefixed and the new ones had the names of the old ones. For example "mint-vg" became "OLD_mint-vg" and "new_mint-vg" became "mint-vg" etc.
I then realised that one of the old drives has a primary partition on that is bootable. Here is the original configuration of drives: (sde and sdf are the old drives and sdg is the new one)
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sde 8:64 0 931.5G 0 disk
└─sde1 8:65 0 931.5G 0 part
└─mint--vg-root 253:2 0 1.8T 0 lvm /run/timeshift/backup
sdf 8:80 0 931.5G 0 disk
├─mint--vg-root 253:2 0 1.8T 0 lvm /run/timeshift/backup
└─mint--vg-swap_1 253:3 0 976M 0 lvm [SWAP]
sdg 8:96 0 3.7T 0 disk
I have tried grub-install /dev/sdg and got "grub-install: error: failed to get canonical path of '/cow'"
I have also experimented with these instructions but I can't create the Primary partition because my new disk is bigger than 2048G which is the maximum partition size.
I also have installed linux mint on to the new disk to see how it configures the partitions and they look like this:
sdg 8:96 0 3.7T 0 disk
└─sdg1 8:97 0 1M 0 part
└─sdg2 8:98 0 513M 0 part
└─sdg3 8:99 0 3.7T 0 part
└─vgmint-root 253:1 0 3.7T 0 lvm
└─vgmint-swap_1 253:2 0 967M 0 lvm [SWAP]
Would it now be possible to clone my old root and swap LVs and just replace the ones from the new Mint installation? Will it really be that simple? Otherwise if anyone can walk me through the process of setting up the necessary boot config so I can get my cloned system running again or point me to some clear instructions I'd be really grateful.
Thanks,
Dan.
Edit
Thanks ever so much for your help, I’m literally pulling my hair out over this!
Okay, here is the pastebin link you requested but beware my system has 7 drives in and I’m booting from a linux mint live USB.
The original OS is on drives /dev/sde and /dev/sdf, and I have renamed the VG and LVs with prefix "OLD_". The new drive is /dev/sdg and has a fresh install of Linux Mint on it at the moment which will need to be removed going forwards. I installed it so I could see how it configures the partitions.
It looks like the new Mint installer has chosen the BIOS option you mentioned and a bios_grub partition (/dev/sdg1)
Now that the Mint installer has set up said partitions, can I now delete the volume group “vgmint” from the fresh install and replace it with my cloned volume group "mint-vg"? If so what will I have to reconfigure to get it to boot? Or should I wipe the drive and start afresh?
|
I have been messing about with this for days now so I figured I should post my solution for anyone else that is having a similar problem. Here is how to clone your Mint installation to a new 4TB disk when installed on an LVM spanned across 2x 1TB disks:
Useful links:
Clone an LV
Rename LV and VG
Delete LV and VG
LVM attributes
Setting up Grub with BIOS/GPT setup
Terminology
PV = Physical Volume
VG = Volume Group
LV = Logical Volume
To make a clone of a system disk installed on an LVM system with the intention of booting from the clone we will perform the following steps:
Prep the new disk (create partitions)
create PV
Add the new PV to the same VG that contains the target LVs
Create a Mirror of the target LVs on the new PV
Separate the mirrors into two separate LVs
Split the VG so the new PV with the mirrored LVs on is in a new VG
tidy up (rename LVs, VGs)
Install Grub To Make bootable
1 - Prep the new disk
If your disk is smaller than 2048 gigabytes you can prep the disk with an MBR partition but that is not covered here.
If you want to boot to a drive that is bigger than 2TB you must create a BIOS boot partition. I found these instructions useful but to be honest I cheated a bit.
The way I configured my partitions was to do a fresh install of Linux Mint onto my new drive. That set up 3 partitions: the BIOS boot partition (bios_grub), some unknown FAT32 partition (I'm still looking into this; I'm thinking about deleting it, it's half a gig!) and an LVM2 partition (with LVs 'root' and 'swap_1' in).
I then deleted the new volume group with the fresh install of Mint leaving a blank partition (/dev/sdg3) and then cloned my old mint VG in to the blank partition.
I think if I had created the 1 MB BIOS partition with fdisk as outlined in these instructions, and then an LVM partition with the rest of the disk, I could probably have avoided installing Mint afresh. However it worked, so feel free to experiment or cheat; it's up to you.
2 - Create PV
Now you have your disk partitioned you need to find the device name of the biggest partition with lsblk or fdisk -l (Mine is called /dev/sdg3). Now create new PV:
pvcreate /dev/sdg3
3 - Add the new PV to the same VG as the target LV
You can list the logical volumes with vgs (I will use "mint-vg") and add the new PV like this:
vgextend mint-vg /dev/sdg3
4 - Create a Mirror of the target LV on the new PV
List your LVs with lvs, mine was called "root", I also cloned swap_1 so you can just repeat these instructions for both LVs.
If your LV is fairly large mirroring can take a long time while it copies all the data. It will keep you informed of its progress on the screen and if you have a power outage or something like that it should just continue from where it left off next time you boot on to your live disk. You might also want to run it in the background with the -b option.
lvconvert --type mirror -m1 /dev/mint-vg/root /dev/sdg3
Once it has finished you might want to check that it all looks good:
lvs -a -o +devices | egrep "LV|root"
Notice the Cpy%Sync column it should display the percent copied.
Now start this section again and mirror the "swap_1" LV.
5 - Separate the mirrors into two separate LVs
Next, convert the mirrored LV into an actual LV. The two LVs (the original and the copy) will be on the same VG, so it will be necessary to rename them as you do it (I will use "new_root"). Also it is important to flush the caches with the sync command first, just to be on the safe side.
sync
lvconvert --splitmirrors 1 --name new_root /dev/mint-vg/root /dev/sdg3
Now repeat for /dev/mint-vg/swap_1
6 - Split the VG so the new PV with the mirrored LV on it is in a new VG
Before we split the VG we must deactivate the LV: (the -a stands for activate [y|n])
lvchange -an /dev/mint-vg/new_root
lvchange -an /dev/mint-vg/new_swap_1
Now we can make a new VG from /dev/sdg3 which will still have the mirrored LVs on it:
vgsplit mint-vg new_mint-vg /dev/sdg3
You should now be able to see the copied LVs and two VGs with their associated devices
lvs -o +devices
7 - tidy up (rename LVs, VGs and perhaps mark a VG for exporting)
If (like me) you are trying to copy your system to a new disk that you intend to boot from, and to wipe the old system drives, you will need to rename the LVs and VGs so that the old "mint-vg" becomes "OLD_mint-vg", the new "new_mint-vg" becomes "mint-vg", and likewise for the LVs.
you can rename an LV and a VG like this: (unmount first!)
umount /dev/mapper/mint--vg-root
lvrename mint-vg root OLD_root
vgrename mint-vg OLD_mint-vg
If you intend to remove a volume group (Perhaps you have copied it to an external drive for transportation) you should deactivate the LVs on it and the VG itself and mark it for exporting:
lvchange -an /dev/mint-vg/old_root
vgchange -an old_mint-vg
vgexport old_mint-vg
Now if you run pvs you should see the VG's attributes have an x to indicate that it is marked for exporting, and no a attribute, meaning it is not active.
8 - Install Grub To Make bootable
A quick mention of fstab
Here is a brief description of your /etc/fstab file.
I just wanted to quickly mention your /etc/fstab file. It is used to tell your system about partitions that need to be mounted, in which order to mount them, and to assign certain options to them upon mounting. In my case I renamed my LVs and their VG so they were the same as the originals. Additionally, in my /etc/fstab file my partitions are identified by their device name and not the unique UUID, which meant that everything just worked for me.
it might be worth having a look at your /etc/fstab file just to familiarise yourself with it.
cat /etc/fstab
If you have renamed your VG, any of the LVs or your partitions are identified by their UUID in your fstab file you will likely have to edit your fstab file to get your system booting and your volumes mounted.
you can find out the UUIDs by typing blkid in your terminal.
Grub
To get your clone booting from your new disk you need to install Grub on it. To do this you must first mount the root folder so we can point grub to the /boot folder.
These instructions might be useful but if you don't tell it about your /boot folder you will get the following error: "failed to find the canonical /cow". After reading the Grub manual info grub-install - I was able to install grub by pointing it to the /boot/ folder on the root LV. Here's how:
First create a mount point folder:
mkdir /mnt/root
then mount the root LV:
mount /dev/mint-vg/root /mnt/root
and finally you can install grub:
grub-install --boot-directory=/mnt/root/boot /dev/sdg
This will set up your /boot folder and create a new core.img in your BIOS boot partition. You should be able to boot now, don't forget to change the boot device in your bios!
God speed!
| Trying to move Linux installed on LVM over to new disk |
1,460,513,809,000 |
For the project SamplerBox, up to now I was using /dev/sda1 /media auto nofail 0 0 to have USB flash drives automatically mounted when inserted on the headless computer, see also Auto-mount and auto-remount with /etc/fstab. But this seems not very reliable, for example when a USB flash drive is removed and then re-inserted.
What lightweight and easy-to-configure solution is available in Debian to automatically mount every /dev/sd* device to /media/?
If a second flash drive is plugged in, ignore it or mount it to another folder such as /media2/
If a drive is removed (even without a proper umount), and then re-inserted a few minutes later it should be mounted again
The use case is a headless device on which the end user can plug in USB flash drives, and they should always be recognized (no matter whether they removed the previous USB flash drive without asking permission in the software).
|
Based on @FelixJN's comment, I slightly modified this excellent guide by
Andrea Fortuna according to my needs and here is the solution:
Create a file /root/usb-mount.sh containing this (and add +x permission):
#!/bin/bash
ACTION=$1
DEVBASE=$2
DEVICE="/dev/${DEVBASE}"
MOUNT_POINT=$(/bin/mount | /bin/grep ${DEVICE} | /usr/bin/awk '{ print $3 }') # See if this drive is already mounted
case "${ACTION}" in
add)
if [[ -n ${MOUNT_POINT} ]]; then exit 1; fi # Already mounted, exit
eval $(/sbin/blkid -o udev ${DEVICE}) # Get info for this drive: $ID_FS_LABEL, $ID_FS_UUID, and $ID_FS_TYPE
OPTS="rw,relatime" # Global mount options
if [[ ${ID_FS_TYPE} == "vfat" ]]; then OPTS+=",users,gid=100,umask=000,shortname=mixed,utf8=1,flush"; fi # File system type specific mount options
        if ! /bin/mount -o ${OPTS} ${DEVICE} /media/; then exit 1; fi # Mount failed, bail out
;;
remove)
if [[ -n ${MOUNT_POINT} ]]; then /bin/umount -l ${DEVICE}; fi
;;
esac
Create a file /etc/systemd/system/[email protected] containing:
[Unit]
Description=Mount USB Drive on %i
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=/root/usb-mount.sh add %i
ExecStop=/root/usb-mount.sh remove %i
Create a file /etc/udev/rules.d/99-local.rules containing:
KERNEL=="sd[a-z][0-9]", SUBSYSTEMS=="usb", ACTION=="add", RUN+="/bin/systemctl start usb-mount@%k.service"
KERNEL=="sd[a-z][0-9]", SUBSYSTEMS=="usb", ACTION=="remove", RUN+="/bin/systemctl stop usb-mount@%k.service"
Restart the rules:
udevadm control --reload-rules
systemctl daemon-reload
Plug in a USB flash drive. It should be mounted at /media/.
| USB flash drives automatically mounted (headless computer) |
1,460,513,809,000 |
I have a filesystem on flash using jffs2. I would like to mount this filesystem read-only, except for a single folder that I would like to be writable.
Is this possible without resorting to something like unionfs and the likes?
|
You can use a bind mount, although they are a bit finicky about permissions, requiring you to mount and then remount the directory to get the correct permissions. The man page of mount suggests:
mount --bind olddir newdir
mount -o remount,rw newdir
however on my Arch system I need to do
mount --bind olddir newdir
mount -o remount,rw olddir newdir
If you only want the directory to be listed in one place you can over mount the directory
mount --bind olddir olddir
mount -o remount,rw olddir olddir
| Make part of read-only filesystem writable |
1,460,513,809,000 |
I am not able to mount the disk. It shows this error:
Error mounting system-managed device /dev/sda3: Command-line `mount "/media/tusharmakkar08/Local"' exited with non-zero exit status 1: [mntent]: line 1 in /etc/fstab is bad
mount: can't find /media/tusharmakkar08/Local in /etc/fstab or /etc/mtab
Output of sudo blkid is
/dev/sda2: UUID="FA38015738011473" TYPE="ntfs"
/dev/sda3: LABEL="Local Disk" UUID="01CD72098BB21B70" TYPE="ntfs"
/dev/sda4: UUID="2ca94bc3-eb3e-41cf-ad06-293cf89791f2" TYPE="ext4"
/dev/sda5: UUID="CFB1-5DDA" TYPE="vfat"
Output of cat /etc/fstab is :
UUID=01CD72098BB21B70 /media/tusharmakkar08/Local Disk1 ntfs-3g nosuid,nodev 0 0
UUID=FA38015738011473 /media/sda2 ntfs-3g defaults 0 0
UUID=2ca94bc3-eb3e-41cf-ad06-293cf89791f2 / ext4 defaults 0 1
UUID=CFB1-5DDA /media/tusharmakkar08/CFB1-5DDA vfat defaults 0 0
|
You can use \x20 for a space.
That is the hex value for the ASCII (and UTF-8 encoded) space character.
Or you can use the octal variant \040.
So that would be (in fstab):
UUID=01CD72098BB21B70 /media/tusharmakkar08/Local\x20Disk1
# or
UUID=01CD72098BB21B70 /media/tusharmakkar08/Local\040Disk1
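Producing the escaped form from a path that contains spaces is easy to script. A small sketch:

```shell
mountpoint='/media/tusharmakkar08/Local Disk1'

# Replace every space with the octal escape understood by fstab.
escaped=$(printf '%s\n' "$mountpoint" | sed 's/ /\\040/g')
echo "$escaped"    # /media/tusharmakkar08/Local\040Disk1
```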
If you are not too familiar with ASCII, install ascii and:
ascii # decimal and hex view
ascii -o # octal view
Nonetheless I'd also recommend, like @TNW, changing the mount point to one without a space; that generally makes things easier. You can also change the label,
though that might affect other things if you have something configured to recognize it as it is.
| Error mounting drives |
1,460,513,809,000 |
I am trying to mount an exfat drive using fstab with read/write permission for both user and group.
The line of etc/fstab for this drive is:
UUID=5E98-37EA /home/ftagliacarne/data/media exfat defaults,rw,uid=1000,gid=1001,umask=002 0 1
Using these option the drive gets mounted to the correct location to the correct user and group, however, the group does not have read-write access. i.e. the permission are set to:
drwxr-xr-x 7 ftagliacarne docker-media 262144 Sep 24 20:40 media
Is there any way of setting the group permission to also have read-write access?
Desired outcome:
drwxrwxr-x 7 ftagliacarne docker-media 262144 Sep 24 20:40 media
Some of the things I tried:
Setting umask to 002
Using chmod before/after mounting
Using chmod recursively on the parent directory
Appreciate any help you can give me.
Update 1:
I also tried changing the fstab file to the following:
UUID=5E98-37EA /home/ftagliacarne/data/media exfat defaults,uid=1000,gid=1001,dmask=0002,fmask=0113 0 1
Alas, it still does not work.
Update 2:
After having issues at boot due to the configurations above, I changed the /etc/fstab entry to the following:
UUID=5E98-37EA /home/ftagliacarne/data/media exfat defaults,uid=1000,gid=1001,fmask=0113,dmask=0002,nofail 0 0
And now it works. I suspect the issue was with the pass option being 1, as changing that to 0 seems to have fixed it. Thank you to everyone who helped!
|
chmod and chown will not work for mounted fat32, exfat and ntfs-3g, period.
What you're looking for is dmask=0002,fmask=0113.
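To see why those values give the desired permissions: each mask is the bitwise complement of the permission bits, so the resulting permission is 0777 with the mask's bits removed. A small standalone sketch of the arithmetic (not tied to any mount tooling):

```python
# exfat/vfat/ntfs-3g masks remove bits from a 0777 base:
# resulting permission = 0o777 & ~mask
def perms_from_mask(mask):
    return 0o777 & ~mask

# dmask=0002 -> directories get drwxrwxr-x (0775)
assert perms_from_mask(0o002) == 0o775
# fmask=0113 -> files get rw-rw-r-- (0664): group-writable, no execute bits
assert perms_from_mask(0o113) == 0o664

print(oct(perms_from_mask(0o002)), oct(perms_from_mask(0o113)))
```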
| ExFat mount permission |
1,460,513,809,000 |
The box is a HP microserver, running Ubuntu 16.04. I recently "upgraded" the boot device to a 64GB SSD. Additionally there is a 1TB SATA drive.
Usually it boots up with /dev/sda1 as the primary partition (on the SSD), /dev/sda5 as swap, and /dev/sdb1 pointing to the partition on the 1TB HDD, which is mounted to /mnt/media0.
The problem is, it sometimes changes all that, and the SSD is now /dev/sdb1 and /dev/sdb5 and the media partition is now /dev/sda1.
This, of course, causes the swap and media mounts to fail as they are listed in /etc/fstab using their previous /dev/sd* names.
So, I have:
Checked the BIOS, and it consistently lists the 64GB SSD as the first drive and the 1TB IDE as the 2nd.
I tried to change /etc/fstab to reference the media drive by volume label, but that causes Ubuntu to fail on startup and put me into a recovery mode.
I tried to change /etc/fstab to reference the swap, and (ext4) media partitions using UUID (as, in fact, it lists the primary partition) but I then encounter the 2nd problem I have.
When I execute the following to find the UUIDs of the various partitions...
ls /dev/disk/by-uuid
blkid
both only list the 1 entry – the primary partition's UUID. I can only see the UUID of the media partition using (on boots where it does, in fact, get assigned sdb1 obviously)
tune2fs -l /dev/sdb1
but again, if I use that UUID in /etc/fstab then Ubuntu fails to boot and goes into recovery mode.
So, my questions are:
Is there any way to get /dev/sda and /dev/sdb to stop swapping between drives?
How can I get the system to see the UUIDs of the other partitions so I can use them in fstab?
and/or is there any other way I can reliably get my swap and media partitions mounted?
|
You could use the "disk/by-id" names in /etc/fstab, see
ls -l /dev/disk/by-id
Note that these device names may also be used in other files (initrd, grub configs), so you may need to update your grub config and re-create the initrd too.
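For example, the swap and media entries might then look like this (the by-id names below are hypothetical placeholders; copy the real ones from the ls -l output):

```
/dev/disk/by-id/ata-Samsung_SSD_64GB_S1234-part5  none         swap  sw        0 0
/dev/disk/by-id/ata-WDC_WD10EZEX_W5678-part1      /mnt/media0  ext4  defaults  0 2
```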
| sda and sdb keep on swapping |
1,460,513,809,000 |
The situation is as follows.
I have a Linux partition on a primary drive (modestly-sized SSD, and sharing it with Windows).
I have another Linux (ext4) partition on a hard drive. It is permanently mounted in /etc/fstab.
I don't want to make a swap file on a root drive to save space.
Thus I want to make a swap file on the hard drive partition. I've successfully created and enabled a swap file, but I have trouble enabling it permanently in /etc/fstab. Should it be mounted under /dev/ (where the drive is mounted), or under /mnt/ (where the file system is mounted)?
|
In your case, the /etc/fstab entry and the preceding steps for creating a swap file look as follows.
dd if=/dev/zero of=/mnt/<UUID>/swapfile bs=1M count=512
mkswap /mnt/<UUID>/swapfile
chmod 600 /mnt/<UUID>/swapfile
echo "/mnt/<UUID>/swapfile none swap defaults 0 0" >> /etc/fstab
So the entry in the /etc/fstab should look like
/mnt/<UUID>/swapfile none swap defaults 0 0
and should be below the line that mounts /mnt/<UUID>.
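Taken together, with the mount of the parent filesystem included, the two fstab lines would look like this (still using the answer's <UUID> placeholder; the mount line must come first):

```
UUID=<UUID>           /mnt/<UUID>  ext4  defaults  0 2
/mnt/<UUID>/swapfile  none         swap  defaults  0 0
```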
Then you should be able to activate it with the command as follows.
swapon -a
Concerning the question from your comment (mounting the swap file by the UUID created during mkswap): no, it is not possible. You have to specify the full path to the file.
| How should an entry to fstab be formulated for a swap file that is not situated on a primary drive? |
1,460,513,809,000 |
I'm installing a system on SSD with LUKS and Btrfs, where should I enable discard option for TRIM support? Only /etc/crypttab, only /etc/fstab, everywhere, or nowhere since Btrfs detects SSDs and enables TRIM support?
I also use LVM; should I somehow change configs to activate TRIM support for LVM too?
P.S. I know about security implications on LUKS with TRIM and I'm fine with it.
|
For TRIM to work, it has to be enabled on all layers. The first step therefore is to enable it in LUKS as LUKS normally disables TRIM due to the security implications. For some distributions you do this in the crypttab, for others you need to edit the cmdline.
Since LVM is the next layer on top of LUKS it needs to pass TRIM, which it does per default if the underlying device supports it. Additionally you can set issue_discards = 1 in your lvm.conf, which will bulk-TRIM on lvremove and vgremove. With this in place you can either use fstrim or enable btrfs' native discard (set discard in fstab, see here). If everything works successfully, btrfs will print
BTRFS info (device <something>): turning on discard
to syslog.
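Put together, the relevant pieces across the layers might look like the sketch below. Device names, the volume group name, and the exact crypttab field layout are distribution-dependent assumptions, not a drop-in config:

```
# /etc/crypttab - allow LUKS to pass TRIM through
cryptroot  UUID=<luks uuid>  none  luks,discard

# /etc/lvm/lvm.conf - bulk-TRIM on lvremove/vgremove
devices {
    issue_discards = 1
}

# /etc/fstab - btrfs native discard (or drop it and run fstrim periodically)
/dev/mapper/vg-root  /  btrfs  defaults,discard  0 0
```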
| Where should I enable discard option? |
1,460,513,809,000 |
How should I mount my ext4 partition in fstab so that it appears in Thunar's sidebar under the name 'Schijf-2'?
I am running Linux Mint 13 Xfce and this has been a headache causer for the last couple of days.
This is the output from blkid, showing the UUID of this partition:
/dev/sda2: UUID="913aedd1-9c06-46fa-a26e-32bf5ef0a150" TYPE="ext4"
How should I enter this in fstab so that it mounts to this directory:
/media/Schijf-2/
I have tried so many things, I have read so many stackexchange questions, but I still have not succeeded.
Edit:
without an entry in fstab, the drive is shown as Schijf-2 in the file manager now.
But this partition is not automatically mounted at startup.
Which causes links to be not working, Dropbox asking for a new location etc.
And to have this automatically mounted, I need an entry in fstab. Right?
Or is there an other place where I can set to mount it automatically at startup/login?
edit 2:
After adding it again to fstab as @jasonwryan suggested, the partition shows up in Thunar when I am logged in into my own account. After logging in into my dad's account, it does not show up. Which again confirms my thoughts that somehow my dad's account has got messed up.
Which files or directory from my account should I copy paste to my dad's account to have the same settings as my own account?
I already tried removing my dad's account and adding again, but that got me into totally different trouble. (but this is a different question and has nothing to do with mounting my /dev/sda2 in fstab).
|
By default, if your fstab entry is:
UUID=913aedd1... /media/Schijf-2 ext4 rw,relatime 0 2
your partition will not be shown as Schijf-2 in your sidebar, unless it is labelled Schijf-2. You have two options:
Leave the fstab entry as is and label your partition (e.g. if sda2 is your partition):
e2label /dev/sda2 Schijf-2
Leave the partition as is and add x-gvfs-name=Schijf-2 [1] to your mount options in fstab:
UUID=913aedd1 /media/Schijf-2 ext4 rw,relatime,x-gvfs-name=Schijf-2 0 2
[1] This works even if the partition has a different label and you want it to be shown as Schijf-2.
| how should I mount my ext4 partition in fstab |
1,460,513,809,000 |
What is the best approach to check that all mount points are mounted according to the fstab file?
My goal is to verify that every mount point defined in fstab is really mounted.
What is the command for this?
|
mount --fake --verbose --all
For currently mounted devices the output will contain "already mounted".
Options explained (for exact details see man 8 mount):
--fake: mount command will not actually mount anything
--verbose: provide detailed output
--all: mount all devices listed in fstab
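If you prefer to script the comparison yourself, here is a minimal sketch. It operates on fstab/mtab-style text; on a real system you would read /etc/fstab and /proc/mounts, the sample data below is made up:

```python
def mount_points(text):
    """Extract the mount-point field (column 2) from fstab/mtab-style text."""
    pts = set()
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        if len(fields) >= 2 and fields[1] != "none":  # "none" = swap entries
            pts.add(fields[1])
    return pts

# Sample data; on a real system:
#   fstab  = open("/etc/fstab").read()
#   mounts = open("/proc/mounts").read()
fstab = """\
UUID=abcd /     ext4 defaults 0 1
UUID=efgh /home ext4 defaults 0 2
UUID=ijkl none  swap sw       0 0
"""
mounts = "UUID=abcd / ext4 rw 0 0\n"

missing = mount_points(fstab) - mount_points(mounts)
print(missing)  # mount points in fstab but not currently mounted
```

Recent util-linux also ships findmnt --verify, which sanity-checks fstab entries (availability depends on your version).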
| check all mount point are mounted according to fstab file |
1,460,513,809,000 |
Suppose I start with a non-btrfs system and then add a secondary drive that I format as btrfs. How would I mount /var/log on a subvolume of the new drive instead of on the original drive? Is this even possible?
I've created the fs and the subvolume 'log' on it, but no syntax I try gets it to mount.
|
It turns out that you just have to specify the id of the subvolume. To find it, do
# btrfs subvolume list <path to btrfs drive/fs>
For fstab, the line will be very similar to the line for the btrfs drive in general, but with the subvolid option set. Mine looks like this since I'm using LVM:
/dev/mapper/ubuntu--vg-vmdrive /mnt/vmdrive btrfs defaults 0 0
/dev/mapper/ubuntu--vg-vmdrive /var/log btrfs defaults,compress=lzo,commit=120,subvolid=408 0 0
Your subvolid will probably be different.
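As an alternative to the numeric id, btrfs also accepts the subvolume by name via the subvol= option. Assuming a subvolume named log, as in the question, the line might look like this:

```
/dev/mapper/ubuntu--vg-vmdrive /var/log btrfs defaults,compress=lzo,commit=120,subvol=log 0 0
```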
Also, watch out for permissions. As with any mount, you might need to set the uid/gid or umask, especially for something like /var/log.
| How do you mount a specific btrfs subvolume? |
1,460,513,809,000 |
I need to create /dev/shm on an embedded ARM system.
From "Installed The Latest Changes to Current and......".
I see that it can be created with mkdir /lib/udev/devices/shm, but I'm wondering what is supposed to be at that location? The only directory I have at that location is /lib/modules/, there's no devices/ or anything.
So I went ahead and just created them, empty directories. I then added:
tmpfs /dev/shm tmpfs defaults 0 0
to my /etc/fstab and I didn't add an mtab entry, since I don't have an /etc/mtab. I then rebooted and now, there's still no /dev/shm device.
Any ideas how I get that device?
EDIT #1
Oh, and a mount -a (after a reboot) results in:
# mount -a
mount: mounting tmpfs on /dev/shm failed: No such file or directory
|
An embedded system may have a static /dev, rather than use udev to populate it. If you don't have /lib/udev, then presumably your system isn't running udev. In that case, you need to create /dev/shm on the root filesystem.
If the root filesystem is an initramfs, rebuild your initramfs with an extra line in the initramfs description file:
dir /dev 755 0 0
dir /dev/shm 755 0 0
…
If the root filesystem is an on-disk filesystem, just create the directory.
# mkdir /dev/shm
| can not create /dev/shm |
1,593,417,547,000 |
I mount successively two points using fstab in my linux system
# Mounting apps drive
UUID=c54ca7da-117d-4cb2-8897-019ba4f6f12d /media/user/apps ext4 defaults 0 2
# Mounting opt based on apps mountpoint
/media/user/apps/opt /opt none bind
As you can see, the second mountpoint /opt is mounted onto the previously mounted partition /media/user/apps/. I am not sure whether it is safe to do it like that. I am asking if I should add some kind of condition or waiting time before mounting /opt. If the first fstab mount is not yet completed when the second command tries to bind, there might be a problem; is that right?
|
/media/user/apps/opt /opt none bind,x-systemd.requires=/media/user/apps
Should do the trick.
There are two more options that help with safe successive mounting, for when we need to specify ordering dependencies between mount commands and other units.
x-systemd.after
x-systemd.before
So we can add
/media/user/apps/opt /opt none bind,x-systemd.after=/media/user/apps
But also, equivalently,
UUID=c54ca7da-117d-4cb2-8897-019ba4f6f12d /media/user/apps ext4 defaults,x-systemd.before=/opt 0 2
More information at systemd.mount
| Mounting successively in fstab: wait for partition to be mounted? |
1,593,417,547,000 |
I have a problem like the one in this question:
How disk became suddenly write protected in spite configuration is read/write?
And I used these commands to resolve that
umount /dev/sdb1
e2fsck /dev/sdb1
mount /dev/sdb1
but
~# e2fsck /dev/sdb1
e2fsck 1.44.5 (15-Dec-2018)
ext2fs_open2: Bad magic number in super-block
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/sdb1
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
/dev/sdb1 contains a ufs file system
additional commands to help you to know additional details
~#nano /etc/fstab
UUID=###951671### /DATA ufs defaults 1 2
mkdir /DATA
mount /DATA
~# ls -lat | grep DATA
drwxr-xr-x 5 root root 1024 May 26 11:37 DATA
~# df -h | grep sd
/dev/sda1 276G 8.7G 254G 4% /
**/dev/sdb1 197G 102G 80G 57% /DATA**
~# lsblk -f | grep sd
sda
├─sda1 ext4 ###-c0fb-42ce-9c78-### 253.2G 3% /
├─sda2
└─sda5 swap ###-27b4-485b-98b3-### [SWAP]
sdb
└─sdb1 ufs ###951671### 79.3G 52% /DATA
~:/DATA# ls
ls: reading directory '.': Input/output error
~:/DATA# mount -o rw,remount /dev/sdb1
mount: /DATA: mount point not mounted or bad option.
~# umount /DATA
~# e2fsck /DATA
e2fsck 1.44.5 (15-Dec-2018)
e2fsck: Is a directory while trying to open /DATA
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
~# mount /DATA
mount: /DATA: WARNING: device write-protected, mounted read-only.
At all, I would like to access to this hard /dev/sdb1 in /DATA folder
How can I resolve this problem?
|
I resolved this problem
$ dmesg|grep bsd
[ 3.467958] sda1:
Then:
$ sudo mount -t ufs -r -o ufstype=ufs2 /dev/sdb1 ~/freebsd
Of course, for another version of Linux, like Ubuntu, we need to know:
Possible common types are:
old old format of ufs
default value, supported as read-only
44bsd used in FreeBSD, NetBSD, OpenBSD
ufs2 used in FreeBSD 5.x
5xbsd synonym for ufs2
sun used in SunOS (Solaris)
sunx86 used in SunOS for Intel (Solarisx86)
hp used in HP-UX
nextstep used in NextStep
nextstep-cd used for NextStep CDROMs (block_size == 2048)
openstep used in OpenStep
and on Ubuntu and the like we have to use a command such as:
$ sudo mount -t ufs -r -o ufstype=44bsd /dev/sdb1 /DATA
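The same can be made permanent in /etc/fstab; a sketch using the UUID from the question (kernel UFS support is read-only unless compiled with write support, hence the ro option; pick the ufstype matching your disk):

```
UUID=###951671###  /DATA  ufs  ro,ufstype=ufs2  0 0
```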
| How to resolve e2fsck Superblock problem? |
1,593,417,547,000 |
Recently, I updated my Arch Linux install. I try to do that every two weeks. Once I did, I rebooted and received an error that a dependency failed for /home (my home partition.) The boot process immediately went into emergency mode. I found that my home partition was not being mounted at all. The weird thing is, I can go into emergency mode, then exit. After that, the home partition mounts and the system loads just fine.
I should mention that my home partition is btrfs while my root partition is ext4. I do have a HOOK in my /etc/mkinitcpio.conf file that scans for btrfs on boot. It is working, as I can see it outputting that it is scanning for btrfs.
It does tell me to check the output of journalctl -xb but, couldn't see why my home partition is not mounting.
I've read similar posts that say to check the UUID of the home partition in my /etc/fstab file. Everything in my fstab file looked good, but I decided to put in an arch installation USB and regenerate the file. That did not fix the issue.
Here's a portion of my journalctl -xb output:
-- Unit systemd-fsck@dev-disk-by\x2duuid-8244\x2d4C7C.service has begun starting up.
Jul 20 01:46:35 aurora systemd[1]: home.mount: Bound to unit dev-disk-by\x2duuid-9f171b93\x2ddd7d\x2d4353\x2d84f8\x2d79c6f673f47f.device, but unit isn't active.
Jul 20 01:46:35 aurora systemd[1]: Dependency failed for /home.
-- Subject: Unit home.mount has failed
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit home.mount has failed.
--
-- The result is RESULT.
Jul 20 01:46:35 aurora systemd[1]: Dependency failed for Local File Systems.
-- Subject: Unit local-fs.target has failed
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit local-fs.target has failed.
--
-- The result is RESULT.
Jul 20 01:46:35 aurora systemd[1]: local-fs.target: Job local-fs.target/start failed with result 'dependency'.
Jul 20 01:46:35 aurora systemd[1]: local-fs.target: Triggering OnFailure= dependencies.
Jul 20 01:46:35 aurora systemd[1]: home.mount: Job home.mount/start failed with result 'dependency'.
Jul 20 01:46:35 aurora systemd[1]: Reached target Network.
-- Subject: Unit network.target has finished start-up
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit network.target has finished starting up.
--
-- The start-up result is RESULT.
Jul 20 01:46:35 aurora systemd[1]: Reached target Sockets.
-- Subject: Unit sockets.target has finished start-up
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit sockets.target has finished starting up.
--
-- The start-up result is RESULT.
Jul 20 01:46:35 aurora systemd[1]: Starting Create Volatile Files and Directories...
-- Subject: Unit systemd-tmpfiles-setup.service has begun start-up
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-tmpfiles-setup.service has begun starting up.
Jul 20 01:46:35 aurora systemd[1]: Started Emergency Shell.
-- Subject: Unit emergency.service has finished start-up
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit emergency.service has finished starting up.
--
-- The start-up result is RESULT.
Jul 20 01:46:35 aurora systemd[1]: Reached target Emergency Mode.
-- Subject: Unit emergency.target has finished start-up
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit emergency.target has finished starting up.
--
-- The start-up result is RESULT.
Jul 20 01:46:35 aurora systemd[1]: Reached target Timers.
-- Subject: Unit timers.target has finished start-up
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit timers.target has finished starting up.
--
-- The start-up result is RESULT.
Jul 20 01:46:35 aurora systemd[1]: Mounting /home...
-- Subject: Unit home.mount has begun start-up
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit home.mount has begun starting up.
Jul 20 01:46:35 aurora kernel: BTRFS info (device sdb3): disk space caching is enabled
Jul 20 01:46:35 aurora kernel: BTRFS info (device sdb3): has skinny extents
Jul 20 01:46:35 aurora systemd-fsck[305]: fsck.fat 4.1 (2017-01-24)
Jul 20 01:46:35 aurora systemd-fsck[305]: /dev/sda6: 349 files, 3156/127746 clusters
Jul 20 01:46:35 aurora systemd[1]: Started File System Check on /dev/disk/by-uuid/8244-4C7C.
-- Subject: Unit systemd-fsck@dev-disk-by\x2duuid-8244\x2d4C7C.service has finished start-up
-- Defined-By: systemd
-- Support: https://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit systemd-fsck@dev-disk-by\x2duuid-8244\x2d4C7C.service has finished starting up.
Here's my /etc/fstab file (I've intentionally erased my other UUIDs for security reasons):
# /dev/sda5
UUID=XXXX-XXXX-XX / ext4 rw,relatime,data=ordered 0 1
# /dev/sda6
UUID=XXXX-XXXX-XX /boot/efi vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro 0 2
# /dev/sdb3
UUID=9f171b93-dd7d-4353-84f8-79c6f673f47f /home btrfs rw,relatime,space_cache,subvolid=5,subvol=/ 0 0
# /dev/sda7
UUID=XXXX-XXXX-XX none swap defaults,pri=-2 0 0
Here's the output of lsblk -f (again, I've intentionally erased my other UUIDs for security reasons)
sda
├─sda1 vfat XXXX-XXXX-XX
├─sda2
├─sda3 ntfs XXXX-XXXX-XX
├─sda4 ntfs XXXX-XXXX-XX
├─sda5 ext4 XXXX-XXXX-XX /
├─sda6 vfat XXXX-XXXX-XX /boot/efi
└─sda7 swap XXXX-XXXX-XX [SWAP]
sdb
├─sdb1
├─sdb2 ntfs Games XXXX-XXXX-XX
└─sdb3 btrfs 9f171b93-dd7d-4353-84f8-79c6f673f47f /home
As suggested below, I checked dmesg here's a portion of that:
[ 2.055164] Btrfs loaded, crc32c=crc32c-intel
[ 2.055509] BTRFS: device fsid 9f171b93-dd7d-4353-84f8-79c6f673f47f devid 1 transid 83839 /dev/sdb3
[ 2.086828] PM: Image not found (code -22)
[ 2.340606] EXT4-fs (sda5): mounted filesystem with ordered data mode. Opts: (null)
[ 2.457135] systemd[1]: systemd 239 running in system mode. (+PAM -AUDIT -SELINUX -IMA -APPARMOR +SMACK -SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN2 +IDN +PCRE2 default-hierarchy=hybrid)
[ 2.473611] systemd[1]: Detected architecture x86-64.
[ 2.478035] systemd[1]: Set hostname to <aurora>.
[ 2.557974] systemd[1]: local-fs-pre.target: Wants dependency dropin /etc/systemd/system/local-fs-pre.target.wants/btrfs-dev-scan.service is not a symlink, ignoring.
[ 2.567559] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.567629] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
[ 2.567665] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.569028] systemd[1]: Created slice system-systemd\x2dfsck.slice.
[ 2.569041] random: systemd: uninitialized urandom read (16 bytes read)
[ 2.569246] systemd[1]: Created slice system-getty.slice.
[ 2.569352] systemd[1]: Listening on udev Control Socket.
[ 2.569422] systemd[1]: Listening on udev Kernel Socket.
[ 2.569518] systemd[1]: Listening on Journal Socket (/dev/log).
[ 2.583473] EXT4-fs (sda5): re-mounted. Opts: data=ordered
[ 2.643252] systemd-journald[236]: Received request to flush runtime journal from PID 1
[ 2.729364] systemd-journald[236]: File /var/log/journal/cabc2f34b30c47f09c201422dfa880bd/system.journal corrupted or uncleanly shut down, renaming and replacing.
[ 3.001682] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
[ 3.027930] rtc_cmos 00:02: RTC can wake from S4
[ 3.029174] rtc_cmos 00:02: registered as rtc0
[ 3.029191] rtc_cmos 00:02: alarms up to one month, y3k, 114 bytes nvram, hpet irqs
[ 3.029493] i801_smbus 0000:00:1f.3: enabling device (0001 -> 0003)
[ 3.029623] i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
[ 3.066003] e1000e: Intel(R) PRO/1000 Network Driver - 3.2.6-k
[ 3.066005] e1000e: Copyright(c) 1999 - 2015 Intel Corporation.
[ 3.066898] e1000e 0000:00:19.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[ 3.084156] mousedev: PS/2 mouse device common for all mice
[ 3.086094] input: PC Speaker as /devices/platform/pcspkr/input/input7
[ 3.092504] RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 163840 ms ovfl timer
[ 3.092506] RAPL PMU: hw unit of domain pp0-core 2^-16 Joules
[ 3.092506] RAPL PMU: hw unit of domain package 2^-16 Joules
[ 3.092507] RAPL PMU: hw unit of domain pp1-gpu 2^-16 Joules
[ 3.094356] ipmi message handler version 39.2
[ 3.097517] ipmi device interface
[ 3.102120] Linux agpgart interface v0.103
[ 3.145910] iTCO_vendor_support: vendor-support=0
[ 3.150461] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
[ 3.150494] gpio_ich: GPIO from 436 to 511 on gpio_ich
[ 3.150597] iTCO_wdt: Found a Cougar Point TCO device (Version=2, TCOBASE=0x0460)
[ 3.153467] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
[ 3.189732] Adding 3288640k swap on /dev/sda7. Priority:-2 extents:1 across:3288640k SSFS
[ 3.232407] e1000e 0000:00:19.0 0000:00:19.0 (uninitialized): registered PHC clock
[ 3.235465] sdb: sdb1 sdb2 sdb3
[ 3.301961] BTRFS info (device sdb3): disk space caching is enabled
[ 3.301964] BTRFS info (device sdb3): has skinny extents
I guess the permanent solution would be to copy everything from the BTRFS partition, reformat it to EXT4, and put the files back on it. I don't necessarily want to do that because it would take a lot of time, but that's probably what needs to be done.
Any help would be appreciated, I feel like I've tried everything I know to do at this point.
Let me know if you need any other outputs or logs.
|
Add the _netdev option to the fstab entry for '/home'. It appears the dependency for mounting is the network management stack.
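Applied to the fstab entry from the question (note the option is spelled _netdev, with an underscore), the line becomes:

```
# /dev/sdb3
UUID=9f171b93-dd7d-4353-84f8-79c6f673f47f /home btrfs rw,relatime,space_cache,subvolid=5,subvol=/,_netdev 0 0
```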
| systemd - Booting into Emergency Mode with Error - Dependency Failed for /home |
1,593,417,547,000 |
I discovered that it is not possible to run fsck on a loopback device at boot via the fsck flag in the fstab file, nor is it possible to accomplish this by manually running fsck while the loop device is mounted.
Is there an alternative to check the device at boot time?
|
I found an elegant and reliable solution.
I have written a script for the /etc/initramfs-tools/scripts/local-premount/ boot phase, in order to check my loop disk just before the filesystem is mounted.
Below are the details:
Create the script in /etc/initramfs-tools/scripts/local-premount/ and make it executable (chmod +x).
Update the initrd.img with the command update-initramfs -u.
Here is the script source:
#!/bin/sh
# Antonio Petricca <[email protected]> - 03/01/2018
PREREQ=""
# Output pre-requisites
prereqs()
{
echo "$PREREQ"
}
case "$1" in
prereqs)
prereqs
exit 0
;;
esac
. /scripts/functions
log_begin_msg "Running file system check on loop device(s)"
DEV=/dev/sdb5
MNT=/tmp/mnt
LOOP=$MNT/.linux-loops/242eef08-32d6-42c2-93eb-afdc2111a13e.ext4
mkdir $MNT && \
mount -t ntfs $DEV $MNT && \
fsck.ext4 -p -v $LOOP && \
umount $MNT
# Uncomment next line to hold messages for debugging
# sleep 10
log_end_msg "Done"
# Continue boot anyway
exit 0
Regards!
| Fsck at boot time for loopback device |
1,593,417,547,000 |
I have a HDD which I mount on /mnt/sda1 at startup (in /etc/fstab)
Whenever I want to send a file to the trash in pcmanfm, I get the following message :
Some files cannot be moved to trash can because the underlying file
systems don't support this operation. Do you want to delete them
instead?
The owner of /mnt/sda1 and /mnt/sda1/.Trash-1000 is user 1000 (me), and I have read write permissions.
When deleting a file in the CLI using gvfs-trash or gio trash it correctly sends the file to /mnt/sda1/.Trash-1000, and pcmanfm sees the file in the Trash and can even restore it. Still it cannot delete it.
Any clues ?
|
This is a bit late, but I ran into the same issue. As it turns out, you have to disable the 'Erase files on removable media instead of "trash can" creation' preference. Apparently PCManFM sees any drives with an unmount button as removable media. Once that's done, sending files to trash works as expected.
| pcmanfm doesn't send files to trash on external drive |
1,593,417,547,000 |
I have mounted several data drives and used noexec parameter. Thinking that since it's only data I wouldn't need exec. Now I am having some permission issues and would like to rule this out as the cause as well as to understand the option better.
Does exec parameter in /etc/fstab have the same effect as giving execute permissions to all directories and files in the mounted system?
How does it affect windows executables (.exe) accessed via samba shares or other network protocols?
Mounted drives will be pooled with aufs or mhddfs and accessed via a central mount point in /mnt/virtual. It will then be accessed via network (samba right now). There will be some local access too (xbmc). I am not sure if I should provide it a direct link or samba link to the files?
What is the best practice in this case?
|
Looking through the man pages
If you look at the man page for mount.cifs which is what will be used to mount any shares listed in /etc/fstab there is a note that mentions noexec.
excerpt - mount.cifs man page
This command may be used only by root, unless installed setuid, in
which case the noeexec and nosuid mount flags are enabled. When
installed as a setuid program, the program follows the conventions set
forth by the mount program for user mounts, with the added restriction
that users must be able to chdir() into the mountpoint prior to the
mount in order to be able to mount onto it.
Some samba client tools like smbclient(8) honour client-side
configuration parameters present in smb.conf. Unlike those client
tools, mount.cifs ignores smb.conf completely.
Given this I would expect it to honor the exec/noexec option if it's included in any mount attempts. Additionally looking at the mount.cifs usage shows how that option would be used.
excerpt - mount.cifs usage
Less commonly used options:
credentials=<filename>,guest,perm,noperm,setuids,nosetuids,rw,ro,
sep=<char>,iocharset=<codepage>,suid,nosuid,exec,noexec,serverino,
mapchars,nomapchars,nolock,servernetbiosname=<SRV_RFC1001NAME>
directio,nounix,cifsacl,sec=<authentication mechanism>,sign,fsc
Looking at the fstab man page explains the intended purpose for exec/noexec, but doesn't specify whether it's for all executables or just Unix ones.
excerpt from fstab man page
exec / noexec
exec lets you execute binaries that are on that partition, whereas
noexec does not let you do that. noexec might be useful for a
partition that contains no binaries, like /var, or contains binaries
you do not want to execute on your system, or that cannot even be
executed on your system, as might be the case of a Windows partition.
Does exec/noexec make everything executable?
No, the exec/noexec attribute simply gates the execution of things that are marked as executable through their permission bits; it doesn't affect the permissions directly.
What about Window's binaries?
However, the setting of exec/noexec has no control over Windows executables, only Unix executables that can also reside on these shares.
Also, I'm not even sure how these would come into play if you're mounting a CIFS/Samba share through /etc/fstab; when would a Windows OS even enter the mix in this scenario? Windows would/could mount this share itself directly and not even bother going through Linux.
Testing it out
Example from Unix
You can test this out using mount.cifs directly via the command line like so. Assuming we had a file on the CIFS/Samba share as follows:
$ cat cmd.bash
#!/bin/bash
echo "hi"
$ chmod +x cmd.bash
Now we mount it like so, and try and run out script, cmd.bash:
$ mount.cifs //server/cifsshare /path/to/cifsmnt -o user=joeuser,noexec
$ cd /path/to/cifsmnt
$ ./cmd.bash
bash: ./cmd.bash: Permission denied
If we omit that option, noexec:
$ mount.cifs //server/cifsshare /path/to/cifsmnt -o user=joeuser
$ cd /path/to/cifsmnt
$ ./cmd.bash
hi
From Windows
The only scenario I could conceive of here would be if I was using something like Virtualbox and I mounted a CIFS/Samba share inside of a directory that a Windows VM could then utilize.
When I tested this out, I was successfully able to run .exe files through this mounting setup.
NOTE: I used the \\vboxsrv share mechanism in Virtualbox to mount my home directory that's local on my system, /home/saml. I then ran this command, mounting a CIFS/Samba share as a directory inside /home/saml.
$ mkdir /home/saml/cifsmnt
$ mount //server/cifsshare /home/saml/cifsmnt -o user=joeuser,noexec
Conclusions
Doing the above would seem to indicate that exec/noexec has no bearing on Windows' access to the files.
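For reference, the command-line mounts shown earlier map onto an fstab entry along these lines (share, mount point, and user are the placeholders used above; real entries usually keep credentials in a separate credentials= file rather than inline):

```
//server/cifsshare /path/to/cifsmnt cifs username=joeuser,noexec 0 0
```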
| How does Fstab exec noexec parameter affects samba shares |
1,593,417,547,000 |
I recently got an SSD for my computer. Therefore I reinstalled my system and mounted / on /dev/sda1 (which is a partition on the SSD).
To protect the SSD, I managed to mount /tmp on the RAMdisk. However, I would also want some other folders to be outsourced, not on the SSD but on my RAID1.
The following folders should be outsourced:
/var/log
/var/cache
/var/games
/var/tmp
(do you have any other suggestions?)
I tried to simply mount those folders on a RAID partition, just like I did with /tmp (see part of my /etc/fstab below). Now I know this was not the right way; instead I would have to use bind mounts.
I would need your help for the following issues:
What preparations are necessary on the RAID (what about permissions, especially)?
What are the proper mount options in /etc/fstab?
Now that I have done it the wrong way, how would I migrate the data to the correct place (and is this even necessary for those folders?)
a wrong part of my /etc/fstab
<raid uuid> is the same for all of these lines
UUID=<raid uuid> /var/log ext4 noexec,nodev,nosuid 0 0
UUID=<raid uuid> /var/cache ext4 noexec,nodev,nosuid 0 0
UUID=<raid uuid> /var/games ext4 noexec,nodev,nosuid 0 0
UUID=<raid uuid> /var/tmp ext4 noexec,nodev,nosuid 0 0
|
Mount the raid partition to /mnt/var
UUID=<raid uuid> /mnt/var ext4 defaults 0 0
Create mount point /mnt/var
cd /mnt; mkdir var
Reboot
Copy content into /mnt/var
cp -a /var/log /mnt/var
cp -a /var/cache /mnt/var
cp -a /var/games /mnt/var
cp -a /var/tmp /mnt/var
Modify fstab as follow to mount them to /var on next boot
UUID=<raid uuid> /mnt/var ext4 defaults 0 0
/mnt/var/log /var/log none bind 0 0
/mnt/var/cache /var/cache none bind 0 0
/mnt/var/games /var/games none bind 0 0
/mnt/var/tmp /var/tmp none bind 0 0
Reboot
| How to mount some folders on a different partition |
1,593,417,547,000 |
I'd like to make a temporary change to my fstab file so that my /home is on another drive. However, I don't want the whole partition to be mounted, but just a folder ("home") on that partition. I'm OK with the rest of the data being unavailable.
What's the canonical way of expressing this in fstab? I can't think of a way to do it in one command (as I can't reference a folder on a filesystem I haven't mounted). I think I should do a first mount and then move the folder to /home. But I don't know if I can do a move in fstab, haven't found it in man (and I don't feel like trying blindly because I only have ssh access to the machine right now).
For now I have a bind mount in fstab:
/dev/sdd1 /mnt/temphome ntfs defaults,errors=remount-ro 0 2
/mnt/temphome/home /home none bind
However this leaves /dev/sdd1 mounted in both points.
To summarize:
can I do a move mount operation in fstab and if yes, then how?
is that the right approach and if not, what is?
Thanks in advance.
|
I don't think you can perform moves from /etc/fstab. If you want to do that, add a mount --move command in /etc/rc.local. That leaves a time in the boot process during which the home directories are not available at their final location. Since these are the home directories, they shouldn't be used much if at all during the boot process, so that's ok. The one thing I can think of is @reboot crontab directives. If you have any of these, the home directories need to be available, so you should add mount --move to the right place in /etc/rc.sysinit instead (just after mount -a).
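As a sketch of that rc.local approach (paths assumed from the question): fstab mounts /dev/sdd1 on /mnt/temphome as before, and rc.local then exposes only the subfolder and detaches the full-partition view:

```
# /etc/rc.local fragment (sketch, untested): expose only the home
# subfolder at /home, then unmount the full-partition view; on Linux
# the bind normally keeps the subfolder reachable at /home.
mount --bind /mnt/temphome/home /home
umount /mnt/temphome
```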
Using a bind mount is probably fine, though. What can go wrong is mainly processes that traverse the whole disk, such as backups and updatedb. Leaving the bind mounts in /etc/fstab is the least risky option, but you should configure disk traversal processes to skip /mnt/temphome/home.
Yet another possibility is to make /home a symbolic link. However this may cause some programs to record the absolute path to users' home directories, which would be /mnt/temphome/home/bob. A bind mount or moving a submount doesn't have this problem.
| Mount a folder on a drive in fstab. Move? |
1,593,417,547,000 |
I'm running Arch Linux on a Macbook. I want to automatically mount my Macintosh partition when booting Arch, so I added the following to /etc/fstab:
/dev/sda2 /media/Machintosh hfsplus defaults 1 2
After rebooting, the partition was not mounted, but I could mount it with the following command:
sudo mount /dev/sda2
How can I make Arch Linux automatically mount the partition?
|
Creating the mount point fixed the issue:
mkdir /media/Machintosh
Also, if you want to avoid warnings, mount the volume read-only, because writing is not supported on journaled HFS+ filesystems (you can disable journaling, but that is not advised).
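Putting both fixes together, the fstab entry might look like this (read-only, matching the mount point from the question; adjust if you disable journaling and want write access):

```
/dev/sda2 /media/Machintosh hfsplus ro 0 0
```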
| fstab not mounting a disk on boot |
1,593,417,547,000 |
on my Raspberry Pi I have an SD card with NOOBS and an installed Raspbian. Everything went fine without any problems; it booted directly into Raspbian. Then I made a wrong entry in fstab and booting was no longer possible ("root locked, unable to mount /mnt/server...", something like that, anyway). So I put the SD card in a laptop with Linux Mint and commented out the wrong line with sudo nano /etc/fstab. Back on my RPi, the NOOBS recovery menu now comes up with "select an os to boot", but the window is empty. Even when forcing recovery (Shift key), it no longer recognizes that Raspbian is installed. Before I edited fstab, it had recognized this.
Is there any way to fix that? like fixmbr to boot directly into the raspbian?
More data:
fdisk -l
Device Boot Start End Sectors Size Id Type
/dev/sdb1 8192 3275390 3267199 1,6G e W95 FAT16 (LBA)
/dev/sdb2 3275391 15757311 12481921 6G 5 Erweiterte
/dev/sdb5 3276800 3342333 65534 32M 83 Linux
/dev/sdb6 3342336 3483647 141312 69M c W95 FAT32 (LBA)
/dev/sdb7 3489792 15757311 12267520 5,9G 83 Linux
proc /proc proc defaults 0 0
/dev/mmcblk0p6 /boot vfat defaults 0 2
/dev/mmcblk0p7 / ext4 defaults,noatime 0 1
# a swapfile is not a swap partition, no line here
# use dphys-swapfile swap[on|off] for that
# //j6p-w7-srv/R /mnt/server cifs username=joe6pack,password=xxxxxxxxxxxxx,file_mode=0666,dir_mode=0666 0 0
# https://webdav.magentacloud.de /mnt/webdav davfs user,rw,file_mode=0777,dir_mode=0777,gid=davfs2 0 0
the line with /mnt/server was the problem
this is the fstab from the last partition sdb7 (mmcblk0p7)
here the cmdline.txt from sdb6 (mmcblk0p6)
dwc_otg.lpm_enable=0 console=serial0,115200 console=tty1 root=/dev/mmcblk0p7 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait
|
I solved it: the problem was a corrupted file in the SETTINGS partition, a file called installed_os.json. It was defective, for whatever reason. As a result, NOOBS could not find any installed Linux, and so the selection window remained empty.
In my case, the file had to contain the following:
[
{
"description" : "A Debian wheezy port, optimised for the Raspberry Pi",
"folder" : "/mnt/os/Raspbian",
"icon" : "/mnt/os/Raspbian/Raspbian.png",
"name" : "Raspbian",
"partitions" : [
"/dev/mmcblk0p6",
"/dev/mmcblk0p7"
],
"release_date" : "2014-01-07"
}
]
Now NOOBS knew about an already installed Linux and was able to boot without any problems.
| edit fstab on another computer, no more bootable |
1,593,417,547,000 |
During the installation I thought I had generated the correct fstab, but now, after cp complains that the filesystem is full, I discover that fstab is empty.
I have this configuration:
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 232.9G 0 disk
├─sda1 8:1 0 238M 0 part
├─sda2 8:2 0 1.9G 0 part
├─sda3 8:3 0 23.3G 0 part /
└─sda4 8:4 0 207.5G 0 part
But this is absolutely not what I want. I'd like to mount /dev/sda1 as /boot and /dev/sda4 as /home/user. /dev/sda2 is swap.
The problem is that I already added a ton of files in /home/user, all of which went in /dev/sda3, because that one was the only one mounted.
How do I fix this?
I could boot from a live cd, mount the partitions, run genfstab, but then? Where do my files under /home/user go? Is it safe?
Should I move all my home content into a temporary directory, mount /dev/sda4 and copy everything back?
|
Your files are in the sda3 partition, so changing fstab by itself won't move the files.
You could (among several alternatives)
mkdir /home.new
mount /dev/sda4 /home.new
cp -a /home/user /home.new
umount /home.new
rm -rf /home
mv /home.new /home
mount /dev/sda4 /home
then edit fstab to mount /dev/sda4 on /home (e.g. /dev/sda4 /home ext4 defaults 0 2)
| Archlinux, move files to a different partition |
1,593,417,547,000 |
I am using Arch Linux. Is there a way to automount other NTFS or Ext partitions automatically without configuring them in /etc/fstab?
|
There is https://www.archlinux.org/packages/extra/x86_64/gnome-disk-utility/ which is a GUI application that will allow you to generate an fstab entry automatically.
| Auto mounting other partitions in Arch Linux |
1,593,417,547,000 |
I believe the question is clear, but I will add some details and history. I have two systems, Win10 and Manjaro Linux. After rebooting from Windows to Linux, I try to mount NTFS filesystems, and the mount often fails with a message about an unclean cache. The medicine is ntfsfix /dev/sdXX, or better, ntfsfix /dev/disk/by-label/my-ntfs-partition
I added string to fstab:
LABEL=Media /media/Media ntfs nofail 0 2
I want Linux to fix NTFS for me: if the mount fails, it should call ntfsfix and then retry the mount.
Please help me explain to Linux what I want.
|
Create a bash file containing the following and set it to run at startup.
#!/bin/bash
#delay for 10 seconds
sleep 10
#Check whether Media has failed to mount and, if so, fix and remount it
if ! mount | grep -q Media; then
    ntfsfix /dev/disk/by-label/my-ntfs-partition && mount -t ntfs /dev/path/to/ntfsdisk /media/Media
fi
Running scripts on startup varies depending on the desktop environment so I can't really comment on that. E.g. Achieving it in Gnome is different to Openbox.
| How to run a script when an fstab mount fails and retry once? Like handling an exception |
1,593,417,547,000 |
So there are a bunch of NICs and VLANs on the server, and after adding the last ones the auto-mount of a network volume stopped working. I can do it all right if I SSH in and use mount; however, it doesn't happen via fstab.
What I'm speculating is that the interfaces needed to reach the networked storage come up too late. I have no experience with the startup sequence, so I only have a few bits of debug output to show, such as:
# chkconfig --list
networking 0:off 1:off 2:off 3:off 4:off 5:off 6:off S:on
nfs-common 0:off 1:off 2:on 3:on 4:on 5:on 6:off S:on
(that's among others, do ask if you need to see other entries, I don't actually know what would be needed)
How can I make the NFS mounting happen at a later point? The output above suggests to me that perhaps the NFS setup happens too early.
# ls -lh /etc/rc5.d/
total 4.0K
-rw-r--r-- 1 root root 677 Mar 27 2012 README
lrwxrwxrwx 1 root root 17 Apr 10 13:46 S14portmap -> ../init.d/portmap
lrwxrwxrwx 1 root root 20 Apr 10 13:46 S15nfs-common -> ../init.d/nfs-common
lrwxrwxrwx 1 root root 20 Apr 11 11:19 S17fancontrol -> ../init.d/fancontrol
lrwxrwxrwx 1 root root 17 Apr 10 13:46 S17rsyslog -> ../init.d/rsyslog
lrwxrwxrwx 1 root root 14 Apr 10 15:59 S17sudo -> ../init.d/sudo
lrwxrwxrwx 1 root root 17 Apr 11 11:18 S18apache2 -> ../init.d/apache2
lrwxrwxrwx 1 root root 15 Apr 11 11:18 S19acpid -> ../init.d/acpid
lrwxrwxrwx 1 root root 13 Apr 11 11:18 S19atd -> ../init.d/atd
lrwxrwxrwx 1 root root 14 Apr 11 11:18 S19cron -> ../init.d/cron
lrwxrwxrwx 1 root root 15 Apr 11 11:18 S19exim4 -> ../init.d/exim4
lrwxrwxrwx 1 root root 21 Apr 11 11:18 S19mpt-statusd -> ../init.d/mpt-statusd
lrwxrwxrwx 1 root root 17 Apr 11 11:18 S19nagios3 -> ../init.d/nagios3
lrwxrwxrwx 1 root root 17 Apr 11 20:49 S19postfix -> ../init.d/postfix
lrwxrwxrwx 1 root root 15 Apr 12 19:15 S19rsync -> ../init.d/rsync
lrwxrwxrwx 1 root root 13 Apr 11 11:18 S19ssh -> ../init.d/ssh
lrwxrwxrwx 1 root root 18 Apr 11 11:18 S21bootlogs -> ../init.d/bootlogs
lrwxrwxrwx 1 root root 18 Apr 11 11:18 S22rc.local -> ../init.d/rc.local
lrwxrwxrwx 1 root root 19 Apr 11 11:18 S22rmnologin -> ../init.d/rmnologin
lrwxrwxrwx 1 root root 23 Apr 11 11:18 S22stop-bootlogd -> ../init.d/stop-bootlogd
How can I delay or change the order of events so that the volume mounts via fstab? What should I look for, anyway?
|
OK, I could work around it by putting the mount command in rc.local (it also worked as @reboot root mount -a in /etc/crontab). But if someone has suggestions about fixing it properly by changing the service startup order, that would be nice to know.
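For reference, another route sometimes used (untested here; server path is a placeholder) is to mark the fstab entry as network-dependent so it is mounted only after networking is up, optionally with bg so the mount retries in the background:

```
nfsserver:/export/share /mnt/share nfs _netdev,bg 0 0
```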
| Debian: Mounting NFS Volume at boot with fstab not working, How to change when this is attempted on boot |
1,593,417,547,000 |
I put UUID=fb2b6c2e-a8d7-4855-b109-c9717264da8a / ext4 auto,noatime,noload,data=ordered,commit=10,defaults 1 1 in fstab
And now server fails to reboot. It can reboot but reject all kind of connections.
This is what my provider said:
Yeah, the noload option might be problematic... I can't edit /etc/fstab from
single user mode, but I might be able to edit it using one of my pxe boot tools
to enter the filesystem manually. With regards to your request about
/var/log/messages and /var/log/secure, I'm afraid I can't do that for you
(technically, I'm already bordering on managed services by editing your fstab
for you, but I am justifying it as necessary to restore connectivity)...
I search for the purpose of noload option in google
https://www.google.com/search?q=fstab+noload&ie=utf-8&oe=utf-8&aq=t&rls=org.mozilla:en-US:official&client=firefox-a
and have no issue. Someone in linux forum says that it disable journaling.
I'm unable to paste the actual error messages without manually typing
them out, but I assure you they're not really very descriptive
(basically, system indicates that it is unable to remount root in
read/write mode, and then errors composed of read/write problems,
particularly in /var are printed to the screen)...
It does end in a happy ending:
Your server is back online, and I was able to successfully disable
your iptables (which for reference, I did confirm were causing
connectivity issues), and I am now able to ssh into your server with
the credentials provided earlier in this ticket:
|
noload doesn't turn off journaling. It suppresses the loading of the journal, without turning off journaling. As you can imagine, that's usually not a good thing.
noload is mostly useful to mount a disk as read-only without changing it in the slightest way, not even replaying the journal. You can read most data this way, and you can even read all data if the journal was flushed (by calling sync). For example, this is a way of reading from a filesystem that is currently mounted by a hibernated system.
noload may or may not be the cause of your problem, but in any case it's a very bad idea, and definitely not something to use in /etc/fstab.
| What does the noload option do in fstab? |
1,593,417,547,000 |
I want to start editing my /etc/fstab file more comfortably and not rely on random forums anymore. But wherever I go, I see very scarce info about it. Nowhere can I find a webpage that, for example, explains all of the available options. So, who owns the fstab file, what program uses it, and where can I find the official documentation for it from its creator?
Specifically, I want to understand the difference between none, mem and tmpfs devices (the first field). I know, I can probably google it and eventually find the answer, but as I said, I don't want to do it anymore, I want to go full-geek mode and read from the official resources.
EDIT: Quick answer: The difference should be only in the string (the name) and should only matter to systemd that reads the file when mounting filesystems.
|
The documentation for system files such as fstab is (almost always) on your machine. In this instance man fstab will answer your question - up to a point:
The first field (fs_spec). This field describes the block special device or remote filesystem to be mounted.
[...] For filesystems with no storage, any string can be used, and will show up in df(1) output, for example. Typical usage is proc for procfs; mem, none, or tmpfs for tmpfs.
You've just mentioned in a comment that you want to create a tmpfs entry. Here's an example of one for /mnt/mytmpfs:
tmpfs /mnt/mytmpfs tmpfs nosuid,nodev,noatime 0 0
Don't forget to create the directory yourself (mkdir /mnt/mytmpfs).
| Where is the official documentation for /etc/fstab? |
1,593,417,547,000 |
To have /tmp on tmpfs, I know I can use an entry in /etc/fstab, but I do not understand the role of /etc/default/tmpfs mentioned sometimes, and in what case I need to create or modify it.
Recently, I have often seen it suggested to use the systemd tmp.mount configuration. For example, on Debian:
$ sudo cp /usr/share/systemd/tmp.mount /etc/systemd/system/
$ sudo systemctl enable tmp.mount
Which of the two methods is more appropriate for everyday use? In what situations one is better than the other? When do I need to deal with /etc/default/tmpfs?
|
On some systems, /tmp is a tmpfs by default, and this is the configuration provided by systemd’s “API File Systems”. Fedora-based systems follow this pattern to various extents; Fedora itself ships /usr/lib/systemd/system/tmp.mount and enables it, but RHEL 8 ships it without enabling it. On such systems, masking and unmasking the unit is the appropriate way of disabling or enabling a tmpfs /tmp, as documented in the API File Systems documentation.
Other systems such as Debian don’t ship tmp.mount in a directly-usable location; this is why you need to copy it to /etc/systemd/system if you want to use it. This has the unfortunate side-effect of creating a full override of tmp.mount in /etc, which means that if the systemd package ships a different version of tmp.mount in /lib/systemd/system in the future, it will be ignored. On such systems I would recommend using /etc/fstab instead.
In both setups, /etc/fstab is still the recommended way of customising /tmp mounts, e.g. to change their size; man systemd.mount says
In general, configuring mount points through /etc/fstab is the preferred approach to manage mounts for humans.
and the API File Systems documentation concurs.
Using mount units is recommended for tooling, i.e. for automated configuration:
For tooling, writing mount units should be preferred over editing /etc/fstab.
(This means that tools which want to automatically set up a mount shouldn’t try to edit /etc/fstab, which is error-prone, but should instead install a mount unit, which can be done atomically and can also be overridden by a system administrator using systemd features.)
/etc/default/tmpfs is used by Debian’s sysvinit, so it’s irrelevant with systemd.
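For the fstab route, a typical /tmp entry looks like this (the size is an illustrative choice; mode=1777 preserves the sticky, world-writable semantics expected of /tmp):

```
tmpfs /tmp tmpfs mode=1777,nosuid,nodev,size=2G 0 0
```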
| tmp on tmpfs: fstab vs tmp.mount with systemd |
1,593,417,547,000 |
My system takes exactly 95 seconds to boot: 5 seconds actual boot and 90 seconds waiting for a nonexistent drive:
(...boot.log...)
A start job is running for dev-disk-by\x2duuid-6bbb4ed8\x2d53ea\x2d4603\x2db4f7\x2d1205c7d24e19.device (1min 29s / 1min 30s)
Timed out waiting for device dev-disk-by\x2duuid-6bbb4ed8\x2d53ea\x2d4603\x2db4f7\x2d1205c7d24e19.device.
This device is not listed in fstab, and I did not even manage to find the piece of hardware (usb disks etc.). Where can it come from and how can I disable it?
I have ecryptfs on my home directory, and I have manually disabled swap in order to save my SSD disk.
|
The file /etc/crypttab is a (less known) counterpart of fstab for managing crypto filesystems. The default installation of Ubuntu configured an encrypted swapfile:
cryptswap1 UUID=6bbb4ed8-53ea-4603-b4f7-1205c7d24e19 /dev/urandom swap,offset=1024,cipher=aes-xts-plain64
Originally I had disabled this swap partition in fstab only, which is not enough.
Anybody who knows more about the purpose and inner workings of /etc/crypttab is welcome to extend this vague self-answer of mine.
| Why does systemd wait for a disk not present in `fstab`? |
1,593,417,547,000 |
The following lines are defined in my /etc/fstab file.
My current fstab:
/dev/sdb /lpo/sda ext4 defaults,noatime 0 0
/dev/sdc /lpo/sdb ext4 defaults,noatime 0 0
From blkid we get:
/dev/sdb: UUID="14314872-abd5-24e7-a850-db36fab2c6a1" TYPE="ext4"
/dev/sdc: UUID="6d439357-3d20-48de-9973-3afb2a325eee" TYPE="ext4"
How to update my current fstab (the two lines) to use the UUID?
For example, if I create the following line (according to the man page) for /dev/sdb, is it correct?
UUID="14314872-abd5-24e7-a850-db36fab2c6a1" /dev/sdb ext4 defaults,noatime 0 0
|
UUID="14314872-abd5-24e7-a850-db36fab2c6a1" /lpo/sda ext4 defaults,noatime 0 0
UUID="6d439357-3d20-48de-9973-3afb2a325eee" /lpo/sdb ext4 defaults,noatime 0 0
The format of entries in fstab are as follows:
<file system> <dir> <type> <options> <dump> <pass>
Where <file system> is the device you want to mount (such as /dev/sdb and <dir> is the path to where the device should be mounted (/lpo/sda in your case).
There are multiple ways to specify <file system>. The simplest is the path to the device in question, /dev/sdb in your case (typically entries point to a partition on a drive, such as /dev/sdb1, but it appears your drives lack a partition table and simply have the filesystem on the whole device). You can also use the device's UUID or PARTUUID by specifying it as a key/value pair, e.g. UUID="14314872-abd5-24e7-a850-db36fab2c6a1", in place of /dev/sdb.
The main reason to use UUID or PARTUUID instead of device paths is that they stay stable when the physical disks change. Devices are numbered according to how the BIOS presents them to the OS (normally ordered by the socket they are plugged into), so adding a new device or physically rearranging existing ones renumbers them, and what was /dev/sdb before might not be now. As you can imagine, this can result in the wrong disk being mounted at the wrong location. The UUID is written to the disk when the filesystem is created, and the PARTUUID when the partition is created; these identifiers always remain the same, so the correct disk can be mounted even when the underlying device file gets renumbered.
Side note: Your devices are a bit confusing - you have /dev/sdb mounted to /lpo/sda - while that works it can be confusing and lead to errors when you maintain/configuring your system, you may want to make these more consistent.
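The substitution above can also be scripted. The sketch below (a hypothetical helper; the UUIDs are taken from the question's blkid output, and in practice you would read them with blkid -s UUID -o value /dev/sdX) rewrites fstab-style lines with sed:

```shell
#!/bin/bash
# Sketch: replace known device paths with their UUIDs in fstab-style lines.
# The device/UUID pairs are taken from the blkid output in the question.
to_uuid() {
    sed -e 's|^/dev/sdb |UUID="14314872-abd5-24e7-a850-db36fab2c6a1" |' \
        -e 's|^/dev/sdc |UUID="6d439357-3d20-48de-9973-3afb2a325eee" |'
}

printf '%s\n' \
  '/dev/sdb /lpo/sda ext4 defaults,noatime 0 0' \
  '/dev/sdc /lpo/sdb ext4 defaults,noatime 0 0' | to_uuid
```

Run against the two lines from the question, it prints exactly the UUID-based entries shown at the top of this answer.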
| How to update fstab file with UUID? |
1,593,417,547,000 |
In my search for the ideal filesystem to share files between a lot of computer with a lot of different OS'es I accepted this answer and installed a UDF filesystem on my USB stick.
First I blanked the disk, to make sure there are no leftovers to confuse a system that's reading the drive:
dd if=/dev/zero of=/dev/sdb bs=1M
Then I formatted the drive, using udftools from arch linux's AUR:
sudo mkudffs --media-type=hd --blocksize=512 /dev/sdb
Obviously, the drive was in /dev/sdb.
Now my question is: since the drive doesn't have any traditional partitions, or even a partition table as far as I know, it does not have a UUID. Therefore, I cannot add it to fstab, which I find rather annoying.
What can I do to fix this (e.g. is there an alternative way to set default mount point and options, or an alternate partitioning option)?
|
Choose a blocksize of at least 2K (which is the default) and add --vid= to your mkudffs parameters. (The blkid from util-linux doesn't seem to cope with smaller blocksizes.)
$ mkudffs --media-type=hd --vid=my-drive /dev/sdj
$ blkid /dev/sdj
/dev/sdj: LABEL="my-drive" TYPE="udf"
Now you can use LABEL=my-drive in /etc/fstab.
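The resulting fstab entry could then look like this (mount point and extra options are illustrative; noauto,user lets an ordinary user mount the stick on demand):

```
LABEL=my-drive /mnt/usb udf noauto,user 0 0
```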
| UDF and fstab (no UUID) |
1,593,417,547,000 |
I am trying to understand what the precedence and combination of the permission options set in fstab when mounting a disk with those that are associated with each file on disk in the case of ext4 being the file-system in use.
More specifically:
exec and executable flag
suid and suid flag
dev
defaults vs nothing at all
For instance does rw in fstab mean that the files will have read and write permissions when mounted?
What will happen if they have only read associated with the file?
Do the mount options affect the permissions of the mounted files as stored on on disk? Or do they filter them out somehow keeping only what is allowed in both?
What happens to files newly created on the mounted disk?
There are many different articles out there about this particular aspect of Linux permissions, but none of those I stumbled upon tackles the issue in its entirety.
If someone has a link to such an article it would be very nice to share it!
|
Mount options don't affect the stored permissions bits, but they affect the effective permissions. For example, it's possible to have a file with execute permissions (i.e. chmod a+x myfile has succeeded, ls -l shows the file having execute permissions, etc.), but if the filesystem is mounted with the noexec option, then attempting to execute the file results in a “permission denied” error. Similarly the ro option causes any attempt to write to fail, the nodev option causes any attempt to access a device to fail (even though devices can be created), and the nosuid option causes any attempt to execute a file to ignore the setuid and setgid bits.
Another way to put it is that the algorithm to decide whether a file operation is allowed goes something like this:
If write permission is needed and the filesystem is mounted ro, deny immediately.
If execute permission is needed and the filesystem is mounted noexec, deny immediately.
If the file is a device and the filesystem is mounted nodev, deny immediately.
If the file's owner is the user of the process attempting access, allow or deny based on the user permission bits stored in the filesystem.
If the file's group is one of the groups of the process attempting access, allow or deny based on the group permission bits stored in the filesystem.
Allow or deny based on the “other” permission bits stored in the filesystem.
(I simplified to show only the most important parts for our purposes here. Other considerations include access control lists, extended attributes such as immutable and append-only, and security modules such as SELinux and AppArmor. The ultimate complete and accurate — but not easy-to-read — reference would be the source code, e.g. the may_open function in the Linux kernel.)
And the setuid/setgid determination is not done (the setuid/setgid bits from the file metadata are not taken into account) if the filesystem is mounted nosuid.
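The decision order described above can be sketched as a small shell function (purely illustrative; it models only the ro/noexec mount options plus a single permission digit, and ignores nodev, ACLs, and the owner/group selection steps):

```shell
#!/bin/bash
# Illustrative sketch of the access-decision order described above.
# may_access MOUNT_OPTS WANT BITS
#   MOUNT_OPTS: comma-separated mount options, e.g. "rw,noexec"
#   WANT:       requested access, one of r/w/x
#   BITS:       the octal permission digit that applies to the caller (0-7)
may_access() {
    opts=",$1,"; want=$2; bits=$3
    # Mount-level checks come first and deny immediately.
    case $opts in *,ro,*)     [ "$want" = w ] && { echo deny; return; } ;; esac
    case $opts in *,noexec,*) [ "$want" = x ] && { echo deny; return; } ;; esac
    # Then the permission bits stored in the filesystem decide.
    case $want in r) mask=4 ;; w) mask=2 ;; x) mask=1 ;; esac
    if [ $(( bits & mask )) -ne 0 ]; then echo allow; else echo deny; fi
}

may_access rw,noexec x 7   # deny: noexec wins even though the file is 7 (rwx)
may_access ro w 6          # deny: a read-only mount beats the write bit
may_access rw r 4          # allow: mount permits it and the read bit is set
```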
| How fstab mount options work together with per file defined permissions in linux |
1,593,417,547,000 |
I just mounted sda2 to /mnt. How can I force a refresh of fstab so it can pick up the changes and insert a new line for sda2-/mnt?
|
You need to manually edit the file fstab. To find out what to put in there, issue the mount command and look at its output.
| How do I add newly created mount point to fstab? |
1,593,417,547,000 |
My understanding is /proc/mounts should list all mount options for a filesystem, including kernel defaults, so I was surprised to see that exec (among others) is not listed here?
For example, my root and home filesystems in fstab:
/dev/mapper/vg0-xen_root / ext4 noatime,errors=remount-ro 0 1
/dev/mapper/vg1-xen_home /home ext4 defaults 0 2
and how they appear in /proc/mounts:
/dev/mapper/vg0-xen_root / ext4 rw,noatime,errors=remount-ro,user_xattr,barrier=1,data=ordered 0 0
/dev/mapper/vg1-xen_home /home ext4 rw,relatime,user_xattr,barrier=1,data=ordered 0 0
The filesystem independent defaults documented in man mount:
defaults
Use default options: rw, suid, dev, exec, auto, nouser, and async.
Why are some defaults (e.g. rw) listed but others (e.g. exec) are not? Is there a way to get the complete set of mount options associated with a filesystem?
|
Files in /proc are generated by the kernel, not by the mount utility. The kernel omits options that are in their default kernel setting. The defaults of the mount utility don't always match the kernel defaults. You can check the defaults for your kernel version in the source code, in fs/proc_namespace.c. For example, as of version 3.15, noexec is displayed if applicable; nothing is displayed in the no-noexec (i.e. exec) case.
| Why is the exec option not listed in /proc/mounts? |
1,593,417,547,000 |
I have a 4TB external hard drive connected to an Linux server.
The fstab permissions on this drive are set so that only one particular non-root user has access to it:
/dev/disk/by-uuid/CEE0476DE0388DA9/ /mnt/USBexternal ntfs-3g defaults,auto,uid=51343,gid=50432,umask=077 0 0
From a remote location, this user has been successful at doing rsync backups to this external hard drive.
However, the external drive doesn't stay mounted as reliably as an internal hard drive does. Every couple of days I have to log in as root and run this command:
mount -a
I would like to give this user the ability to mount this drive, but when the non-root user does mount -a, it tells them they do not have permission to do this:
nonrootuser@server:~$ mount -a
mount: only root can do that
When the non-root user tries to mount this drive specifically, it tells them it is already mounted (even though it isn't):
nonrootuser@server:~$ mount /mnt/USBexternal/
mount: according to mtab, /dev/sdb1 is already mounted on /mnt/USBexternal
As mentioned, the drive is not actually mounted, but (because of the output above) if the non-root user tries to unmount the drive, it says their request disagrees with fstab:
nonrootuser@server:~$ umount /mnt/USBexternal/
umount: /mnt/USBexternal/ mount disagrees with the fstab
How can I permit this user the ability to mount this drive, without giving them any other administrative powers?
|
You can setup an entry in the /etc/sudoers file for this user to be able to use the mount command. Add something like the following to the end of the /etc/sudoers file:
username ALL=NOPASSWD: /usr/bin/mount, /sbin/mount.ntfs-3g, /usr/bin/umount
Be sure that the exact path to each executable is correct for your system. For example, your mount command might be in /bin instead of /usr/bin.
Adding the mount.ntfs-3g part is important to provide that access for the user. I can see in your mount command that you are using a ntfs-3g filesystem type.
You could, instead, create a shell script to handle the mounting/unmounting and place that in your sudoers file. For example:
create /usr/local/bin/mount-ntfs-drive script:
#!/bin/bash
device_path="/dev/disk/by-uuid/CEE0476DE0388DA9/"
mount_point="/mnt/USBexternal"
if [ "$1" = "-u" ] ; then
# do unmount
/bin/umount $mount_point
else
# do mount
/bin/mount $device_path $mount_point
fi
edit /etc/sudoers file:
username ALL=NOPASSWD: /usr/local/bin/mount-ntfs-drive
Be sure to do chmod +x /usr/local/bin/mount-ntfs-drive. Also, when your user runs the script, they will need to use the fully qualified path for it to work; it might work from their PATH, but I'm not sure.
sudo /usr/local/bin/mount-ntfs-drive
| Allow NonRoot User to Mount a Particular NTFS External Hard Drive |
1,593,417,547,000 |
I have a basic understanding of fsck utility but what does "fsck" section in /etc/fstab denote? It has values 0, 1, 2 what are these values?
Googling says:
0 - it won't be checked
1 - it will be checked on boot
2 - now what is this one?
|
From the fstab(5) man page:
The sixth field (fs_passno).
This field is used by the fsck(8) program to determine the order
in which filesystem checks are done at reboot time. The root
filesystem should be specified with a fs_passno of 1, and other
filesystems should have a fs_passno of 2. Filesystems within a
drive will be checked sequentially, but filesystems on different
drives will be checked at the same time to utilize parallelism
available in the hardware. If the sixth field is not present or
zero, a value of zero is returned and fsck will assume that the
filesystem does not need to be checked.
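As a concrete sketch, the usual convention in /etc/fstab is 1 for the root filesystem, 2 for other local filesystems, and 0 for swap, network mounts and anything else that should never be checked (the UUIDs below are placeholders):

```
UUID=<root-uuid>  /      ext4  defaults  0 1
UUID=<home-uuid>  /home  ext4  defaults  0 2
UUID=<swap-uuid>  none   swap  sw        0 0
```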
| What is in fsck section in fstab? |
1,593,417,547,000 |
I'm trying to enable journaled usrquota on Debian 11, kernel 5.10. All the information I find uses external files, which leads to the following deprecation warning:
quotaon: Your kernel probably supports ext4 quota feature but you are using external quota files. Please switch your filesystem to use ext4 quota feature as external quota files on ext4 are deprecated.
My fstab entry uses the options errors=remount-ro,usrjquota=aquota.user,jqfmt=vfsv1
Which, as far as I understand, should enable the ext4 quota feature. However, after a reboot, when I run sudo quotaon -v / I get a deprecation warning and a complaint about a missing aquota.user file.
What confuses me is: Why do I have to specify a file name for usrjquota... As far as I understand the point of journaled quota is that we don't need a file any more.
If someone could provide the steps to enable journaled ext4 quotas it would be really appreciated.
|
To enable journaled quota, tune2fs is used; no mount options in /etc/fstab are needed. For example, assuming you want quotas enabled for /home, which is on /dev/sda2,
you do:
umount /home
tune2fs -O quota /dev/sda2
mount -a
quotaon -va
If you want to turn quota on for the root file system you need to boot from a live disk and use tune2fs on the related partition.
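Before remounting, you can confirm that the feature was actually set by listing the filesystem features (device name as in the example above; output abridged and illustrative):

```
$ sudo tune2fs -l /dev/sda2 | grep -i quota
Filesystem features:      has_journal ext_attr dir_index filetype extent 64bit quota ...
```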
| How to enable journaled quota on Debian 11 |
1,593,417,547,000 |
I noticed recently that it is possible to allow normal users to mount a device through fstab, but apparently in any case umount can only be done by root.
Even the man page of mount only talks about mounting:
Even more than having an actual solution to this, I'm wondering what's the reason behind this?
|
The reason behind this, as with many Unix/Linux peculiarities, is of course historical. Unix, which itself evolved out of Unics (a pun on its predecessor Multics), was designed as a true multi-user system. Users can log in either locally or remotely through getty and login, get a shell, and run their programs.
These days, the TTYs are virtual and login has been replaced by GDM/KDM, but utilities such as mount, df, ls, ps (which belong to the oldest Unix commands) still remain largely unchanged in purpose, although they have acquired many additional features over the years.
The commands mount and umount were originally only meant to be run by the system administrator, or root. As Unix evolved and spread to personal computers both mount and umount became SUID programs to enable regular users to mount and unmount filesystems, but only under strict conditions. From man mount:
Normally, only the superuser can mount filesystems. [...]
Note that mount is very strict about non-root users and all paths specified on command line are verified before fstab is parsed or a helper program is executed. [...]
It drops suid permissions and continue as regular non-root user. [...]
Only the user that mounted a filesystem can unmount it again. If any user should be able to unmount it, then use users instead of user in the fstab line.
Hence, both mount and umount are SUID programs that look for the user option or users option in /etc/fstab, then drop their root privileges and finally make the mount()/umount() system call.
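For example, an /etc/fstab line for a removable drive might look like one of these (device and mount point are illustrative):

```
# 'user': any user may mount it, but only the mounting user may unmount:
/dev/sdb1  /media/usb  vfat  noauto,user   0 0
# or 'users': any user may mount and unmount it:
#/dev/sdb1  /media/usb  vfat  noauto,users  0 0
```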
| mount and fstab: why can they be configured to allow users to mount but not umount? |
1,593,417,547,000 |
We are thinking about changing all of our Linux fstab configurations to use UUIDs instead of the current device names.
Some of the disks are non-RAID and some are RAID10.
I searched Google and found this complaint about using UUIDs with RAID1:
" Unfortunately you MUST NOT use UUID in /etc/fstab if you use software RAID1. Why? Because the RAID volume itself and the first element of the mirror will appear to have the same filesystem UUID. If the mirror breaks or for any other reason the md device isn't started at boot, the system will mount any random underlying disk instead, clobbering your mirror. Then you'll need a full resync. Bad juju."
So I just want to know: can we use UUIDs with RAID10?
And in which cases (RAID configurations) should UUIDs not be used?
Second: in a few lines, what are the benefits of using UUIDs?
|
Answer to your second question: a UUID allows you to uniquely identify a device.
Devices are assigned as /dev/sda, /dev/sdb, etc. depending on the order the system discovers them. While the drive the system boots on is always the first, for the others their name assignment depends on the order of discovery and might change after a reboot.
Also, imagine you have drives /dev/sdc and /dev/sdd, and you physically remove the first drive; after reboot, what was known as /dev/sdd is now called /dev/sdc.
This makes identification of devices ambiguous. UUIDs avoid all ambiguity; as the UUID is stored in the superblock (for a block device), it pertains to the device itself.
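To collect the UUIDs to put in /etc/fstab, including those of md RAID devices, you can use blkid or lsblk; the output below is only a sketch of its shape:

```
$ sudo blkid /dev/md0
/dev/md0: UUID="..." TYPE="ext4"
$ lsblk -o NAME,TYPE,FSTYPE,UUID
```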
| in which cases it will be problematic to configure UUID in fstab |
1,593,417,547,000 |
Is it possible to mount a single folder without the noexec option? I have a situation in which a web app in a user's home folder on the server has to be mounted without noexec to run correctly; however, I don't want to remove the restriction for all users, just for that one.
The fstab looks like this:
# /dev/sda3 /home ext3 defaults 0 2
UUID=cae3a489-22c1-43d8-aaf1-27306b32ebb0 /home ext3 defaults,noexec 0 2
So, removing noexec from here would allow all the users to run executables, and I need a solution to allow it only for the user user.
|
Do you mean you want to remove the noexec restriction on a directory in /home without removing it on the entire partition? If so, bind mounting the directory and remounting it with default options might work. But please conduct your own tests. Below is a dirty hack that seemed to work using EXT4, but it'd probably be cleaner/safer/better if you could bind mount the webapp directory somewhere besides on top of itself. This would have to run in a shell script, after mounts from fstab are complete:
mount --bind /home/user/webapp /home/user/webapp
mount /home/user/webapp -oremount,defaults
| Mount a folder without noexec |
1,593,417,547,000 |
I'm trying to deploy a Rails application into /home/app/myapp, but when the application tries to connect to MySQL, I get this error:
** [out :: 192.168.110.50] /home/app/myapp/shared/bundle/ruby/1.9.1/gems/mysql2-0.3.11/lib/mysql2/mysql2.so: failed to map segment from shared object: Operation not permitted - /home/app/myapp/shared/bundle/ruby/1.9.1/gems/mysql2-0.3.11/lib/mysql2/mysql2.so
The 'app' user has root privileges, so this makes no sense.
After googling, I found that noexec on the home folder can block such calls.
This my fstab file:
$cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Oct 17 16:48:10 2012
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VG00-LVbarra / ext4 defaults 1 1
UUID=3d5ccda7-932f-4b48-a010-9ddcb99873c0 /boot ext4 defaults 1 2
/dev/mapper/VG00-LVhome /home ext4 defaults,noexec,nosuid 1 2
/dev/mapper/VG00-LVtmp /tmp ext4 defaults,noexec,nosuid 1 2
/dev/mapper/VG00-LVusr /usr ext4 defaults 1 2
/dev/mapper/VG00-LVvar /var ext4 defaults,noexec,nosuid 1 2
How to remove noexec flag from home folder?
Thank you!
|
Looks like mprotect failed, but anyway, to remove the noexec flag, change
/dev/mapper/VG00-LVhome /home ext4 defaults,noexec,nosuid
To
/dev/mapper/VG00-LVhome /home ext4 defaults,nosuid
And remount /home with mount -o remount /home
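After the remount you can verify that noexec is gone from the live mount options (output illustrative):

```
$ findmnt -no OPTIONS /home
rw,nosuid,relatime
```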
| Remove noexec from Home folder |
1,477,101,611,000 |
Some background:
I have installed Linux Mint 17 Cinnamon on my laptop SSD on a 10GB formatted partition, and I have another 75GB HDD, both ext4 formatted.
The question part: I have two partitions; let's call them System (sda1) and Data (sdb1).
How do I move the folders home, usr, var and tmp to Data (sdb1) and make them accessible from the root system?
I tried symlinks and fstab with a lame logic that didn't work.
What i appended in my /etc/fstab:
UUID=XXX-Data-drive-UUID-XXX /media/data ext4 default 0 1
/media/data/tmp /tmp ext4 default 0 1
/media/data/home /home ext4 default 0 1
/media/data/usr /usr ext4 default 0 1
/media/data/var /var ext4 default 0 1
Can anyone at least point me in a right direction ?
-- Edited--
The answer worked, but just to help whoever follows this path:
First, I copied with the command cp -rp (-r for recursive and -p for keeping the same permissions; without -p everything will belong to root).
Then I changed /etc/fstab as in the answer; the bind clause really did the trick.
Then I rebooted with a live USB, only to rename the old folders on System (sda1) and avoid some kind of conflict.
And then I started normally, without a single error.
|
It sounds to me like you are trying to mount directories which are already mounted (or part of a mount) to a different location. The way to do this is with mount -o bind. So you would have something like this:
UUID=XXX-Data-drive-UUID-XXX /media/data ext4 defaults 0 1
/media/data/tmp /tmp ext4 defaults,bind 0 0
/media/data/home /home ext4 defaults,bind 0 0
/media/data/usr /usr ext4 defaults,bind 0 0
/media/data/var /var ext4 defaults,bind 0 0
(also you may mean defaults, not default, which I have changed here)
PS: The bind mounts should not be checked, so I have edited the answer to "0 0"
| Moving 4 system folders to 1 separated partition |
1,477,101,611,000 |
I created a logical volume like the following.
lvcreate -L 300G MyVolGroup -n homevol
As for mounting this volume after initializing a filesystem on it, a few guides I read used /dev/MyVolGroup/homevol. However, I noticed the root partition (as part of the default OS install) was mounted using /dev/mapper/MyVolGroup-root (this is a vanilla install of Fedora 35 Server).
Both symlink to ../dm-1. But I'm wondering if there's a good reason to use one over the other (the path under /dev/MyVolGroup or the path under /dev/mapper)?
|
It doesn't matter; you can use either of them. As you found out, these are just symlinks, and both are created by udev (the /dev/mapper/<vg>-<lv> one by the 10-dm.rules rule and the /dev/<vg>/<lv> one by 11-dm-lvm.rules), so they appear at the same time and there isn't really a reason to prefer one over the other.
I guess the /dev/<vg>/<lv> symlink can be seen as a more user friendly and more LVM-specific one and the /dev/mapper/<vg>-<lv> can be seen as a more low level one, because all device mapper devices have symlinks in /dev/mapper, not only the LVM ones.
As for why Anaconda (the Fedora installer) prefers the /dev/mapper path: I have no idea. It's just used in Blivet (the storage library Anaconda uses) as the default path for LVM devices, probably for no particular reason.
Two small notes:
You definitely should not use the /dev/dm-X device, the number is not persistent, dm-1 will simply be the first device mapper device created.
Using a UUID is usually preferred in fstab, but that is mostly for partitions, where /dev/sda1 is not guaranteed to be the same device between boots. You can use a UUID here too, but it's not necessary, because LVM names must be unique in the system, so MyVolGroup-root will always be the same device.
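You can confirm that both paths resolve to the same kernel device (names taken from the question, dm-1 as observed there):

```
$ readlink -f /dev/MyVolGroup/homevol
/dev/dm-1
$ readlink -f /dev/mapper/MyVolGroup-homevol
/dev/dm-1
```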
| LVM entry for logical volume in /etc/fstab - /dev/mapper/group-volume or /dev/group/volume? |
1,477,101,611,000 |
I have this in my /etc/fstab:
tmpfs /home/user1/tmp tmpfs rw,nodev,nosuid,noexec,size=16G 0 0
user1 has tmpfs mounted as /home/user1/tmp.
I would like to mount same tmpfs for user2 as well, so that they can share same tmpfs.
How can I do this in fstab, so that user2 has the same tmpfs mounted at /home/user2/tmp?
|
Add another line with a bind mount:
/home/user1/tmp /home/user2/tmp none bind,x-systemd.requires=/home/user1/tmp
The x-systemd.requires part is just for ordering, but you may not need/want it.
| mount same tmpfs on two mountpoints |
1,477,101,611,000 |
I have recently reinstalled Linux Mint 19.2 from a USB, which went fine. Upon starting the system, though, it gets stuck at initramfs. The error messages above it state
Mount: mounting /dev on /root/dev failed: no such file or directory
Mount: mounting /run on /root/run failed: no such file or directory
run-init: opening console: No such file or directory
Target: filesystem doesn't have requested /sbin/init.
Try passing init= bootarg.
From the initramfs I did an fsck of my root partition sda2 which came up clean. I repeated it with e2fsck with the same result.
I booted into the Live-System on the USB again, mounted sda1 (my EFI boot partition) and sda2 and checked the UUID values in grub and fstab which coincided.
Now I'm stuck in initramfs again and am looking at the contents of /etc on it and find that fstab has a size of 0 bytes.
(initramfs) ls -la /etc/fs*
-rw-r--r-- 1 0 /etc/fstab
Therefore mounting /dev/sda2 is not possible (no entry in fstab). Now, I'm not sure if this is supposed to contain anything at this point, because this is obviously not the fstab lying on /dev/sda2 in /etc, but I'm quite frankly out of ideas of what else might be causing the system not to find the root partition, when grub and fstab as seen from the live system seem to be fine.
BTW, this is the first time I've installed on a system with EFI, with an EFI boot partition (vfat) of 1GB (sda1). I made sure to boot into the live system in EFI mode before installing (in fact I disabled legacy mode in the BIOS so it would only show me the EFI bootable OS's). Is there any other setting I should be aware of that might cause the system not to find its root partition? What would be a value I might pass to the boot loader after init= ?
Feel free to ask for additional information if that helps determine what's wrong here. Thanks!
Added:
On sda1 there is a grub.cfg in /EFI/ubuntu with this content:
search.fs_uuid 734be585-8baf-408e-850a-69555c89c955 root hd0,gpt2
set prefix=($root)'/boot/grub'
configfile $prefix/grub.cfg
On sda2 in the /boot/grub folder there is the referenced grub.cfg with among other this content:
export linux_gfx_mode
menuentry 'Linux Mint 19.2 Cinnamon' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-734be585-8baf-408e-850a-69555c89c955' {
recordfail
load_video
gfxmode $linux_gfx_mode
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_gpt
insmod ext2
set root='hd0,gpt2'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt2 --hint-efi=hd0,gpt2 --hint-baremetal=ahci0,gpt2 734be585-8baf-408e-850a-69555c89c955
else
search --no-floppy --fs-uuid --set=root 734be585-8baf-408e-850a-69555c89c955
fi
linux /boot/vmlinuz-4.15.0-54-generic root=UUID=734be585-8baf-408e-850a-69555c89c955 ro ignore_bootid live-media-path=/multibootusb/linuxmint-19.2-cinnamon-64bit/casper floppy.allowed_drive_mask=0 ignore_uuid root=UUID=759A-1D86 quiet splash $vt_handoff
initrd /boot/initrd.img-4.15.0-54-generic
}
blkid for sda offers this info:
/dev/sda1: UUID="EE9A-4B64" TYPE="vfat" PARTLABEL="boot" PARTUUID="33a95580-f254-4f54-937e-143da0e1e37c"
/dev/sda2: LABEL="/" UUID="734be585-8baf-408e-850a-69555c89c955" TYPE="ext4" PARTLABEL="/" PARTUUID="92a4ec7f-d1f6-441b-abdf-0bc0a9970d0b"
/dev/sda4: LABEL="home" UUID="7b3b371b-6447-4e33-822d-d2535215b863" TYPE="ext4" PARTUUID="70686b24-b09d-4f83-a715-73fb1e4224d1"
sda3 is supposed to be the swap partition, not sure if it's normal not to show up here.
I get fine until grub during the boot process. It's only after choosing either above entry or the accompanying "extended options" entry that I end up with initramfs instead of a login.
|
Mount: mounting /dev on /root/dev failed: no such file or directory
Mount: mounting /run on /root/run failed: no such file or directory
run-init: opening console: No such file or directory
Target: filesystem doesn't have requested /sbin/init.
Looks like whatever is being mounted as the root filesystem does not have the correct mountpoint directories...
And here's the kernel line of your boot entry, with each boot option on a separate line for clarity.
linux /boot/vmlinuz-4.15.0-54-generic \
root=UUID=734be585-8baf-408e-850a-69555c89c955 \
ro \
ignore_bootid \
live-media-path=/multibootusb/linuxmint-19.2-cinnamon-64bit/casper \
floppy.allowed_drive_mask=0 \
ignore_uuid \
root=UUID=759A-1D86 \
quiet \
splash
Now you can probably see it: you have two root= options. The later one will override the former. And based on the shortness of the second "UUID", it looks like you'll end up trying to use some FAT filesystem as your root filesystem. It isn't your /dev/sda1, though.
The live-media-path option also looks odd if you're trying to boot an OS that has been fully installed on a HDD.
The first root=UUID=734be585-8baf-408e-850a-69555c89c955 correctly refers to the UUID of your /dev/sda2, so it's the correct one.
My guess is that root=UUID=759A-1D86 probably refers to the USB you installed the system from. Probably the installation process of the UEFI bootloader made an error: it failed to recognize that root=UUID=759A-1D86 was part of the options for booting from the installation media and should not be copied to the finished installation.
You should remove root=UUID=759A-1D86 and probably also live-media-path=/multibootusb/linuxmint-19.2-cinnamon-64bit/casper from your boot options, i.e. both from /boot/grub/grub.cfg and from /etc/default/grub in /dev/sda2, if they exist. The former should remove the immediate problem; the latter should prevent the problem from reoccurring any time you install a kernel update or run update-grub for any other reason.
The /multibootusb in the live-media-path= option makes me think you might have done the installation with something like MultiBootUSB rather than with a "vanilla" Mint 19.2 installation media. Such automated solutions need to rebuild the bootloader configuration to build their boot menu, and don't always manage to do it perfectly.
ignore_uuid is for casper live media utility which is not used with an HDD-installed OS, and the ignore_bootid seems also to be associated with casper. The floppy.allowed_drive_mask=0 just tells the kernel to skip floppy drive detection, which might in usual cases speed up booting by maybe 3 seconds or so. (On some older laptops with a non-traditional floppy drive setup it might be necessary to prevent a hang at boot.)
You can very likely remove all those three boot options, but just to be safe, when the system is in GRUB boot menu, press E to edit the current boot entry (just for this particular boot) and remove those boot options there. If you then can boot successfully, you know you can remove them from the actual configuration files - if the system hangs at boot without those options, just reset the system and it will boot normally again.
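Once the system boots, you can check which options the kernel actually received and confirm that only one root= remains; the expected shape after the fix would be something like:

```
$ cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-4.15.0-54-generic root=UUID=734be585-8baf-408e-850a-69555c89c955 ro quiet splash
```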
| Boot problems with empty fstab in initramfs |
1,477,101,611,000 |
I have a small number of removable hard drives. At any one time, one of them will be mounted to /backup except while changing drives. I swap the drive periodically. That is I have 4 hard drives and I rotate them.
Currently I manually mount / unmount the drive. But there are times when this machine is turned off and worse, sometimes it gets turned off without my knowledge. The daily backup script will fail if a drive isn't mounted.
The drives don't currently share a UUID or label. I can't guarantee that the disk will always be available as /dev/sdb1. Is there a good way to mount one of a number of drives automatically from /etc/fstab when I just don't know which drive will be inserted?
Note this is a linux (debian) system without a monitor or keyboard. Drives are currently manually mounted / unmounted over ssh after I plug / before I unplug.
|
After coming back to this question a long time later I've realised the solution is actually the same as optionally mounting a drive in /etc/fstab. This is discussed here https://wiki.archlinux.org/index.php/fstab#External_devices
In short, my solution is to simply have two almost identical entries mounting to the same place, e.g.:
UUID=cd49ca72-db24-47ba-b3bc-f0ba8e290599 /backup ext4 nofail,x-systemd.device-timeout=1 0 0
UUID=d28c6d3a-461e-4d7d-8737-40a56e8f384a /backup ext4 nofail,x-systemd.device-timeout=1 0 0
As long as only one of them is plugged in when the system boots, the other will "silently" time out after 1 second. So whichever is plugged in will get mounted, and the other entry will not trip up the boot process.
Note: only use this solution if you are confident that only one of the drives will be plugged in at a time.
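Since the original problem was a daily backup script failing when no drive is mounted, a guard at the top of that script can make the failure explicit instead of writing into the empty mount-point directory. This is a sketch using mountpoint(8) from util-linux; it is demonstrated against /, because /backup and the script itself are specific to your setup:

```shell
#!/bin/sh
# Proceed only when the given directory really is a mount point,
# i.e. one of the rotated drives from the nofail fstab entries
# actually got mounted, not just the empty directory underneath.
require_mounted() {
    if ! mountpoint -q "$1"; then
        echo "$1 is not mounted, skipping backup" >&2
        return 1
    fi
}

# / is always a mount point, so this demonstrates the success path;
# in the real script you would call: require_mounted /backup || exit 1
require_mounted / && echo "ok"
```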
| How to mount one of multiple disks to a specific location in fstab |
1,477,101,611,000 |
I'm currently trying to enable the Trash feature on an NTFS partition mounted automatically on boot. To do that I'm using the permissions option in my fstab:
UUID=1CACB8ABACB88136 /media/FILES ntfs defaults,permissions,relatime 0 0
then I changed the permissions:
sudo chown :users -R /media/FILES/
sudo chmod g+rwx -R /media/FILES/
It works great, except that I still don't have the Trash feature. I can read, write and execute as a member of the users group, but I cannot use the Trash feature in Nautilus, only permanent delete. Any thoughts?
BR
|
Hey guys, I've found the solution: removing my old .Trash folder that was there but wasn't working:
sudo rm -rf /media/FILES/.Trash-1000
worked like a charm; I'm now able to move to Trash from Nautilus. And I'm pretty sure that if I create a new user, they will have their own trash too.
| How can I enable Trash feature in a NTFS partition with permissions? |
1,477,101,611,000 |
I have 3 encrypted partitions, one for /, one for /home, and one for swap.
It seemed silly to me to type in my password 3 times, so I replaced the swap partition with a swap file on the encrypted drive. However, even though I removed the entry from fstab, I am still being prompted for my password for the old swap partition on boot.
When I boot, the OS asks me for the password for sda7_crypt which I would expect. However, after that, it asks me for the password for sda5_crypt. How do I disable sda5_crypt?
/etc/fstab:
/dev/mapper/sda7_crypt / ext4 errors=remount-ro 0 1
UUID=xxxxxxxxxxxxxxxxxxxxxxx /boot ext4 defaults 0 2
/dev/mapper/sdb5_crypt /home ext4 defaults 0 2
/myswapfile swap swap defaults 0 0
/etc/crypttab:
sda7_crypt UUID=xxxxxxxxxxxxxxxxxxxxx none luks,discard
sdb5_crypt UUID=xxxxxxxxxxxxxxxxxxxxx /keyfile luks,discard
Update: more information to answer questions in the comments:
System: Ubuntu 14.04 64-bit Desktop
/boot/grub/grub.cfg:
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#
### BEGIN /etc/grub.d/00_header ###
if [ -s $prefix/grubenv ]; then
set have_grubenv=true
load_env
fi
if [ "${next_entry}" ] ; then
set default="${next_entry}"
set next_entry=
save_env next_entry
set boot_once=true
else
set default="0"
fi
if [ x"${feature_menuentry_id}" = xy ]; then
menuentry_id_option="--id"
else
menuentry_id_option=""
fi
export menuentry_id_option
if [ "${prev_saved_entry}" ]; then
set saved_entry="${prev_saved_entry}"
save_env saved_entry
set prev_saved_entry=
save_env prev_saved_entry
set boot_once=true
fi
function savedefault {
if [ -z "${boot_once}" ]; then
saved_entry="${chosen}"
save_env saved_entry
fi
}
function recordfail {
set recordfail=1
if [ -n "${have_grubenv}" ]; then if [ -z "${boot_once}" ]; then save_env recordfail; fi; fi
}
function load_video {
if [ x$feature_all_video_module = xy ]; then
insmod all_video
else
insmod efi_gop
insmod efi_uga
insmod ieee1275_fb
insmod vbe
insmod vga
insmod video_bochs
insmod video_cirrus
fi
}
if loadfont unicode ; then
set gfxmode=auto
load_video
insmod gfxterm
set locale_dir=$prefix/locale
set lang=en_US
insmod gettext
fi
terminal_output gfxterm
if [ "${recordfail}" = 1 ] ; then
set timeout=-1
else
if [ x$feature_timeout_style = xy ] ; then
set timeout_style=hidden
set timeout=0
# Fallback hidden-timeout code in case the timeout_style feature is
# unavailable.
elif sleep --interruptible 0 ; then
set timeout=0
fi
fi
### END /etc/grub.d/00_header ###
### BEGIN /etc/grub.d/05_debian_theme ###
set menu_color_normal=white/black
set menu_color_highlight=black/light-gray
if background_color 44,0,30; then
clear
fi
### END /etc/grub.d/05_debian_theme ###
### BEGIN /etc/grub.d/10_linux ###
function gfxmode {
set gfxpayload="${1}"
if [ "${1}" = "keep" ]; then
set vt_handoff=vt.handoff=7
else
set vt_handoff=
fi
}
if [ "${recordfail}" != 1 ]; then
if [ -e ${prefix}/gfxblacklist.txt ]; then
if hwmatch ${prefix}/gfxblacklist.txt 3; then
if [ ${match} = 0 ]; then
set linux_gfx_mode=keep
else
set linux_gfx_mode=text
fi
else
set linux_gfx_mode=text
fi
else
set linux_gfx_mode=keep
fi
else
set linux_gfx_mode=text
fi
export linux_gfx_mode
menuentry 'Ubuntu' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-c3444e01-a00a-4e4b-a73e-d213dc913a1e' {
recordfail
load_video
gfxmode $linux_gfx_mode
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos6'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 52381e81-2730-434f-93be-e8223c4aa95c
else
search --no-floppy --fs-uuid --set=root 52381e81-2730-434f-93be-e8223c4aa95c
fi
linux /vmlinuz-3.13.0-35-generic root=UUID=xxxxxxxxxxxxxxxxxxxxxxx ro quiet splash $vt_handoff
initrd /initrd.img-3.13.0-35-generic
}
submenu 'Advanced options for Ubuntu' $menuentry_id_option 'gnulinux-advanced-c3444e01-a00a-4e4b-a73e-d213dc913a1e' {
menuentry 'Ubuntu, with Linux 3.13.0-35-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-35-generic-advanced-c3444e01-a00a-4e4b-a73e-d213dc913a1e' {
recordfail
load_video
gfxmode $linux_gfx_mode
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos6'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 52381e81-2730-434f-93be-e8223c4aa95c
else
search --no-floppy --fs-uuid --set=root 52381e81-2730-434f-93be-e8223c4aa95c
fi
echo 'Loading Linux 3.13.0-35-generic ...'
linux /vmlinuz-3.13.0-35-generic root=UUID=xxxxxxxxxxxxxxxx ro quiet splash $vt_handoff
echo 'Loading initial ramdisk ...'
initrd /initrd.img-3.13.0-35-generic
}
menuentry 'Ubuntu, with Linux 3.13.0-35-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-35-generic-recovery-c3444e01-a00a-4e4b-a73e-d213dc913a1e' {
recordfail
load_video
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos6'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 52381e81-2730-434f-93be-e8223c4aa95c
else
search --no-floppy --fs-uuid --set=root 52381e81-2730-434f-93be-e8223c4aa95c
fi
echo 'Loading Linux 3.13.0-35-generic ...'
linux /vmlinuz-3.13.0-35-generic root=UUID=c3444e01-a00a-4e4b-a73e-d213dc913a1e ro recovery nomodeset
echo 'Loading initial ramdisk ...'
initrd /initrd.img-3.13.0-35-generic
}
menuentry 'Ubuntu, with Linux 3.13.0-34-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-34-generic-advanced-c3444e01-a00a-4e4b-a73e-d213dc913a1e' {
recordfail
load_video
gfxmode $linux_gfx_mode
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos6'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 52381e81-2730-434f-93be-e8223c4aa95c
else
search --no-floppy --fs-uuid --set=root 52381e81-2730-434f-93be-e8223c4aa95c
fi
echo 'Loading Linux 3.13.0-34-generic ...'
linux /vmlinuz-3.13.0-34-generic root=UUID=xxxxxxxxxxxxxxxxx ro quiet splash $vt_handoff
echo 'Loading initial ramdisk ...'
initrd /initrd.img-3.13.0-34-generic
}
menuentry 'Ubuntu, with Linux 3.13.0-34-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-34-generic-recovery-c3444e01-a00a-4e4b-a73e-d213dc913a1e' {
recordfail
load_video
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos6'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 52381e81-2730-434f-93be-e8223c4aa95c
else
search --no-floppy --fs-uuid --set=root 52381e81-2730-434f-93be-e8223c4aa95c
fi
echo 'Loading Linux 3.13.0-34-generic ...'
linux /vmlinuz-3.13.0-34-generic root=UUID=xxxxxxxxxxxxxxxxxxxxxxxxx ro recovery nomodeset
echo 'Loading initial ramdisk ...'
initrd /initrd.img-3.13.0-34-generic
}
menuentry 'Ubuntu, with Linux 3.13.0-33-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-33-generic-advanced-c3444e01-a00a-4e4b-a73e-d213dc913a1e' {
recordfail
load_video
gfxmode $linux_gfx_mode
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos6'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 52381e81-2730-434f-93be-e8223c4aa95c
else
search --no-floppy --fs-uuid --set=root 52381e81-2730-434f-93be-e8223c4aa95c
fi
echo 'Loading Linux 3.13.0-33-generic ...'
linux /vmlinuz-3.13.0-33-generic root=UUID=xxxxxxxxxxxxxxxxx ro quiet splash $vt_handoff
echo 'Loading initial ramdisk ...'
initrd /initrd.img-3.13.0-33-generic
}
menuentry 'Ubuntu, with Linux 3.13.0-33-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-33-generic-recovery-c3444e01-a00a-4e4b-a73e-d213dc913a1e' {
recordfail
load_video
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos6'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 52381e81-2730-434f-93be-e8223c4aa95c
else
search --no-floppy --fs-uuid --set=root 52381e81-2730-434f-93be-e8223c4aa95c
fi
echo 'Loading Linux 3.13.0-33-generic ...'
linux /vmlinuz-3.13.0-33-generic root=UUID=xxxxxxxxxxxxxxxxxxx ro recovery nomodeset
echo 'Loading initial ramdisk ...'
initrd /initrd.img-3.13.0-33-generic
}
menuentry 'Ubuntu, with Linux 3.13.0-32-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-32-generic-advanced-c3444e01-a00a-4e4b-a73e-d213dc913a1e' {
recordfail
load_video
gfxmode $linux_gfx_mode
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos6'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 52381e81-2730-434f-93be-e8223c4aa95c
else
search --no-floppy --fs-uuid --set=root 52381e81-2730-434f-93be-e8223c4aa95c
fi
echo 'Loading Linux 3.13.0-32-generic ...'
linux /vmlinuz-3.13.0-32-generic root=UUID=xxxxxxxxxxxxxxxxxxxx ro quiet splash $vt_handoff
echo 'Loading initial ramdisk ...'
initrd /initrd.img-3.13.0-32-generic
}
menuentry 'Ubuntu, with Linux 3.13.0-32-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-3.13.0-32-generic-recovery-c3444e01-a00a-4e4b-a73e-d213dc913a1e' {
recordfail
load_video
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos6'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 52381e81-2730-434f-93be-e8223c4aa95c
else
search --no-floppy --fs-uuid --set=root 52381e81-2730-434f-93be-e8223c4aa95c
fi
echo 'Loading Linux 3.13.0-32-generic ...'
linux /vmlinuz-3.13.0-32-generic root=UUID=xxxxxxxxxxxxxxxxxxxxx ro recovery nomodeset
echo 'Loading initial ramdisk ...'
initrd /initrd.img-3.13.0-32-generic
}
}
### END /etc/grub.d/10_linux ###
### BEGIN /etc/grub.d/20_linux_xen ###
### END /etc/grub.d/20_linux_xen ###
### BEGIN /etc/grub.d/20_memtest86+ ###
menuentry 'Memory test (memtest86+)' {
insmod part_msdos
insmod ext2
set root='hd0,msdos6'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 52381e81-2730-434f-93be-e8223c4aa95c
else
search --no-floppy --fs-uuid --set=root 52381e81-2730-434f-93be-e8223c4aa95c
fi
knetbsd /memtest86+.elf
}
menuentry 'Memory test (memtest86+, serial console 115200)' {
insmod part_msdos
insmod ext2
set root='hd0,msdos6'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos6 --hint-efi=hd0,msdos6 --hint-baremetal=ahci0,msdos6 52381e81-2730-434f-93be-e8223c4aa95c
else
search --no-floppy --fs-uuid --set=root 52381e81-2730-434f-93be-e8223c4aa95c
fi
linux16 /memtest86+.bin console=ttyS0,115200n8
}
### END /etc/grub.d/20_memtest86+ ###
### BEGIN /etc/grub.d/30_os-prober ###
### END /etc/grub.d/30_os-prober ###
### BEGIN /etc/grub.d/30_uefi-firmware ###
### END /etc/grub.d/30_uefi-firmware ###
### BEGIN /etc/grub.d/40_custom ###
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
### END /etc/grub.d/40_custom ###
### BEGIN /etc/grub.d/41_custom ###
if [ -f ${config_directory}/custom.cfg ]; then
source ${config_directory}/custom.cfg
elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then
source $prefix/custom.cfg;
fi
### END /etc/grub.d/41_custom ###
/etc/default/grub
# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.
# For full documentation of the options in this file, see:
# info -f grub -n 'Simple configuration'
GRUB_DEFAULT=0
GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
GRUB_CMDLINE_LINUX=""
# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console
# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480
# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
#GRUB_DISABLE_LINUX_UUID=true
# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_RECOVERY="true"
# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"
|
Encrypted volumes are listed in /etc/crypttab. You need to update that file, to remove the volume that you no longer want mounted.
After doing this, you need to rebuild the initramfs, by running
sudo update-initramfs -u
If you want to have three encrypted partitions on the same disk, then you should have a single encrypted volume instead of three, and create partitions inside it (with LVM: make the encrypted volume a physical volume, create a volume group containing that physical volume, and create a logical volume for /, one for /home and one for swap). Ubuntu's startup scripts don't handle sharing the passphrase between volumes, though you can tweak them to do that; see bug #1022815.
By the way, you can use a random key (generated at each boot) for the swap volume, if you don't use hibernation.
| Luks Partition Mounting After Removing From fstab |
1,477,101,611,000 |
I just installed the latest Ubuntu 12.04 and obviously it screws something up. I'm not sure if this has anything to do with the fact that I have a RAID 1, but at the moment I have sda and sdb which point to the same device:
# blkid
/dev/sda1: UUID="88aa922a-4304-406e-8abd-edc2e9064d79" TYPE="ext2"
/dev/sda2: UUID="22b881d5-6f5c-484d-94e8-e231896fa91b" TYPE="swap"
/dev/sda3: UUID="e1fa161b-b014-4a6b-831a-9d8f9e04be07" TYPE="ext3"
/dev/sda5: UUID="6ed19886-1cba-47b2-9ce0-7c2ea8f9c3c9" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb1: UUID="88aa922a-4304-406e-8abd-edc2e9064d79" TYPE="ext2"
/dev/sdb2: UUID="22b881d5-6f5c-484d-94e8-e231896fa91b" TYPE="swap"
/dev/sdb3: UUID="e1fa161b-b014-4a6b-831a-9d8f9e04be07" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb5: UUID="6ed19886-1cba-47b2-9ce0-7c2ea8f9c3c9" TYPE="ext3"
But I have only one "visible" hard disk, so this ought to be sda. In my earlier version (10.10), /dev/mapper took care of it; look at the mount points below. In the current version this doesn't work anymore, so at first I entered sda mount points into fstab, which seemed to work, but when I executed the mount command I saw that one partition was suddenly mounted as sdb instead of sda. So I tried to use the UUID as the file system in fstab, but the problem still exists. Which is even worse: it mixes up both devices. That means it sometimes mounts a partition as sda, and at the next reboot it is suddenly sdb. And it behaves as if it were mounting different hard drives, because my /home partition was mounted once as sda, now as sdb, and changes and settings I made in the file system were suddenly "reset". What can I do? Should I delete all sdb block specials?
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
#/dev/mapper/pdc_ccfhbjbeeg3 / ext3 errors=remount-ro 0 1
#/dev/mapper/pdc_ccfhbjbeeg1 /boot ext2 defaults 0 2
#/dev/mapper/pdc_ccfhbjbeeg5 /home ext3 defaults 0 2
#/dev/mapper/pdc_ccfhbjbeeg2 none swap sw 0 0
#/dev/sda1 /boot ext2 defaults 0 2
#/dev/sda2 none swap sw 0 0
#/dev/sda3 / ext3 errors=remount-ro 0 1
#/dev/sda5 /home ext3 defaults 0 2
UUID=e1fa161b-b014-4a6b-831a-9d8f9e04be07 / ext3 errors=remount-ro 0 1
UUID=88aa922a-4304-406e-8abd-edc2e9064d79 /boot ext2 defaults 0 2
UUID=6ed19886-1cba-47b2-9ce0-7c2ea8f9c3c9 /home ext3 defaults 0 2
UUID=22b881d5-6f5c-484d-94e8-e231896fa91b none swap sw 0 0
UPDATE
By the way, the Ubuntu installer shows the RAID array and not the partitions. See also https://bugs.launchpad.net/ubuntu/+bug/973147
|
I found a very easy solution to get my (obviously fake) hardware RAID working again.
After I reinstalled Ubuntu 12.04 I didn't reboot but stayed in try mode. Then I mounted / and edited
/usr/share/initramfs-tools/scripts/local-top/dmraid
I added dmraid -ay after the last comment:
# Activate any dmraid arrays that were not identified by udev and vol_id.
dmraid -ay
if devices=$(dmraid -r -c); then
for dev in $devices; do
dmraid-activate $dev
done
fi
I think that's it, but at first I added
dm-raid45
dm-mirror
dm-region-hash
to
/etc/modules
I'm not sure if this is important at all, because after the first boot (which finally worked without falling back to the maintenance console), /etc/modules didn't contain those 3 modules anymore, so I guess you can omit it.
When I execute mount, I see /dev/mapper mounted again:
/dev/mapper/pdc_ccfhbjbeeg3 on / type ext3 (rw,errors=remount-ro)
/dev/mapper/pdc_ccfhbjbeeg1 on /boot type ext2 (rw)
/dev/mapper/pdc_ccfhbjbeeg5 on /home type ext3 (rw)
| sda and sdb block specials point to same device and get mixed up (hardware RAID doesn't work after new installation of 12.04) |
1,477,101,611,000 |
I have the following in my /etc/fstab on a Red Hat 5 system:
//share/folder /mnt/folder cifs username=<my username>,password=<my password>,ro,soft,nounix
Can I replace this with something that will still mount //share/folder on my Linux box on startup without storing my plain text password in fstab?
|
You can make a separate file with the following lines, and make it readable by root only:
username=<my username>
password=<my password>
Then in /etc/fstab, replace the username and password options with:
credentials=/path/to/your/file
| Mounting Windows share on startup without storing password in plain text |
1,477,101,611,000 |
I am responsible for maintaining a Linux (Ubuntu) machine in my company. We mounted some NFS network drives. At irregular intervals (during holidays), the machine is forcibly restarted because the company turns off all electricity. After reboot, the NFS network drives are gone and have to be mounted manually again in our current configuration.
I know about /etc/fstab, which contains a list of drives that should be mounted on system startup. I would like to edit this file to auto-mount the network drives on system startup. However, I am wondering what happens at system start if the file contents are invalid (e.g., syntax error) or if the network drives are, for some reason, inaccessible during mount (no network connection, server down, ...).
Is it safe to assume that the machine will boot and be usable without the mounted drives the next time?
If the file contains an error, will all drives not be mounted or only the erroneous ones?
If some network drives cannot be mounted, will at least some drives (i.e., the hard drives/RAID) be mounted?
Are there safer/better/more convenient alternatives to /etc/fstab in this use-case?
|
Assuming your system is running systemd, and your network file systems are listed in /etc/fstab with the _netdev option:
The machine will boot, even if one or more of the network file systems are unavailable; if it doesn’t need the network file systems, then it will be usable. The boot will take longer however, since the default timeout for network file systems is 90s; you can add the nofail option to avoid this timeout, and make it explicit that the file system isn’t required.
Non-network file systems with errors will need to be dealt with during boot. For network file systems, see above.
Everything that can be mounted will be mounted, unless you configure the system otherwise.
/etc/fstab is still the recommended configuration mechanism for file systems, even with systemd.
I have systems configured in exactly this manner and they boot as described.
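A sketch of such an entry in /etc/fstab (the server, export path, mount point, and timeout value here are illustrative; the options are described in nfs(5) and systemd.mount(5)):

```
server:/export  /mnt/data  nfs  _netdev,nofail,x-systemd.mount-timeout=30s  0 0
```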
| What happens when mounting network drives in fstab fails |
1,477,101,611,000 |
Is it possible to run xfs_repair by re-editing the fstab file?
/dev/mapper/vg-linux_root / xfs defaults 0 0
UUID=7de1dc5c-b605-4a6f-bdf1-f1e869f6ffb9 /boot xfs defaults 0 0
/dev/mapper/vg-linux_var /var xfs defaults 0 0
/dev/mapper/vg-linux_swap swap swap defaults 0 0
I am not sure, but is it right to change the last number from 0 to 1?
|
No, just editing /etc/fstab cannot cause xfs_repair to be executed.
For other filesystem types, it would work. But XFS is special here.
Changing the 6th field of /etc/fstab to a non-zero value on a XFS filesystem will cause the system to run fsck.xfs, whose man page says:
NAME
fsck.xfs - do nothing, successfully
[...]
However, the system administrator can force fsck.xfs to run xfs_re‐
pair(8) at boot time by creating a /forcefsck file or booting the sys‐
tem with "fsck.mode=force" on the kernel command line.
So, ordinarily fsck.xfs will do nothing at all.
If you really want xfs_repair to run at boot, there are two conditions that both must be satisfied:
a) The 6th field of /etc/fstab must be non-zero for the XFS filesystem in question, so that fsck.xfs will be executed.
b) Either a /forcefsck file must exist on the root filesystem (or perhaps within initramfs, if planning to check the root filesystem), or the kernel command line must have the fsck.mode=force boot option. This will cause fsck.xfs to run xfs_repair instead of doing nothing.
What's so special with xfs_repair, then?
The XFS filesystem and the xfs_repair tool both will assume that the underlying disk is in good condition, or at least is capable of transparently replacing bad blocks with built-in spare blocks (as all modern disks do). If a modern disk has persistent bad blocks visible to the operating system, it usually means that the built-in spare block mechanism has already been overwhelmed by the amount of bad blocks, and the disk is probably going to fail completely soon enough anyway.
The man page of xfs_repair says:
Disk Errors
xfs_repair aborts on most disk I/O errors. Therefore, if you are trying
to repair a filesystem that was damaged due to a disk drive failure,
steps should be taken to ensure that all blocks in the filesystem are
readable and writable before attempting to use xfs_repair to repair the
filesystem. A possible method is using dd(8) to copy the data onto a
good disk.
So, you probably should not set xfs_repair to run automatically in normal circumstances.
If a XFS filesystem has errors, you should always first evaluate the condition of the underlying disk: smartctl -a /dev/<disk device> might be useful, as might be using dd to read the whole contents of the partition/LV to /dev/null and seeing that the command can complete without errors.
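The dd read test mentioned above can be sketched as follows; here a small scratch file stands in for the real partition, which would require root access:

```shell
# Create a 4 MiB scratch image standing in for /dev/<partition> (illustrative)
dd if=/dev/zero of=scratch.img bs=1M count=4 2>/dev/null
# Read it end-to-end, discarding the data; a failing disk would error out here
dd if=scratch.img of=/dev/null bs=1M 2>/dev/null && echo "read OK"
```

If the read completes without errors, the underlying storage is at least fully readable.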
If the disk is failing, you should first copy the contents of the partition/LV to a new, error-free disk (perhaps using dd or ddrescue), and only then you should attempt to run xfs_repair on the filesystem on the error-free disk.
Running xfs_repair automatically at boot time might be an appropriate workaround if you know that something is causing filesystem-level errors even when your disks are in good condition. But that is just a workaround, not a fix: you should find out what is causing the filesystem errors, and fix the root cause. (Maybe a filesystem driver bug, requiring an updated kernel package to fix?)
| repair file system by edit the fstab file |
1,477,101,611,000 |
I have added this entry to my /etc/fstab
/dev/sdb1 /user_data xfs rw 0 0
Which works fine. The issue I am getting is that sometimes I remove this drive, and when I do and then reboot my machine, it goes into emergency mode.
I have tried adding
/etc/systemd/system/local-fs.target.d/nofail.conf
with
OnFailure= in it, but I still get the same result.
Is there something else I can do to stop this from happening?
Thanks
|
Add the nofail option to your /etc/fstab:
/dev/sdb1 /user_data xfs rw,nofail 0 0
| Emergency Mode and Local Disk |
1,477,101,611,000 |
I am using Arch Linux and I have accidentally cleared the fstab file.
Of course I regenerate the fstab with
genfstab -U -p /mnt >> /mnt/etc/fstab
The thing is, I do not know what was at the beginning of the file, and I know that using >> just appends to a file. So I am assuming that there might have been content there before I screwed with the file.
In Arch Linux we use something called pacstrap, and I ran something like this:
pacstrap -i /mnt base
so I imagine that among the base packages there is an fstab file. Is this true? This made me think that maybe I should know how to target specific packages or files.
|
pacstrap is part of arch-install-scripts; you can read the script to understand how it works.
As the help message notes:
pacstrap installs packages to the specified new root directory. If no packages
are given, pacstrap defaults to the "base" group.
pkgfile is a utility that lets you query pacman's database:
pkgfile /etc/fstab
core/filesystem
So, to create a new /etc/fstab, you could simply pacstrap /mnt filesystem and deal with any *.pac{new,save} files. In your case, however, running genfstab (and then manually checking the result) would be sufficient.
| Reloading specific base files |
1,477,101,611,000 |
Is it possible to set up ecryptfs mounts to prompt for a password upon bootup? Say, for example, /home and /var are ecryptfs folders that need to be mounted; how do I force a prompt upon bootup to ask for the mount passwords?
|
The solution is to use LUKS/dm-crypt and then modify the /etc/crypttab file to do what I need.
| Bootup prompt for ecryptfs password |
1,477,101,611,000 |
Issue:
I have a dual-boot PC, Ubuntu / Windows 10, that share access to a NTFS disk partition (mounted as /DATA/ in Ubuntu).
I need to avoid the "Permission denied" error when a chmod command is executed on a file in this shared partition, regardless of the user calling the command. This is because chmod is called as part of bigger procedures and the users cannot just avoid them, and when it returns an error the whole procedure stops.
What I tried:
/DATA/ is now being mounted with the permissions option (the mapping file is active) and under a non-root user that has the ID 1001, and all users are part of the group with ID 1003, to which rwx is allowed, i.e.:
UUID=... /DATA ntfs auto,users,rw,permissions,umask=007,uid=1001,gid=1003 0 0
This solution ALMOST works. Everyone can read and write and, when user 1001 calls chmod, we don't get an error. It does not actually make any change, but that is not a problem. The problem is that for other users the chmod command still triggers errors, as they are not considered the owners of the files.
Is there a way to give ownership of the partition mounted on /DATA/ to all users? Or at least to the user who logs in first?
Or at least make the chmod command never return an error?
|
Does the program that calls chmod hard-code the path to /bin/chmod?
If not, if it just runs whichever chmod program is first in the PATH, try creating a directory that contains only a symlink called 'chmod' to /bin/true.
e.g. (as root):
# mkdir /usr/local/dummy
# ln -s /bin/true /usr/local/dummy/chmod
Then set the PATH to have this directory first (PATH="/usr/local/dummy:$PATH") before running the program. You can create a wrapper script to set the PATH and then run the program.
You might want to make a symlink for chown too.
BTW, this is stating the obvious, but you don't want this PATH setting to be the default. You only want it when running the program that triggers the problem.
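The same trick can be demonstrated without touching /usr/local, using a temporary directory (all paths here are illustrative):

```shell
# Put a fake 'chmod' (a symlink to /bin/true) first in PATH
dir=$(mktemp -d)
ln -s /bin/true "$dir/chmod"
PATH="$dir:$PATH"
touch "$dir/testfile"
# This now does nothing, but still exits successfully
chmod 000 "$dir/testfile" && echo "chmod succeeded (no-op)"
[ -w "$dir/testfile" ] && echo "file is still writable"
```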
| Allow all users to use chmod on a NTFS file system |
1,477,101,611,000 |
I manually created the partitions, copied a rootfs into an appropriate one, chrooted into the rootfs, installed a kernel and GRUB, just as I have done a million times before. I use exactly the same disk layout and boot process (without creating the rootfs from "scratch") as my current host.
Current problem is that the boot process hangs with
A start job is running for ...some-UUID-beginning...73
A start job is running for ...some-UUID-beginning...67
messages, then the boot fails and a rescue shell is presented. The system is Debian Buster, initially created with:
sudo lxc-create -n erik3 -t debian -- -r buster
Disk layout is:
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
sdc
├─sdc1 ext2 1.0 fe2cfe8c-28d3-455c-961e-b586cf763367 224.8M 17% /mnt/zencefil-root-rootfs/boot
└─sdc2 crypto_LUKS 2 32d6e3b6-1e75-4d40-86c2-5a8853996e73
└─zencefil_crypt LVM2_member LVM2 001 ttASfx-WjIt-tuhW-AjRN-6tiJ-dnuI-AU8GgN
├─zencefil-swap swap 1 286b4d65-5ca6-4453-b904-6d56749fed0f
└─zencefil-root btrfs 655c3de0-2963-46d1-bc0f-a6a8690632ad 896.1G 3% /mnt/zencefil-root
When I entered my root password to examine the situation, I noticed that commenting out some necessary lines in /etc/fstab (and rebooting) still doesn't prevent the boot from hanging for 1m30s, but it lets the system boot up correctly.
I double checked the contents of /etc/fstab file and it has to be correct:
/dev/mapper/zencefil-root / btrfs subvol=rootfs,rw,noatime 0 1
##UUID=fe2cfe8c-28d3-455c-961e-b586cf763367 /boot ext2 defaults,noatime 0 2
###/dev/mapper/zencefil-root /mnt/zencefil-root btrfs subvolid=5,rw,noatime 0 1
tmpfs /tmp tmpfs defaults,noatime,nosuid,nodev,mode=1777,size=512M 0 0
(the ##... and ###... lines are necessary but commenting them out lets the system display the login screen after 1m30s)
There is even no typo in fstab. Here is the proof:
grep "^##\w" /etc/fstab | sed -e 's/^##//' \
| awk '{print "mount " $1 " " $2 " -t " $3 " -o " $4}' \
| xargs -L 1 -I {} sh -c "echo {}; {}"
Above script parses the fstab file for the line ##..., constructs a mount ... command and executes it, which in turn succeeds:
mount UUID=fe2cfe8c-28d3-455c-961e-b586cf763367 /boot -t ext2 -o defaults,noatime
root@erik3:~# mount | grep boot
/dev/sda1 on /boot type ext2 (rw,noatime)
However, leaving the same line uncommented out in /etc/fstab doesn't mount the /boot. Why?
The ... / ... line in /etc/fstab gets the / partition mounted rw. Commenting out that line causes the / partition to be mounted ro (as initially performed by the initrd), as expected. So the /etc/fstab file is definitely being honored.
What is the subsystem that displays the "A start job is running..." messages?
|
It turns out that using the rootfs that is created with LXC is inappropriate for creating real installations. We should use multistrap instead.
The steps to fully reproduce the above problem are available at multistrap-helpers@2ada86fd. If you create the rootfs with multistrap, then the install-to-disk instructions work perfectly. If you create the rootfs with lxc-create, that problem happens.
| /etc/fstab contents seem to be wrong but they aren't |
1,477,101,611,000 |
Sometimes I get an ext4 error and my disk becomes read-only.
I can fix it with a reboot and fsck /dev/sda2, but it keeps coming back...
Here are some dmesg :
[ 3160.692730] perf: interrupt took too long (2509 > 2500), lowering kernel.perf_event_max_sample_rate to 79500
[ 3631.408303] perf: interrupt took too long (3144 > 3136), lowering kernel.perf_event_max_sample_rate to 63500
[ 4143.729000] perf: interrupt took too long (3992 > 3930), lowering kernel.perf_event_max_sample_rate to 50000
[ 4770.574303] perf: interrupt took too long (5018 > 4990), lowering kernel.perf_event_max_sample_rate to 39750
[ 5334.077445] perf: interrupt took too long (6289 > 6272), lowering kernel.perf_event_max_sample_rate to 31750
[ 8241.921553] acer_wmi: Unknown function number - 8 - 1
[11370.110956] perf: interrupt took too long (7918 > 7861), lowering kernel.perf_event_max_sample_rate to 25250
[11484.098212] acer_wmi: Unknown function number - 8 - 0
[11875.568601] EXT4-fs error (device sda2): ext4_iget:4862: inode #92441: comm TaskSchedulerFo: bad extra_isize 9489 (inode size 256)
[11875.575273] Aborting journal on device sda2-8.
[11875.575537] EXT4-fs error (device sda2) in ext4_da_write_end:3209: IO failure
[11875.575976] EXT4-fs (sda2): Remounting filesystem read-only
[11875.576792] EXT4-fs error (device sda2): ext4_journal_check_start:61: Detected aborted journal
[11875.577612] EXT4-fs error (device sda2): ext4_iget:4862: inode #92441: comm TaskSchedulerFo: bad extra_isize 9489 (inode size 256)
[11875.583499] EXT4-fs error (device sda2): ext4_iget:4862: inode #92441: comm TaskSchedulerFo: bad extra_isize 9489 (inode size 256)
[11875.832886] EXT4-fs error (device sda2): ext4_iget:4862: inode #92441: comm TaskSchedulerFo: bad extra_isize 9489 (inode size 256)
[11899.686408] systemd-journald[395]: Failed to write entry (21 items, 614 bytes), ignoring: Read-only file system
[11899.686483] systemd-journald[395]: Failed to write entry (21 items, 705 bytes), ignoring: Read-only file system
[11899.686587] systemd-journald[395]: Failed to write entry (21 items, 614 bytes), ignoring: Read-only file system
[11899.686656] systemd-journald[395]: Failed to write entry (21 items, 705 bytes), ignoring: Read-only file system
[11899.686719] systemd-journald[395]: Failed to write entry (21 items, 614 bytes), ignoring: Read-only file system
[11899.686781] systemd-journald[395]: Failed to write entry (21 items, 705 bytes), ignoring: Read-only file system
[11899.686844] systemd-journald[395]: Failed to write entry (21 items, 614 bytes), ignoring: Read-only file system
[11899.686938] systemd-journald[395]: Failed to write entry (21 items, 705 bytes), ignoring: Read-only file system
[11899.686999] systemd-journald[395]: Failed to write entry (21 items, 614 bytes), ignoring: Read-only file system
[11899.687084] systemd-journald[395]: Failed to write entry (21 items, 705 bytes), ignoring: Read-only file system
And my /etc/fstab :
UUID=9c882ba5-b980-4f7d-dd02-cd0a1831ab1a / ext4 errors=remount-ro 0 1
UUID=0E37-D0A2 /boot/efi vfat umask=0077 0 1
/swapfile none swap sw 0 0
Should I remove or change remount-ro in fstab and ignore this error? How can I fix or avoid this error?
|
Can you check your disk for bad sectors or bad blocks? You can use the badblocks or smartctl command to check in Linux. I think a bad disk is the only likely reason for your issue.
| Ext4 Error and disk remounted read-only |
1,477,101,611,000 |
I wonder if there is something like "user specific /etc/fstab" for fusermount? ~/.fstab, ~/.config/fstab, something the like, which would work in cooperation with FUSE.
I used
sshfs foo.bar: foo.bar/
from the home dir to connect to the remote dir (there is a foo.bar directory, and I have .ssh/config set accordingly). But I didn't like repeating foo.bar, and wanted to use a simple command, [cmd] foo.bar/, to mount the remote directory. After some googling I found that a simple "mount foo.bar/" can be made to work with the following line in /etc/fstab (I also needed to enable "user_allow_other" in /etc/fuse.conf):
[email protected]: /home/user/foo.bar fuse.sshfs user,IdentityFile=/home/user/.ssh/id_rsa,port=12345,allow_other 0 0
Now "mount foo.bar" works as intended (and "umount" works as well). But it seems kind of odd to edit a system-wide file for a user-specific purpose; also, the settings already in .ssh/config are repeated there (port), and the identity file has to be specified. Maintaining this for more sites (users) seems inconvenient and evidently not what /etc/fstab is for. Another oddity: FUSE is run by root (AFAICT) when using this solution.
I would much prefer something like "fusermount foo.bar/", with user specific fstab.
Is there such a thing?
|
There's no per-user equivalent of /etc/fstab. You can write a shell script that reads a file of your choice and calls the appropriate mounting command. Note that from the argument foo.bar, you have to deduce multiple pieces of information: the server location foo.bar, the directory on the server (here your home directory), and first and foremost the fact that it's an SSHFS mount.
#!/bin/bash
## Look up the requested target in ~/.fstab and run the matching command
if [ -e ~/.fstab ]; then
  args=("$@")
  ((i=${#args[@]}-1))
  target=${args[$i]}
  while read filesystem mount_point command options comments; do
    if [[ $filesystem = \#* ]]; then continue; fi
    if [[ $mount_point = "$target" || $filesystem = "$target" ]]; then
      if [[ -n $options ]]; then
        args[$((i++))]=-o
        args[$((i++))]=$options
      fi
      args[$((i++))]=$filesystem
      args[$((i++))]=$mount_point
      exec "$command" "${args[@]}"
    fi
  done <~/.fstab
fi
## Fall back to mount, which looks in /etc/fstab
mount "$@"
(Warning: untested code.)
This snippet parses a file ~/.fstab with a syntax reminiscent of /etc/fstab: “device”, mount point, filesystem type, options. Note that here the filesystem type is a command to execute and the “device” is filesystem-dependent. Not all FUSE filesystem commands use this syntax with a “device” followed by a mount point, though it's a common convention.
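An illustrative ~/.fstab for the sshfs case from the question (all values here are examples; the fourth field is passed to the command via -o):

```
# device              mount point          command  options
[email protected]:    /home/user/foo.bar   sshfs    reconnect
```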
SSH options like the identity file, the remote username, etc. can stay in ~/.ssh/config. The only reason to put them in /etc/fstab is to allow these options to be used by all users.
| User specific fstab for fusermount |
1,477,101,611,000 |
I see numerous how-to examples for mounting an NTFS partition with either a mount command or an entry in fstab. In all cases, specifying ntfs as the filesystem is associated with also specifying umask=0222, while specifying ntfs-3g never comes with a umask parameter.
Trying to research umask, I came across numerous explanations like this one. I can't get from those explanations to understanding "0222", which among other things, has one more digit than the specification seems to describe. I understand that it supposedly reduces permissions from the default definition. That's not much help, either. I'm guessing that it relates to writing, since in Linux, ntfs-3g supports it and at least as of a few years ago, ntfs did not.
What are the default permissions (I assume they relate to the directories and files and are independent of the filesystem), and what does "0222" do to that? Why is it needed? Is it just to avoid an error message trying to write to a partition when Linux doesn't support it?
|
I do not know the difference between ntfs and ntfs-3g.
Regarding the umask option, it specifies a bit mask such that the bits set in the umask are cleared in the file access permissions. These permission bits are RWXRWXRWX, where R is read access, W is write access, and X is execute access, with some higher bits used in special cases. The high order RWX is for the owner of the file being accessed, the next RWX group gives access for the group of the file, and the last is for everybody. Because these permissions come three bits at a time, they are traditionally in octal. The leading 0 can indicate either octal, or 0 for some of the special case bits since it is traditionally represented in octal anyway, depending on the context.
So a umask of 222 or 0222, which are the same since the number is traditionally octal, is 010010010 in binary. This means the W bit is set for the user, the group, and everybody else. Setting this bit in umask clears the W bit in the file access permissions.
This is not to avoid error messages. By specifying a umask of 222, it makes files non-writable by anybody, when otherwise they might have been writable.
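The arithmetic can be checked in any POSIX shell: default file permissions of 0666 (rw-rw-rw-) with the 0222 write bits masked out leave 0444 (r--r--r--):

```shell
# 0666 AND NOT 0222 -> 0444 (numbers with a leading 0 are octal)
printf '%o\n' $(( 0666 & ~0222 ))
```

This prints 444.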
| mount command permissions: ntfs vs. ntfs-3g |
1,477,101,611,000 |
I have two users: userA and userB. I also have an NTFS-formatted partition. The whole partition is only accessible to userA thanks to this in /etc/fstab:
/dev/sda3 /home/userA/data ntfs-3g defaults,rw,nouser,uid=userA,umask=077,exec 0 0
I want to allow ONE folder (for example /home/userA/data/movies) to be accessible to userB, but not the whole drive. How can I do this?
If I allow all users in fstab, both users have access to the whole drive, regardless of it being mounted in the /home/userA/ folder. userB can simply do
ls /home/userA/data
even if he can't do
ls /home/userA
If I leave fstab as I have it set now and I use symlink, symlink respects permissions to folder it's linked to and userB won't be allowed to use this symlink.
I also tried to use remount option, but only thing it can change is ro/rw option, it can't change uid, guid or similar for ntfs partitions. I guess policy below (from man mount) applies to ntfs too:
The -o remount may not be able to change mount parameters (all
ext2fs-specific parameters, except sb, are changeable with a remount,
for example, but you can't change gid or umask for the fatfs).
|
I assume the client machine is running Linux.
Linux has the ability to create multiple views of all or part of the same filesystem. You can use this to make only part of a filesystem accessible to a user (subject to further permission checks).
/dev/sda3 /home/userA/data ntfs-3g defaults,rw,nouser,uid=userA,umask=077,exec
/home/userA/data/subdir /home/userB/subdir bind
The command mount --bind /home/userA/data/subdir /home/userB/subdir sets up that second view.
If /home/userA is not accessible to user B, then user B will not be able to access the NTFS partition through that view. However, user B will be able to access the subdir directory through the view at /home/userB/subdir. Permissions still apply: some files under subdir may not be accessible if their permissions exclude userB.
If you want to tweak permissions as well (to allow userB to access all files, or to grant read-only access only, etc.), you can use bindfs. See read only access to all files in a specific sub-folder and Allow a user to read some other users' home directories for example.
| How to allow access to only one NTFS folder of already mounted partition for specific user? |
1,477,101,611,000 |
I'm trying to mount a USB thumb drive on my router.
My USB thumb drive is 32GB, divided into two partitions:
16GB NTFS and 16GB ext4.
The 16GB NTFS partition is automatically detected by the router as sda1 and by default mounted to /mnt/sda1 and /tmp/ftp/Volume_A1.
The 16GB ext4 partition is automatically detected as sda2 but has not been mounted.
So I want to mount sda2 to /test.
This is what I did:
mount /dev/sda2 /test <====== sda2 gets mounted to /test, but it is gone after the router reboots
added the UUID of /dev/sda2 as below to /etc/fstab to mount it on /test
.<========== I check on df never been mounted , please see below
root@router:/# blkid
/dev/sda2: UUID="14a0f0f0-27ac-4101-8d11-3057f10d1385" TYPE="ext4"
/dev/sda1: LABEL="usbdata" UUID="23D9FBBC72AB064E" TYPE="ntfs"
/dev/ubi1_0: UUID="9c7f4c41-289f-4c49-8036-3698b24c7687" TYPE="ubifs"
/dev/ubi0_0: UUID="66fa53a5-cc19-454d-b1a4-6a691051fb9e" TYPE="ubifs"
I added the UUID of /dev/sda2 (listed above) to /etc/fstab to mount it on /test:
root@router:/# nano /etc/fstab
# fstab file - used to mount file systems
proc /proc proc defaults 0 0
tmpfs /var tmpfs size=420k,noexec 0 0
tmpfs /mnt tmpfs size=16k,noexec 0 0
tmpfs /dev tmpfs size=64k,mode=0755,noexec 0 0
sysfs /sys sysfs defaults 0 0
debugfs /sys/kernel/debug debugfs nofail 0 0
mtd:bootfs /bootfs jffs2 ro 0 0
UUID=14a0f0f0-27ac-4101-8d11-3057f10d1385 /test auto nosuid,nodev,nofail 0 0
root@router:/# df
Filesystem 1K-blocks Used Available Use% Mounted on
ubi:rootfs_ubifs 44840 38760 6080 86% /
mtd:bootfs 4480 3440 1040 77% /bootfs
mtd:data 4096 464 3632 11% /data
ubi1:tp_data 4584 844 3472 20% /tp_data
ubi:rootfs_ubifs 44840 38760 6080 86% /tmp/root
/dev/sda1 15452156 84620 15367536 1% /mnt/sda1
/dev/sda1 15452156 84620 15367536 1% /tmp/ftp/Volume_A1
[Spacing has been modified in an attempt to increase readability.]
Please advise and thank you
===========================================================================
Following up on the comments below:
===========================================================================
As suggested by Aaron D. Marasco, I changed auto to ext4:
UUID=14a0f0f0-27ac-4101-8d11-3057f10d1385 /test ext4 nosuid,nodev,nofail 0 0
Still no luck; df shows the same result as before.
And here is the output from ps, as requested by Hauke Laging.
(The router's Busybox doesn’t recognize the -p option.)
root@router:/# ps -o pid,args
PID COMMAND
1 init
2 [kthreadd]
3 [ksoftirqd/0]
4 [kworker/0:0]
5 [kworker/0:0H]
6 [kworker/u4:0]
7 [rcu_preempt]
8 [rcu_sched]
9 [rcu_bh]
10 [migration/0]
11 [migration/1]
12 [ksoftirqd/1]
14 [kworker/1:0H]
15 [khelper]
122 [writeback]
125 [ksmd]
126 [crypto]
127 [bioset]
129 [kblockd]
151 [skbFreeTask]
152 [bcmFapDrv]
173 [kswapd0]
174 [fsnotify_mark]
294 [cfinteractive]
344 [kworker/1:1]
351 [linkwatch]
352 [ipv6_addrconf]
357 [deferwq]
362 [ubi_bgt0d]
926 [jffs2_gcd_mtd2]
947 [ubi_bgt1d]
962 [ubifs_bgt1_0]
1039 [bcmFlwStatsTask]
1113 [kworker/1:2]
1137 {rcS} /bin/sh /etc/init.d/rcS S boot
1139 init
1140 logger -s -p 6 -t sysinit
1286 /sbin/klogd
1540 /sbin/hotplug2 --override --persistent --set-rules-file /etc/hotplug2.rul
1550 /usr/sbin/logd -C 128
1555 /sbin/ubusd
1558 {S12ledctrl} /bin/sh /etc/rc.common /etc/rc.d/S12ledctrl boot
1560 /usr/bin/ledctrl
1627 [bcmsw_rx]
1629 [bcmsw]
1636 [pdc_rx]
1649 /bin/swmdk
1766 /sbin/netifd
4265 [dhd_watchdog_th]
4272 [wfd0-thrd]
4425 [check_task]
4493 [kworker/0:2]
4559 [scsi_eh_0]
4562 [scsi_tmf_0]
4568 [usb-storage]
4917 [kworker/u4:2]
4919 [kworker/1:1H]
5039 /usr/sbin/imbd
5207 /usr/sbin/dnsmasq -C /var/etc/dnsmasq.conf
5219 [ telnetDBGD ]
5220 [ acktelnetDBGD ]
5243 [NU TCP]
5248 [NU UDP]
5356 eapd
5369 nas
5395 wps_monitor
6095 acsd
7008 /usr/sbin/mcud
7592 /usr/sbin/dropbear -P /var/run/dropbear.1.pid -p 22
7598 {S50postcenter} /bin/sh /etc/rc.common /etc/rc.d/S50postcenter boot
7600 /usr/sbin/postcenter
7612 /usr/sbin/sysmond
7620 {S50tmpServer} /bin/sh /etc/rc.common /etc/rc.d/S50tmpServer boot
7622 /usr/bin/tmpServer
7626 /usr/sbin/tsched
7628 /usr/bin/tmpServer
7777 /usr/bin/client_mgmt
8350 /usr/sbin/ntpd -n -p time.nist.gov -p time-nw.nist.gov -p time-a.nist.gov
8398 [ubifs_bgt0_0]
8403 /usr/bin/cloud-https
8639 {S99switch_led} /bin/sh /etc/rc.common /etc/rc.d/S99switch_led boot
8644 /usr/bin/switch_led
8758 /usr/bin/tm_shn -b start
8948 [tntfsiupdated]
9217 /usr/sbin/smbd -D
9219 /usr/sbin/nmbd -D
9264 proftpd: (accepting connections)
9279 udhcpc -p /var/run/udhcpc-eth0.pid -s /lib/netifd/dhcp.script -O 33 -O 12
9330 /usr/sbin/minidlnad -f /tmp/minidlna.conf -P /var/run/minidlnad.pid
9533 /usr/sbin/crond -c /etc/crontabs -l 5
9568 {dnsproxy_deamon} /bin/sh /usr/lib/dnsproxy/dnsproxy_deamon.sh
9974 /usr/sbin/improxy -c /etc/improxy.conf -p /tmp/improxy.pid
10122 /usr/sbin/miniupnpd -f /var/etc/miniupnpd.conf
10332 /usr/bin/cloud-brd -c /etc/cloud_config.cfg
10341 /usr/bin/cloud-client
10778 {lic-setup.sh} /bin/sh ./lic-setup.sh
10783 ./gen_lic
11185 {tcd_monitor.sh} /bin/sh ./tcd_monitor.sh
11186 {dc_monitor.sh} /bin/sh ./dc_monitor.sh
11187 {wred-setup.sh} /bin/sh ./wred-setup.sh
11200 ./tcd
11204 ./dcd -i 1800 -p 43200 -S 4 -b
11217 ./wred -B
11241 {clean-cache.sh} /bin/sh ./clean-cache.sh
11244 /usr/bin/tm_shn -t start
15903 sh /lib/deleteTmSigToken.sh 86400
15906 sleep 86400
19612 /usr/sbin/dropbear -P /var/run/dropbear.1.pid -p 22
19771 -ash
19884 sleep 600
21950 sleep 30
22135 sleep 5
22137 sleep 5
22158 sleep 5
22160 sleep 5
As answered by Hauke Laging, that sounds right: if I run mount -a or mount /test, sda2 gets mounted to /test. But how do I mount it permanently with a udev rule?
On my router I have no idea how to run a udev rule (I can't find any udev.conf), so I tested by putting a mount /test script in /etc/rc.local. I rebooted the router, but /test still wasn't mounted.
Then I added sleep 20 to the script as a delay, rebooted the router again, and now it works: /test is mounted automatically!
Thank you all
|
On systemd systems, devices listed in /etc/fstab which are not present at boot time but appear later are mounted automatically. Other systems don't do that (at least not all of them).
So you need something that triggers a mount /test call when the device has become available. This could be done with a udev rule (RUN=).
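A minimal rule along these lines might work as a sketch; the rules file path, the device name sda2 and the mount point /test are assumptions taken from the question, and whether RUN+= is usable this way depends on the udev/hotplug implementation shipped on the router:

```
# /etc/udev/rules.d/99-mount-test.rules (hypothetical path)
ACTION=="add", SUBSYSTEM=="block", KERNEL=="sda2", RUN+="/bin/mount /test"
```

Note that on full systemd systems, long-running commands in RUN+= are discouraged; on a small BusyBox router the asker's rc.local-plus-sleep workaround is a perfectly reasonable alternative.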
| Mount ext4 with UUID in /etc/fstab |
1,477,101,611,000 |
I have a proc mount entry in my fstab on (Debian derived) Raspberry Pi OS. Is this one needed? On my pc (running Arch linux) I don't have this but (of course) proc gets mounted.
fstab line:
proc /proc proc defaults 0 0
uname -a:
Linux website 4.19.66-v7+ #1253 SMP Thu Aug 15 11:49:46 BST 2019 armv7l GNU/Linux
|
Historically, /proc wasn’t automatically mounted, which is why some systems still list it in /etc/fstab.
Nowadays systemd takes care of mounting a number of “API file systems” including /proc, so any system running systemd will have /proc mounted whether it’s listed in /etc/fstab or not.
API file systems may still appear in /etc/fstab since that’s the documented way of overriding mount settings; see Systemd backed tmpfs | How to specify /tmp size manually for details.
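One way to observe this yourself: /proc appears in the kernel's own mount table regardless of whether /etc/fstab lists it, so you can check it directly (a quick illustration, not specific to Raspberry Pi OS):

```shell
# /proc may have no fstab entry, yet the kernel's mount table
# still shows it mounted with filesystem type "proc":
grep " /proc proc " /proc/mounts
```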
| Why is there a `proc` mount in fstab |
1,477,101,611,000 |
It works fine when mounting manually from the CLI, but when running
sudo mount -a
after editing the fstab I'm getting errors. It seems right to me; does anyone have a suggestion?
my fstab is:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda1 during installation
UUID=31c241fa-8ce7-4ddf-a11b-c1bb7214b9ff / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=f6adc6f2-a8ac-46cc-a167-fd5bdb985ca7 none swap sw 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
//megaboxy/inetpub /mnt/megabo cifs username=admin, password=passwd 0 0
//megaboxz/inetpub /mnt/megab cifs username=admin, password=passwd 0 0
//10.0.0.15/Share /mnt/CodeS cifs username=admin, password=passwd 0 0
//210.0.0.1/inetpub /mnt/webse cifs username=admin, password=passwd 0 0
//10.0.0.120/Kabura-Projects /mnt/Projects cifs username=admin, password=passwd 0 0
|
//megaboxy/inetpub /mnt/megabo cifs username=admin, password=passwd 0 0
^ this is a problem
You can't put spaces between the options. Remove the space and that error should go away.
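For reference, the corrected line would look like this (paths and credentials taken from the question; the same fix applies to the other cifs lines):

```
//megaboxy/inetpub /mnt/megabo cifs username=admin,password=passwd 0 0
```

If an option value genuinely has to contain a space, fstab(5) requires it to be written as the octal escape \040 instead of a literal space.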
| [mntent]: line 15 in /etc/fstab is bad |
1,477,101,611,000 |
I followed instructions for sshfs "on demand" mounting, but it doesn't work.
I added this to /etc/fstab:
username@hostname:/ /mnt/remotes/hostname fuse.sshfs noauto,x-systemd.automount,_netdev,users,idmap=user,IdentityFile=/home/stanley/.ssh/my_rsa_key,allow_other,reconnect 0 0
Then I ran sudo mount -a which did nothing. I also tried systemctl daemon-reload && systemctl restart proc-sys-fs-binfmt_misc.automount.
So I followed the troubleshooting tips, and used this instead:
username@hostname:/ /mnt/remotes/hostname fuse.sshfs ssh_command=ssh\040-vv,sshfs_debug,debug,_netdev,users,idmap=user,IdentityFile=/home/stanley/.ssh/my_rsa_key,allow_other,reconnect 0 0
And then ran sudo mount -av. In a separate terminal I could access that mount point.
So 1) ssh and sftp are working, 2) sshfs is working, 3) permissions are fine.
So only the on-demand part isn't working - what am I doing wrong?
|
The instructions say:
Note: After editing /etc/fstab, (re)start the required service: systemctl daemon-reload && systemctl restart <target>, where <target> can be found by running systemctl list-unit-files --type automount
You have a problem :-(.
Mount options which are implemented by systemd, such as x-systemd.*, are not implemented by the mount command.
But the mount command is what you need to use if you are an unprivileged user (no root/sudo) and you want to mount an fstab entry (one that has been marked to allow this using the user or users mount option).
| sshfs with on-demand mounting |
1,477,101,611,000 |
This is my fstab:
#
# /etc/fstab
# Created by anaconda on Sat Jan 12 02:12:44 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=fb2b6c2e-a8d7-4855-b109-c9717264da8a / ext4 auto,noatime,noload,data=ordered,commit=10,defaults 1 1
UUID=71362665-f627-41e1-a093-de42a0a356e2 /boot ext3 defaults 1 2
UUID=8024a5cd-af4b-4776-af0d-65ad80af8649 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/usr/tmpDSK /tmp ext3 defaults,noauto 0 0
/dev/sdd1 /home4 auto auto,noatime,noload,data=ordered,commit=10,defaults 0 0
/dev/sdc1 /home3 auto auto,noatime,noload,data=ordered,commit=10,defaults 0 0
/dev/sdb1 /home2 auto auto,noatime,noload,data=ordered,commit=10,defaults 0 0
What is UUID=fb2b6c2e-a8d7-4855-b109-c9717264da8a? Is it a partition on sda?
What are tmpfs, devpts, sysfs, and proc?
What is /usr/tmpDSK? Is it /dev/sdb, /dev/sda, or what?
How do I get a temporary directory in memory that resorts to disk when memory is full?
|
The UUID fb... identifies a filesystem on some partition. From the information above, it is not possible to tell whether it is on /dev/sda or anywhere else.
proc, sysfs and devpts are virtual filesystems provided by the kernel.
tmpfs is a RAM-backed filesystem; its pages can be swapped out to disk when memory runs low, which is the "memory first, disk when full" behaviour you asked about.
/usr/tmpDSK appears to be a regular file used as an image to loop-mount /tmp.
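To answer the last question concretely: a size-limited tmpfs entry gives a temporary directory that lives in RAM and spills to swap under memory pressure (the size here is illustrative, and mode=1777 is the sticky, world-writable mode conventionally used for /tmp):

```
tmpfs /tmp tmpfs size=2G,mode=1777 0 0
```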
| How to understand this fstab? |
1,477,101,611,000 |
I have added user_xattr to my ext4 mount options, but when I remount, xattrs still don't work. I have installed attr and attr_dev.
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/Anonymous--vg-root / ext4\040remount,user_xattr errors=remount-ro 0 1
|
User extended attributes are supported by default on Ext4, you don’t need to do anything to enable them. To verify this, run
cd
touch xattr-test
setfattr -n user.test -v "hello" xattr-test
getfattr xattr-test
This should show that the extended attribute was successfully stored.
| how to enable xattr support in Debian 9 (Stretch) |
1,477,101,611,000 |
Centos 7.1 64. This is what I have:
Two raids, but not md0 and md1
[root@localhost]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdb2[1] sda2[0]
974711616 blocks super 1.0 [2/2] [UU]
bitmap: 1/8 pages [4KB], 65536KB chunk
md127 : active raid1 sdb1[1] sda1[0]
2048000 blocks super 1.2 [2/2] [UU]
unused devices: <none>
This is my fstab
[root@localhost]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sun Apr 26 22:00:45 2015
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=ec671046-c512-4992-9a91-ac58ab2d0b31 / ext4 defaults 1 1
UUID=30993a21-eff2-4c8d-9fe5-d7055e6e3ed0 swap swap defaults 0 0
And raid configuration
[root@localhost]# cat /etc/mdadm.conf
# mdadm.conf written out by anaconda
MAILADDR root
AUTO +imsm +1.x -all
ARRAY /dev/md/root level=raid1 num-devices=2 UUID=331de03d:8ba39777:3b664baf:36366f33
ARRAY /dev/md/swap level=raid1 num-devices=2 UUID=f387cddd:e96384df:1a4f0d19:7d7fd10e
As we see the UUID in fstab and mdadm are different.
The questions:
Why does the system work and "/" get mounted, given that the UUIDs are different?
If I change the UUID in the fstab config (to the UUID from the mdadm config), what will be the result?
|
You can see the UUIDs for the various different components (physical disk, RAID, etc.) by running blkid
Here is a sample from one of my systems:
/dev/sda3: UUID="NAzDnw-zu08-iSt9-v76l-njNc-NElx-8RFzVg" TYPE="LVM2_member"
/dev/sdc3: UUID="215b625b-8531-26ed-c610-01f443697250" UUID_SUB="087e72db-ff75-bcbe-5b41-8f79a6bb54f5" LABEL="server:3" TYPE="linux_raid_member"
/dev/md3: UUID="04eaa265-36e2-4f24-93f9-6eb88a55e56b" TYPE="crypto_LUKS"
/dev/mapper/server_crypt_md3: UUID="GnOlBC-BS1f-32BV-PAP7-Tzsy-KaMm-kQDMpj" TYPE="LVM2_member"
/dev/mapper/server_crypt_md3-iso_images: LABEL="iso_images" UUID="99880b2b-25f8-46a0-b7b9-20ec7da53c32" TYPE="ext4"
You can see that the UUID for the filesystem labelled "iso_images" is different to the UUID for the underlying components (LVM, LUKS crypto, RAID). Each UUID allows the appropriate subsystem to identify its known disk partitions and devices and to assemble the necessary parts correctly.
You can quickly see that if you were to change the UUID in /etc/fstab from one referring to a filesystem to one referring to, say, a RAID 1 device, you would be referencing the wrong device and it wouldn't work. (Worse, under some circumstances it might be possible to appear to mount a RAID 1 member as a filesystem, but doing so would corrupt the RAID 1 array and therefore its mirrored filesystem.)
| Raid devices are mounted with different UUID |
1,477,101,611,000 |
Our goal is to create a bash script that deletes unused/unnecessary UUID entries from the /etc/fstab file.
Brief background: in our labs we have more than 500 RHEL servers, and we want to fix the fstab files that have an incorrect configuration, such as unused UUIDs, or unused UUIDs in comment lines, etc.
We created the following bash script as an example.
#!/bin/bash
blkid_list_of_uuid=` blkid | awk -F'UUID=' '{print $2}' | awk '{print $1}' | sed s'/"/ /g' `
grep UUID /etc/fstab >/tmp/fstab
while read line_from_fstab
do
echo "checking if ${line_from_fstab} is unused UUID"
if [[ ! ${line_from_fstab} =~ $blkid_list_of_uuid ]]
then
#sed -i "/$line_from_fstab/d" /etc/fstab
echo "delete unused line ${line_from_fstab} from fstab"
fi
done < /tmp/fstab
We captured the blkid UUIDs in the blkid_list_of_uuid variable, and filtered the UUID lines from fstab into the /tmp/fstab file.
The purpose of the if statement - [[ ! ${line_from_fstab} =~ $blkid_list_of_uuid ]] -
is to delete with sed (commented out for now) the UUID lines in /etc/fstab that are not in use.
But the regex isn't working, and the script actually deletes the UUIDs that are in use.
example of blkid
blkid
/dev/mapper/vg-VOL_root: UUID="49232c87-6c49-411d-b744-c6c847cfd8ec" TYPE="xfs"
/dev/sda2: UUID="Y5MbyB-C5NN-hcPA-wd9R-jmdI-02ML-W9qIiu" TYPE="LVM2_member"
/dev/sda1: UUID="0d5c6164-bb9b-43f4-aa9b-092069801a1b" TYPE="xfs"
/dev/mapper/vg-VOL_swap: UUID="81140364-4b8e-412c-b909-ef0432162a45" TYPE="swap"
/dev/mapper/vg-VOL_var: UUID="e1574eeb-5a78-4a52-b7e3-c53e2b8a4220" TYPE="xfs"
/dev/sdb: UUID="547977e2-a899-4a75-a31c-e362195c264c" TYPE="ext4"
/dev/mapper/vg-VOL_docker: UUID="2e1a2cbf-9920-4e54-8b6b-86d0482c5f7b" TYPE="xfs"
/dev/sdc: UUID="1a289232-0cfe-4df7-9ad5-6a6e2362a1c5" TYPE="ext4"
/dev/sdd: UUID="91493d1f-ffe9-4f5f-aa6d-586d2c99f029" TYPE="ext4"
/dev/sde: UUID="f11845e7-1dcb-4b81-a1d4-9a5fe7da6240" TYPE="ext4"
|
The reason it isn't working is because you are trying to match the wrong things. This is what your blkid variable contains:
$ printf '%s\n' "$blkid_list_of_uuid"
49232c87-6c49-411d-b744-c6c847cfd8ec
Y5MbyB-C5NN-hcPA-wd9R-jmdI-02ML-W9qIiu
0d5c6164-bb9b-43f4-aa9b-092069801a1b
81140364-4b8e-412c-b909-ef0432162a45
e1574eeb-5a78-4a52-b7e3-c53e2b8a4220
547977e2-a899-4a75-a31c-e362195c264c
2e1a2cbf-9920-4e54-8b6b-86d0482c5f7b
1a289232-0cfe-4df7-9ad5-6a6e2362a1c5
91493d1f-ffe9-4f5f-aa6d-586d2c99f029
f11845e7-1dcb-4b81-a1d4-9a5fe7da6240
This means that this:
if [[ ! ${line_from_fstab} =~ $blkid_list_of_uuid ]]
becomes something like this:
if [[ ! "UUID=0a3407de-014b-458b-b5c1-848e92a327a3 / ext4 defaults 0 1" =~ " 49232c87-6c49-411d-b744-c6c847cfd8ec
Y5MbyB-C5NN-hcPA-wd9R-jmdI-02ML-W9qIiu
0d5c6164-bb9b-43f4-aa9b-092069801a1b
81140364-4b8e-412c-b909-ef0432162a45
e1574eeb-5a78-4a52-b7e3-c53e2b8a4220
547977e2-a899-4a75-a31c-e362195c264c
2e1a2cbf-9920-4e54-8b6b-86d0482c5f7b
1a289232-0cfe-4df7-9ad5-6a6e2362a1c5
91493d1f-ffe9-4f5f-aa6d-586d2c99f029
f11845e7-1dcb-4b81-a1d4-9a5fe7da6240
" ]]
Of course, this will never be true: you are searching for the entire fstab line in the list of found UUIDs. What you wanted to do is search for the UUID only.
Don't do this; use one of the methods given in the other answers, since using the shell for this sort of thing is a bad idea. But for the sake of completeness, here's a mostly shell-based approach using the logic I think you meant to use (note that this requires GNU grep):
$ grep -oP '^UUID=\S+' /etc/fstab | sed 's/=/="/; s/$/"/' |
while read -r fstab; do
sudo blkid |
grep -q "$fstab" &&
echo "GOOD: $fstab" ||
echo "BAD: $fstab"; done
BAD: UUID="e16a3de8-a58f-430f-b80f-3d87e9fb0b1d"
BAD: UUID="ef6747e2-f802-4b18-9169-ae65f9933ef1"
BAD: UUID="b00792c8-f7e0-4448-b98d-021eede31e6c"
GOOD: UUID="32133dd7-9a48-4b9d-b2e0-6e383e95631d"
GOOD: UUID="69ae5a79-9a15-489c-951d-1e0c2a16b7fc"
GOOD: UUID="6E5E-90F0"
GOOD: UUID="ff3c9de1-417c-4c4d-8150-a89d222ae60b"
The BAD: are the UUIDs in my /etc/fstab file that are not found in the output of blkid on my system.
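The same idea can also be sketched with comm, shown here with inlined sample data so it runs anywhere; on a real system the two lists would come from grep -oE '^UUID=[^ ]+' /etc/fstab and blkid -s UUID -o value (an illustration of the approach, not the answer's exact method):

```shell
# UUIDs referenced in fstab (sample data stands in for /etc/fstab).
tmp1=$(mktemp); tmp2=$(mktemp)
printf '%s\n' \
  'UUID=aaaa-1111 / ext4 defaults 0 1' \
  'UUID=bbbb-2222 /data ext4 defaults 0 2' |
  grep -oE '^UUID=[^ ]+' | cut -d= -f2 | sort > "$tmp1"

# UUIDs that actually exist (sample data stands in for blkid output).
printf '%s\n' 'aaaa-1111' | sort > "$tmp2"

# comm -23 keeps lines present only in the first list: the stale fstab UUIDs.
missing=$(comm -23 "$tmp1" "$tmp2")
echo "stale: $missing"
rm -f "$tmp1" "$tmp2"
```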
| linux + delete by bash script the unused/incorrect UUID number/s from fstab file |
1,477,101,611,000 |
I have a new disk with one btrfs partition on it and want to mount it via fstab. The problem is that all files are now owned by root, but I want them to be owned by the user with ID 1000 (and group ID 1000).
With the ntfs partition on my old disk the entry looked like this:
UUID=AAAE86DAAE869E87 /media/disk ntfs auto,uid=1000,gid=1000,errors=remount-ro 0
My current btrfs entry looks like this:
UUID=eaadb7d0-4dba-46a7-85ac-0fbf81821840 /media/disk btrfs defaults 0 1
I can't set the uid and gid options on btrfs: when I do, I get an error while booting (the options uid and gid do not exist).
Is there a way to set the ownership of all files to a specific user?
Regards,
Hauke
|
It sounds like you're thinking that uid and gid options in fstab are a generic way to override ownership on a filesystem. That's not really true. The NTFS driver, specifically, supports those options because NTFS doesn't store (Linux-compatible) ownership information on disk, so the driver has to fake them.
Btrfs, on the other hand, natively supports Linux file ownership, so there's no need for the driver to fake it; the btrfs driver has no uid or gid options. If you want to change who owns something on a btrfs filesystem, just use the chown command.
If you want the "entire filesystem" to be owned by a specific user, mount it and then chown the mountpoint directory. That sets the owner of the filesystem's root directory, so the user can create files there. (And the files created by that user will, of course, be owned by that user.)
| Ownership of btrfs partition via fstab |
1,477,101,611,000 |
I have this fstab entry:
machine.local:/srv/files /res/files nfs defaults 0 0
It was working great until machine.local dropped the connection momentarily. Now, the share isn't accessible. df, umount /res/files, ls /res all hang forever.
What should I do, short of a reboot?
|
NFS really ought to reconnect once the NFS server is back up. It may take a few minutes (it needs to notice the timeout). The timeo option lets you change how long the timeout takes.
umount -f /res/files will probably unmount the share (and kill all the processes waiting on it), if you try it a few times.
On older kernels, if you have the share mounted with intr, you can kill the waiting processes. On newer kernels (2.6.25+), you can kill -9 them.
NFS client options are documented in the nfs(5) manpage.
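For example, a client-side entry that gives up on a dead server faster might look like this (the values are illustrative, timeo is in tenths of a second, and soft risks silent data loss on interrupted writes, so treat this as a sketch rather than a recommendation):

```
machine.local:/srv/files /res/files nfs defaults,timeo=50,retrans=3,soft 0 0
```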
Note: Some versions of umount have a bug where they try to stat the filesystem before unmounting it. If so, you'll need a trivial C program like this:
#include <sys/mount.h>
int main() {
const char p[] = "/res/files";
umount2(p, MNT_FORCE);
umount2(p, MNT_FORCE);
return 0;
}
| NFS server dropped connection momentarily, now df, ls, and umount all hang |
1,477,101,611,000 |
I have manually installed linux on a USB drive.
It works and boots up when I plug the drive into the original computer.
The problem comes when the drive is on a different computer or there are other drives plugged in and my USB is NOT /dev/sdb.
I then get an error that the root filesystem cannot be mounted, because /etc/fstab says root is /dev/sdb1.
How can I make /etc/fstab either change on bootup or automatically use the partition that the kernel booted from (my root partition)?
|
You can also identify the partitions with their UUIDs
The Universally Unique IDentifier is, as the name implies, unique and doesn't change on its own. It even stays the same when using the media on a different computer.
You can use UUIDs instead of /dev/sdx by editing /etc/fstab
Note that you need to run the following commands as root.
Identify your partition with lsblk, e.g. /dev/sda1
Get the partition's UUID via blkid
Edit /etc/fstab and replace /dev/sda1 with the UUID as follows:
Before:
/dev/sda1 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro 0 2
After:
UUID=5cd7485d-d22e-4860-bdb5-753d5456714a /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro 0 2
| How to make a "dynamic" etc/fstab |
1,477,101,611,000 |
I'm following along with the How-To Geek article "How to harmonize your dual boot"; however, I have run into an issue. I have added my storage drive to /etc/fstab as follows:
# storage mount
UUID=748A56588A5616C8 /media/storage/ ntfs-3g auto,user,rw 0 0
based on my result from blkid
sudo blkid | grep Storage
/dev/nvme0n1p7: LABEL="Storage" UUID="748A56588A5616C8" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="17c911ed-1b6b-4d69-a08a-f5aec2e74439"
The issue is that when I reboot, my drive is there but mounted in read-only mode.
|
Windows 10 by default does a fast startup that does not fully release the drives in use. Linux cannot then write to them, because that would lead to corruption.
A temporary solution is to press Shift as you restart/reboot Windows 10, but that has to be repeated every time.
A more permanent solution can be found here. I do not quote the details of those steps because it feels off-topic.
| Dual boot (Win10 & Mint) "storage drive": rw mount /etc/fstab loads as read-only |