date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,502,919,320,000 |
When I ssh into my computer, files I create on the main hard drive are owned by me:
$ touch test
$ ls -l test
-rw-r--r-- 1 smithty domain users 0 Aug 16 17:26 test
But when I move into a folder that's on a second hard drive, everything I create is owned by root as default:
$ cd data
$ touch test
$ ls -l test
-rwxrwxrwx 1 root root 0 Aug 16 17:28 test
I assume this is because I'm doing something wrong when I mount this drive, but I'm not sure what. I use the following config in /etc/fstab:
UUID=A88667B486678224 /media/data ntfs rw,nosuid,dev,exec,auto,nouser,async 0 2
I originally used the defaults option, but thought that shifting to nosuid would fix this. It hasn't, though. Have I done something wrong in my fstab, or is there something else amiss?
This is on Ubuntu 14.04.1. My login shell is dash, but the problem is the same if I switch to bash.
|
NTFS doesn't know what a Linux user ID is; it doesn't store such metadata, so everything ends up owned by root.
ext4/XFS (which is likely what your main hard drive uses) does store it.
You might want to mount using the uid=xxx option; see the ntfs-3g man page.
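For example, a hedged sketch of the asker's fstab line with ownership options added (the uid/gid values are assumptions; use your own from id -u and id -g, and note NTFS is normally given fsck pass 0 since Linux cannot fsck it at boot):

```
UUID=A88667B486678224  /media/data  ntfs  rw,nosuid,uid=1000,gid=1000,umask=022  0  0
```

With uid/gid set, new files under /media/data appear owned by that user instead of root.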
| files created on internal hard drive are always owned by root |
1,502,919,320,000 |
I'm trying to speed up a project which uses a folder for cache by mounting the cache folder on tmpfs. But whenever I mount it I get this error message:
mount: special device tmpfs does not exist
this is the entry on /etc/fstab:
tmpfs /home/rkmax/Projects/webapp/app/cache rw,size=500M,nosuid,uid=1000,gid=100 0 0
My distro is ArchLinux.
|
You didn't specify the filesystem type - this is required.
This is what you need:
tmpfs /home/rkmax/Projects/webapp/app/cache tmpfs rw,size=500M,nosuid,uid=1000,gid=100 0 0
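Errors like this often come down to a missing field. A quick sanity check (a sketch pointed at a scratch copy, not the real /etc/fstab) is to verify that every non-comment line has exactly six fields:

```shell
# Write a scratch fstab containing the asker's broken line (missing the
# filesystem-type field) plus the corrected one, then flag any non-comment
# line that does not have exactly six fields.
cat > fstab.check <<'EOF'
# comment lines are ignored
tmpfs /home/rkmax/Projects/webapp/app/cache rw,size=500M,nosuid,uid=1000,gid=100 0 0
tmpfs /home/rkmax/Projects/webapp/app/cache tmpfs rw,size=500M,nosuid,uid=1000,gid=100 0 0
EOF
awk '!/^#/ && NF && NF != 6 { printf "line %d has %d fields\n", NR, NF }' fstab.check
# -> line 2 has 5 fields
```

Only the broken line is reported; the corrected six-field line passes.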
| special device tmpfs does not exist |
1,502,919,320,000 |
On Cinnamon 5.0.7 (Linux Mint 20.2), how do I prevent mounted devices, specifically devices defined in fstab, from showing up on the desktop and in the Nemo sidebar?
Until recently, this was the default behavior. But after today's update/reboot, pretty much every mounted device is populating the desktop and sidebar.
The majority of these are critical background filesystems (e.g. the filesystem root, /tmp, /home, ramdisks, etc.) that should never be unmounted. They already have working mount points and need not be shown otherwise, especially with an "eject" button.
Sample fstab lines of mounted devices that show on the desktop and in Nemo:
UUID=### / btrfs defaults,subvol=@ 0 1
tmpfs /mnt/ramdisk tmpfs defaults,noatime,nofail,size=500M 0 2
|
To disable on the Desktop:
Right click the Desktop, click Customize, then click the hyperlink-looking Desktop Settings Button, and slide the Mounted Drives slider off.
To hide specific partitions from appearing in the user interface, you can open the Disks utility (gnome-disks). Then select the devices you want to hide and click the gear icon for that partition:
Next, select Edit Mount Options...
Next, uncheck User Session Defaults, and uncheck Show in user interface:
Once the system was rebooted, the device no longer showed up in the GUI on my VM.
| Blocking mounted devices from showing on Cinnamon desktop |
1,502,919,320,000 |
I am trying to connect to sshfs with fstab on Ubuntu, but the files are not loading.
In the fstab I put the following :
[email protected]:/home/ssh2/SSHserver /home/asir3/Escritorio/SSHclient fuse.sshfs noauto,x systemd.automount,_netdev,user,idmap=user,follow_symlinks,identityfile=/home/ssh2/.ssh/id_rsa,allowother,default_permissions,uid=1001,gid=1001 0 0
Then I save and run: mount -a
It doesn't give me any error, it lets me access the folder, but it doesn't synchronise with the server.
The server has the following content :
ssh2@Asir03:~$ tree SSHserver/
SSHserver/
├── ssh1
│ ├── 15.txt
│ ├── 1.txt
│ └── a
│ ├── 150.txt
│ └── 15.txt
├── ssh1.txt
├── ssh2
│ ├── pepa.txr
│ ├── pepa.txt
│ └── pepe.txt
├── ssh2.txt
└── ssh3
├── gema.txt
├── javi.txt
├── juan.txt
└── marina.txt
4 directories, 13 files
ssh2@Asir03:~$
And this is what I get in the client:
asir3@Asir03:~/Escritorio$ tree SSHclient/
SSHclient/
└── hola
1 directory, 0 files
asir3@Asir03:~/Escritorio$
It lets me add folders, files and so on, but it doesn't save on the server.
|
noauto means "no automatic mounting": the filesystem is mounted neither at boot nor by mount -a. You can, however, mount it manually as a normal user.
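Two details in the question's option string also look like transcription slips: the option is spelled x-systemd.automount (no space) and allow_other (with an underscore). A hedged sketch of a corrected line, with user@server standing in for the real remote:

```
user@server:/home/ssh2/SSHserver  /home/asir3/Escritorio/SSHclient  fuse.sshfs  noauto,x-systemd.automount,_netdev,user,idmap=user,follow_symlinks,identityfile=/home/ssh2/.ssh/id_rsa,allow_other,default_permissions,uid=1001,gid=1001  0  0
```

With noauto plus x-systemd.automount, the share is mounted on first access; with plain noauto, mount it manually with mount /home/asir3/Escritorio/SSHclient.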
| Mount with sshfs in /etc/fstab file |
1,502,919,320,000 |
One of my attached disks had xfs filesystem. I formatted the disk to ext4 using:
sudo mkfs.ext4 /dev/sdc1
Now when I run sudo -i blkid, I get this output:
/dev/sdc1: UUID="df722345-7e80-4a08-8da1-e6046cc2b0e1" TYPE="ext4" PARTLABEL="xfspart" PARTUUID="1df243b5-2b64-4c39-bd45-4cb31d7ff58e"
I can see that the PARTLABEL is xfspart. Before I make any changes to fstab, just want to make sure that PARTLABEL won't cause any problem, if I add this line to fstab
UUID=df722345-7e80-4a08-8da1-e6046cc2b0e1 /disk3 ext4 defaults,nofail 1 2
|
The PARTLABEL is a property of the partition table (GPT), unrelated to the partition's content (any filesystem, or lvm, luks, raid, etc.). Thus it is not overwritten when you run mkfs on the partition.
If you are not using this value for anything, you can ignore it since it means nothing. Or, to avoid confusion, you can change it with any partition software of your choice.
Example with parted:
# parted /dev/loop0 print
Number Start End Size File system Name Flags
1 1049kB 94.4MB 93.3MB xfspart
# blkid /dev/loop0p1
/dev/loop0p1: PARTLABEL="xfspart" PARTUUID="a789cf0a-3a18-4b87-af2a-abfed6ca9028"
Change the PARTLABEL (partition name in parted) of partition 1 to something else:
# parted /dev/loop0 name 1 schnorrgiggl
Afterwards:
# blkid /dev/loop0p1
/dev/loop0p1: PARTLABEL="schnorrgiggl" PARTUUID="a789cf0a-3a18-4b87-af2a-abfed6ca9028"
# parted /dev/loop0 print
Number Start End Size File system Name Flags
1 1049kB 94.4MB 93.3MB schnorrgiggl
These names also appear under /dev/disk/by-partlabel which can be a convenient way to refer to partition block devices. Consider meaningful names like grub, boot, root, home, ... instead of xfspart or extpart which could be anything at all. However, if you use duplicate labels on separate disks, it's unclear which one the partlabel will point to.
PARTUUIDs exists to avoid such naming scheme conflicts, and filesystem UUID is the safest way to refer to a filesystem by content (regardless of where it is stored), so for /etc/fstab it's still best to use UUID= instead of any LABEL=, PARTLABEL=, PARTUUID= etc. alternatives.
| Does "PARTLABEL" affect fstab behavior in Ubuntu 16.04? |
1,502,919,320,000 |
I recently installed Debian 10 on my laptop and for the first time I decided to give file encryption a go.
But I've found something interesting in the /etc/fstab file, and it's that it doesn't use UUID and instead it uses absolute paths.
This is my /etc/fstab:
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/sda1_crypt / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda3 during installation
UUID=fb4bd462-2ad8-4e56-b84e-602a94bf8b31 /boot ext4 defaults 0 2
/dev/mapper/sda5_crypt none swap sw 0 0
And this is the output from lsblk -o PATH,UUID,NAME,MOUNTPOINT
NAME PATH UUID MOUNTPOINT
sda /dev/sda
├─sda1 /dev/sda1 f0ece3a3-69c2-4ad8-b819-311a18c37b21
│ └─sda1_crypt
│ /dev/mapper/sda1_crypt b73f7cef-ba4e-4587-9dba-da8385d93824 /
├─sda2 /dev/sda2
├─sda3 /dev/sda3 fb4bd462-2ad8-4e56-b84e-602a94bf8b31 /boot
└─sda5 /dev/sda5 ca96319f-82b3-4cbf-a1e1-7d30f7be4576
└─sda5_crypt
/dev/mapper/sda5_crypt 40b9e71c-46b5-4d29-91a4-aaa12ca0e109 [SWAP]
I have an encrypted root partition on sda1 and an encrypted swap partition on sda5.
I had to create an unencrypted /boot partition too (sda3).
sda2 is free space I'll use for other purposes.
As you can see in the /etc/fstab, Debian identifies my /boot partition with its UUID, as it did on other occasions that I installed an unencrypted system, but it uses absolute paths for the encrypted partitions.
Can anyone help me identify why this happens and if it would be a good idea or even a good practice to change the /etc/fstab file so it uses UUID instead of paths?
Thanks.
|
These absolute device paths are perfectly fine, since their names are stable and prescribed by the first field in each line of /etc/crypttab. Actually, they are symlinks to the numbered (and thus unstable) device-mapper device node names. As long as /etc/crypttab refers to the source devices (in its second field) by stable names or UUIDs, you are safe from unpredictable device ordering.
| Mount points in /etc/fstab on Debian system for an encrypted partition |
1,502,919,320,000 |
I use the following awk in order to remove duplicate lines from the /etc/fstab file on Linux.
The problem is that it also removes the lines that start with #.
How can I change the awk syntax in order to ignore lines starting with # in the file?
awk '!a[$0]++' /etc/fstab > /etc/fstab.new
cp /etc/fstab.new /etc/fstab
|
Tell AWK to accept lines starting with # as well as non-duplicate lines:
awk '/^#/ || !a[$0]++' /etc/fstab > /etc/fstab.new
If you want to avoid doing this if there are no duplicate lines (per your comments), you can use something like
if awk '!/^#/ && a[$0]++ { dup = 1 }; END { exit !dup }' /etc/fstab; then
awk '/^#/ || !a[$0]++' /etc/fstab > /etc/fstab.new
cp /etc/fstab.new /etc/fstab
fi
but that effectively ends up doing the work twice.
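A quick way to sanity-check the pattern on scratch data (the file name here is made up for the demo) before pointing it at the real /etc/fstab:

```shell
# Build a scratch file with duplicate comments and duplicate entries.
cat > fstab.demo <<'EOF'
# a comment
tmpfs /tmp tmpfs defaults 0 0
# a comment
tmpfs /tmp tmpfs defaults 0 0
proc /proc proc defaults 0 0
EOF
# Comment lines pass through untouched; duplicate non-comment lines are dropped.
awk '/^#/ || !a[$0]++' fstab.demo
```

The second `# a comment` survives, while the repeated tmpfs entry is removed.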
| awk + remove duplicate lines but ignore lines that begin with # |
1,502,919,320,000 |
Can you help me find the UUID of my SSD, which is already partitioned? The goal is to mount this SSD under my home directory. To do that, I need to add a line to my /etc/fstab file; to do that, I need to put its UUID in the line; and to do that, I need to determine its UUID. Nothing is output by the blkid command, which baffles me. The advice in the fstab file says:
"Use 'blkid' to print the universally unique identifier for a device"
and I certainly have this device, and it is certainly working and certainly partitioned, and I used sudo, but blkid steadfastly won't print anything about this device's UUID. But maybe the UUID is already shown in /etc/fstab as UUID=69A1-BD52, so I don't really need blkid to work and can skip that step. I'm not sure. The OS is Ubuntu 14.04 LTS.
Here is my current fstab:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/ubuntu--vg-root / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda2 during installation
UUID=9b4fb887-5dd8-413c-b0b0-dd3c803cf4ab /boot ext2 defaults 0 2
# /boot/efi was on /dev/sda1 during installation
UUID=69A1-BD52 /boot/efi vfat umask=0077 0 1
/dev/mapper/ubuntu--vg-swap_1 none swap sw 0 0
/dev/nvme0n1 /mnt/fastssd auto nosuid,nodev,nofail,x-gvfs-show 0 0
# Following was added by ga for permanent fast swap file on ssd with high priority as created at cmd line earlier
/mnt/fastssd/100GiB.swap none swap sw 0 0
Here is output of df command:
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 65956452 0 65956452 0% /dev
tmpfs 13196096 9816 13186280 1% /run
/dev/mapper/ubuntu--vg-root 1789679056 27183296 1671562308 2% /
tmpfs 65980460 0 65980460 0% /dev/shm
tmpfs 5120 4 5116 1% /run/lock
tmpfs 65980460 0 65980460 0% /sys/fs/cgroup
/dev/nvme0n1 492128608 104929192 362177652 23% /mnt/fastssd
/dev/sda2 483946 250653 208308 55% /boot
/dev/sda1 523248 3668 519580 1% /boot/efi
tmpfs 13196096 0 13196096 0% /run/user/1000
Here is me running blkid to try to see the UUID:
$ blkid /dev/nvme0n1
$ blkid /mnt/fastssd
$ sudo blkid /mnt/fastssd
$ blkid /dev/sda1
Nothing was output; I did not omit any output. My sudo login was successful.
This device is already partitioned and formatted and working nicely. Shouldn't every partitioned device already have a UUID?
Is UUID=69A1-BD52 its UUID? Can I confirm this?
Most important question: Can I safely repeat the UUID=69A1-BD52 in the new line I wish to add to my fstab file, so it will mount this SSD under my home directory?
Example of line I would add to my fstab if it's safe and correct to do so:
UUID=69A1-BD52 /home/user/fastssd auto rw,noauto,user,sync 0 2
The above line is just a big guesstimate, and I don't know if 2 belongs there etc.
This device would be mounted twice. Is it safe, or do I need to remove a line? I don't have a reason to keep the old mount location if the new one is OK. Would I need to change my SSD swap-file entry if I move it under my home directory? It's OK by me for this device to be inaccessible to other users, as it's my computer.
|
Edit: As it turns out, the OP actually had a filesystem made on the unpartitioned SSD.
You probably don't want to mount the SSD. Probably you want to mount a partition on the SSD. To list the UUIDs of partitions:
sudo lsblk -o name,mountpoint,size,type,ro,label,uuid
Example result:
$ sudo lsblk -o name,mountpoint,size,type,ro,label,uuid
NAME MOUNTPOINT SIZE TYPE RO LABEL UUID
sda 40G disk 0
├─sda1 /boot 286M part 0 SERVERAX-BOOT 2db37cbc-6c0cb-4833-4511-3476aabf55d
└─sda2 39.7G part 0 2148212e-3652d-4c16-8115-2230b7c98a7
└─Serverax 39.7G crypt 0 BU961-FLmD-mXHQta-VUkW-xPAQ-2H4D-vubDr
├─Serverax-Swap [SWAP] 1.7G lvm 0 SERVERAX-SWAP bef1e619-85a9a-44eb-43fd-c404b4fdc8a
├─Serverax-System / 20G lvm 0 SERVERAX-SYSTEM c0a7b4d2-a6515-436d-e10f-bca5a2340ef
├─Serverax-Home /home 10G lvm 0 SERVERAX-HOME 8f410236-4e4c8-45f4-ab15-a8398dfa6fa
└─Serverax-Srv /srv 6G lvm 0 SERVERAX-SRV 0ceb5cd2-937e8-4c75-d4c4-67d5a10168f
sr0 1024M rom 0
If you can, it is better to set your terminal to 132 columns before running the command.
| Find UUID of SSD which is already partitioned |
1,502,919,320,000 |
I have a large, frequently read, ext3 file system mounted read-only on a system that is generally always hard power cycled about 2-3 times per day.
Because the device is usually powered off by cutting the power, fsck runs on boot on that file system, but for this application fast boot times are important (to the second).
I can disable boot time checks on the file system in fstab, but my question is, is it safe to do this? Given that the file system is mounted read-only but is never unmounted properly, is there any risk of accumulating file system corruption over a long period of time if I disable the boot time check?
|
From the mount manpage,
-r, --read-only
Mount the filesystem read-only. A synonym is -o ro.
Note that, depending on the filesystem type, state and kernel
behavior, the system may still write to the device. For example,
Ext3 or ext4 will replay its journal if the filesystem is dirty.
To prevent this kind of write access, you may want to mount ext3
or ext4 filesystem with "ro,noload" mount options or set the
block device to read-only mode, see command blockdev(8).
If ro,noload should prove to be insufficient, I know of no way to set up a read-only device with just an fstab entry; you may need to call blockdev --setro or create a read-only loop device (losetup --read-only) by some other means before your filesystem is mounted.
If you make it truly read-only, the filesystem won't even know it was mounted. Thus no mount-count updates, no forced fsck, and especially no corruption possible, as long as nothing ever writes to the device...
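As an fstab sketch of that suggestion (the UUID is a placeholder; the pass field of 0 also disables the boot-time check the asker wants to avoid):

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  ext3  ro,noload  0  0
```

With noload the journal is not replayed, so even a dirty filesystem causes no writes to the device.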
| Safe to disable boot fsck on read-only ext3 file system? |
1,502,919,320,000 |
I have an NTFS partition that I want to mount using /etc/fstab. I don't want any files to have executable permissions on this drive, so I wrote the following rule:
/dev/sda2 /media/sharedfolder ntfs auto,user,noatime,noexec,rw,async 0 0
However, I don't believe this will prevent files from being created with executable permissions. It will simply prevent them from being executed. Perhaps this is fine, but is it possible to remove all executable permissions from newly created files on this partition using an /etc/fstab rule?
Would using umask and fmask be enough, like this rule?
/dev/sda2 /media/sharedfolder ntfs auto,user,noatime,noexec,rw,async,umask=0111, 0 0
I'm unsure because Wikipedia lists umask as an option specific to the FAT filesystem.
|
Wikipedia isn't as good a reference as the man page. Both the traditional ntfs driver and the now-preferred ntfs-3g support the umask option.
You shouldn't use umask to strip the executable bits, though, since it applies to directories as well, and you can't access files inside a non-executable directory. Instead, set fmask=0111 (which applies only to non-directories) and leave dmask (directories) at its default of 0000, i.e. all bits allowed, or state it explicitly.
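A hedged sketch of the resulting line, keeping the asker's device and options:

```
/dev/sda2  /media/sharedfolder  ntfs  auto,user,noatime,noexec,rw,async,fmask=0111,dmask=0000  0  0
```

Files then appear as rw-rw-rw- while directories stay traversable.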
| How do I mount an NTFS partition in /etc/fstab and prevent files/directories from receiving exec permissions when they're created? |
1,502,919,320,000 |
How can I use pmount so that it ignores fstab rules? For example:
# fstab:
/dev/sr0 /media/cdrom ... (etc.)
# in terminal
pmount /dev/sr0 /media/xxx
# it will ignore the /media/xxx mount point and mount at /media/cdrom instead
Is there an easy way to mount at a mount point other than the one in fstab?
In this case I must use pmount because I mount as a regular user and am not allowed to add new entries to fstab.
|
pmount is generally used for mounting external devices that are not in fstab. What you experience is a feature of pmount, part of its policy (see man pmount, search for fstab). If you want to permit normal users to mount the cdrom, you can either comment it out in /etc/fstab and use pmount, or set up the cdrom entry in fstab so that users are allowed to mount it. For the latter, you'd need the user mount option (see man fstab for more details).
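A hedged sketch of such a user-mountable entry (the filesystem types and options are illustrative):

```
/dev/sr0  /media/cdrom  udf,iso9660  user,noauto  0  0
```

With the user option, the user who mounted the device can also unmount it, and nosuid/nodev/noexec are implied unless overridden.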
| pmount - omit rules in fstab |
1,502,919,320,000 |
I'm running Fedora 14 with the 2.6.35.13-92.fc14.i686 kernel and Gnome 2.32.0. I have a few NTFS drives that are mounted when I start up. However, there is no entry for them in fstab and nothing in mtab. (EDIT: The NTFS drives aren't in /proc/mounts either) Furthermore there is no mention of any NTFS filesystems in /etc/filesystems and /proc/filesystems.
FYI, all of the NTFS commands on my system are as follows:
# compgen -c | grep ntfs
ntfs-3g
ntfsmount
ntfsmftalloc
ntfs-3g.probe
ntfsdump_logfile
ntfsfix
ntfsdecrypt
ntfs-3g
ntfs-3g.secaudit
ntfs-3g.usermap
ntfsls
ntfscat
ntfstruncate
ntfswipe
ntfsmount
lowntfs-3g
ntfscmp
ntfsinfo
ntfsck
ntfscluster
ntfsmove
ntfslabel
mount.ntfs-3g
mount.ntfs
mount.lowntfs-3g
mkntfs
ntfscp
mkfs.ntfs
ntfsundelete
mount.ntfs-fuse
ntfsclone
ntfsresize
Questions:
How does a Linux machine auto-mount an NTFS drive without looking at fstab?
How is an NTFS drive mounted without NTFS being listed in either of the two filesystem files above?
Why is there no mention of a mounted NTFS filesystem in mtab even though they're mounted on my system and browsable?
|
You are probably using the ntfs-3g driver, which is a user-mode (FUSE) filesystem. It will show up in /proc/mounts and /etc/mtab with a fuse-based type (typically fuseblk) rather than ntfs.
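One way to see this, assuming a Linux kernel with /proc, is to list the mounts the kernel records with a FUSE-based type:

```shell
# Print mount point and filesystem type for FUSE-backed mounts
# (types fuse, fuseblk, fuse.sshfs, ...), straight from the kernel's table.
awk '$3 ~ /^fuse/ { print $2, $3 }' /proc/mounts
```

On a system with an ntfs-3g mount, the type column shows fuseblk where you might have expected ntfs.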
| How are NTFS drives handled by Linux? Nothing is in fstab yet it's automounted. Nothing in mtab yet it's currently mounted |
1,502,919,320,000 |
I want to create a systemd mount unit equivalent to the following fstab line:
/dev/sdc1 /жышы ext4 defaults 1 2
Something like:
жышы.mount
[Unit]
Description= /dev/sdc1 to /жышы
[Mount]
What=/dev/sdc1
Where=/жышы
Type=ext4
[Install]
WantedBy=multi-user.target
Yes, I tried to use systemd-escape for the unit file name and for Where=, but without success. My best attempt was:
xd0xb6xd1x8bxd1x88xd1x8b.mount
[Unit]
Description= /dev/sdc1 to /жышы
[Mount]
What=/dev/sdc1
Where='/жышы'
Type=ext4
[Install]
WantedBy=multi-user.target
This variant almost works (no error for the unit file name), but it mounts /dev/sdc1 to the auto-created folder /xd0xb6xd1x8bxd1x88xd1x8b instead of /жышы.
Please help me fix this mess.
|
From man systemd.mount:
Mount units must be named after the mount point directories they control. Example: the mount point /home/lennart must be configured in a unit file home-lennart.mount. For details about the escaping logic used to convert a file system path to a unit name, see systemd.unit(5).
OK, so from man systemd.unit:
The escaping algorithm operates as follows: given a string, any "/" character is replaced by "-", and all other characters which are not ASCII alphanumerics, ":", "_" or "." are replaced by C-style "\x2d" escapes. In addition, "." is replaced with such a C-style escape when it would appear as the first character in the escaped string.
When the input qualifies as absolute file system path, this algorithm is extended slightly: the path to the root directory "/" is encoded as single dash "-". In addition, any leading, trailing or duplicate "/" characters are removed from the string before transformation. Example: /foo//bar/baz/ becomes "foo-bar-baz".
This escaping is fully reversible, as long as it is known whether the escaped string was a path (the unescaping results are different for paths and non-path strings). The systemd-escape(1) command may be used to apply and reverse escaping on arbitrary strings. Use systemd-escape --path to escape path strings, and systemd-escape without --path otherwise.
So, we run
systemd-escape --path /жышы
and get
\xd0\xb6\xd1\x8b\xd1\x88\xd1\x8b
So, \xd0\xb6\xd1\x8b\xd1\x88\xd1\x8b.mount is the right file name. The backslashes are important!
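The escaping rules quoted above can be sketched as a small shell function (a simplified re-implementation for illustration only; the real systemd-escape also escapes a leading "." and handles further corner cases, so prefer the real tool):

```shell
# Minimal sketch of `systemd-escape --path`: strip leading/trailing slashes,
# map "/" to "-", keep ASCII alphanumerics plus ":", "_" and ".", and
# hex-escape every other byte as \xHH.
escape_path() {
    p=${1#/}; p=${p%/}
    out=
    for h in $(printf '%s' "$p" | od -An -tx1 -v); do
        case $h in
            2f) out="${out}-" ;;                        # "/" becomes "-"
            3[0-9a]|4[1-9a-f]|5[0-9a]|5f|6[1-9a-f]|7[0-9a]|2e)
                out="${out}$(printf "\\x$h")" ;;        # safe ASCII kept as-is
            *)  out="${out}\\x$h" ;;                    # everything else escaped
        esac
    done
    printf '%s\n' "$out"
}

escape_path /home/lennart   # home-lennart
escape_path /жышы           # \xd0\xb6\xd1\x8b\xd1\x88\xd1\x8b
```

This reproduces the two examples from the man page excerpts: a plain path maps slashes to dashes, and each UTF-8 byte of the Cyrillic name becomes a \xHH escape.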
| How to mount folder with nonASCII (cyrillic) letters by systemd mount-unit? |
1,502,919,320,000 |
I'm working on a script that is supposed to execute on startup, but the problem is that the script requires some files that are on a shared drive that is automatically mounted via fstab, and at the time of the script's execution the drive isn't mounted yet.
I've tried the cron @reboot and init.d routes, but they both execute too early. I also considered adding mount -a to the script, but I would rather avoid having to sudo it. For now I just added a delay to make it work, but that feels a bit hacky.
Is there a way to ensure that a startup script runs after fstab has been processed? Or force the mounts to be processed without using sudo?
|
For that, you would have to run your script as a systemd unit (assuming you have systemd), where you can declare the dependency...
If you want to stick with cron @reboot (which sounds like the simpler choice), you have to make your script a bit smarter (or start cron after the filesystem mounts, a change I wouldn't suggest). Instead of a simple delay, you can check whether the required filesystem is mounted (in bash):
while ! mount | awk '{print $3}' | grep -qx /the/mountpoint; do
sleep 1
done
Or you can check if the file is there what you need:
while ! [ -f /that/file ] ; do
sleep 1
done
| fstab mounting time |
1,502,919,320,000 |
I have a FreeBSD 10.2 server that I mount iSCSI drives to. I would like to have those drives mounted automatically in fstab so that they are persistent across reboots.
If I execute the command
mount /dev/da0p1 /mnt
It works perfectly.
mount
/dev/ada0p2 on / (ufs, local, journaled soft-updates)
devfs on /dev (devfs, local, multilabel)
/dev/da0p1 on /mnt (ufs, local, soft-updates)
However, if I put an entry in /etc/fstab, the system halts its boot process saying that /dev/da0p1 is invalid. I am assuming that this is because networking and iSCSI services are not up yet.
In my /etc/fstab I have added the "late" option for the filesystem I want to mount, but it doesn't have any effect.
# cat /etc/fstab
# Device Mountpoint FStype Options Dump Pass#
/dev/ada0p2 / ufs rw 1 1
/dev/ada0p3 none swap sw 0 0
#User Added Entries
#/dev/da0p1 /mnt ufs rw,late 3 3
What happens is that the boot process stops, saying there is "no such file or directory /dev/da0p1". If I do an ls /dev/da*, that device shows up.
In fact, after some testing, if I just wait a few seconds, and type "exit" at the prompt in single user mode, the system continues to boot and the drives get mounted normally.
Is there a way to put in a 5 second delay to allow the iscsi device to be created so the mount doesn't fail?
|
Disclaimer: I don't know if this is the right thing to do, but it worked for me.
So, I essentially needed the startup process to take a little extra time so that networking services could finish loading and the iSCSI mounts could be created so there would be something to mount to.
What I did was add sleep 5 to the /etc/rc.d/mountlate script.
# PROVIDE: mountlate
# REQUIRE: DAEMON
# BEFORE: LOGIN
# KEYWORD: nojail
. /etc/rc.subr
name="mountlate"
start_cmd="mountlate_start"
stop_cmd=":"
mountlate_start()
{
local err latefs
sleep 5 <-------- Added this line
# Mount "late" filesystems.
#
err=0
5 seconds seemed to be a good number for me; your mileage may vary and you will want to test out different values.
Again, I don't know if this is the correct way of solving this particular issue; if someone has a better or more correct way, please post it.
| Mount iSCSI Partitions Automatically at Boot on FreeBSD 10 |
1,502,919,320,000 |
I'm running RHEL 7.2 in Amazon Web Services and am trying to make my /tmp use an attached 10 GB volume /dev/xvdh. Data does not need to persist, but I have to have a bigger volume just for tmp, because of a customer requirement. Here's the entry in my fstab.
/dev/xvdh /tmp xfs defaults,nofail 0 2
When I run sudo mount -a, I don't get any errors, and yet, when I reboot, I don't see this mounting when I run lsblk, as seen below.
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 10G 0 disk
├─xvda1 202:1 0 1M 0 part
└─xvda2 202:2 0 10G 0 part /
xvdb 202:16 0 8G 0 disk /grid/01
xvdc 202:32 0 8G 0 disk /grid/02
xvdd 202:48 0 8G 0 disk /grid/03
xvde 202:64 0 8G 0 disk /grid/04
xvdf 202:80 0 8G 0 disk /grid/05
xvdg 202:96 0 20G 0 disk /var/log
xvdh 202:112 0 10G 0 disk
Got any pointers? The drive definitely exists...
I was asked to add the output of mount, here it is:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=484472k,nr_inodes=121118,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/xvda2 on / type xfs (rw,relatime,attr2,inode64,noquota)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=28,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
/dev/xvdb on /grid/01 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/xvde on /grid/04 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/xvdd on /grid/03 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/xvdg on /var/log type xfs (rw,relatime,attr2,inode64,noquota)
/dev/xvdf on /grid/05 type xfs (rw,relatime,attr2,inode64,noquota)
/dev/xvdc on /grid/02 type xfs (rw,relatime,attr2,inode64,noquota)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=101548k,mode=700,uid=1000,gid=1000)
here's the output of df /tmp:
/dev/xvda2 10473452 1603440 8870012 16% /
Edit: Like Edison says...
Well, I found another way that won't work. Per this thread, I tried masking this file /usr/lib/systemd/system/tmp.mount, but my mapping for /tmp still wouldn't work on reboot. So then I tried renaming the file, also didn't work. I restored the file and removed the mask.
|
You might want to try using the UUID instead. As in changing the fstab to
UUID=xxxxxxxxxxxx /tmp xfs defaults,nofail 0 2
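A hedged sketch of the full change (the UUID is a placeholder; read the real one off the device first):

```
# Find the UUID with:  sudo blkid /dev/xvdh   -> copy the UUID="..." value
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /tmp  xfs  defaults,nofail  0  2
```

A UUID reference survives the device being renamed across reboots, which is common with EC2 block devices.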
| Mounting /tmp in another drive on RHEL 7.2 in AWS |
1,502,919,320,000 |
I would like to mount a filesystem permanently. If I understand this correctly, it can be done by adding a line to /etc/fstab.
If my mount syntax is like this:
mount -t cifs -o username=USERNAME,password=PASSWD //192.168.1.88/shares /mnt/share
Then, what must I add to fstab to make it work properly?
|
See man fstab for the details on the fields. In short your line will be:
//192.168.1.88/shares /mnt/share cifs username=USERNAME,password=PASSWD 0 0
See also man mount.cifs, especially the credentials= directive to keep the credentials apart from the fstab file.
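A hedged sketch of the credentials variant (the file path and name are arbitrary; keep the file readable by root only, e.g. chmod 600):

```
# /etc/cifs-credentials
username=USERNAME
password=PASSWD
```

and the fstab line then becomes:

```
//192.168.1.88/shares  /mnt/share  cifs  credentials=/etc/cifs-credentials  0  0
```

This keeps the password out of the world-readable /etc/fstab.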
| Is fstab syntax the same as mount? |
1,502,919,320,000 |
I have three EBS RAID 10 volumes in my /etc/fstab on an Amazon AMI hosted with AWS/EC2...
Every time I reboot the instance, the volumes get mounted to the wrong mount points. Any ideas on how I can get these RAID volumes to mount to the correct mount points?
Correct Example
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 1.3G 6.6G 16% /
tmpfs 3.4G 0 3.4G 0% /dev/shm
/dev/md127 2.0G 129M 1.9G 7% /mnt/db
/dev/md126 35G 18G 18G 50% /mnt/web
/dev/md125 3.0G 267M 2.8G 9% /mnt/bc
After Reboot
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 1.3G 6.6G 16% /
tmpfs 3.4G 0 3.4G 0% /dev/shm
/dev/md127 2.0G 129M 1.9G 7% /mnt/bc
/dev/md126 35G 18G 18G 50% /mnt/db
/dev/md125 3.0G 267M 2.8G 9% /mnt/web
My /etc/fstab
LABEL=/ / ext4 defaults,noatime 1 1
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/dev/md127 /mnt/db xfs defaults 0 0
/dev/md126 /mnt/web xfs defaults 0 0
/dev/md125 /mnt/bc xfs defaults 0 0
|
blkid
Instead of the device handles, you might want to try using the UUID of each device. You can get the devices' UUIDs using the blkid command.
$ blkid
/dev/lvm-raid2/lvm0: UUID="2123d4567-1234-1238-adf2-687a3c237f56" TYPE="ext3"
Then add this to your /etc/fstab:
UUID=2123d4567-1234-1238-adf2-687a3c237f56 /mnt/db ext3 defaults 0 0
RAID Name?
@Patrick mentioned in the comments creating a RAID volume name. I was reluctant to suggest this because, quite honestly, I didn't understand your setup. But I'll include the details for creating the MD device just in case. Something like this:
$ sudo mdadm --assemble /dev/mdraid10 --name=myraid10 --update=name \
/dev/md125 /dev/md126 /dev/md127
I've been using RAIDs for 10+ years and I've never set the name of the device though. I usually use the UUID or the actual device handle of the RAIDs instead.
Example
$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[0] sdb1[1]
2930266432 blocks [2/2] [UU]
unused devices: <none>
From the above output, the device handle is /dev/md0. So now you can check its details:
$ mdadm --detail /dev/md0
/dev/md0:
Version : 0.90
Creation Time : Wed Dec 16 22:55:51 2009
Raid Level : raid1
Array Size : 2930266432 (2794.52 GiB 3000.59 GB)
Used Dev Size : -1
Raid Devices : 2
Total Devices : 2
Preferred Minor : 0
Persistence : Superblock is persistent
Update Time : Sat Jul 20 07:39:34 2013
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
UUID : 2f2b26fd:ce4d985f:6a98fc18:3e8f2e46
Events : 0.23914
Number Major Minor RaidDevice State
0 8 33 0 active sync /dev/sdc1
1 8 17 1 active sync /dev/sdb1
I then typically add the above UUID to /etc/mdadm.conf using this command:
$ sudo mdadm --detail --scan
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=0.90 UUID=2f2b26fd:ce4d985f:6a98fc18:3e8f2e46
$ sudo mdadm --detail --scan > /etc/mdadm.conf
In my /etc/fstab to mount this RAID I'd use /dev/md0:
/dev/md0 export/raid1 ext3 defaults 1 2
I also always put LVM on top of my RAIDs. But that's another topic altogether.
References
How To Use UUID To Mount Partitions / Volumes Under Ubuntu Linux
| EBS Volumes Mounted On Wrong Directory After Reboot |
1,502,919,320,000 |
I read on the man page
defaults
Use default options: rw, suid, dev, exec, auto, nouser, async, and relatime.
Do the options set depend on a mounted filesystem or not?
|
In the man page, defaults is listed under Filesystem Independent Mount Options, which means it doesn't depend on the filesystem type.
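As a sketch of what that means in practice, these two hypothetical fstab lines request the same mount options, since defaults is simply shorthand for that list (the device and mount point here are made up for illustration):

```
/dev/sdb1  /data  ext4  defaults                                      0  2
/dev/sdb1  /data  ext4  rw,suid,dev,exec,auto,nouser,async,relatime  0  2
```

The filesystem driver may still apply its own filesystem-specific defaults on top of these.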
| mount defaults and various filesystems |
1,502,919,320,000 |
I have two USB Drives, I created partitions, and formatted it to ext4.
Running fdisk -l shows that I have /dev/sda1 and /dev/sdb1
Device Boot Start End Sectors Size Id Type
/dev/sda1
/dev/sdb1
Then I mounted both on boot by running sudo nano /etc/fstab
I use Samba to access it from a different computer and most of the time it works fine, but sometimes I can't access the files via samba, and running fdisk -l shows that the device name for my drives changed to, for example, /dev/sdc1. Sometimes it changes for only one drive and sometimes for both.
I have no idea why it happened. Any help is much appreciated.
|
I have no idea why it happened
the convention /dev/sda (for example) is a mount by-name. If all you had was one disk, i.e. your operating system disk, it would always be /dev/sda. So no problem. Add more disks and you get sdb, sdc and so on. But mount by-name does not reliably respect order or sequence. On a running system, once your operating system disk is present by-name as sda, subsequently attached [usb] disks show up in sequence as sdb and sdc — that's the only situation in which by-name behaves predictably. Put by-name entries in /etc/fstab so that mounting happens at boot, and the order is no longer guaranteed; maybe it goes by PCI bus enumeration order, I don't know, but you may observe that your operating system disk no longer comes up first as sda. So with mount by-name in /etc/fstab you take your chances.
For a running system, temporarily plugging in a [usb] disk and mounting it by-name is acceptable. But for reliability, and to avoid experiencing exactly what you did, do not use mount by-name in /etc/fstab to have devices mounted that way at boot time; it's just bad practice now.
Mount either by-uuid or by-label. You have made the EXT4 partitions on your usb sticks, so put a label on them as well such as stick1 and stick2 and then use that mount syntax convention in /etc/fstab. The first column in /etc/fstab instead of having /dev/sda3 / for your operating system disk for example could be something like these two
UUID=800e924a-a869-4152-9533-9d9cfecbd19e /
or
LABEL=rootpartition /
look under /dev/disk/ to see the different mount conventions.
you could of course mount your usb disks by UUID {universally unique id} once you get their uuid, but a filesystem label would be easier to type and remember and be just as reliable... until you use someone else's usb stick that coincidentally has the same label.
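A sketch of that approach for the two sticks (device names and labels here are examples — verify the devices with lsblk or blkid before labeling):

```
# put a filesystem label on each ext4 partition (e2label is from e2fsprogs)
$ sudo e2label /dev/sdb1 stick1
$ sudo e2label /dev/sdc1 stick2

# /etc/fstab entries; nofail keeps boot from hanging when a stick is absent
LABEL=stick1  /mnt/stick1  ext4  defaults,nofail  0  2
LABEL=stick2  /mnt/stick2  ext4  defaults,nofail  0  2
```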
| USB Drive sometimes changes the Drive name |
1,502,919,320,000 |
My computer has 3 disk drives:
256GB ssd (root and swap partitions)
60GB ssd (ext4)
2TB hdd (ext4)
I want to automatically mount the two extra drives at boot and I want all users to be able to read/write/execute on them. To this end, I added the following lines to fstab:
/dev/sdb1 /mnt/vertex ext4 defaults,users,noatime 0 0
/dev/sdc1 /mnt/cuda ext4 defaults,users,noatime 0 0
From what I've read about fstab, I thought this should work but it doesn't. After rebooting or running sudo mount -a, the drives successfully mount but I cannot write to them as any user other than root.
Thinking this must be because I created the dirs in /mnt as root, I created dirs in my home dir, and added these as the mount points in fstab. But the result is the same: at boot or by using sudo mount -a, the drives are mounted but cannot be written to without superuser privilege.
If I remove the lines from fstab and reboot, I can see the drives listed in my file manager (thunar). I can click on it and thunar will mount the drive, I can see the files on it, but I still cannot write to the drive.
I am lost.
|
The two partitions are formatted as ext4, which by default sets the owner and group of the root dir of that volume to root, and permissions to rwxr-xr-x. You can check that with
$ ls -la /mnt/<mountpoint>
In order to make them writable for normal users, you can either change the group of that root dir to a common user's group with
$ sudo chgrp users /mnt/<mountpoint>
when the drive is mounted (replace the group name with an appropriate one), or you make that dir writeable for everyone (which may open a security gap):
$ sudo chmod o+w /mnt/<mountpoint>
This changes the volume's root dir's permissions (and only that) permanently, and needs to be done only once. Be aware that the owner, group and permissions of new FS entries (files, dirs etc.) still depend on the user that creates them. There are also more fine-grained possibilities to handle this, but that's an advanced topic and depends on your use-case. Extend this question or (better) create a new one if you have special requirements, and this doesn't work for you.
| How to have hdd's auto-mounted and usable by all users? |
1,502,919,320,000 |
Can we set the size in the following syntax as a percentage instead of a static size?
example from /etc/fstab
tmpfs /var/work tmpfs size=100g 0 0
lets say we have ram memory with 120g , we can set the size to used 100g from the ram as mentioned above
but is it possible to set, for example, 80% as the size instead of a static value?
example
tmpfs /var/work tmpfs size=80% 0 0
|
From the kernel docs for tmpfs:
tmpfs has three mount options for sizing:
size: The limit of allocated bytes for this tmpfs instance. The
default is half of your physical RAM without swap. If you
oversize your tmpfs instances the machine will deadlock
since the OOM handler will not be able to free that memory.
nr_blocks: The same as size, but in blocks of PAGE_SIZE.
nr_inodes: The maximum number of inodes for this instance. The default
is half of the number of your physical RAM pages, or (on a
machine with highmem) the number of lowmem RAM pages,
whichever is the lower.
These parameters accept a suffix k, m or g for kilo, mega and giga and
can be changed on remount. The size parameter also accepts a suffix %
to limit this tmpfs instance to that percentage of your physical RAM:
the default, when neither size nor nr_blocks is specified, is size=50%
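So the answer is yes — applying the % suffix to the line from the question:

```
tmpfs  /var/work  tmpfs  size=80%  0  0
```

The percentage is taken of physical RAM, and like the other size forms it can be changed on an already-mounted instance with mount -o remount,size=...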
| Can we set the size in tmpfs syntax as percentage instead of static size |
1,502,919,320,000 |
This question has surely been answered somewhere else but I'm having trouble finding it.
I'm in a situation where I'm moving my root file system from a Debian install off of a hardware RAID and onto an internal USB. I want to keep the partition small and relatively secure. The goal is to keep frequent read/writes off the root partition for stability reasons. I have prepped a RAID10 disk with a few different partitions, one for /home and I will put the rest of the movable directories on another partition and bind them in fstab.
The question I have is, what directories can/should I move out of the root partition and mount them with fstab?
I know /bin, /sbin, /etc, /boot(kinda), /dev, /mnt, and others can't/shouldn't be moved out of the root partition as they are required on boot and well, fstab to even function.
I've glanced at the FHS and saw a list of required directories, but according to other answers such as this here, /var, /tmp, /usr, and others can be moved, and in some instances it's recommended.
Note: When I mention "move", I mean to say to keep the parent directory though move the contents.
|
Today, you have to adapt to SystemD; 90% of the traditional and complicated "rules" for partitioning are obsolete.
The usr-bin "split" problem is also "normalized", thanks to systemd: Poettering explains why it has "always been broken" to have /usr split off and have a "minimal" /bin; the initrd is that "minimal root", he says. (systemd/TheCaseForTheUsrMerge)
That means, /usr stays on root. This makes sense and is a simplification. You can still use a sub-mountpoint like /usr/local/...
/var is the first mountpoint to split off for IO reasons (perf./safety). It has e.g. log/journal.
/home: can be split off for logical reasons ("/usr"=system, "home"=data )
/opt and /srv can be split off for volume reasons. How they are used will depend on what is installed.
/tmp and /run are type-tmpfs-mounted - could of course be configured else, and then likely be split off.
How this translates to your setup (internal USB?) I can't say. But if you bother with RAID, then maybe you want one kind of RAID for /var, and another for "/" and/or /home. When you add RAID, you don't have a 1-to-1 mountpoint-to-disk mapping anymore. You can create virtual disks of different flavors: "normal" RAID10, extra fast for /var, extra safe for /home.
| Which directories can be on a different partition outside of root? |
1,502,919,320,000 |
I need to use an alternative fstab file for mounting a folder in another folder, like the command
mount --bind /folder1 /folder2
I tried the command
mount --fstab /pathToFile.fstab
as stated in the man:
-T, --fstab path
Specifies an alternative fstab file. If path is a directory then the files in the directory are sorted by strverscmp(3); files that start with . or without an .fstab extension are ignored. The option can be specified more than once. This option is mostly designed for initramfs or chroot scripts where additional configuration is specified beyond standard system configuration.
I create the file this way:
/folder1 /folder2 auto bind 0 0
but the command
mount --fstab /path
does nothing.
I added the line from the alternative file in /etc/fstab and with the
mount -a
the folder is mounted correctly.
Does anybody have experience with the --fstab option?
|
The command
mount --fstab /pathToFile.fstab
is the same as mount with no options when using the standard fstab file, i.e. "list mounted filesystems".
To actually mount all automountable filesystems specified in a custom fstab file similar to using mount -a with the standard fstab files, you'll need to use the --fstab option together with the -a option:
mount --fstab /pathToFile.fstab -a
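A minimal end-to-end sketch (paths are examples; the mount itself still requires root):

```
$ cat > /tmp/alt.fstab <<'EOF'
/folder1  /folder2  none  bind  0  0
EOF
$ sudo mount --fstab /tmp/alt.fstab -a
```

A bind entry can use none as the filesystem type; the auto type from the question also works once -a is given.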
| Mounting using alternate fstab file |
1,502,919,320,000 |
I tried googling it and just found this https://ubuntuforums.org/showthread.php?t=2234886 and this https://bugs.launchpad.net/ubuntu/+source/gnome-disk-utility/+bug/1165437.
But it is not so clear.
So I thought the star icon represents a boot drive. On my first drive, the first partition is 1.1GB ext4 bootable and the second partition is an LVM2 PV.
But when I added my secondary internal drive, backed up the data and converted it from ntfs to ext4, all 3 partitions on the second drive got the star icon.
The star icon only shows if the partition is mounted at startup: when I remove the entries from /etc/fstab, the star icon is gone.
So what does the star icon really mean? If it marks the boot drive — my secondary drive is just for data, I will never boot from it. And how can I remove the star icon without removing the entries from /etc/fstab?
|
I was hoping that reading the manual would be enough, but the manual is very limited, as is the rest of the documentation. So the source code had to come to the rescue. Grepping through the code for the "icon" keyword showed a few occurrences which sound like these icons:
src/disks/gduvolumegrid.c: g_ptr_array_add (icons_to_render, (gpointer) "user-bookmarks-symbolic");
Checking the icon confirms they are the ones we are looking for:
The code shows, what is the trigger for this icon to get rendered:
if (element->show_configured)
g_ptr_array_add (icons_to_render, (gpointer) "user-bookmarks-symbolic");
The show_configured is assigned when the device is "configured", whatever it means:
element->show_configured = is_block_configured (block);
We can probably simplify that to "gnome-disks knows about this drive and about its configuration".
| What is the star icon on the partition in the gnome disk utility? |
1,502,919,320,000 |
I recently did a fresh reinstall of Ubuntu 16.04 on my laptop.
It has an ntfs partition with all my documents and stuff that I might want to use on Windows as well.
When I did the reinstall I forgot to backup the fstab entry that automatically mounted the ntfs partition.
In /etc/fstab I set the uid and gid. I checked this and it seems to work correctly. But when I start firefox (which uses the profile from the ntfs partition) it complains that it can't reach the profile.
Your Firefox profile cannot be loaded. It may be missing or inaccessible.
If I don't use fstab and mount the partition by hand using the gui file explorer, everything works fine.
I know it's possible to mount so that firefox can recognize it because that's what I previously did, but I'm now stuck.
The fstab entry looks like this:
UUID=13FBF8751719184A /media/user/files ntfs defaults,rw,exec,user,uid=1000,gid=1000,umask=000,nofail 0 2
When I check it using:
ls -la /media/user
It shows me the following:
drwxrwxrwx 1 user user 28672 mrt 21 12:43 files
The specific setting it's trying to load are in /media/user/files/sharedSettings/firefox.
This directory has the same permissions:
drwxrwxrwx 1 user user 24576 mrt 21 14:02 firefox
mount returns the following:
/dev/sda7 on /media/user/files type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,user)
A normal user can read and write normally to the partition, but for some reason firefox still complains about the profile not being accessible.
Does anyone have an idea?
EDIT:
I noticed that mount returns user_id=0 and group_id=0 while I clearly set those to 1000. Could this possibly the problem?
This is what mount returns after manually mounting the partition:
/dev/sda7 on /media/user/Files type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)
Could it have to do with the uhelper=udisks2 thing?
How would I add this into the fstab?
The manual page doesn't notice this option.
|
/dev/sda7 on /media/user/files
/dev/sda7 on /media/user/Files
files and Files are two different paths, because the directory /media/user is on a native *nix filesystem. Filenames there are not interpreted case-insensitively; they are simply strings of octets which do not contain NUL (0) or ASCII /.
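A quick way to convince yourself of this, sketched as a shell snippet (the directory is a throwaway temp dir, not your real mount point):

```shell
# On a case-sensitive (native *nix) filesystem, "files" and "Files"
# are two independent directory entries.
demo=$(mktemp -d)
mkdir "$demo/files" "$demo/Files"
entries=$(ls "$demo" | wc -l)
echo "distinct entries: $entries"
```

Here ls reports two entries; under case-insensitive semantics there would be only one name, which is why a path that differs only in case is simply not found.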
| Firefox doesn't recognize profile when mounting using fstab |
1,502,919,320,000 |
I'm trying to mount /etc/folder and /var/folder to the same external volume UUID=xyz. This external volume already has subdirectories etc_folder and var_folder and has been formatted and available to mount. I want to change fstab and achieve something like the following, before doing "mount -a":
UUID=xyz:/etc_folder /etc/folder ext4 defaults,nofail 0 2
UUID=xyz:/var_folder /var/folder ext4 defaults,nofail 0 2
But this doesn't work... what exactly should I put in fstab?
|
The following works in /etc/fstab:
UUID=xyz /mnt ext4 defaults,nofail 0 2
/mnt/var_folder /var/folder none bind 0 0
/mnt/etc_folder /etc/folder none bind 0 0
I just need to mount the whole volume to one location e.g. /mnt, keep it there and create binds (seems very much like a symlink).
| Mounting two folders to corresponding directories within external volume? |
1,502,919,320,000 |
I've got two devices on my LAN: a Raspbian jessie and an Ubuntu 14.04. The latter has some nfs shared folders, which are available from Raspbian at startup, set up in its /etc/fstab file as:
192.168.1.10:/mnt/nfs/HDD /mnt nfs defaults,nofail,noatime 0 0
The problem is coming up when Ubuntu is offline and I try to run df on raspbian... infinite loop. No answer.
Does df have any way to ignore non-available devices? To show only those file systems that are currently available.
|
The automounter was designed exactly for this kind of problem. It automatically mounts drives (local or remote) only when they are needed, and unmounted them when they are no longer being used.
Install autofs on your NFS client and comment out (or remove) the entries in /etc/fstab. Edit /etc/auto.master and ensure that there is a line like this uncommented in the file
/net /etc/auto.net --timeout=120
Do not just uncomment the line /net -hosts as this requires NIS installed and configured to work - which is highly unlikely.
Restart the automounter with service autofs restart. You will now have access to your remote NFS filesystems under the /net directory. In your specific instance the path will be /net/192.168.1.10/mnt/nfs/HDD. You can then symlink that into your filesystem as if it were mounted:
ln -s /net/192.168.1.10/mnt/nfs/HDD /mnt/hdd
Some notes
My personal preference is to tweak the entries in /etc/auto.master so that items are managed underneath deeper less visible directories such as /var/autofs/net and /var/autofs/misc rather than /net and /misc, but for your specific situation I've left the configuration as standard as possible.
If you want to adjust the mount options for the NFS remote filesystem you will need to edit the file /etc/auto.net as options cannot be passed from auto.master.
| Ignore unmounted file systems |
1,502,919,320,000 |
I have an Ubuntu micro instance running on amazon EC2.
Recently after logging in I was alerted:
*** /dev/xvda1 will be checked for errors at next reboot ***
I've rebooted a couple times using init 6, however when I log on I am still getting the same notice, so apparently fsck is not running at startup.
I read this blog post which mentions that if the /etc/fstab <pass> column is set to 0 then a disk check will be skipped during reboot. Here is my fstab file:
<file system> <mount point> <type> <options> <dump> <pass>
LABEL=cloudimg-rootfs / ext4 defaults 0 0
/dev/xvdh /vol xfs noatime 0 0
This is the default configuration for an ec2 image from Ubuntu.
Is it normal for <pass> to be set to 0 here?
Why would it be set to 0?
What is the best way to run fsck - should I change this value or only run it manually when alerted?
|
Why would it be set to 0?
I can see a few possible reasons for this.
Because you are running on EC2, your hardware (storage and compute instance) is virtualized. It is much less likely for such a configuration to encounter failures of any sort causing filesystem corruption, and actual physical defects in the storage (like bad blocks on magnetic storage) are nearly impossible.
This means filesystem problems are much less common, so checking the filesystem doesn't have to happen as frequently. Perhaps you are expected to run fsck manually when you suspect a problem.
EC2 is "intended" to have a read-only root filesystem, or at least one which restores a fixed state at instance launch. A primary benefit of EC2 is the ability to launch small instances on demand and then terminate them when demand drops, always running the same OS configuration. In that situation, checking the root fs makes no sense, because it will never change. This obviously does not apply if you use the system for 'development' or general use, but I don't perceive that to be Amazon's real intent for EC2.
Running fsck on EC2 wastes bandwidth and processor power. This translates into cost, both for the user and for Amazon.
Is it normal for <pass> to be set to 0 here?
I believe 1 is typical for a new Linux installation, but that can be distribution-specific. Amazon's pre-built EC2 images are also pre-configured to fit EC2.
What is the best way to run fsck - should I change this value or only run it manually when alerted?
Both options have merit. If you're frequently rebooting, you might prefer to run it manually rather than frequently increase the volume's mount count.
As an aside, I'm not sure if the root fs is typically checked after mounting (when fstab becomes available), or before mounting. If before, a 'typical' Ubuntu installation might actually perform the fsck in initramfs, before even mounting the root fs. In that case, initramfs might be different on EC2, and may ignore any flags that suggest the filesystem should be checked.
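If you do opt for occasional manual checks, a sketch of two common approaches (device name taken from the fstab above; note that fsck must not run on a mounted read-write filesystem, which is why root-fs checks are scheduled for boot):

```
# request a check of the root filesystem at the next boot
# (sysvinit/upstart-era Ubuntu honors this flag file)
$ sudo touch /forcefsck
$ sudo reboot

# or re-enable periodic checks by lowering the max mount count (ext4 only)
$ sudo tune2fs -c 30 /dev/xvda1
```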
| Should I run fsck on boot for an amazon ec2 image? |
1,502,919,320,000 |
I have been trying to mount a simple share. All domain users should have read permissions. Kubuntu was configured for the domain: I can see the domain and log in with a domain user. When I access shares with the Dolphin file manager I can successfully open and browse them (Network -> Shared Folders (SMB) -> Add the folder).
I have tried several commands to mount this: mount -t cifs, mount.cifs, fstab + mount-a, ...
All with no success. dmesg says: (I also got error -22, but I don't remember the exact setup at that time)
[ 9478.459984] CIFS: fs/cifs/connect.c: VFS: leaving cifs_get_smb_ses (xid = 330) rc = -13
[ 9478.459986] CIFS: fs/cifs/dfs_cache.c: __dfs_cache_find: search path: \DOMAIN\files
[ 9478.459989] CIFS: fs/cifs/dfs_cache.c: get_dfs_referral: get an DFS referral for \DOMAIN\files
[ 9478.459993] CIFS: fs/cifs/fscache.c: cifs_fscache_release_client_cookie: (0x0000000058c5ce4f/0x00000000c6989c97)
[ 9478.459998] CIFS: fs/cifs/connect.c: VFS: leaving mount_put_conns (xid = 329) rc = 0
[ 9478.459999] CIFS: VFS: cifs_mount failed w/return code = -13
With command, password gets accepted:
root@HOSTNAME:/mnt# sudo mount -t cifs -o username=user.name@DOMAIN '\\DOMAIN\files' /mnt/DOMAIN/X
Password for user.name@DOMAIN@\DOMAIN\files: *****************
mount error(22): Invalid argument
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
Shares are reachable:
root@HOSTNAME:~# smbclient -U user.name@DOMAIN -L \\\DOMAIN\\files
Enter user.name@DOMAIN's password:
Sharename Type Comment
--------- ---- -------
...
...
Files Disk
...
...
...
...
SMB1 disabled -- no workgroup available
This does not work:
root@HOSTNAME:~# smbclient -k -U user.name@DOMAIN -L \\\DOMAIN\\Files
gensec_spnego_client_negTokenInit_step: gse_krb5: creating NEG_TOKEN_INIT for cifs/DOMAIN failed (next[(null)]): NT_STATUS_INVALID_PARAMETER
session setup failed: NT_STATUS_INVALID_PARAMETER
I have found some posts saying I need keyutils:
root@HOSTNAME:/mnt/4TB# apt list --installed | grep keyutils
keyutils/focal,now 1.6-6ubuntu1 amd64 [installed]
libkeyutils1/focal,now 1.6-6ubuntu1 amd64 [installed,automatic]
fstab:
#/etc/fstab
//DOMAIN/files /mnt/DOMAIN/X cifs credentials=/home/user.name@DOMAIN/.credentials/samba,file_mode=0644,dir_mode=0755,iocharset=utf8,sec=ntlmssp,vers=2.1,rw 0 0
# I have tried vers=1.0, vers=2.0, vers=2.1, no vers
Not being able to mount a simple samba share makes me feel very silly :) I hope I am doing something that is very obviously wrong.
EDIT: I have changed some stuff and got a different dmesg output:
user.name@DOMAIN@hostname:[~]$ sudo mount -a
mount error(22): Invalid argument
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
[86121.898379] CIFS: Attempting to mount \\DOMAIN\files
[86121.901569] CIFS: VFS: \\DOMAIN\files DFS capability contradicts DFS flag
[86121.903442] CIFS: VFS: cifs_mount failed w/return code = -22
Fstab:
//DOMAIN/files /mnt/DOMAIN/X cifs credentials=/home/user.name@DOMAIN/.credentials/samba,file_mode=0644,dir_mode=0755,nounix,iocharset=utf8,sec=ntlmssp,vers=2.1,rw 0 0
|
Ok, so I figured out what the problem is. Shares are behind DFS and this caused problems.
https://www.geeksforgeeks.org/what-is-dfsdistributed-file-system/
A Distributed File System (DFS) as the name suggests, is a file system that is distributed on multiple file servers or multiple locations. It allows programs to access or store isolated files as they do with the local ones, allowing programmers to access files from any network or computer.
Fstab:
//MACHINE.DOMAIN/some/folders /mnt/DOMAIN/some_folder cifs credentials=/home/user.name@DOMAIN/.credentials/samba,uid=USER_ID,gid=GROUP_ID,file_mode=0644,dir_mode=0755,nounix,iocharset=utf8,sec=ntlmssp,vers=2.0,rw 0 0
Mounting directly from the PC that has the share works perfectly. I know this is not the solution, but it works really well as a workaround.
| Kubuntu 20.04.4 LTS cannot mount domain samba shares (either from terminal or fstab) |
1,502,919,320,000 |
As SSD drives have limited write endurance, I would like to know whether disabling access-time logging still plays a significant role in 2021. Most websites I see on the subject are from 2015 and before, and SSDs might be more robust nowadays.
I don't really understand how SSD writes are managed on Linux systems with respect to caching, nor do I know how many files are actually affected by these updates, or whether all or only some of the accessed files have their access times updated.
My final question concerns the disadvantages of disabling access-time logging. What services use access times? Will something break? Is there something I should know?
Thanks in advance!
PS: I am using Ubuntu 21.04 for daily usage on a 300 GB partition on SSD. My computer model was released mid-2020.
|
There are a number of optimizations in the kernel and ext4 to reduce the overhead of atime updates, such as relatime (only update atime when it is older than mtime or more than a day old) and lazytime (delay atime updates and aggregate writes of multiple inodes in a single block only when needed or if more than a day old).
The cheapest consumer-grade flash device are rated at 1 full Drive Write Per Day (DWPD) for 3 years. Inodes are typically 1/32 or less of the blocks in the filesystem, so the atime updates of inodes (limited to one atime write per day) are not going to be the deciding factor for exceeding the DWPD of the device.
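If you still want to cut atime writes further, these options go in the fourth field of /etc/fstab — a hypothetical entry (the UUID is a placeholder):

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,lazytime  0  1
```

Substitute noatime for lazytime to drop access-time updates entirely; relatime is already the kernel default and needs no explicit entry.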
| How useful is it to disable access time logging on SSD and are there disadvantages doing so? |
1,502,919,320,000 |
related to thread How to edit /etc/fstab properly for network drive?
i have added the following line to /etc/fstab
//192.168.0.52/public /mnt/PC52/public cifs username=guest,password="" 0 0
if I call sudo mount -a the directory mounts and all works fine, but if I reboot the computer it fails to mount at boot. I feel like it's something to do with the blank password that I have passed: if I omit the password field completely it prompts for a password during boot, and if I just press enter it boots and works
|
sorted it.
as it's an open directory it doesn't matter what password I send; as long as I add the password field, setting the password to something, it works.
changing the line in fstab to the following, it works fine
//192.168.0.52/public /mnt/PC52/public cifs username=guest,password=123 0 0
| mounting smb at boot with /etc/fstab |
1,589,564,789,000 |
I am trying to set up a virtual drive from a file. This file will then be written to a flash device (not relevant). Because creating and manipulating the virtual drive will be in a script, I need to do it in user space, i.e., not as root. The script is for building and creating an image for a flash device; so, running as root will be problematic.
In order to mount the file as a virtual drive, I added the following line to /etc/fstab:
/home/user/drive.img /home/user/mnt ext4 loop,rw,user,noauto,noexec 0 0
The problem is that when I mount the virtual drive, root takes ownership of ~/mnt, defeating the purpose of mounting it as a regular user.
I know that other file systems allow you to mount while specifying the uid/gid, but the virtual drive must be ext4 to be compatible with an existing process. I tried udisksctl, but it requires root authentication for loopback.
I am going to try mounting, then changing ownership (as root), and never unmounting it. I will do a 'sync' then take a snapshot of the virtual drive. I do not like it because it is not clean, but it may work for now.
|
The step you haven't mentioned is how you created the ext4 filesystem, which is the source of the problem. Using mkfs.ext4 /home/user/drive.img will create a root inode owned by root, so when you mount it, it will still belong to root.
The solution is to add option -E root_owner to make it belong to the user running mkfs.ext4, or even -E root_owner=$uid:$gid for some explicit numeric user and group id.
(Another solution is to use debugfs (package e2fsprogs for Fedora) to edit the inode.) This example worked for me:
uid=$(id -u)
gid=$(id -g)
rm -f /tmp/ext4fs
truncate -s 50M /tmp/ext4fs
if true
then mkfs.ext4 -E root_owner=$uid:$gid /tmp/ext4fs
else mkfs.ext4 /tmp/ext4fs
debugfs -w -R "set_inode_field . uid $uid" /tmp/ext4fs
debugfs -w -R "set_inode_field . gid $gid" /tmp/ext4fs
fi
# echo '/tmp/ext4fs /tmp/mymnt ext4 loop,rw,user,noauto,noexec' >>/etc/fstab
mkdir -p /tmp/mymnt
mount /tmp/ext4fs
ls -lRa /tmp/mymnt
touch /tmp/mymnt/afile
ls -l /tmp/mymnt
umount /tmp/ext4fs
On mount the ls shows the mount point as
drwxr-xr-x 3 meuh users 1024 May 15 21:04 .
and allows me to create a file there.
| Mounting as <user>, a loop still assigns root ownership |
1,589,564,789,000 |
I recently started using systemd in linux. On systemd mount, I have some observations:
A mount unit file, named after the mount point, is generated when there is an entry in /etc/fstab.
I also observed the two scenarios listed below:
Precondition: I have below entry in fstab:
/dev/sda3 /test_mount ext4 rw,acl,nobarrier,nodelalloc 0 0
(So test_mount.mount file is generated under /var/run/systemd/generator/ directory after reboot.)
Scenario 1: I deleted the entry from fstab and rebooted the machine. My expectation is test_mount.mount file should be deleted from /var/run/systemd/generator/ directory. But the file is not deleted and systemd is attempting to mount the device node.
Scenario 2: I modified the entry in fstab. I renamed the mountpoint to sec_test_mount and rebooted the machine. My expectation is that test_mount.mount should be deleted from the /var/run/systemd/generator/ directory and sec_test_mount.mount should be newly created. sec_test_mount.mount is newly created, but test_mount.mount is not deleted. Both mount units attempt the mount, and the mount happens twice.
|
systemd-fstab-generator creates the mount units under /run, because that filesystem is a tmpfs (in-memory filesystem), which is not preserved across reboots, so it's expected to be replaced with an empty volume on every boot.
(/var/run is supposed to be a symlink to /run, which is the tmpfs mount. That name exists for compatibility only, modern Linux uses /run directly everywhere.)
If that is not the case on your machine, I'd say that is where the problem is... If you fix that, the generator will properly recreate the mount units on every boot, since the tmpfs will be empty each time.
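One way to check this, and to re-run the generators by hand without rebooting (systemctl daemon-reload re-executes all generators, including systemd-fstab-generator):

```
$ findmnt -no FSTYPE /run
tmpfs
$ sudo systemctl daemon-reload
$ ls /run/systemd/generator/
```

If findmnt does not report tmpfs for /run, that misconfiguration is the likely cause of the stale units.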
| Systemd generated mount file is not deleted when the mount point entry in fstab is deleted or modified |
1,589,564,789,000 |
I have an nfs device mounted. I am trying to set it with nosuid through /etc/fstab, but I am having trouble. I have set /etc/fstab correctly (I think), but here's the issue. When I reboot the system and run mount | grep nfs, I can see that it's not mounted with nosuid. Then, when I run umount -l sunrpc, and then mount sunrpc, it mounts correctly with nosuid.
Does anyone know what could be happening?
Command:
# mount |grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
# umount -l sunrpc
# mount sunrpc
# mount |grep nfs
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,nosuid)
/etc/fstab
#
# /etc/fstab
# Created by anaconda on Thu Apr 19 09:13:00 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_opendcsoel6-lv_root / ext4 defaults 1 1
UUID=e3b1a0fb-c27f-42e9-ab93-15295497a293 /boot ext4 defaults 1 2
/dev/mapper/vg_opendcsoel6-lv_swap swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs nosuid 0 0
|
I guess you are mixing rpc_pipefs with nfs.
rpc_pipefs is helper necessary for nfs operation but it is not actual mount of remote nfs server. Most of the time it can be safely omitted from /etc/fstab (usually proper defaults for rpc_pipefs are hardcoded in nfs startup script which ignores /etc/fstab).
For example the following line is from my /etc/fstab (note nfs instead of rpc_pipefs in 3-rd column):
192.168.200.1:/mnt/vg/git /mnt/host/git nfs defaults 0 0
192.168.200.1:/mnt/vg/work /mnt/host/work nfs nosuid,noexec 0 0
| Why is /etc/fstab not being used on boot? |
1,589,564,789,000 |
I'm trying to create an appropriate /etc/fstab file for my LFS partition, as in LFS part 8.2. How do I find out the file systems for my / mount-point and my swap mount point ( and )? And how do I find out the type of my / mount-point? I'm using a Ubuntu 17.04 host, and this is what I'm using as a model (pasted below).
cat > /etc/fstab << "EOF"
# Begin /etc/fstab
# file system mount-point type options dump fsck
# order
/dev/<xxx> / <fff> defaults 1 1
/dev/<yyy> swap swap pri=1 0 0
proc /proc proc nosuid,noexec,nodev 0 0
sysfs /sys sysfs nosuid,noexec,nodev 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
tmpfs /run tmpfs defaults 0 0
devtmpfs /dev devtmpfs mode=0755,nosuid 0 0
# End /etc/fstab
EO
|
as per lfs book description:
Replace <xxx>, <yyy>, and <fff> with the values appropriate for the
system, for example, sda2, sda5, and ext4.
your root partition described by:
/dev/<xxx> / <fff> defaults 1 1
is the same partition you set up in chapter "2.4. Creating a New Partition", and later mounted as your $LFS (by default /mnt/lfs). and as per book description it's something like /dev/sda5.
type of root partition (<fff> in fstab example) was set by you in chapter "2.5. Creating a File System on the Partition". by default it is ext4.
if unsure, you can use the mount command from your host. Without any options it lists all mounted partitions, so you look for something like:
/dev/sda9 on /mnt/lfs type ext4 (rw,relatime,data=ordered)
in my case the device is /dev/sda9 and the type is ext4, and that's what I put in my fstab for <xxx> and <fff>.
swap partition described by:
/dev/<yyy> swap swap pri=1 0 0
was probably already on your ubuntu host, so you didn't set it up in chapter 2. but we can again look it up in already mounted partitions.
command mount | grep swap will show you only mounted swap partitions. and again, you take device name and substitute <yyy> for it :)
rest of the fstab file you leave as it is in the example, should work without any more changes.
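If you want to script this lookup instead of reading the mount output by eye, awk can pull the two fields out. A minimal sketch - the sample line below is a stand-in for whatever mount actually prints on your host:

```shell
# Sample line in the format `mount` prints (stand-in for real output):
line='/dev/sda9 on /mnt/lfs type ext4 (rw,relatime,data=ordered)'

# Field 1 is the device (<xxx>), field 5 is the filesystem type (<fff>):
echo "$line" | awk '{ print $1, $5 }'   # -> /dev/sda9 ext4
```

on a real host you would feed it `mount | grep /mnt/lfs` instead of the canned line.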
| How do I create a proper /etc/fstab file for my LFS partition? |
1,589,564,789,000 |
I have a kubernetes cluster running on baremetal ubuntu server 16.04 with glusterfs and heketi. Heketi will automatically add volume groups and add those to fstab. Due to $reasons, that volume group might not exist on boot.
If the initramfs encounters a non-existant volume group in the fstab, it will cease to boot and throw the server into grub emergency mode - which really sucks for servers sitting in some data center somewhere in the world.
Is it possible to let the kernel try to continue booting despite a wrong entry in fstab?
|
If your ubuntu has systemd, you can edit /lib/systemd/system/local-fs.target and comment out the last two lines:
#OnFailure=emergency.target
#OnFailureJobMode=replace-irreversibly
I haven't tested this extensively and don't know if there are any risks or side effects involved, but so far it works like a charm. It mounts the root volume and all other volumes, except those that are misconfigured, obviously
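As a complement (not a replacement for the edit above): fstab(5) and systemd.mount(5) also document a nofail mount option that marks an entry as non-critical at boot, so a missing device no longer pulls the system into emergency mode. A sketch with a made-up volume path:

```
# /etc/fstab sketch - boot continues even if this volume is absent;
# x-systemd.device-timeout= shortens the wait for the device to appear.
/dev/mapper/vg_example-brick1  /var/lib/heketi/mounts/brick1  xfs  defaults,nofail,x-systemd.device-timeout=10s  0  0
```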
| boot server despite wrong fstab |
1,589,564,789,000 |
From systemd/fstab-generator.c it follows that systemd treats root= as a required kernel parameter, the only configuration source for the /sysroot mount. However, from kernel/init/main.c and kernel/init/do_mounts.c it is not clear that this is so. Question: how can one do kernel init without the 'root=' parameter (and tell systemd to support it)?
See: https://github.com/systemd/systemd/issues/3551
|
In do_mounts.c, the variable saved_root_name is set to the value of the root= command line parameter, if present. This value is a path-like string passed by the kernel, it typically looks like /dev/something (though the /dev/ prefix is optional) but it doesn't actually correspond to any on-disk path. If the root= parameter is absent, the value of ROOT_DEV is used; this is normally 0 but a different value can be injected in the system binary. The util-linux toolchain used to include a utility called `rdev` to do this (on x86 only), but it disappeared a few years ago.
All of this happens only if the initramfs or initrd hasn't taken care of mounting the root (initramfs by running /init which is supposed to call mount, initrd by calling pivot_root).
I don't know about the systemd part. There isn't much that systemd can do about the root filesystem anyway apart from mounting it read-write.
| kernel init without 'root=' parameter |
1,589,564,789,000 |
Fresh Arch Linux install on (hardware) RAID0 under 64-bit UEFI system with GPT partitions. Had to add
MODULES="ext4 dm_mod raid0"
HOOKS="base udev autodetect modconf block mdadm_udev filesystems keyboard fsck"
into /etc/mkinitcpio.conf so that partitions on RAID0 are recognized properly on boot. Otherwise,
ERROR: device 'UUID=<uuid>' not found. Skipping fsck.
ERROR: Unable to find root device 'UUID=<uuid>'.
...
would be issued.
There is one peculiarity however, and I don't know how to explain it. On the one hand, when /etc/fstab contains either /dev/* or UUID=* sources, Arch Linux boots normally. On the other hand, when it contains PARTUUID=* sources, a bunch of the corresponding Dependency failed errors (regarding mounting of those sources from /etc/fstab) happen on boot and it hangs.
Could you explain what's wrong about having PARTUUID=* in /etc/fstab in this case? Does that have something to do with RAID0?
$ cat /proc/mdstat
Personalities : [raid0]
md126 : active raid0 sda[1] sdb[0]
976768000 blocks super external:/md127/0 128k chunks
md127 : inactive sda[1](S) sdb[0](S)
4904 blocks super external:imsm
unused devices: <none>
$ dmsetup table
No devices found
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
└─md126 9:126 0 931.5G 0 raid0
├─md126p1 259:0 0 1G 0 md /boot/efi
├─md126p2 259:1 0 1G 0 md
├─md126p3 259:2 0 1G 0 md
├─md126p4 259:3 0 256G 0 md
├─md126p102 259:4 0 16G 0 md [SWAP]
├─md126p103 259:5 0 16G 0 md /
├─md126p104 259:6 0 16G 0 md /var
└─md126p105 259:7 0 256G 0 md /home
sdb 8:16 0 465.8G 0 disk
└─md126 9:126 0 931.5G 0 raid0
├─md126p1 259:0 0 1G 0 md /boot/efi
├─md126p2 259:1 0 1G 0 md
├─md126p3 259:2 0 1G 0 md
├─md126p4 259:3 0 256G 0 md
├─md126p102 259:4 0 16G 0 md [SWAP]
├─md126p103 259:5 0 16G 0 md /
├─md126p104 259:6 0 16G 0 md /var
└─md126p105 259:7 0 256G 0 md /home
sr0 11:0 1 1024M 0 rom
$ blkid
/dev/sda: TYPE="isw_raid_member"
/dev/sdb: TYPE="isw_raid_member"
/dev/md126p1: LABEL="EFI" UUID="722E-E4AB" TYPE="vfat" PARTLABEL="EFI system partition" PARTUUID="a8e94657-e6ea-4712-be06-ac9ffe6e2258"
/dev/md126p3: LABEL="Windows PE 5.0 (x64)" UUID="181C2F991C2F7144" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="15848c79-1456-418b-a243-830d0db894ce"
/dev/md126p4: LABEL="Windows 8.1 (x64)" UUID="AAB83149B83114F3" TYPE="ntfs" PARTLABEL="Basic data partition" PARTUUID="7d3a06f5-4c67-4299-80b0-029501e14f18"
/dev/md126p102: UUID="6a2d4998-3ac8-4135-9d72-47960b201d5d" TYPE="swap" PARTLABEL="Swap" PARTUUID="d418edd6-44eb-4058-921f-c68aa191c5ac"
/dev/md126p103: UUID="2c241730-a076-48d9-8d1f-6e10573a994f" TYPE="ext4" PARTLABEL="Arch Linux" PARTUUID="37200e1e-dea4-435a-a873-427e3ee8c494"
/dev/md126p104: UUID="8d4eff47-3a2b-46b4-9263-7bbf00d8d0db" TYPE="ext4" PARTLABEL="Variable" PARTUUID="cd15b1f0-e948-4975-9218-591efa5b9b95"
/dev/md126p105: UUID="e0b15e56-3846-4e75-96f8-4f75058b4a6b" TYPE="ext4" PARTLABEL="Home" PARTUUID="54e85323-522c-415a-b7bd-2eb83b6b4ee6"
/dev/md126: PTUUID="e4e1b9b8-c26f-416d-82d9-e9350d0b5ac2" PTTYPE="gpt"
/dev/md126p2: PARTLABEL="Microsoft reserved partition" PARTUUID="6e9264fd-da04-4966-b8e0-8f3124f47050"
|
Since it's now clear you're running software raid ("fake raid", where the firmware/BIOS also has a software RAID implementation to make booting Windows off of it easier—in this case, Intel Matrix Storage), you're probably seeing some bug in Arch's initramfs w/r/t partitioning md arrays.
True hardware raid is almost entirely transparent to the OS; e.g., you would see only one device, the RAID array, not one device per disk. A hardware RAID array looks just like a normal disk to the OS, at least once you've got the RAID driver installed (without it, the OS just doesn't see it at all).
For quite a while, you couldn't partition md arrays at all (it was common—still is—to use LVM on top of them, or to create multiple arrays); later, you could set up a partitionable one, but it wasn't the default; nowadays they can all be partitioned. But probably something still has an assumption about them not being partitionable, and is looking for that partuuid on a physical disk, not the RAID array.
Personally, I'd not worry about it and just use the UUID instead. Also, in general, for a Linux-only box, it's usually better to not use the "fake raid" at all, and just use Linux mdraid directly with its native formats. With RAID-0, I'm sure you'll have a chance to rebuild the box soon enough...
| 'PARTUUID' in '/etc/fstab' and (hardware) RAID0 don't play well together, do they? |
1,589,564,789,000 |
Everything has been going fine for 6 months now until my swap suddenly vanished today. Now that I look into it, I find that my disk partitioning is a bit weird. What happened to it, and what should I do to recover it quickly without having to reinstall everything? (I need to finish a job before I do a fresh install again)
Here is my /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/ubuntu--vg-root / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda1 during installation
UUID=ede3c189-5d39-4f55-b263-0d6bcafc5d7b /boot ext4 defaults 0 2
/dev/mapper/ubuntu--vg-swap_1 none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
# /dev/sdb1 /media/usb0 auto rw,user,noauto 0 0
# /dev/sdc1 /media/usb1 auto rw,user,noauto 0 0
(why is there "Ubuntu" there by the way?)
Here is the result of sudo fdisk -l /dev/sda:
Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000aabb9
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 499711 248832 83 Linux
/dev/sda2 501758 625141759 312320001 5 Extended
/dev/sda5 501760 625141759 312320000 8e Linux LVM
As for lvm information, here is the output of ls -l /dev/mapper
total 0
crw------T 1 root root 10, 236 juin 29 07:28 control
lrwxrwxrwx 1 root root 7 juin 29 07:28 ubuntu--vg-root -> ../dm-0
lrwxrwxrwx 1 root root 7 juin 29 07:28 ubuntu--vg-swap_1 -> ../dm-1
.. the output of lvscan
ACTIVE '/dev/ubuntu-vg/root' [294,10 GiB] inherit
ACTIVE '/dev/ubuntu-vg/swap_1' [3,74 GiB] inherit
.. and the output of pvscan
PV /dev/sda5 VG ubuntu-vg lvm2 [297,85 GiB / 12,00 MiB free]
Total: 1 [297,85 GiB] / in use: 1 [297,85 GiB] / in no VG: 0 [0 ]
.. and every day it looks more clear that we might have not wiped Ubuntu properly from my disk that night -_-
Is there a way I can quickly get my swap back before I get time to wipe all this?
|
To enable the swap device you can
swapon /dev/mapper/ubuntu--vg-swap_1
If there is an error with that swap space, because it was destroyed somehow, you can reformat the swap device with
mkswap /dev/mapper/ubuntu--vg-swap_1
Check the related manual pages swapon(1) and mkswap(1) for more information.
| Swap suddenly vanished from Debian? |
1,589,564,789,000 |
SETUP
I'm running Debian 8 (jessie/testing) amd64 with systemd.
On my system partition containing a btrfs filesystem, I do have the following layout:
/dev/sda1
|-root
|-root_snapshots/
|-snapshot#1
|-snapshot#2
In order to fully boot from a snapshot, I currently have to change the subvolume:
in /etc/fstab
in the grub console (edit mode)
when booting.
PROBLEM
The change to '/etc/fstab' requires a running system - e.g. one booted from a flash drive - which is what I want to avoid
SCENARIO
Assume something is really broken, and I had to boot from a snapshot#1:
I'd rather only change the subvolume in the grub console, and have the rootfs mounted on the correct subvolume (here: snapshot#1).
Without a change in /etc/fstab, systemd would still mount the rootfs from the entry specified in /etc/fstab -> yielding the wrong rootfs to be mounted
QUESTION
Can systemd be told to mount the rootfs from the 'rootflags=subvol=' parameter of '/proc/cmdline'?
Or is there another solution to circumvent this problem?
|
I think it might be helpful. There is a list of kernel command line params which systemd understands: http://www.freedesktop.org/software/systemd/man/kernel-command-line.html
There are two relevant options, fstab= and rd.fstab=:
Takes a boolean argument. Defaults to "yes". If "no", causes the generator to ignore any mounts or swaps configured in /etc/fstab. rd.fstab= is honored only by initial RAM disk (initrd) while fstab= is honored by both the main system and the initrd.
So if set in grub/grub2 (I don't know what you are using) root=/dev/required_dev fstab=no it should boot as expected.
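If GRUB2 is the bootloader, such parameters usually live in /etc/default/grub; a sketch (the root device here is a made-up example), after which the config must be regenerated:

```
# /etc/default/grub
GRUB_CMDLINE_LINUX="root=/dev/sda2 fstab=no"

# then regenerate the config, e.g.:
#   grub-mkconfig -o /boot/grub/grub.cfg
```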
| systemd mount 'rootfs' according to '/proc/cmdline' |
1,589,564,789,000 |
I have an small server in my house with an external usb 2TB hard drive:
/dev/sdb1: LABEL="Data" UUID="eedc3098-221d-4800-b8b4-efa4fef23f5f" TYPE="ext4"
I have the next line in /etc/fstab:
UUID=eedc3098-221d-4800-b8b4-efa4fef23f5f /home/data ext4 defaults 0 2
When I boot the system I get the next error:
Unable to resolve 'UUID=eedc3098-221d-4800-b8b4-efa4fef23f5f' fsck died with exit status 8
Then, the system ask me about root password for maintenance. If I log in like root and type:
fsck.ext4 'UUID=eedc3098-221d-4800-b8b4-efa4fef23f5f'
I get:
Data: clean 99709/122101760 files, 232470354/488378368 blocks
If I tell fsck not to check the filesystem on startup (changing the last 2 into a 0), the system starts properly (with the warning: special device UUID=eedc3098-221d-4800-b8b4-efa4fef23f5f doesn't exist) and my partition doesn't mount. But the UUID exists in /dev/disk/by-uuid
How can I mount my drive properly? I thought that maybe it could be a problem related to USB
|
The problem might be that the drive needs to be initialized by the USB driver and this initialization takes time, so that when the partitions in fstab are mounted, the drive isn't ready yet, but by the time you log in, the drive is ready and mounting or fsck works.
If this is the problem, try adding the option noauto in /etc/fstab and mounting the drive manually later in the boot sequence, for example in /etc/rc.local (or whatever your distribution offers). Alternatively, add noauto in /etc/fstab and tell udev to do the mounting, with a rule like this in a file under /etc/udev/rules.d (note that RESULT must match the full UUID as blkid prints it):
KERNEL=="sd?", PROGRAM=="/sbin/blkid -o value -s UUID %N1", RESULT=="eedc3098-221d-4800-b8b4-efa4fef23f5f", RUN+="mount /home/data"
| Error mounting drive with fstab |
1,589,564,789,000 |
I made a separate partition for /home, but during installation process I forgot to mount it and hence no entry was made in fstab.
I had everything in partition under the root ( well not the swap and efi system partition). I realised what I did, very late and by that time I had already installed packages and wrote data in the home directory.
Now what I want to know is “is there any way possible to move my home directory to a separate partition without losing any data?”
I was thinking of mounting the root directory on /mnt and then mounting a new partition (for home) on /mnt/home from a live USB, and then generating the fstab.
But I am like 79% sure that this will wipe out my home directory.
SPEC: Arch Linux x86_64 latest kernel (5.0.4)
|
Because you already have a home partition, we should be able to do this without a live OS.
mount the new home on /mnt
move files from old-home (/home), to new home (/mnt). (/home should now be empty).
remount new-home to /home (bind mount sudo mkdir -p /home && sudo mount --bind /mnt /home (you can also use --move, in place of --bind), or unmount then mount).
It is not as you want, but the mount is not persistent.
edit /etc/fstab (There may be tools to help you with this, I can't remember).
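The steps above can be sketched as a small script. Everything below is a dry run that only prints the commands it would execute; the device path is a made-up example, and the fstab line assumes ext4 - adjust both before dropping the echo wrapper and running as root.

```shell
#!/bin/sh
# Dry-run sketch of the steps above; NEW_DEV is a placeholder device.
NEW_DEV=/dev/sda5

run() { echo "+ $*"; }    # change to:  run() { "$@"; }  to really execute

run mount "$NEW_DEV" /mnt                                 # 1. mount the new home
run sh -c 'mv /home/* /mnt/'                              # 2. move the files across
run mount --bind /mnt /home                               # 3. make it appear at /home
run sh -c "echo '$NEW_DEV /home ext4 defaults 0 2' >> /etc/fstab"   # 4. persist
```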
| Add (already created) partition for /home after OS installation |
1,589,564,789,000 |
I have installed rust by curl https://sh.rustup.rs -sSf | sh and followed instructions thereof. Installation was successful and the PATH was added to the .bash_profile as follows:
export PATH=$HOME/.cargo/bin:$PATH
echo ing $PATH shows variable has been set properly, as follows:
rust@rusty:~$ echo $PATH
/home/rust/.cargo/bin:/usr/local/bin:/usr/bin:/bin:/usr/games
I am mounting /home as a separate partition and mounting through /etc/fstab as follows
# Mounting home partition
/dev/sda4 /home ext4 rw,async,users 0 0
I initially had noexec as one of the options. But, removing that did not bring any change in the outcome.
I suspect the default permissions on my /home partition, but I don't have any other running Linux box to verify against.
total 20
drwx------ 2 root root 16384 Jan 18 08:38 lost+found
drwxr-xr-x 22 rust rust 4096 Jan 19 19:45 rust
Is this permissions correct?
If someone could shed some light on what I am missing to notice/doing wrong and how to troubleshoot and fix the issue would be much appreciated.
Realized after the comment from @kusalananda
EDIT-1
rust@rusty:~$ cargo
bash: /home/rust/.cargo/bin/cargo: Permission denied
It supposed to prompt me with the help documentation of cargo but fails saying the above.
EDIT-2
Added the permissions of .cargo and .cargo/bin
rust@rusty:~$ ls -l .cargo/
total 8
drwxr-xr-x 2 rust rust 4096 Jan 19 18:45 bin
-rw-r--r-- 1 rust rust 37 Jan 19 18:58 env
rust@rusty:~$ ls -l .cargo/bin/
total 108560
-rwxr-xr-x 10 rust rust 11116056 Jan 19 18:45 cargo
-rwxr-xr-x 10 rust rust 11116056 Jan 19 18:45 cargo-clippy
-rwxr-xr-x 10 rust rust 11116056 Jan 19 18:45 cargo-fmt
-rwxr-xr-x 10 rust rust 11116056 Jan 19 18:45 rls
-rwxr-xr-x 10 rust rust 11116056 Jan 19 18:45 rustc
-rwxr-xr-x 10 rust rust 11116056 Jan 19 18:45 rustdoc
-rwxr-xr-x 10 rust rust 11116056 Jan 19 18:45 rustfmt
-rwxr-xr-x 10 rust rust 11116056 Jan 19 18:45 rust-gdb
-rwxr-xr-x 10 rust rust 11116056 Jan 19 18:45 rust-lldb
-rwxr-xr-x 10 rust rust 11116056 Jan 19 18:45 rustup
EDIT-3:
>> curl https://sh.rustup.rs -sSf | sh
info: downloading installer
Welcome to Rust!
This will download and install the official compiler for the Rust programming
language, and its package manager, Cargo.
It will add the cargo, rustc, rustup and other commands to Cargo's bin
directory, located at:
/home/rusty/.cargo/bin
This path will then be added to your PATH environment variable by modifying the
profile files located at:
/home/rusty/.profile
/home/rusty/.bash_profile
You can uninstall at any time with rustup self uninstall and these changes will
be reverted.
Current installation options:
default host triple: x86_64-unknown-linux-gnu
default toolchain: stable
modify PATH variable: yes
1) Proceed with installation (default)
2) Customize installation
3) Cancel installation
>1
info: syncing channel updates for 'stable-x86_64-unknown-linux-gnu'
info: latest update on 2019-01-17, rust version 1.32.0 (9fda7c223 2019-01-16)
info: downloading component 'rustc'
79.5 MiB / 79.5 MiB (100 %) 883.2 KiB/s ETA: 0 s
info: downloading component 'rust-std'
54.3 MiB / 54.3 MiB (100 %) 611.2 KiB/s ETA: 0 s
info: downloading component 'cargo'
4.4 MiB / 4.4 MiB (100 %) 761.4 KiB/s ETA: 0 s
info: downloading component 'rust-docs'
8.5 MiB / 8.5 MiB (100 %) 553.6 KiB/s ETA: 0 s
info: installing component 'rustc'
info: installing component 'rust-std'
info: installing component 'cargo'
info: installing component 'rust-docs'
info: default toolchain set to 'stable'
stable installed - (error reading rustc version)
Rust is installed now. Great!
To get started you need Cargo's bin directory ($HOME/.cargo/bin) in your PATH
environment variable. Next time you log in this will be done automatically.
To configure your current shell run source $HOME/.cargo/env
|
The issue was the /etc/fstab entry that I had. It worked after I changed the way I was mounting. Here is my new fstab entry:
/dev/sda4 /home/rusty ext4 defaults 0 2
I changed the owner & group of /home/rusty to be rusty and it worked.
(A likely explanation, for what it's worth: per mount(8), the user/users options imply noexec, nosuid and nodev unless explicitly overridden, so the original entry's users option would have made every binary under /home non-executable - which matches the "Permission denied" above. The new entry uses defaults instead.)
| cargo execution - permission denied [PREVIOUSLY]rust installation - permission denied |
1,589,564,789,000 |
Discussion: we have Red Hat Linux machines, and my question is about the UUID configuration in the /etc/fstab file, and in which cases using UUIDs puts the OS at risk.
as I understand we MUST NOT use UUID in /etc/fstab if using software RAID1.
Why? Because the RAID volume itself and the first element of the mirror will appear to have the same file system UUID. If the mirror breaks or for any other reason the md device isn't started at boot, the system will mount any random underlying disk instead, clobbering your mirror.
so my question is
what are the RAID levels ( numbers ) that we must not is UUID in fstab ?
info about the raid level - https://en.wikipedia.org/wiki/Standard_RAID_levels
|
We'll just go ahead and test this on ArchLinux and mdadm. But first of all this shouldn't matter for partition-based arrays, because then the member partitions have their own UUIDs, so this would in theory only apply to whole-disk members.
TL;DR: This isn't a real problem even with old metadata blocks. It might have been a bug in older software I don't know. But it doesn't affect a modern ArchLinux.
#uname -sr
Linux 4.14.7-1-ARCH
#modprobe raid1
#mdadm --create --verbose /dev/md0 --metadata 0.9 --level=mirror --raid-devices=2 /dev/sdb /dev/sdd
mdadm: size set to 102336K
mdadm: array /dev/md0 started.
#cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdd[1] sdb[0]
102336 blocks [2/2] [UU]
unused devices: <none>
#mdadm --detail --scan >> /etc/mdadm.conf
fdisk /dev/md0
lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 100M 0 disk
└─md0 9:0 0 100M 0 raid1
└─md0p1 259:0 0 98.9M 0 md
sdd 8:48 0 100M 0 disk
└─md0 9:0 0 100M 0 raid1
└─md0p1 259:0 0 98.9M 0 md
md0 8:0 0 100M 0 raid1
└─sda2 8:2 0 98.9M 0 md
mdstat -> [UU]
#blkid /dev/md0
/dev/md0: PTUUID="d49d8666-e580-8244-8c82-2bc325157e66" PTTYPE="gpt"
#blkid /dev/sdd
/dev/sdd: UUID="b3d82551-0226-6687-8279-b6dd6ad00d98" TYPE="linux_raid_member"
#blkid /dev/sdb
/dev/sdb: UUID="b3d82551-0226-6687-8279-b6dd6ad00d98" TYPE="linux_raid_member"
#mkfs.ext4 /dev/md0p1
mke2fs 1.43.7 (16-Oct-2017)
creating filesystem with 101292 1k blocks and 25376 inodes
Filesystem UUID: 652bcf77-fe47-416e-952c-bb0a76a78407
Superblock backups stored on blocks: 8193, 24577, 40961, 57345, 73729
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
#mount /dev/md0p1 /mnt
#lsblk -o NAME,UUID,MOUNTPOINT /dev/sdb /dev/sdd
NAME UUID MOUNTPOINT
sdb b3d82551-0226-6687-8279-b6dd6ad00d98
└─md0
└─md0p1 652bcf77-fe47-416e-952c-bb0a76a78407 /mnt
sdd b3d82551-0226-6687-8279-b6dd6ad00d98
└─md0
└─md0p1 652bcf77-fe47-416e-952c-bb0a76a78407 /mnt
So far so good. Not only does this correctly identify the member devices as raid devices, but there are two partition-level UUIDs that match. In fact these are part of the same container device md0 and list the same mount point. It DOES NOT list any normal partition containers on sdd or sdb. Note that the md0 device itself does NOT have a UUID. Only its members have the UUID, and it's actually the same UUID.
#echo "UUID=652bcf77-fe47-416e-952c-bb0a76a78407 /mnt ext4 rw,relatime,data=ordered 0 2" >> /etc/fstab
umount /mnt
mount /mnt
cd /mnt
fallocate -l 50MiB data
mdstat -> [UU]
Note that we asked for the file system UUID of the raid members; now let's try running the system without mdadm running.
#cd
#umount /mnt
#mdadm --stop /dev/md0
mdadm: stopped /dev/md0
#lsblk /dev/sdb /dev/sdd
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdb 8:16 0 100M 0 disk
sdd 8:48 0 100M 0 disk
Now the system correctly sees these as raw disks, because they have no partition table and so are not containers. However, if we ask what they are:
#blkid /dev/sdd
/dev/sdd: UUID="b3d82551-0226-6687-8279-b6dd6ad00d98" TYPE="linux_raid_member"
It's still a linux_raid_member and if we try to mount it:
#mount /dev/sdd /mnt
mount: /mnt: unknown filesystem type "linux_raid_member"
How about:
#mount /mnt
mount: /mnt can't find UUID=652bcf77-fe47-416e-952c-bb0a76a78407
And that makes sense because sdd is NOT a container and therefore there are no file systems that are probed. However, if I run:
#mdadm --assemble --scan && mount /mnt
mdadm: /dev/md0 has been started with 2 drives.
And if I stop it again and remove mdadm.conf:
#umount /mnt && mdadm --stop /dev/md0
#modprobe -r raid1
#rm /etc/mdadm.conf
#modprobe raid1
#mdadm --assemble --scan
mdadm: /dev/md/0 has been started with 2 drives.
Also note that my configuration for the md0 device name is no longer taking effect and it's being created at /dev/md/0 automatically. Now let's reboot and see what systemd/Linux does with fstab.
#mdadm --stop /dev/md/0
mdadm: stopped /dev/md/0
#systemctl reboot
#dmesg | grep md0
[ 14.550231] md/raid1:md0: active with 2 out of 2 mirrors
[ 14.550261] md0: detected capacity change from 0 to 104792064
[ 14.836905] md0: p1
[ 16.909057] EXT4-fs (md0p1): mounted filesystem with ordered data mode. Opts: data=ordered
#lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
md0 9:0 0 100M 0 raid1
└─md0p1 259:0 0 98.9M 0 md /mnt
And again with the raid=noautodetect kernel parameter, as this would also simulate versions of Linux that would not autodetect raids with all superblock/metadata versions, etc. Yet still it mounts the raid, because I asked for it in fstab and it force-loaded the raid1 module. So let's try again with it blacklisted via modprobe.blacklist=raid1:
Okay, so what's going on?
So Linux knows it's a raid-type device even if it has no raid support. When trying to mount it, it correctly detects it's a raid device, and when using fstab it doesn't find the UUID despite it being in the file system's superblock.
And again! With no information in fstab or mdadm.
#mount /dev/sdd /mnt
mount: /mnt: unknown filesystem type "linux_raid_member".
I think the gist of this is that Linux's probing is smart. Besides that, tools like fdisk warn that there is extra information stuffed in the partition table area. You would have to be trying really hard to mistake your file system UUID for one of the member disks.
| UUID in fstab + in which cases we must not configured UUID in fstab |
1,642,003,952,000 |
I wish to make the following mount permanent:
[michael@devserver ~]$ findmnt | grep public
└─/home/jail/home/public/repo /dev/mapper/centos-root[/home/michael/testing/gateway/repo] xfs ro,relatime,attr2,inode64,noquota
[michael@devserver ~]$
I created this mount using the following:
sudo mkdir /home/jail/home/public/repo
sudo mount --bind /home/michael/testing/gateway/repo /home/jail/home/public/repo
sudo mount -o remount,ro,bind /home/jail/home/public/repo
My /etc/fstab currently looks like the following.
I expected that I should just add /home/michael/testing/gateway/repo /home/jail/home/public/repo xfs ro,relatime,attr2,inode64,noquota 0 0 to /etc/fstab, but upon doing so, my server chokes and I have to go into emergency mode to remove this line from /etc/fstab. What is the proper way to permanently bind mount a directory for read-only access?
[michael@devserver ~]$ cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Apr 8 14:15:42 2016
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 1 1
UUID=362355d4-e5da-44de-bf5c-5ce92cf43888 /boot xfs defaults 1 2
/dev/mapper/centos-swap swap swap defaults 0 0
[michael@devserver ~]$
|
Well, your /etc/fstab file does not seem to have the bind mount point configured.
Be so kind and add the following line:
/home/michael/testing/gateway/repo /home/jail/home/public/repo none bind,ro 0 0
Then, I would run the following command to verify that the mount point works:
mount /home/jail/home/public/repo
After that, you can reboot your system.
| Editing /etc/fstab to permanently bind mount directory |
1,642,003,952,000 |
We want to comment the specific line in fstab file that contained the relevant UUID number
Example:
Disk=sde
UUID_STRING=` blkid | grep $Disk | awk '{print $2}' `
echo $UUID_STRING
UUID="86d58af9-801b-4c25-b59d-80b52b4acc61"
sed -e "/$UUID_STRING/ s/^#*/#/" -i /etc/fstab
but in /etc/fstab, the line - UUID=86d58af9-801b-4c25-b59d-80b52b4acc61 /data/sde ext4 defaults,noatime 0 0 - is not commented out
more /etc/fstab
UUID=cb47ad8e-5b90-4ddc-97f5-2c0fa1f1b7e7 /data/sdc ext4 defaults,noatime 0 0
UUID=169da708-3c48-4306-beba-95dab722d3ab /data/sdd ext4 defaults,noatime 0 0
UUID=86d58af9-801b-4c25-b59d-80b52b4acc61 /data/sde ext4 defaults,noatime 0 0
UUID=640e2c41-d5c6-4e02-beb9-714ec99e16e2 /data/sdf ext4 defaults,noatime 0 0
UUID=58a8cddf-7ce9-431c-bb71-f4f44c8d62a5 /data/sdg ext4 defaults,noatime 0 0
UUID=6779c108-f74b-4a05-8faf-cf8752844c53 /data/sdh ext4 defaults,noatime 0 0
UUID=3c2352f6-df8e-4b14-b6c0-60caaef0dce0 /data/sdi ext4 defaults,noatime 0 0
UUID=ba59e473-d856-4c8b-a3be-4bfc40009f0d /data/sdb ext4 defaults,noatime 0 0
Is it possible to ignore the " character in the sed command - sed -e "/$UUID_STRING/ s/^#*/#/" -i /etc/fstab?
Another solution could be:
uuid_capture=` echo $UUID_STRING | sed s'/"/ /g' | awk '{print $NF}' `
sed -e "/$uuid_capture/ s/^#*/#/" -i /etc/fstab
more /etc/fstab
UUID=cb47ad8e-5b90-4ddc-97f5-2c0fa1f1b7e7 /grid/sdc ext4 defaults,noatime 0 0
UUID=169da708-3c48-4306-beba-95dab722d3ab /grid/sdd ext4 defaults,noatime 0 0
#UUID=86d58af9-801b-4c25-b59d-80b52b4acc61 /grid/sde ext4 defaults,noatime 0 0
UUID=640e2c41-d5c6-4e02-beb9-714ec99e16e2 /grid/sdf ext4 defaults,noatime 0 0
UUID=58a8cddf-7ce9-431c-bb71-f4f44c8d62a5 /grid/sdg ext4 defaults,noatime 0 0
UUID=6779c108-f74b-4a05-8faf-cf8752844c53 /grid/sdh ext4 defaults,noatime 0 0
UUID=3c2352f6-df8e-4b14-b6c0-60caaef0dce0 /grid/sdi ext4 defaults,noatime 0 0
UUID=ba59e473-d856-4c8b-a3be-4bfc40009f0d /grid/sdb ext4 defaults,noatime 0 0
|
The only issue with your code is that your variable contains the UUID in double quotes, while the UUID in /etc/fstab is not in quotes.
Suggestion: Use the export output format of blkid which exists to allow you to eval the output, which would set the relevant shell variables, for example UUID. Then use $UUID in your sed command.
eval "$( blkid -o export /dev/"$Disk" )"
sed -i '/^UUID='"$UUID"'/ s/^/#/' /etc/fstab
This would find the line(s) that starts with UUID= followed by your UUID string. Those lines would have a # character prepended to the start.
Since the initial pattern is anchored to the start of the line, this also avoids adding the # character more than once if you re-run the command.
The -e option is not needed when only giving sed a single expression, and -i is commonly given before the editing expression(s).
You could also use GNU awk like so:
awk -i inplace -v uuid="$UUID" '$1 == "UUID=" uuid { $0 = "#" $0 }; 1' /etc/fstab
... which would have the same effect given the data that you present. It uses the inplace source module, available since GNU awk 4.1.0, to perform an in-place edit in much the same way as sed -i does it (see also How to change a file in-place using awk? (as with "sed -i")).
The actual code compares the first field with UUID= followed by our UUID string, and if there is a match, the line is modified by adding a # to the start. All lines, whether modified or not, are then printed (outputted to the output file).
This is all assuming that you can't work directly on /etc/fstab using $Disk with something like
sed -i '\|^UUID=.* /data/'"$Disk"' | s/^/#/' /etc/fstab
or
awk -i inplace -v disk="$Disk" '!/^#/ && $2 == "/data/" disk { $0 = "#" $0 }; 1' /etc/fstab
In all cases above, the comment character # could be any string, for example ###FAULTY_DISK###.
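As a quick sanity check of the anchored sed approach, here it is rehearsed on a throwaway copy containing two of the fstab lines from the question, so the real /etc/fstab is never touched; the second run is a no-op, showing the idempotence mentioned above:

```shell
# Rehearse on a scratch file instead of the real /etc/fstab.
tmp=$(mktemp)
printf '%s\n' \
  'UUID=cb47ad8e-5b90-4ddc-97f5-2c0fa1f1b7e7 /data/sdc ext4 defaults,noatime 0 0' \
  'UUID=169da708-3c48-4306-beba-95dab722d3ab /data/sdd ext4 defaults,noatime 0 0' > "$tmp"

UUID=cb47ad8e-5b90-4ddc-97f5-2c0fa1f1b7e7
sed -i '/^UUID='"$UUID"'/ s/^/#/' "$tmp"
sed -i '/^UUID='"$UUID"'/ s/^/#/' "$tmp"   # re-run: '#UUID=...' no longer matches '^UUID=', so nothing changes

grep '^#' "$tmp"    # exactly one commented line, with a single '#'
```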
| comment the specific line in fstab file that contained the relevant UUID number |
1,642,003,952,000 |
I have a network drive hosted on a Windows10 Machine, it mounts fine to my CentOS7 machine through the command:
sudo mount -t cifs //ipaddress/sharedfoldername /mountpoint --verbose -o credentials=/credential/file/location,file_mode=0666,dir_mode=0777
The file and dir modes are for the permissions on the mount. Anyway, that mounts fine, but when I try to do an /etc/fstab mount, I get an error back.
I will supply my entire fstab file contents and the exact error below. The error appears on startup; the system boots to emergency mode, shows the error, and gives me the option to press CTRL + D to continue.
The fstab mount I am trying to get to work is:
//ipaddress/sharedfoldername /mnt cifs credentials=/etc/smbcredentials,uid=1001,gid=1001,_netdev 0 0
My /etc/fstab contents:
#
# /etc/fstab
# Created by anaconda on Thu Dec 13 09:33:55 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=4f3871fe-a798-4d51-ad90-c40b095a2bd0 / ext4 defaults 1 1
UUID=1bb03b6d-3a76-4979-aa63-ff3e0eb4cc5f /boot ext4 defaults 1 2
UUID=f89fdb96-6dbf-4865-aa6b-1d5cc74f2d48 /home ext4 defaults 1 2
UUID=86f38c73-f9e0-490b-8c96-3321f9413c0d swap swap defaults 0 0
//ipaddress/sharedfoldername /mnt cifs credentials=/etc/smbcredentials,uid=1001,gid=1001,_netdev 0 0
The error appears on startup and you can find it below:
You're looking at the CIFS bit. The "bad mount option huge" error needs sorting anyway; it was there before the fstab cifs mount. Thanks
Response to @telcoM's answer
I rebooted and get the following error on startup:
Then when I login after seeing the error, I get a shortcut appear in the left of my file browser, when I click it, I get this error:
Unable to mount 'shared-folder-name', mount: only root can mount //ipaddress/sharedfoldername on /mountpoint
MY FSTAB FILE AFTER @TELCOM'S SUGGESTION
#
# /etc/fstab
# Created by anaconda on Tue Dec 11 14:28:31 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=4d48ab0d-e1ab-4d7e-9f64-8481a7690060 / ext4 defaults 1 1
UUID=a7fad550-81d7-4150-8b76-e89584e4cfdf /boot ext4 defaults 1 2
UUID=0baabbc4-2dc0-4971-9d2b-c123e5ad7355 /home ext4 defaults 1 2
UUID=7756eafb-382c-46b3-aae8-e44d7e2cfe06 swap swap defaults 0 0
#
//ipadress/sharedfoldername /mount/location cifs x-systemd.after=network-online.target,credentials=/credentials/location,vers=3.0,file_mode=0666,dir_mode=0777,uid=1001,gid=1001 0 0
|
The tmpfs: Bad mount option huge turns out to be a kernel bug: see this link.
The "Error connecting to a socket" means the system is trying to mount the Windows share before network interfaces have been fully enabled. It should not be happening, but you could add a new systemd-style mount option to be explicit about it: x-systemd.after=network-online.target. The _netdev option used to be an old way to do the same, but apparently it does not work any more after CentOS moved to systemd in version 7.0.
As I wrote in my answer to your earlier question, if you want everyone to be able to access the share, you'll need to supply the mount options file_mode=0666,dir_mode=0777. And if you do this, then the uid=1001,gid=1001 options will probably be unnecessary, but you can still use them if you want.
And to silence an ugly warning about a changed default version of the SMB protocol (since the aftermath of the WannaCry ransomware infestation back in May 2017), you'll want to add vers=3.0 mount option too, if the share is provided by a reasonably modern version of Windows.
So, the /etc/fstab entry should probably be like this (split to multiple lines for readability):
//ipaddress/sharedfoldername /mnt cifs
x-systemd.after=network-online.target,credentials=/etc/smbcredentials,
vers=3.0,file_mode=0666,dir_mode=0777,uid=1001,gid=1001 0 0
An fstab entry should always have exactly 6 fields separated by whitespace - no more and no less.
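Joined back together, the entry goes into /etc/fstab as one single line with its six fields (shown wrapped here only by the page width):

```
//ipaddress/sharedfoldername /mnt cifs x-systemd.after=network-online.target,credentials=/etc/smbcredentials,vers=3.0,file_mode=0666,dir_mode=0777,uid=1001,gid=1001 0 0
```

After editing, sudo mount -a is a quick way to test the entry without rebooting (and on util-linux 2.29 or newer, findmnt --verify --fstab can lint it first).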
| Permanent network drive mount in fstab not working (due to network not being online whilst attempting to mount) |
1,642,003,952,000 |
I don't understand why the following filesystem shows up in /etc/fstab, but not using df -a:
/dev/sdb1 /var/log/apache_logs reiserfs user,noauto,rw,exec,suid,user_xattr 0 2
I've verified that the folder /var/log/apache_logs does indeed exist and can be accessed.
Shouldn't the df -a command list ALL filesystems?
$ df -a
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/ghost-root
7583436 1252188 5946020 18% /
proc 0 0 0 - /proc
none 0 0 0 - /sys
none 0 0 0 - /sys/fs/fuse/connections
none 0 0 0 - /sys/kernel/debug
none 0 0 0 - /sys/kernel/security
udev 254652 164 254488 1% /dev
none 0 0 0 - /dev/pts
none 254652 0 254652 0% /dev/shm
none 254652 52 254600 1% /var/run
none 254652 0 254652 0% /var/lock
none 254652 0 254652 0% /lib/init/rw
/dev/sdc1 198321 5763 182319 4% /tmp
/dev/sda5 233335 12670 208217 6% /boot
$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
/dev/mapper/ghost-root / ext4 errors=remount-ro 0 1
# /boot was on /dev/sda5 during installation
UUID=f9f46813-a78a-42e8-a007-53308212ee26 /boot ext2 defaults 0 2
/dev/sdb1 /var/log/apache_logs reiserfs user,noauto,rw,exec,suid,user_xattr 0 2
/dev/sdc1 /tmp ext2 noexec,nosuid,rw 0 0
/dev/mapper/ghost-swap_1 none swap sw 0 0
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
$
|
Note that the filesystem mount options in /etc/fstab include the noauto option. As a result it will not be mounted automatically at boot time, nor with mount -a.
It will only be mounted with a specific mount /dev/sdb1 or mount /var/log/apache_logs command. Apparently this command has not been issued yet.
df -a will list all mounted filesystems - including pseudo filesystems like /proc or /sys, and also duplicate and inaccessible mounted filesystems, but not unmounted filesystems.
There's also the user option, indicating that even a regular user can mount that specific filesystem into that specific mountpoint, and only the user that mounted it (or root of course) can unmount it again.
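A small awk sketch can list which fstab entries carry noauto (and so will be skipped by mount -a); here it runs over two sample lines from the question's fstab rather than the live file:

```shell
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
/dev/sdb1 /var/log/apache_logs reiserfs user,noauto,rw,exec,suid,user_xattr 0 2
/dev/sdc1 /tmp ext2 noexec,nosuid,rw 0 0
EOF
# Field 4 is the option list; split it on commas and report noauto mountpoints.
awk '$1 !~ /^#/ { n = split($4, o, ","); for (i = 1; i <= n; i++) if (o[i] == "noauto") print $2 }' "$tmp"
# prints: /var/log/apache_logs
```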
| Filesystem show up in `/etc/fstab`, but not using `df -a`? |
1,642,003,952,000 |
I only recently realised that you can specify multiple swap partitions, spreading them across drives. That's great for me, as my desktop system often uses swap space and I have three different drives spread across two controllers, one of which is a dedicated RAID5. If you're curious, it's a retired server. :)
If you set them to the same priority it will "round-robin" them, or spread the workload between them. At least as I understand it.
Nonetheless, I can't seem to get both swap partitions to the same priority. Here is my fstab:
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
/dev/sda1 / ext4 errors=remount-ro,user_xattr 0 1
/dev/sdb1 none swap sw pri=1 0 0
/dev/sda3 none swap sw pri=1 0 0
/dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
I'm using swapoff -a and swapon -a to turn off then back on the swap files. When I use swapon -s I get:
Filename Type Size Used Priority
/dev/sdb1 partition 5855656 1408 -1
/dev/sda3 partition 2093052 0 -2
Any ideas why it's not setting the priority the same?
Thanks for any help.
|
The options field in fstab is comma-delimited (note every other (non-swap) line). You have spaces.
Fix that and it should work as intended.
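Concretely, with commas instead of spaces the two swap lines become (device names as in the question):

```
/dev/sdb1 none swap sw,pri=1 0 0
/dev/sda3 none swap sw,pri=1 0 0
```

After swapoff -a followed by swapon -a, swapon -s should then show both partitions with priority 1.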
| LinuxMint, unable to set swap partitions to equal priority |
1,642,003,952,000 |
edit: I think I solved my own problem -- see bottom of question for details
I have a copy of all the files under / on an external usb-connected harddrive. To test whether this backup works, i'm trying to boot from it. However, this is proving a bit more difficult than I expected.
In what I assume is the BIOS, I select my external HD as the boot device. However, whether I select this device or my usual drive as the boot device, I am shown the same GRUB menu. This happens whether I manually went and installed GRUB on my external HD or not. So my question is, is GRUB being loaded from my normal boot drive or from the external drive, and does it matter?
I found that when using GRUB command line without GRUB installed on my external drive, the external drive was shown as (hd2,gpt1), however after I installed GRUB on the external drive, it came up as (hd0,gpt2).
Perhaps the difficulties I'm having are related to just my first question, but when I boot after specifying linux /boot/vmlinuz-linux root=/dev/sda2, I always get some failure relating to the drive specified as root not being found. This happens even when I specify root by device uuid. My question is, do I need to worry about
/etc/fstab? Is this something I need to change in order to make sure that root is found? And does it matter whether I change the /etc/fstab file on my internal drive or on the external one. (I did modify /etc/fstab on the external drive so that the drive with the UUID of the root partition should be mounted to /, but to no avail)
Perhaps because it is a usb connected external HD, do I need to do anything special? I noticed seeing the error usb 2-4: device descriptor read/64, error -71 while booting normally, but since this has no apparent effect on the functionality of the drive, I ignored it. This error also appeared on two separate external HD docks, so I assume it is not a hardware issue.
Clarifications:
I created my external backup using Borg, which creates a copy of all the files. It doesn't copy the entire disk over like dd, so for example while installing grub I still need to manually create a partition with the bios_grub flag.
The exact command I use in grub while attempting to boot is
set root=(hd0,gpt2)
linux vmlinuz-linux root=/dev/sda2
initrd initramfs-linux.img
boot
I've managed to boot.
I had tried both linux /boot/vmlinuz-linux root=/dev/sda2 and linux /boot/vmlinuz-linux root=/dev/sdc2 to no avail -- the first because the drive was showing up as (hd0,gpt2) in grub, and the second because the partition gets labeled as /dev/sdc2 when I boot normally. However, neither of these worked, and both dropped me into a strange command line. I discovered that the partition with the correct UUID was actually mounted to /dev/sdb2 for some reason! Using root=/dev/sdb2 I booted the system just fine. I think my previous attempts to boot by specifying UUID failed for one of various reasons (GRUB not being installed, typos in the UUID, etc).
This is pretty anticlimactic. I am still curious about the original questions I had -- namely, 1. how it is decided which GRUB is used when there are multiple drives with GRUB installed? 2. does /etc/fstab play a role in the booting process, or is it irrelevant? -- and I'll award the bounty for answers to those questions.
|
Your problem is root=/dev/sda2, because that mounts by device name, which is not unique. If you have only one drive installed, then it will typically always show up as /dev/sda, so no problem. But install a second disk, or anything else that shows up as /dev/sd?, and there is no guarantee of the ordering; oftentimes what was sda moves down to sdb. Things can become messy quickly and fail.
It is best to mount by device id or by UUID, which are unique.
Under /dev/disk/ you will see folders like
by-id/
by-label/
by-path/
by-uuid/
And for example under by-id/ you will see symlinks from stable, hardware-derived names to the actual /dev/sd? device nodes.
Here is my /etc/fstab, which mounts by device id, to give you an idea; I removed extra lines to keep it on point. I use EFI, not GRUB, but the principle is the same, just more elaborate with GRUB (the "grand" part of Grand Unified Bootloader):
/dev/disk/by-id/scsi-35000cca070168a20-part2 / ext3 acl,user_xattr 1 1
/dev/disk/by-id/scsi-35000cca070168a20-part1 /boot/efi vfat umask=0002,utf8=true 0 0
/dev/disk/by-id/scsi-36003048018e26e011d81ba1714e4c99f-part1 /data xfs defaults 1 0
/dev/disk/by-id/scsi-36003048018fa44011d57b61bbe1b8533-part1 /scratch xfs defaults 1 0
/dev/disk/by-id/scsi-36003048018e266011d81ba7e1afeadf6-part1 /bkup xfs defaults 1 2
Note: this is what I see in SLES 11.4. And while I use EFI, you need to find the corresponding items within GRUB or GRUB2 and modify them. As an example, here is my /boot/efi/efi/SuSE/elilo.conf file; notice the root= part. Whatever corresponds to this in your GRUB should be changed to use either the device id or the UUID. And don't forget to modify /etc/fstab to use a unique method as well, either by-id or by-uuid.
# This file has been transformed by /sbin/elilo.
# Please do NOT edit here -- edit /etc/elilo.conf instead!
# Otherwise your changes will be lost e.g. during kernel-update.
#
# Modified by YaST2. Last modification on Mon Oct 15 11:04:42 EDT 2018
timeout = 80
##YaST - boot_efilabel = "SUSE Linux Enterprise Server 11 SP4"
default = SLES11_SP4_16
prompt
image = vmlinuz-3.0.101-108.77-default
###Don't change this comment - YaST2 identifier: Original name: linux###
label = SLES11_SP4_16
append = "splash=verbose showopts "
initrd = initrd-3.0.101-108.77-default
root = /dev/disk/by-id/scsi-35000cca070168a20-part2
image = vmlinuz-3.0.101-108.77-default
###Don't change this comment - YaST2 identifier: Original name: failsafe###
label = Failsafe_15
append = "showopts ide=nodma apm=off noresume edd=off powersaved=off nohz=off highres=off processor.max_cstate=1 nomodeset x11failsafe "
description = "Failsafe (3.0.101-108.77-default)"
initrd = initrd-3.0.101-108.77-default
root = /dev/disk/by-id/scsi-35000cca070168a20-part2
You do NOT want boot=/dev/sd? or root=/dev/sd? anywhere, where ? is whatever letter. Reference the disk out of /dev/disk/by-id or /dev/disk/by-uuid; you could even use by-label provided you set partition labels and trust them to be unique.
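On a GRUB2 system (as in the question, which boots Arch from the GRUB command line), the same principle can be applied directly in the GRUB shell: identify the partition by filesystem UUID rather than by drive order. A sketch, with placeholders to be filled in from blkid for the root partition:

```
search --no-floppy --fs-uuid --set=root <uuid-from-blkid>
linux /boot/vmlinuz-linux root=UUID=<uuid-from-blkid>
initrd /boot/initramfs-linux.img
boot
```

This way it no longer matters whether the external drive enumerates as hd0 or hd2, or as sda, sdb or sdc.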
| Using grub to properly boot from an external backup drive |
1,642,003,952,000 |
I have a question about mount in Linux Fedora.
I have a mount point inside my home directory. The mount point is at /home/user/project and in fstab I have added the line:
/dev/mapper/fedora-proj /home/user/project ext4 defaults 1 2
The directory /home/user/project has the file permissions 0755 and it is owned by user. But when I do 'mount -a', the directory owner gets changed to root and the permissions are 777.
I know ext2/3/4 do not have uid= and gid= options, but why does the mount point receive hard-coded file permissions during mount, and how can I change them?
P.S
The test was made on Fedora 25. When I am doing the exact same procedure on Fedora 23 I see a different behavior: the mount directory permissions are changing to 755 (before mount it is 0777)
|
The permissions for the root of a mountpoint are stored on the mounted filesystem (it actually makes sense this way; otherwise, where would the permissions for the root directory / be stored?). You change them the normal way: chmod, chown, etc.
Before mounting, you're seeing the permissions for the mountpoint directory on the parent filesystem. After mounting, you're seeing the permissions for the root of the mounted filesystem.
Example: You have two filesystems:
FS-A FS-B
/ /
/mnt /file1
/foo /file2
/etc
⋮
Note both of them have a topmost/root directory (/), as all (Unix) filesystems do. FS-A has has two subdirectories shown (/mnt and /etc) and /mnt has a subdirectory /mnt/foo. FS-B has two files, /file1 and /file2. Being Unix filesystems, all of these directories and files have a user, a group, and permissions. Now, let's make FS-A the root filesystem, and mount FS-B at /mnt/foo. We then get:
/ # FS-A /
/mnt # FS-A /mnt
/foo # FS-A /mnt/foo *or* FS-B /
/file1 # FS-B /file1
/file2 # FS-B /file2
/etc # FS-A /etc
⋮
Note how we have a choice of what /mnt/foo is—it could be /mnt/foo from FS-A or / from FS-B. Both have exactly the same path. Unix's designers chose FS-B.
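Putting that together for the question's setup: mount first, then change the ownership and permissions, which now edits the root inode of the mounted filesystem and therefore persists across remounts (an illustrative sketch; run as root, with the paths and user name from the question):

```
mount /home/user/project             # uses the fstab entry
chown user:user /home/user/project   # now changes FS-B's root directory
chmod 0755 /home/user/project
```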
PS: your fstab line is missing the filesystem type. Should come before the options (defaults).
| Mount permissions on Linux |
1,642,003,952,000 |
I placed an entry in my fstab file to add a swap partition.
I used the output of a bash command to get the UUID of the vdb1 partition (I can't copy and paste).
Like this:
UUID=$(blkid -o value -s UUID /dev/vdb1) swap swap defaults 0 0
I'm getting a parse error when I run 'mount -a'.
How can I do this correctly?
|
As Kusalananda comments, fstab cannot interpret embedded shell commands, resulting in your fstab causing this error.
With regards to your comment about cut/paste - I understand that typing in a uuid is daunting and likely error prone, but you could simply append the uuid to the end of your fstab by executing:
blkid -o value -s UUID /dev/vdb1 >> /etc/fstab
... (run from a root shell, since the redirection itself needs write access to /etc/fstab) and then editing /etc/fstab in order to turn the appended 'junk' UUID line into valid syntax.
I suggest this only as a way of compensating for the lack of a mouse/copy/paste facility.
If you don't fully understand what I am proposing here then do not do this! It will make your fstab syntax invalid, and prevent your system from booting until corrected.
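An alternative is to build the whole line in the shell and append it, so there is no junk line to fix up afterwards. In this sketch the mount fields (swap swap defaults 0 0, taken from the question) are written by printf; the example UUID is an assumption for illustration, and on the real system the uuid variable would come from blkid as shown in the comment, with the output going to /etc/fstab from a root shell instead of the scratch file used here:

```shell
# Real system: uuid=$(blkid -o value -s UUID /dev/vdb1)
uuid=b9b47e44-db76-40de-a0ed-940c9699799a    # example value for illustration
tmp=$(mktemp)                                # stand-in for /etc/fstab
printf 'UUID=%s swap swap defaults 0 0\n' "$uuid" >> "$tmp"
cat "$tmp"
# prints: UUID=b9b47e44-db76-40de-a0ed-940c9699799a swap swap defaults 0 0
```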
| /etc/fstab - using bash command output to get UUID? |
1,642,003,952,000 |
I am trying to create an automated mount for an external hard drive, but it keeps failing. I am a bit of a newbie at Linux.
I have googled and searched in StackExchange and I tried a lot of things, but I did not find a solution for my problem.
OS: Raspbian Stretch
Those are the steps I did:
Format external drive to ext4
sudo mkfs.ext4 /dev/sda1 -L hdd_moc
mke2fs 1.43.4 (31-Jan-2017)
/dev/sda1 contains a ext4 file system labelled 'hdd_owncloud'
last mounted on Mon Feb 12 09:34:38 2018
Proceed anyway? (y,N) y
Creating filesystem with 244181760 4k blocks and 61046784 inodes
Filesystem UUID: b9b47e44-db76-40de-a0ed-940c9699799a
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
Create directory for mounted external usb drive
sudo mkdir /mnt/hdd_moc
Create the www-data group and add the www-data user to it
sudo groupadd www-data
sudo usermod -a -G www-data www-data
Give permissions
sudo chown -R www-data:www-data /mnt/hdd_moc
sudo chmod -R 775 /mnt/hdd_moc
ls -l /mnt
total 4
drwxrwxr-x 2 www-data www-data 4096 Feb 12 10:06 hdd_moc
Get the gid, uid and uuid
id -g www-data
33
id -u www-data
33
ls -l /dev/disk/by-uuid
total 0
lrwxrwxrwx 1 root root 15 Feb 12 09:49 9a7608bd-5bff-4dfc-ac1d-63a956744162 -> ../../mmcblk0p2
lrwxrwxrwx 1 root root 15 Feb 12 09:49 B60A-B262 -> ../../mmcblk0p1
lrwxrwxrwx 1 root root 10 Feb 12 10:12 b9b47e44-db76-40de-a0ed-940c9699799a -> ../../sda1
Give the instruction to fstab
sudo nano /etc/fstab
proc /proc proc defaults 0 0
PARTUUID=ed7ab5b3-01 /boot vfat defaults 0 2
PARTUUID=ed7ab5b3-02 / ext4 defaults,noatime 0 1
UUID=b9b47e44-db76-40de-a0ed-940c9699799a /mnt/hdd_moc auto nofail,uid=33,gid=33,umask=0027,dmask=0027,noatime 0 0
Automated mount test
sudo mount -a
mount: wrong fs type, bad option, bad superblock on /dev/sda1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
Normal mount test
sudo mount /dev/sda1 /mnt/hdd_moc -> It works, I can do a "ls".
So I think my problem is in the 4th line of fstab. I read the fstab documentation but I cannot work out what I should do. (https://wiki.debian.org/fstab)
Thank you :)
|
You're overcomplicating things. The fact that the command
sudo mount /dev/sda1 /mnt/hdd_moc
works correctly shows you that your system is able to mount a ext4 filesystem without specific options. In fact, ext4 is one of the most common fs for Linux (if not the most one).
The mount options you're trying to use don't exist for ext4 fs. All you need to do is to rewrite the relevant /etc/fstab line as such:
UUID=b9b47e44-db76-40de-a0ed-940c9699799a /mnt/hdd_moc ext4 defaults,nofail,noatime 0 0
| Wrong fs type, bad option, bad superblock on /dev/sdaX |
1,642,003,952,000 |
I am trying to find the correct syntax for mounting an NFS file share.
On the host I have the /etc/export file set like so: /mnt/externalHD 192.168.0.8(ro,sync) and the client fstab like so: 192.168.0.2/mnt/externalHD /home/Plex nfs auto 0 0
I have also installed nfs-common and nfs-kernel-server but have had no luck.
|
In the /etc/fstab file you are missing a ":", it should be:
192.168.0.2:/mnt/externalHD /home/Plex nfs ro,sync 0 0
19.2.1. Mounting NFS File Systems using /etc/fstab
The file is also /etc/exports and not /etc/export. You should start/restart the nfs service after changing /etc/exports.
I will also leave the link about the exports file:
21.7. The /etc/exports Configuration File
As for the actual mount point in the client, it has to exist. Do:
sudo mkdir -p /home/Plex
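A hedged checklist for applying and verifying the change (exportfs and showmount come with the nfs-kernel-server / nfs-common packages already installed here; 192.168.0.2 is the server address used in the corrected fstab line above):

```
sudo exportfs -ra            # on the server: re-read /etc/exports
showmount -e 192.168.0.2     # on the client: confirm the export is visible
sudo mount -a                # on the client: mount everything in fstab
```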
| NFS Debian Jessie server and client |
1,642,003,952,000 |
I was given an HDD that was encrypted using dm-crypt, and I'd like to mount it as /disk2, preferably with the decryption password stored in a file so I won't have to enter the passphrase when booting (but that part is not that important).
When I try to open the disk in the file manager providing the encryption passsword, I get this error
Failed to mount "500 GB LVM2 Physical Volume".
Not a mountable file system.
lvdisplay gives
LV Path /dev/disk2/disk2
LV Name disk2
VG Name disk2
LV Status NOT available
ls /dev/mapper gives the following, while the desired result should be disk2-disk2 I guess
udisks-luks-uuid-.....-uid1000
dmsetup ls --tree returns
udisks-luks-uuid-.....-uid1000 (253:7)
└─ (8:17)
lvs returns
disk2 disk2 -wi----- 465,75g
lsblk (before decryption) returns
sdb 8:16 0 465,8G 0 disk
└─sdb1 8:17 0 465,8G 0 part
lsblk returns
sdb 8:16 0 465,8G 0 disk
└─sdb1 8:17 0 465,8G 0 part
└─udisks-luks-uuid-.....-uid1000 (dm-7) 253:7 0 465,8G 0 crypt
mount /dev/mapper/disk2 /mnt returns
mount: unknown filesystem type 'LVM2_member'
fdisk -l /dev/sdb returns
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000
   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1   976773167   488386583+  ee  GPT
Partition 1 does not start on physical sector boundary.
|
This sequence allowed me to access the data
cryptsetup luksOpen /dev/sdb1 disk2
modprobe dm-mod
vgchange -ay
mount /dev/disk2/disk2 /disk2
So I offer the reward to the one who'll tell me how to make this change permanent.
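To make this permanent, the standard approach is an /etc/crypttab entry, which opens the LUKS container at boot (LVM then activates the volume group automatically), plus a matching /etc/fstab line. This is a sketch with assumptions: the UUID placeholder must be filled in from blkid /dev/sdb1, the "none" key field means the passphrase is prompted for at boot (point it at a key file instead to avoid the prompt), and ext4 stands in for whatever filesystem blkid /dev/disk2/disk2 actually reports:

```
# /etc/crypttab
disk2   UUID=<uuid-of-sdb1>   none   luks

# /etc/fstab
/dev/disk2/disk2   /disk2   ext4   defaults   0   2
```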
| Mount encrypted volume in Debian |
1,642,003,952,000 |
If I want to mount a file system at /myname, is it possible by just editing the fstab file?
Or should I do more to be safe?
Is there any other way to do it, or is it not a good idea?
|
This is the way to do it. By editing the fstab file, the mount will be maintained across reboots. You'll also be able to simply run mount /myname and it will work.
So to answer your question, definitely yes.
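For example (assuming an ext4 filesystem on /dev/sdb1; both names are hypothetical here), the steps would be:

```
sudo mkdir /myname                    # the mount point directory must exist
# then add to /etc/fstab:
#   /dev/sdb1  /myname  ext4  defaults  0  2
sudo mount /myname                    # mount(8) finds the rest in fstab
```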
| creating custom mount point in / for a file system |
1,642,003,952,000 |
I have accidentally broken the fstab on my Ubuntu 16.04 LTS system, and now it boots in read-only mode.
After the accident there was this line:
/dev/disk/by-uuid/556d8ecf-44cd-402b-8fd0-d120ccd61491 /mnt/556d8ecf-44cd-402b-8fd0-d120ccd61491 auto nosuid,nodev,nofail,x-gvfs-show 0 0
I changed it to
/dev/sda1 / auto nosuid,nodev,nofail,x-gvfs-show 0 0
and it fixed the read-only problem, but now I cannot run sudo
What is the proper line for root mount in fstab?
|
For Ubuntu the default line generally looks like this:
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=eafe03c8-55fd-4f2c-b1eb-ed8e174f55e9 / ext4 errors=remount-ro 0 1
The file system UUID will be specific to your system. Get it with sudo lsblk -o "NAME,FSTYPE,LABEL,UUID"
ext4 is usually the default, but the actual type is shown in the FSTYPE column of lsblk.
| What is correct fstab line for root file system in Ubuntu 16.04? |
1,642,003,952,000 |
Silly me obliterated the contents of /etc/fstab via this:
echo xxxx xxxx xxxx xxx > /etc/fstab
Now the server is still up and running. How can I recover the contents of /etc/fstab before it fails upon the next reboot?
I remember something about anaconda which generated the file? Can it still be used to regenerate the file?
I want to recover a swap entry and a UUID entry (commented out or not) which I don't remember. Other entries than these 2 I can recover myself.
Are these 2 entries crucial for the system to reboot?
UPDATE
Here's the content of mount command:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=1931388k,nr_inodes=482847,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/net_cls type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/vda1 on / type ext4 (rw,relatime,data=ordered)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=31,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
/dev/vdb1 on /mnt type ext4 (rw,relatime,data=ordered)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=388232k,mode=700)
tmpfs on /run/user/1006 type tmpfs (rw,nosuid,nodev,relatime,size=388232k,mode=700,uid=1006,gid=1006)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
ls -l /dev/disk/by-uuid/
lrwxrwxrwx 1 root root 10 Jul 13 23:52 80b9b662-0a1d-4e84-b07b-c1bf19e72d97 -> ../../vda1
lrwxrwxrwx 1 root root 10 Jul 13 23:52 d5860b20-6f44-4731-a103-5ea4e1bd12e6 -> ../../vdb1
cat /etc/mtab
rootfs / rootfs rw 0 0
sysfs /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,nosuid,size=1931388k,nr_inodes=482847,mode=755 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,nosuid,nodev,mode=755 0 0
tmpfs /sys/fs/cgroup tmpfs ro,nosuid,nodev,noexec,mode=755 0 0
cgroup /sys/fs/cgroup/systemd cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
cgroup /sys/fs/cgroup/blkio cgroup rw,nosuid,nodev,noexec,relatime,blkio 0 0
cgroup /sys/fs/cgroup/net_cls cgroup rw,nosuid,nodev,noexec,relatime,net_cls 0 0
cgroup /sys/fs/cgroup/cpu,cpuacct cgroup rw,nosuid,nodev,noexec,relatime,cpuacct,cpu 0 0
cgroup /sys/fs/cgroup/freezer cgroup rw,nosuid,nodev,noexec,relatime,freezer 0 0
cgroup /sys/fs/cgroup/cpuset cgroup rw,nosuid,nodev,noexec,relatime,cpuset 0 0
cgroup /sys/fs/cgroup/hugetlb cgroup rw,nosuid,nodev,noexec,relatime,hugetlb 0 0
cgroup /sys/fs/cgroup/memory cgroup rw,nosuid,nodev,noexec,relatime,memory 0 0
cgroup /sys/fs/cgroup/perf_event cgroup rw,nosuid,nodev,noexec,relatime,perf_event 0 0
cgroup /sys/fs/cgroup/devices cgroup rw,nosuid,nodev,noexec,relatime,devices 0 0
configfs /sys/kernel/config configfs rw,relatime 0 0
/dev/vda1 / ext4 rw,relatime,data=ordered 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=31,pgrp=1,timeout=300,minproto=5,maxproto=5,direct 0 0
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
mqueue /dev/mqueue mqueue rw,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime 0 0
/dev/vdb1 /mnt ext4 rw,relatime,data=ordered 0 0
tmpfs /run/user/0 tmpfs rw,nosuid,nodev,relatime,size=388232k,mode=700 0 0
tmpfs /run/user/1006 tmpfs rw,nosuid,nodev,relatime,size=388232k,mode=700,uid=1006,gid=1006 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
blkid
/dev/vda1: UUID="80b9b662-0a1d-4e84-b07b-c1bf19e72d97" TYPE="ext4"
/dev/vdb1: UUID="d5860b20-6f44-4731-a103-5ea4e1bd12e6" TYPE="ext4"
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 118G 48G 65G 43% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 344K 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/vdb1 985G 226G 709G 25% /mnt
tmpfs 380M 0 380M 0% /run/user/0
tmpfs 380M 0 380M 0% /run/user/1006
cat /proc/swaps
Filename Type Size Used Priority
/swapfile file 1048572 0 -1
The /etc/fstab file was actually very simple with just a UUID entry (commented out maybe) and a swap file entry before I was trying to add more entries to it and accidentally erasing it.
Can you please help me reconstruct it from the info above?
|
Making this community wiki to invite contribution from someone familiar with CentOS/RHEL 7.2
The UUIDs come from your blkid output. The paths come from the other output, and the filesystem type and options from /etc/mtab. The dump and fsck order fields are guesses. (I used the same fsck pass because it's two different disks.)
# dev path fs opts dump fsck
UUID=80b9b662-0a1d-4e84-b07b-c1bf19e72d97 / ext4 relatime 0 1
UUID=d5860b20-6f44-4731-a103-5ea4e1bd12e6 /mnt ext4 relatime 0 1
/swapfile none swap sw 0 0
There are probably other things that need to go there (e.g., an entry for /proc or /sys). I don't have a CentOS 7.2 machine to check. Hence the community wiki for someone to complete this answer.
And once you've fixed your fstab, you next should fix your lack of backups. Even something as simple as installing etckeeper would have saved you here (though that's not really a backup, unless you git push it off the machine).
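As a quick sanity check before rebooting, you can verify that every entry in a reconstructed fstab has the expected six fields. This is only a syntax check, not a substitute for testing with mount -a; the sketch below copies the entries from this answer into a temporary file so it can be run anywhere:

```shell
# Write the reconstructed entries to a temporary file for checking.
cat > /tmp/fstab.test <<'EOF'
UUID=80b9b662-0a1d-4e84-b07b-c1bf19e72d97 / ext4 relatime 0 1
UUID=d5860b20-6f44-4731-a103-5ea4e1bd12e6 /mnt ext4 relatime 0 1
/swapfile none swap sw 0 0
EOF

# Every non-blank, non-comment line must have exactly six fields.
awk 'NF && $1 !~ /^#/ && NF != 6 { printf "line %d: expected 6 fields, got %d\n", FNR, NF; bad = 1 }
     END { exit bad }' /tmp/fstab.test && echo "field count OK"   # prints: field count OK
```

Run the same awk against the real /etc/fstab once you have written it.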
| Accidentally erased contents of /etc/fstab in Centos 7.2 |
1,642,003,952,000 |
I'm not sure if this will make sense but I'm having a problem.
I have an external hard disk drive that I'm using for my torrent files.
It is mounted using fstab.
Now here's the problem: sometimes, maybe because of power interruptions, my HDD is unmounted and (rarely) fails to remount. When this remount failure happens, the torrent downloads continue onto the mount point itself (example: /home/user/Downloads).
My question is, is there a way to make /home/user/Downloads as read-only and then make it write-able only if the HDD is mounted?
Or any other better solutions are most welcome.
|
You could make /home/user/Downloads be a link to a directory deeper in on the mount, which is mounted elsewhere. That would probably cause the torrent download to fail.
E.g., if the target directory is /user/Downloads on the HDD, which is mounted on /HDD, then /home/user/Downloads should be a link to /HDD/user/Downloads, and that directory certainly doesn't exist unless the HDD is mounted.
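To illustrate the idea with throwaway paths under /tmp (all names here are made up for the demo): Downloads becomes a symlink whose target only exists while the HDD is mounted, so writes fail whenever the disk is absent.

```shell
# Set up a fake home with Downloads as a (for now) dangling symlink.
mkdir -p /tmp/demo/home/user
ln -sfn /tmp/demo/HDD/user/Downloads /tmp/demo/home/user/Downloads

# "HDD not mounted": the link target is missing, so writes fail.
touch /tmp/demo/home/user/Downloads/file 2>/dev/null \
    && echo "writable" || echo "not writable"     # prints: not writable

# "HDD mounted": creating the target directory stands in for mounting.
mkdir -p /tmp/demo/HDD/user/Downloads
touch /tmp/demo/home/user/Downloads/file \
    && echo "writable"                            # prints: writable
```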
| read only mount point |
1,642,003,952,000 |
Yesterday I moved my home directory from the root partition to another partition following the steps here. Basically copied all files to the new partition and added a new fstab entry with the partition UUID and /home mount point, and restarted the system.
Everything worked as expected, but my question is: what happens to the old folder and files? /home now points to a new partition and it seems the old files just "disappear".
Thanks!
|
If you copied the files to the new partition but didn't delete them from the root partition, the old ones are masked or hidden by mounting the new partition on top of them. In that case, you should still have the same amount of root partition being in use, no space being freed. Unless we both missed that part, deleting the old copies is not included in the instructions you linked.
I'm quoting here a good answer on the subject:
When you mount a filesystem on a directory /mount-point, you can no
longer access files under /mount-point directly. They still exist, but
/mount-point now refers to the root of the mounted filesystem, not to
the directory that served as a mount point, so the contents of this
directory cannot be accessed, at least in this way.
The most straightforward way to straighten this out is, of course, to unmount the new /home (for this to succeed, no files from /home must be in use, meaning only root can be logged in). Then you'll see the old files (which occupy the root partition) and can delete them to free space in the root partition (but do double-check that the new partition is not mounted before really deleting anything). You should probably delete everything under the old /home, not just the contents within the user directories.
| Fate of home folder after reallocation to other partition |
1,642,003,952,000 |
Highlighted in orange is the swap location. But it isn't relative to any path, where is it located?
|
The orange text isn't the location, that's identifying the entry as swap. It's located on the logical volume listed before it. You can get more information on that using lvdisplay.
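Besides lvdisplay (which needs the LVM tools and root), two quick commands show where active swap actually lives:

```shell
# /proc/swaps lists every active swap area with its resolved device path.
cat /proc/swaps

# swapon --show (util-linux) prints the same information as a table.
swapon --show
```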
| Where is the location of this swapfile in /etc/fstab |
1,642,003,952,000 |
I have 2 partitions I want to mount:
sdb1 which uses ext2 file system
sdc1 which uses ext4 file system
I added these 2 lines to fstab:
/dev/sdb1 /home2 auto auto,noatime,default 0 0
/dev/sdc1 /home3 auto auto,noatime,noload,data=ordered,commit=10,default 0 0
It looks like they're not correct, because I fail to mount them with the errors shown below. How do I correct them?
This is my fstab and some command to show that
root@host [/etc]# cat fstab
#
# /etc/fstab
# Created by anaconda on Tue Jan 8 10:16:53 2013
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=eb3b9431-7964-47fd-a497-e4ddcd3f9d05 / ext4 defaults 1 1
UUID=ed11681c-9343-41ac-ac8b-a29bf4d13fbd /boot ext4 defaults 1 2
#UUID=191a3af4-c48a-4779-974a-c55dc290543d /home1 ext4 defaults 1 2
#UUID=eca46a9a-6666-40d0-bbe5-e35b54295264 /home2 ext4 defaults 1 2
UUID=475f3ba3-6459-42ac-b441-1daa95acb2b3 swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
/usr/tmpDSK /tmp ext3 defaults,noauto 0 0
/dev/sdb1 /home2 auto auto,noatime,default 0 0
/dev/sdc1 /home3 auto auto,noatime,noload,data=ordered,commit=10,default 0 0
root@host [/etc]# parted -l
Model: ATA WDC WD15EADS-00R (scsi)
Disk /dev/sda: 1500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 525MB 524MB primary ext4 boot
2 525MB 34.4GB 33.8GB primary linux-swap(v1)
3 34.4GB 1500GB 1466GB primary ext4
Model: ATA SAMSUNG SSD 830 (scsi)
Disk /dev/sdb: 256GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 256GB 256GB primary ext2
Model: ATA M4-CT256M4SSD2 (scsi)
Disk /dev/sdc: 256GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 256GB 256GB primary ext4
root@host [/etc]# mount /home2
mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
root@host [/etc]# mount /home3
mount: wrong fs type, bad option, bad superblock on /dev/sdc1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
root@host [/etc]#
|
I wonder if I should delete this question. The problem is that I used default where I should have written defaults. It's too localized, I guess.
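For the record, the corrected lines differ only in that one keyword (defaults instead of the invalid default):

```
/dev/sdb1 /home2 auto auto,noatime,defaults 0 0
/dev/sdc1 /home3 auto auto,noatime,noload,data=ordered,commit=10,defaults 0 0
```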
| How should I fix my fstab |
1,642,003,952,000 |
When is the right time to mount /tmp (on Debian)? For /home I would not feel bad just to echo "/dev/foo /home type defaults 0 0" >>/etc/fstab - but can I be sure that /tmp is not used by any programs when the fstab is applied?
I am using either Ubuntu or plain Debian or Debian/Grml - this would not make much difference I guess.
What I have read so far:
The internet is full of advice to just add tmpfs /tmp tmpfs <optionns> 0 0 - but I am unsure.
I found this answer on what to do when /tmp is full without rebooting (in short: It's best to reboot anyway, except maybe for a union mount).
The [Debian policy] does not explain where to add the mount, or when the first access to /tmp may happen. More helpful are /etc/init.d/README and /etc/rcS/README on my Ubuntu (read them online).
Background: I am going to use some Debian flavor on my netbook (no HD, 8 GB SSD, 1 GB RAM - will double the RAM when necessary). I am not low on memory. Some tasks are much too slow (building medium-sized C programs or compiling PDF from TeX both take 5+ seconds), but they take no time on a tmpfs. I want to mount a tmpfs on /tmp to accelerate them.
|
This doesn't appear to be explicitly specified by the Debian policy, but Debian does support making /tmp a separate filesystem (as well as /home, /var and /usr). This is traditionally supported by unix systems. And I can confirm that making /tmp a tmpfs filesystem, and mounting it automatically via /etc/fstab, does work on Debian.
There is some difficulty in transitioning to /tmp on tmpfs on a live system, because of the files that are already in /tmp and cannot be copied. But no file in /tmp is expected to be saved across reboots. It is safe to mount a different filesystem to /tmp at the time the partitions in /etc/fstab are mounted.
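A typical fstab entry for this looks like the following; the 1G size cap and the extra options are just one reasonable choice, not the only one (mode=1777 keeps the world-writable sticky-bit permissions /tmp requires):

```
tmpfs /tmp tmpfs defaults,noatime,mode=1777,size=1G 0 0
```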
| When to mount /tmp (and other temporary directories) |
1,642,003,952,000 |
I'm trying to mount an SMB-share after a wireguard connection has been established. Therefore I did the following things:
created a wireguard config
made systemd start the connection on startup
systemctl enable wg-quick@wg0.service
added the following entry to fstab
//192.168.0.10/home /mnt/smb cifs x-systemd.requires=wg-quick@wg0.service,credentials=/home/user/.smbcredentials,vers=3.0,uid=user,gid=user,users,_netdev 0 0
After rebooting the network share is not mounted. With the knowledge that every entry in fstab is converted into a systemd-unit I checked the status of the unit systemctl status mnt-smb.mount.
● mnt-smb.mount - /mnt/smb
Loaded: loaded (/etc/fstab; generated; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2022-02-09 16:55:28 CET; 1min 17s ago
Where: /mnt/smb
What: //192.168.0.10/home
Docs: man:fstab(5)
man:systemd-fstab-generator(8)
Process: 496 ExecMount=/bin/mount //192.168.0.10/home /mnt/smb -t cifs -o x-systemd.requires=wg-quick@wg0.service,credentials=/home/user/.smbcredentials,vers=3.0,uid=user,gid=user,users,_netdev (code=exited, status=32)
Feb 09 16:55:28 homeserver systemd[1]: Mounting /mnt/smb...
Feb 09 16:55:28 homeserver systemd[1]: mnt-smb.mount: Mount process exited, code=exited status=32
Feb 09 16:55:28 homeserver systemd[1]: Failed to mount /mnt/smb.
Feb 09 16:55:28 homeserver systemd[1]: mnt-smb.mount: Unit entered failed state.
A look into dmesg gave the following information:
[ 17.612210] Key type cifs.spnego registered
[ 17.612253] Key type cifs.idmap registered
[ 17.758816] wireguard: loading out-of-tree module taints kernel.
[ 17.775249] wireguard: WireGuard 0.0.20191206 loaded. See www.wireguard.com for information.
[ 17.775273] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
[ 27.752548] CIFS VFS: Error connecting to socket. Aborting operation.
[ 27.752576] CIFS VFS: cifs_mount failed w/return code = -115
Based on the following question on Stack Overflow, I assume that 115 means "in progress". I have seen the same behaviour when the WireGuard VPN connection was not up.
Having a look into the generated unit file:
# Automatically generated by systemd-fstab-generator
[Unit]
SourcePath=/etc/fstab
Documentation=man:fstab(5) man:systemd-fstab-generator(8)
Before=remote-fs.target
[Mount]
What=//192.168.0.10/home
Where=/mnt/smb
Type=cifs
Options=x-systemd.requires=wg-quick@wg0.service,credentials=/home/user/.smbcredentials,vers=3.0,uid=user,gid=user,users,_netdev
If I run mount -a after login, everything works as expected. So I think it is a timing issue between the units. Therefore I also created an own systemd unit and removed the entry from fstab:
[Unit]
Description=Homeserver SMB
Before=remote-fs.target
Requires=wg-quick@wg0.service
After=wg-quick@wg0.service
[Mount]
Type=cifs
What=//192.168.0.10/home
Where=/mnt/smb
Options=credentials=/home/user/.smbcredentials,vers=3.0,uid=user,gid=user,users
[Install]
WantedBy=multi-user.target
Moved it to /etc/systemd/system/mnt-smb.mount and activated it via systemctl enable mnt-smb.mount. This worked for one reboot, but stopped working after the next reboot.
Questions:
How could this timing issue be resolved?
What systemd options could be used in the unit file or in fstab?
|
I suspect the interface set up by WireGuard isn't ready just because the service has started. Your issue may be related to this, in which case the solution is to wait for the virtual device.
After=network.target wg-quick@wg0.service
Requires=sys-devices-virtual-net-wg0.device
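Combining those ordering directives with the asker's unit file might look like this; the unit name wg-quick@wg0.service is an assumption based on the WireGuard setup described in the question:

```
[Unit]
Description=Homeserver SMB
Before=remote-fs.target
After=network.target wg-quick@wg0.service sys-devices-virtual-net-wg0.device
Requires=wg-quick@wg0.service sys-devices-virtual-net-wg0.device

[Mount]
Type=cifs
What=//192.168.0.10/home
Where=/mnt/smb
Options=credentials=/home/user/.smbcredentials,vers=3.0,uid=user,gid=user,users

[Install]
WantedBy=multi-user.target
```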
| mount smb share after wireguard with fstab or systemd |
1,642,003,952,000 |
I am trying to follow the example partition scheme in https://www.debian.org/doc/manuals/securing-debian-manual/ch04s10.en.html
Somehow, the fstab file doesn't specify a root partition. Why?
|
Nowadays, on Linux, the root partition is not strictly needed in /etc/fstab because it is mounted on / at boot time owing to the root= boot parameter.
To know your current boot parameters, just cat /proc/cmdline (details on the output with man kernel-command-line).
If you don't have a line for / in /etc/fstab, you can still figure out what is your root partition with this command:
awk '$2 == "/"' /proc/mounts
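If util-linux's findmnt is available, it answers the same question directly:

```shell
# Print only the source device of the filesystem mounted on /.
findmnt -n -o SOURCE /
```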
| No root partition in the Debian example partition scheme |
1,642,003,952,000 |
I'm working on a Linux-like operating system for aarch64, based on a 5.6.4-v8+ kernel for Raspberry Pi 3 (Model B+).
The Kernel configuration options include:
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
It is possible to verify that the system is effectively mounted.
dmesg | grep devtmpfs
[0.071] devtmpfs: initialized
[2.653] devtmpfs: mounted
And it can also be confirmed with that the system is mounted on /dev:
df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 424M 0 424M 0% /dev
On the other hand, in the file /etc/fstab I have the following line:
devtmpfs /dev devtmpfs mode=0755,nosuid 0 0
As a test, I removed this line, and the result was that devtmpfs was still mounted on /dev without any problems. So it seems that it is not necessary to ask for the filesystem to be mounted via fstab, since the kernel appears to take care of it.
Is it really necessary to include the devtmpfs mount in fstab?
Thank you!
|
To my knowledge, the kernel does not automatically mount devtmpfs. It has to be done from userspace, either "manually" (one of the start scripts contains something like: mount -t devtmpfs none /dev), or via fstab.
On my custom Linux systems (Raspberry Pi Zero/4, and x86_64), I do not rely on a mounting mechanism based on /etc/fstab. If I remove the command mount -t devtmpfs none /dev from my /etc/profile, my /dev directory remains empty.
So, to answer your question: you do not have to include the devtmpfs mount in fstab, but you (the user, not the kernel) have to mount it yourself.
| Is it necessary to mount devtmpfs with /etc/fstab? |
1,642,003,952,000 |
We have a RHEL server, version 7.2.
From lsblk we can see only the following disks, and all disks use the ext4 filesystem:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 278.9G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 278.4G 0 part
├─vgN-lv_root 253:0 0 50G 0 lvm /
├─vgN-lv_swap 253:1 0 16G 0 lvm [SWAP]
└─vgN-lv_var 253:2 0 100G 0 lvm /var
sdb 8:16 0 1.7T 0 disk /gr/sdb
sdc 8:32 0 1.7T 0 disk /gr/sdc
sdd 8:48 0 1.7T 0 disk /gr/sdd
sde 8:64 0 1.7T 0 disk /gr/sde
But the interesting thing is that when we perform mount -a we get:
mount -a
mount: special device /dev/sdf does not exist
mount: special device /dev/sdg does not exist
We do not understand where mount -a gets these disks from, because they do not appear in lsblk, /etc/fstab, or /etc/mtab.
So why does mount -a complain about these disks, and how can we fix this?
|
Perhaps your /etc/fstab specifies some mounts by either UUID= or LABEL= (causing mount to loop through all block devices it finds) and you have some garbage files as /dev/sdf and /dev/sdg that are not actual device nodes?
Run ls -l /dev/sdf /dev/sdg. If it displays anything, and the letter in the very first column of the permissions string is not b, those are not real block devices. They might have been created by an accidentally mistyped command or two earlier.
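A small check along those lines (the device names are taken from the question and may not exist on your machine):

```shell
for f in /dev/sdf /dev/sdg; do
    if [ -b "$f" ]; then
        echo "$f: real block device"
    elif [ -e "$f" ]; then
        # A plain file or directory here is leftover garbage, e.g. from a
        # mistyped redirection such as 'somecommand > /dev/sdf'.
        echo "$f: exists but is NOT a block device"
    else
        echo "$f: does not exist"
    fi
done
```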
| mount + mount: special device /dev/sdX does not exist |
1,642,003,952,000 |
Fresh install of Ubuntu Server 20.04. cat /proc/filesystems shows exfat in the output. I have not installed any other packages for exFAT, as it should work from the kernel.
I mounted 2 internal HDDs via fstab as below:
#INT-1TB-4K Internal HDD mount to /mnt/INT-1TB-4K
UUID=0E7E-6579 /mnt/INT-1TB-4K exfat defaults, permissions 0 0
#INT-1TB-BAK Internal HDD mount to /mnt/INT-1TB-BAK
UUID=3037-96B0 /mnt/INT-1TB-BAK exfat defaults, permissions 0 0
ls -all in /mnt gives:
exharris@plexserv:/mnt$ ls -all
total 520
drwxr-xr-x 4 root root 4096 Jul 2 09:32 .
drwxr-xr-x 20 root root 4096 Jul 2 05:15 ..
drwxr-xr-x 9 root root 262144 Jul 3 03:49 INT-1TB-4K
drwxr-xr-x 7 root root 262144 Jul 3 03:49 INT-1TB-BAK
I get permission denied errors in the terminal when trying to create files in these folders (unless I use 'sudo', of course). This is because the 'others' write bit is set to -.
When running sudo chmod -R 777 INT-1TB-4K from /mnt, I get no errors, but when doing ls -all again, nothing has changed.
This is causing me problems also as I have set these up as Samba shares and also cannot write to them from other machines.
I also tried sudo chmod -R o+w INT-1TB-4K - same thing happened.
What is going on? I do not want to use exfat utils and fuse.
|
exfat behaves just like vfat and since it has no concept of permissions, chown and chmod both won't work.
You have to specify mount options such as uid, fmask and dmask, e.g.
defaults,noatime,nofail,uid=1000,fmask=0133,dmask=0022
(run id to find out what your ID is).
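Applied to the asker's fstab, the entries might become something like the following; uid=1000/gid=1000 assumes that's what id reports for your user, the masks here give files 644 and directories 755 (owned by you, so writable by you), and the invalid "permissions" option is dropped:

```
UUID=0E7E-6579 /mnt/INT-1TB-4K exfat defaults,noatime,nofail,uid=1000,gid=1000,fmask=0133,dmask=0022 0 0
UUID=3037-96B0 /mnt/INT-1TB-BAK exfat defaults,noatime,nofail,uid=1000,gid=1000,fmask=0133,dmask=0022 0 0
```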
| Native exFAT support in 5.4 kernel - issues? |
1,642,003,952,000 |
We need to run e2fsck on all our disks (Red Hat Linux 7.2). Since each machine has 22 disks (ext4 filesystem), it will take time to do this disk by disk: as everyone knows, before running e2fsck you need to unmount the mount point and only then run e2fsck on the disk. Example:
umount /grid/sdb
fsck /dev/sdb
mount /grid/sdb
But I found an option that can be much faster: we can use fstab for this purpose, changing the sixth field from 0 to 1 and then rebooting the machine.
From my understanding, during boot all disks will then be checked by e2fsck automatically. Am I right here? /etc/fstab example:
From
UUID=6f8debb3-aac9-4dfb-877f-463f5132d055 /grid/sdb ext4 defaults,noatime 0 0
UUID=203c24b2-8c07-4a9a-b4e0-1848ac5570d6 /grid/sdc ext4 defaults,noatime 0 0
UUID=941546ac-2168-4130-b51f-f5a255a4e43c /grid/sdd ext4 defaults,noatime 0 0
To
UUID=6f8debb3-aac9-4dfb-877f-463f5132d055 /grid/sdb ext4 defaults,noatime 1 0
UUID=203c24b2-8c07-4a9a-b4e0-1848ac5570d6 /grid/sdc ext4 defaults,noatime 1 0
UUID=941546ac-2168-4130-b51f-f5a255a4e43c /grid/sdd ext4 defaults,noatime 1 0
From the fstab(5) man page:
The sixth field (fs_passno).
This field is used by the fsck(8) program to determine the order
in which filesystem checks are done at reboot time. The root
filesystem should be specified with a fs_passno of 1, and other
filesystems should have a fs_passno of 2. Filesystems within a
drive will be checked sequentially, but filesystems on different
drives will be checked at the same time to utilize parallelism
available in the hardware. If the sixth field is not present or
zero, a value of zero is returned and fsck will assume that the
filesystem does not need to be checked.
|
That’s nearly right; you should use a pass number of 2 (since these aren’t the root file system), and it really has to be the sixth field, so
UUID=6f8debb3-aac9-4dfb-877f-463f5132d055 /grid/sdb ext4 defaults,noatime 0 2
UUID=203c24b2-8c07-4a9a-b4e0-1848ac5570d6 /grid/sdc ext4 defaults,noatime 0 2
UUID=941546ac-2168-4130-b51f-f5a255a4e43c /grid/sdd ext4 defaults,noatime 0 2
|<-------------- field 1 -------------->| |<- 2 ->| |<>| |<- field 4 -->| ^ ^
                                                     ^                    | |
                                            field 3 -+          field 5 --+ |
                                                                field 6 ----+
| short way to perform efsck when we have huge number of disks |
1,642,003,952,000 |
I don't have permission to chown the mounted directory /mnt/hdd. I am currently logged in as root. The ls -l output is:
drwxrwxrwx 1 root root 131072 Jan 1 1970 hdd
I am mounting it via fstab config:
/dev/sda1 /mnt/hdd exfat-fuse defaults 0 0
I am trying to assign the owner of that drive to www-data via that command:
root@owncloud:/mnt# chown -R www-data:www-data hdd
and it says I don't have permission to do that.
mount command output:
/dev/sda1 on /mnt/hdd type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
|
/mnt/hdd is an ExFAT filesystem, which does not actually have a concept of Unix-style file ownerships nor permissions, and so cannot store them. This is why your chown command is failing.
The ownerships and permissions displayed by ls -l are actually created on-the-fly by the exfat-fuse driver according to the mount options. Since the default list of mount options includes allow_other, the driver is currently allowing full access to all the files and directories in this filesystem to any user on the system.
You could use the id www-data command to display the user and group ID numbers of the www-data user. If www-data has a UID of 33 and a primary GID of 33, you could change your /etc/fstab line to:
/dev/sda1 /mnt/hdd exfat-fuse default_permissions,allow_root,uid=33,gid=33,nosuid,nodev,relatime,blksize=4096 0 0
Then unmount & re-mount the filesystem:
umount /mnt/hdd
mount /mnt/hdd
Now all the file and directory ownerships and permissions in the /mnt/hdd filesystem should have changed.
Note that this kind of Unix ownership and permission emulation for filesystems that don't have the capability to store Unix-style ownership/permission information is restricted to what you can specify with mount options: usually, it means that all the files and all the directories in that filesystem will have a single, fixed set of ownership/permission settings and they cannot be changed with chown/chmod commands at all. If this is too inflexible for you, I'm afraid the only option would be to use another filesystem type.
If this is a temporary setup, using an ExFAT filesystem to hold web server data (as indicated by the username www-data) might be fine. But if this is supposed to be a permanent setup, you should seriously consider reformatting /dev/sda1 to another filesystem type that allows native Unix-style file ownerships and permissions before starting to use it.
| No permission to chown /mnt/hdd |
1,642,003,952,000 |
I installed Arch Linux from arch linux evolution-image to a virtual device.
I tested the installation with GRUB MBR and GRUB EFI.
Inside virtualbox, I can see the grub menu, but when I select Arch Linux it gives me a
Kernel panic - not syncing: VFS: unable to mount root fs on unknown block(0,0)
What is going wrong?
|
I had a wrong fstab generated by genfstab (as pointed out here). So the kernel (please correct me, if this is wrong) didn't find my root-partition.
I generated fstab with labels and had a partition with a space in it. In fstab this must be written with \040. genfstab wrote garbage for the space.
Other answers suggest running update-initramfs -u -k version; however, on Arch this command is replaced by mkinitcpio.
To get the system running I did this:
I ran grub-mkconfig -o /boot/grub/grub.cfg (probably not important in this case)
after that I booted into grub-menu and pressed c for the grub-shell
I started Arch Linux manually by entering the following commands:
insmod linux
insmod ext2 (this works for ext3 and ext4 too, or load your filesystem's driver)
set root=(hd0,2) (enter your partition number, starting from 1)
linux /boot/vmlinuz-linux root=/dev/sda2 (again select your partition)
initrd /boot/initramfs-linux.img
boot
correct /etc/fstab (make sure spaces are expressed as \040)
Finished!
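For example, if the label were "My Data" (a made-up name), the fstab entry would encode the space as \040:

```
LABEL=My\040Data /mnt/data ext4 defaults 0 2
```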
| Arch Linux in virutalbox: kernel panic-not syncing: VFS: unable to mount root fs on unknown block(0,0) |
1,642,003,952,000 |
I have following line in /etc/fstab
192.168.1.10:/data /mnt/data fuse.sshfs rw,noauto,nosuid,nodev,noexec,_netdev
and following line in /etc/rc.local:
mount /mnt/data
During the boot process, the share is mounted automatically from a remote server via sshfs.
Sometimes the server is offline; the connection attempt then hangs indefinitely and my boot process stalls.
How can I set a reasonable timeout, so that if server is unreachable, the mount skips after 5 seconds or so ?
|
sshfs allows ssh client options to be used.
You want to use the ssh option ConnectTimeout=5.
So in the 4th field of your fstab line, append ,ConnectTimeout=5.
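With that option added, the fstab line from the question becomes:

```
192.168.1.10:/data /mnt/data fuse.sshfs rw,noauto,nosuid,nodev,noexec,_netdev,ConnectTimeout=5 0 0
```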
| timeout when initiating a sshfs connection |
1,642,003,952,000 |
I installed an additional Linux system into a separate partition, put the /home directory into that partition as well, and afterwards modified /etc/fstab to point /home at the old partition.
How can I access the contents of the initial /home directory?
# initial configuration
UUID=001 /disks/disk1part1 ext2 auto,users,rw,exec,relatime 0 0
UUID=002 / ext4 defaults,relatime,errors=remount-ro 0 1
UUID=003 /disks/disk26 ext4 auto,users,rw,exec,relatime 0 0
UUID=004 none swap sw 0 0
# changed configuration
UUID=001 /disks/disk1part1 ext2 auto,users,rw,exec,relatime 0 0
UUID=002 / ext4 defaults,relatime,errors=remount-ro 0 1
UUID=003 /home ext4 auto,users,rw,exec,relatime 0 0
UUID=004 none swap sw 0 0
The initial system had no /home entry in /etc/fstab because it was under the root, and the second configuration changed the /disks/disk26 mount point to /home.
|
After a mount --bind / /mnt you can access the /home directory of your root partition as /mnt/home, even if /home is already mounted over.
| How do you access the contents of a previous mount after switching to a different the partition? |
1,642,003,952,000 |
I'm trying to execute a script located on an NTFS partition that I own.
I own the mount point, which is ~/Migration.
ls -l in the directory where the mount point is contained shows me
drwxrwxrwx 1 technomage technomage 4096 Sep 30 18:04 Migration
Despite my being the owner of the entire structure, from the mount point onwards, and having rwx privileges, I am prevented from executing this script, startup.sh. Bash gives me the following error:
bash: ./startup.sh: Permission denied
In the directory that contains the script, ls -la shows me:
drwxrwxrwx 1 technomage technomage 4.0K Oct 1 12:51 .
drwxrwxrwx 1 technomage technomage 4.0K Oct 1 12:51 ..
-rwxrwxrwx 1 technomage technomage 1.9K Oct 1 12:51 startup.sh
Still I cannot execute startup.sh.
I know that permissions on NTFS partitions in Linux can be somewhat finicky, so I went into /etc/fstab and set the privileges, owners and masks as best I could:
UUID=6F537BB96F6E0CBC /home/technomage/Migration ntfs-3g rw,exec,user,umask=000,uid=1000,gid=1000 0 0
I then proceeded to sudo umount Migration, followed by reloading the fstab file configuration with sudo mount -a. The remounting is successful.
Despite all of this, I still cannot execute the script despite even using root.
The mount | grep sda6 command shows me the following, which tells me that, somehow, the partition isn't being mounted with the configuration I gave it:
/dev/sda6 on /home/technomage/Migration type fuseblk (rw,nosuid,nodev,noexec,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,user)
I'm running Debian Jessie, and even went into stretch's repository to get the latest version of the ntfs-3g driver, thinking it was some kind of bug.. no dice.
I'm not quite sure what's wrong. Please show me how I misconfigured how I mount my NTFS partition? I need total access to it.
|
You have your options right in /etc/fstab, but the order matters; exec has to come after user because user imposes noexec (among others). So your /etc/fstab entry should look like this:
UUID=6F537BB96F6E0CBC /home/technomage/Migration ntfs-3g rw,umask=000,uid=1000,gid=1000,user,exec 0 0
After the change to /etc/fstab, unmount the drive then sudo mount -a and try again.
Also, make sure your uid and gid are correct (by executing the command id when logged in with your user).
| NTFS Partition Not Mounting Properly, Cannot Execute Despite Ownership |
1,642,003,952,000 |
The following line:
/path1 /path2 posixovl none 0 0
fails with the error:
/sbin/mount.posixovl: invalid option -- 'o'
Usage: /sbin/mount.posixovl [-F] [-S source] mountpoint [-- fuseoptions]
This is because mount.posixovl uses a non standard mount syntax, and fstab will call it assuming default mount syntax, eg.
mount.posixovl /path1 /path2 -o [whatsoever_/etc/fstab_options]
EDIT #1:
Same problem, solved with an uglier hack in this linuxquestions.org Q&A titled: [SOLVED] How to get a fuse-posixovl partition mounted at bootup?
|
I wrote a wrapper for mount.posixovl that enables it to be used with fstab
First, rename /sbin/mount.posixovl to something else, like /sbin/mount.posixovl.orig
Then, create a new file /sbin/mount.posixovl with the following contents:
#!/bin/bash
# wrapper for mount.posixovl to conform with common mount syntax
# with this wrapper posixovl can be used in fstab
# location of the original mount.posixovl
origposixovl="/sbin/mount.posixovl.orig"
# gather inputs
while [ $# -gt 0 ]; do
    if [[ "$1" == -* ]]; then
        # var is an input switch
        # we can only use the -o or -F switches
        if [[ "$1" == *F* ]]; then
            optsF="-F"
        else
            optsF=""
        fi
        if [[ "$1" == *o* ]]; then
            shift
            optsfuse="-- -o $1"
        else
            optsfuse=""
        fi
        shift
    else
        # var is a main argument
        sourcedir="$1"
        shift
        if [ $# -gt 0 ] && [[ "$1" != -* ]]; then
            targetdir="$1"
            shift
        else
            # no separate target given: overlay the source directory in place
            targetdir="$sourcedir"
        fi
    fi
done

# verify inputs
if [ "$sourcedir" == "" ]; then
    echo "no source specified"
    exit 1
fi
if [ "$targetdir" == "" ]; then
    echo "no target specified"
    exit 1
fi

# build mount.posixovl command
"$origposixovl" $optsF -S "$sourcedir" "$targetdir" $optsfuse
Naturally, set the newly created /sbin/mount.posixovl to be executable (chmod +x /sbin/mount.posixovl)
With this wrapper in place, posixovl can be mounted through fstab.
| Mount posixovl using fstab |
1,642,003,952,000 |
I had a Slack 13.1 machine with 2.6.36 kernel. Then, I updated the kernel to 3.12.1.
This machine has connected: a bootable disk with three partions (/dev/sda1 --> Linux OS files..., /dev/sda2 --> data, /dev/sda3 --> more data), a "dummy" SSD just to store things (/dev/sdb1) and USB ports.
The fact is that whenever I try to start Linux with a USB stick containing data (not a LiveUSB) connected to the machine, something during the startup process assigns the sda device name to the USB stick, so it is not possible to mount the Linux partitions on the "bootable disk", resulting in a kernel panic:
VFS: Mounted root (vfat filesystem) readonly on device 8:1.
devtmpfs: error mounting -2
[...]
Kernel panic - not syncing: no init found. Try passing init=..
The bootloader I am using is LILO. I don't know if there is any way to force the boot process not to change device names, or to pre-assign any of them to a certain device. This is its configuration:
# Linux bootable partition config begins
image = /boot/vmlinuz
root=/dev/sda1
append="panic=120"
label=3.12.20-smp
read-only
/etc/fstab:
/dev/sda1 / ext4 rw 1 1
As the USB device's partition is treated as sda1, and it obviously doesn't contain any kind of init process or application, I get the kernel panic.
I had tried with root="LABEL=myLabel" or root="LABEL=current" with no luck...I think because it searches for the label in the root node, not in all partitions :S
Any suggestion of what is going on? Is it possible to fix it?
Thanks in advance!
|
The problem is that the disk names are created sequentially; the first disk to be detected by the kernel becomes /dev/sda, the second is /dev/sdb etc.
The solution to your problem would be to disable use (i.e. detection) of USB disks (including USB drives) until after your system has completed booting. This could be done by configuring the kernel to not include the USB storage driver in the kernel itself but to build it as a module. That way, during booting only the "normal" disk is found, and only after the root filesystem has been mounted does it become possible to load the usb_storage.ko module.
This is assuming you have built the kernel yourself, and you're not using an initrd (initial ramdisk).
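In the kernel configuration, this corresponds to building the USB mass-storage driver as a module rather than built-in (CONFIG_USB_STORAGE is the relevant symbol):

```
CONFIG_USB_STORAGE=m
```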
| Boot process - Dev sdX name changes |
1,642,003,952,000 |
What is the best way to point a folder in one of my website's directories to a folder on a second HDD I just had installed? I have seen mentions of fstab and symlinks but am lost as to the best way to do it. My main HDD (sda) has almost filled up, so I would like to move the uploads folder of one of my sites, which contains a few hundred GB of files, to the 2nd HDD (sdb). I want this folder to still be read and written to the same way it always has been. Any ideas? Thanx
|
In the following, LABEL can be anything you want, /dev/sdb1 is the partition you create and choose to use on your new HDD, and /var/www/myfiles is where your files are currently located. Alter these to suit your scenario.
Partition the new HDD. You can have one partition that takes up the whole disk, or make a smaller partition which leaves you space on the HDD for other partitions at a later date. gparted is probably the easiest way to create partitions.
Create a filesystem on the new partition. Name the filesystem. The command needed to do this depends on which filesystem you choose to use. If it's ext2/3/4 then use the e2label command - eg e2label /dev/sdb1 WebFiles. Alternatively, gparted can add labels to a partition.
Mount the new partition on /mnt - mount /dev/sdb1 /mnt.
Move the data from the old directory to the new HDD - mv /var/www/myfiles/* /mnt. Note - move the files; don't copy them; as the copy command (cp) can change owners of files.
Unmount the new partition - umount /mnt.
Mount the new partition on the directory where the files should reside - mount /dev/sdb1 /var/www/myfiles.
If everything works, make this permanent by adding an entry to /etc/fstab:
LABEL=WebFiles /var/www/myfiles ext4 defaults 1 2
Unmount it - umount /dev/sdb1; then check it mounts automatically using the fstab entry - mount -a.
Hopefully, everything should work ;-)
| Point folder on main HDD to newly mounted 2nd HDD |
1,642,003,952,000 |
I want to enable quotas.
My fstab currently has:
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# / was on /dev/sda1 during installation
UUID=97439827-cdb6-4406-8403-76ab1de7a3b0 / ext4 errors=remount-ro 0 1
# swap was on /dev/sda5 during installation
UUID=194bd177-cba0-415d-b45d-bd87b7bf446e none swap sw 0 0
durrantm:~/Dropbox/96_2013/work/code
$
If I want to add usrquota,grpquota to enable user and group quotas, do I put them after errors=remount-ro, e.g.
errors=remount-ro,usrquota,grpquota
|
As per the mount man page,
we can define only three values for the errors option:
errors={continue|remount-ro|panic}
Define the behaviour when an error is encountered. (Either ignore
errors and just mark the filesystem erroneous and continue, or
remount the filesystem read-only, or panic and halt the system.)
The default is set in the filesystem superblock, and can be changed
using tune2fs(8).
So you just need to add them like this:
/dev/sda1 /mount_point ext4 usrquota,grpquota,errors=remount-ro 0 1
then remount the partition:
mount -o remount /mount_point
then check with the mount command.
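If you prefer to script the edit, a sed substitution can splice the quota options in. A sketch, demonstrated against a sample line rather than the live file (back up /etc/fstab before pointing sed at it):

```shell
# Demonstrated on a sample fstab line; use sed -i on /etc/fstab on the real system.
line='UUID=97439827-cdb6-4406-8403-76ab1de7a3b0 / ext4 errors=remount-ro 0 1'
printf '%s\n' "$line" |
  sed 's/errors=remount-ro/usrquota,grpquota,errors=remount-ro/'
# -> UUID=97439827-cdb6-4406-8403-76ab1de7a3b0 / ext4 usrquota,grpquota,errors=remount-ro 0 1
```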
| How to enable quotas in the fstab file? |
1,642,003,952,000 |
I have an external USB drive which my system recognizes as /dev/sdb1. I want to have it automounted with 755 permissions on boot and shared over the network with samba. I created the mount point /mnt/mybook for it, and I've mounted it manually with no problems. If I do mount /dev/sdb1 /mnt/mybook, it mounts correctly and I can access the contents.
I figured this would be simple enough, so I read up on fstab and came up with the following line for it:
UUID=C252-9CA3 /mnt/mybook vfat defaults,mode=755 0 0
I got the UUID from blkid.
When I reboot, the drive is not automounted, much less with the 755 permissions I want. How can I make it so the drive gets correctly automounted with the desired permissions?
|
You could try an alternate approach, which is to recognize your device at the udev level and use /dev/mybook-partition in /etc/fstab. Put something like the following in /etc/udev/rules.d/dwilliams.rules:
KERNEL=="sd*", PROGRAM=="/sbin/blkid %N", RESULT=="C252-9CA3", SYMLINK+="mybook-partition"
The section on Auto mounting USB devices in the Arch wiki for udev might help you further.
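With that symlink in place, an fstab entry can reference it. One thing worth checking in the original entry: vfat has no mode= option (permissions on FAT filesystems are set via uid/gid and umask/dmask/fmask), and an unknown option can make the boot-time mount fail. A sketch, with the uid and umask values as assumptions:

```
/dev/mybook-partition  /mnt/mybook  vfat  defaults,uid=1000,umask=022  0  0
```

Here umask=022 yields 755 permissions on directories, which is what the original mode=755 was presumably after.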
| Why is my fstab entry for an external USB drive not working? |
1,642,003,952,000 |
I need to read and write to a USB NTFS pen drive through the www-data group (which has gid 33), so I have added
UUID=34A0456D004536A0 /home/mypath ntfs-3g rw,defaults,uid=1000,gid=33,dmode=770,fmode=660,dmask=007,fmask=117,auto 0 0
The disk is mounted, but with the generic permissions applied to all USB drives, ignoring everything I placed in fstab except the mount path, which is correct.
I have also used sudo ntfsusermap to generate a mapping file to place in .NTFS-3G folder in the drive.
What could be the reason?
How can I solve this problem?
|
Edit udisk2 mount options with:
sudo nano /etc/udisks2/mount_options.conf
and add
[defaults]
ntfs_defaults=uid=$UID,gid=$GID,windows_names
ntfs_allow=uid=$UID,gid=$GID,umask,dmask,fmask,locale,norecover,ignore_case,windows_names,compression,nocompression,big_writes
if it still doesn't work:
sudo nano /etc/udev/rules.d/90-usb-disks.rules
and add this
ENV{ID_FS_TYPE}=="ntfs", ENV{ID_FS_TYPE}="ntfs-3g"
| Permissions and groups in fstab ignored |
1,642,003,952,000 |
This is a CentOS 7 system
This actually starts with the kafka service. Kafka is failing to start due to a dependency on remote-fs.target
When I try to manually run remote-fs.target:
sudo systemctl start remote-fs.target
A dependency job for remote-fs.target failed. See 'journalctl -xe' for details.
So I run journalctl -xe, and get this:
Jun 09 15:33:10 lobo2 systemd[1]: Mounting /home/AAI33947/h...
-- Subject: Unit home-AAI33947-h.mount has begun start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit home-AAI33947-h.mount has begun starting up.
Jun 09 15:33:10 lobo2 mount[118920]: error 2 (No such file or directory) opening credential file /home/AAI33947/.cifspwd
Jun 09 15:33:10 lobo2 systemd[1]: home-AAI33947-h.mount mount process exited, code=exited status=2
Jun 09 15:33:10 lobo2 systemd[1]: Failed to mount /home/AAI33947/h.
-- Subject: Unit home-AAI33947-h.mount has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit home-AAI33947-h.mount has failed.
--
-- The result is failed.
Jun 09 15:33:10 lobo2 systemd[1]: Dependency failed for Remote File Systems.
-- Subject: Unit remote-fs.target has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit remote-fs.target has failed.
--
-- The result is dependency.
So I go look in fstab and find that there is indeed a cifs mount command for /home/AAI33947/h
But that user is no longer with us, and the remote target for that mount and the local mount point no longer exist. So I removed that line from fstab
However, when I attempt to run remote-fs.target I still get the same error. It's still trying to mount that remote filesystem, even though it isn't in fstab anymore.
What's going on? How can I get it to recognize that this remote filesystem is no longer something it needs to mount? I suppose a reboot might do it, but I'm trying not to disrupt some other stuff going on at the moment.
Thanks!
|
If you are running systemd, it will create mount units from your /etc/fstab on boot. If your fstab changes, you need to run sudo systemctl daemon-reload to refresh these units. The command mount -a will actually show a warning about this if it's run after a change to fstab without reloading the daemon.
| systemd remote-fs.target trying to mount remote filesystem that has been removed from fstab |
1,642,003,952,000 |
I'm seeing apparently conflicting information on the proper way to auto-mount USB flash drives at boot. Most instructions on how to do it say to use an entry in fstab. Gnome Disks has a built-in feature to automate this entry. It seems to recognize a flash drive as a flash drive and know how to properly make an entry for it in fstab, and the entry works.
On the other hand, I've read that pluggable drives should be handled by uDev rather than fstab, including essentially permanently plugged devices. Consistent with this, Disk Manager (a utility bundled with MX Linux), opens on my system (containing a working fstab entry for a flash drive), with an error message:
I cannot find any existing block devices corresponding to the following devices:
/dev/disk/by-id/usb-Samsung_Flash_Drive_<id> on <mount point>
It is advisable to remove them to avoid failed mount at start-up.
Once that message is bypassed, Disk Manager excludes the (properly mounted) drive from its display. It has an issue with the fact that it isn't a block device, let alone pluggable.
What I assume is a backup for fstab made by Disk Manager at some point, /etc/fstab-disk-manager-save, begins with the comment:
# Pluggable devices are handled by uDev, they are not in fstab.
An observation: auto-mounting a flash drive is a commonplace requirement. As such, one would expect there to be tools to assist in setting this up. The existing tools all seem to do it by creating an entry in fstab. Using uDev appears to require writing your own custom program, and there are many questions on Stack Exchange from programmers needing help with this (so it doesn't appear to be a method for novice users).
There's the old saying, "If it ain't broke, don't fix it", and the fstab entry method appears to work. OTOH, the advice about using uDev and the warning about mount failure means there are some conditions in which fstab won't work for this, which suggests that fstab is the wrong tool for the job and shouldn't be relied on just because it works in some cases.
So should a "permanently" plugged-in flash drive be mounted via fstab or uDev, and what is the risk suggested by the Disk Manager warning?
|
Eduardo Trápani's comments pointed me in the right direction to research the gist of the issue. I'll close the loop with this self-answer for anyone else landing here.
Problems preventing a successful boot can leave the computer in a state that requires jumping through hoops to get it operational again since you don't have access to the distro's own tools to fix the issue. The basic risk with using fstab for USB flash drives is that booting can hang or go into recovery mode if the drive is considered essential and mounting cannot be completed.
A drive is considered essential if it is in fstab and has not been designated (via relevant options), as only wanted rather than required. A number of conditions can lead to inability to mount, including the drive being unplugged, the drive having failed (commonplace for USB flash drives), or an fsck check being designated in the mount parameters and the system being unable to complete that.
These problems can be mitigated by options specified in the mount parameters, but those options vary in their availability and implementation across distros. So using fstab to mount removable drives benefits from researching the mount options available in your distro, even when using automated tools, like Gnome Disks, to create the fstab entry.
The mount options include:
nofail: I've read varying descriptions of what nofail does. Some describe this option as simply causing fsck to skip the test if it can't be performed (the test is skipped automatically for missing drives if they have the auto option). Others describe nofail as more generally defining the mount as only wanted, not required. The implication is that the boot will continue regardless of whether the device can be mounted successfully.
nobootwait: The varying descriptions are similar to nofail. Some descriptions seem to limit its purpose to making the boot not dependent on the ability to start or complete that device's fsck check. If the device is available, it runs fsck concurrently in the background rather than sequentially. The potential side effect (which also applies to nofail), is that the boot could complete but that resource isn't (yet) available, potentially causing operational problems.
Other descriptions didn't limit nobootwait to fsck; they described it as preventing failure to mount the drive, due to any cause, from halting the boot.
One post indicated that a difference between these two options is that nofail waits for up to several minutes before deciding that the drive is unavailable, resulting in a boot delay if it is, whereas nobootwait moves on immediately.
My understanding is that nobootwait was never compatible with Ubuntu (don't know if that extended to some non-Ubuntu-based distros, and can't vouch for whether that still applies at this time).
x-systemd options: There are some options for directly controlling whether a device is required vs. merely desired, and how long boot will wait for the device. These are named with the pattern x-systemd.<option>, and are contingent on the distro using systemd.
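Putting these options together, an fstab entry for a permanently plugged flash drive might look like the following. A sketch: the UUID, mount point and timeout are placeholders, and the x-systemd options require a systemd-based distro:

```
UUID=XXXX-XXXX  /mnt/flash  vfat  defaults,nofail,x-systemd.device-timeout=5s,uid=1000,umask=022  0  0
```

With nofail the boot continues if the stick is absent, and the short device timeout keeps any wait negligible instead of the default 90 seconds.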
| Which is the correct mechanism for auto-mounting a USB flash drive at boot? |
1,642,003,952,000 |
I have encrypted my external harddrives using cryptsetup and a key file. My goal is now to automatically decrypt and mount them upon plugin. I used to do so using this blog post (unfortunately in German). This used to work on my old Ubuntu 16.04 machine, but since I upgraded to Focal this does not work anymore.
What I have done specifically is:
Added /dev/mapper/extdrive /mnt/extdrive xfs defaults,noauto 0 2 to /etc/fstab.
Added ACTION=="add", SUBSYSTEM=="block", ENV{DEVTYPE}=="partition", ATTRS{serial}=="123456789", RUN+="/sbin/cryptsetup --key-file /root/.kf luksOpen $env{DEVNAME} extdrive" to /etc/udev/rules.d/85-extdrive.rules
Added ACTION=="add|change", SUBSYSTEM=="block", ENV{DM_NAME}=="extdrive", RUN+="/bin/mount /dev/mapper/$env{DM_NAME}" to /etc/udev/rules.d/85-extdrive.rules
It seems like the drive is opened via luksOpen but is not mounted, i.e., the "add|change" rule does not fire. How can I figure out why the automount fails? If I execute the respective commands manually, all seems fine. Bonus: Why did this approach used to work in 16.04 but does not anymore in 20.04?
Thank you!
|
mount won't work in UDev rules because UDev runs with its own mount namespace. You need to use systemd-mount instead, see this arch wiki article for details.
From udev manpage:
Note that running programs that access the network or mount/unmount filesystems is not allowed inside of udev rules, due to the default sandbox that is enforced on systemd-udevd.service.
This is a relatively new change (about 3 years ago, I think), so I guess this was not yet present in 16.04.
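Concretely, the second rule from the question could be rewritten to hand the mount over to systemd. A sketch: --no-block avoids blocking the udev event, --collect garbage-collects the transient mount unit when the device goes away, and the binary path may differ per distro:

```
# /etc/udev/rules.d/85-extdrive.rules (replacement for the /bin/mount rule)
ACTION=="add|change", SUBSYSTEM=="block", ENV{DM_NAME}=="extdrive", \
  RUN+="/usr/bin/systemd-mount --no-block --collect /dev/mapper/extdrive /mnt/extdrive"
```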
| Automatic mount of encrypted external harddrives |
1,611,052,742,000 |
I currently have mounts that look like this:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 16G 7.7G 7.3G 52% /
/dev/sdb2 237G 20G 207G 9% /var/www
/dev/sdb1 16G 7.5G 7.4G 51% /var/lib/jenkins
Unfortunately, I don't have enough room on /dev/sdb1. I'd like to move things around to be like this:
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 16G 7.7G 7.3G 52% /
/dev/sdb2 237G 27G 200G 11% /var
Is it too naive to simply script this pseudo code?
systemctl stop apache2 jenkins
for each dir in /var:
if dir is 'www':
mv /var/www/* /var/www/www/
continue
mv dir /var/www/
mv /var/lib/jenkins /var/www/lib/jenkins
sed -i 's|/var/www|/var|' /etc/fstab
sed -i 'd|/var/lib/jenkins|' /etc/fstab
reboot
|
Your handling of /var/www and /var/lib/jenkins seems OK, but you’ve missed one important part of the exercise: you need to move anything in /var, stored on /, into the new /var.
To do that reliably, you’ll need to stop anything currently using /var. I suspect the easiest way to do that will be to reboot to a live environment.
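From the live environment, the copy itself can be done with cp -a (or rsync). The sketch below simulates the layout with temporary directories standing in for the real mount points, since the real operation would run with the old root mounted at, say, /mnt/root and the new /var partition at /mnt/newvar (both paths are assumptions):

```shell
# Stand-ins for the real mount points (/mnt/root = old /, /mnt/newvar = /dev/sdb2).
oldroot=$(mktemp -d)
newvar=$(mktemp -d)
mkdir -p "$oldroot/var/log"
echo "sample" > "$oldroot/var/log/syslog"

# The actual operation: copy everything under the old /var into the new
# partition, preserving permissions, ownership and symlinks.
cp -a "$oldroot/var/." "$newvar/"

ls "$newvar/log"    # -> syslog
```

On the real system the copy line would be cp -a /mnt/root/var/. /mnt/newvar/ followed by the fstab edits and a reboot.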
| Safe-way to remount partitions |
1,611,052,742,000 |
I'm running on Debian. When I run mount | grep -i cgroup, I see,
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,size=4096k,nr_inodes=1024,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
However my /etc/fstab does not have any cgroup or cgroup2 mounts. Where are these mount points specified?
|
On systems using systemd, the cgroup mountpoints are mounted by systemd itself, based on its configuration. If the systemd.unified_cgroup_hierarchy option is specified, its value (true or false) determines whether a unified cgroup v2 hierarchy is used (true) or a hybrid or legacy cgroup hierarchy (false). If no option is specified, the compile-time default is used; if the kernel doesn’t support unified cgroup hierarchy, systemd will use the legacy hierarchy.
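One way to check which hierarchy systemd has set up is to look at the filesystem type mounted at /sys/fs/cgroup (a sketch; stat -f here is the GNU coreutils form):

```shell
# cgroup2fs means a unified (v2) hierarchy; tmpfs means the hybrid/legacy (v1)
# layout, with per-controller cgroup mounts underneath as in the question.
stat -fc %T /sys/fs/cgroup/

# The kernel command line shows whether the hierarchy was forced explicitly:
grep -o 'systemd.unified_cgroup_hierarchy=[^ ]*' /proc/cmdline \
  || echo "not set (compile-time default)"
```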
| What specifies the version of cgroups (cgroups or cgroups2) used by the distro? |
1,611,052,742,000 |
I have a Debian 10 machine which has a nfs mountpoint specified in fstab.
This is the line
10.0.0.2:/mnt/md0 /mnt/md0 nfs4 _netdev,auto,nofail 0 0
I thought nofail would prevent my boot sequence hanging for (precisely) 1:32 while a timeout takes place as the system looks for the NFS drive. However, this doesn't appear to be the correct option, as it is not mentioned in my system's man pages. A search suggested nobootwait might be an alternative, but again this is not mentioned in the man pages. There doesn't appear to be any relevant option, unless I am looking in the wrong document?
Is there any way to specify that the drive should be automatically mounted when it is present, and only when it is present? Both at boot time and, additionally, if the drive is "somehow seen" later on.
E.g., if I boot my workstation and the drive is not present (server not booted), it should not wait an additional minute and a half to boot.
Then, if I boot the server at a later time, is there any way to automatically detect/mount the NFS drive? I guess this could be done with some kind of cron script which pings the network address 10.0.0.2 (my server IP).
|
For automatically mounting NFS when present, autofs can be used (autofs)
As mentioned in man fstab(5)
nofail
do not report errors for this device if it does not exist.
AFAIK nobootwait was only for ubuntu-based distros (which is not a valid option anymore)
You can use x-systemd.device-timeout= (more info systemd.mount)
x-systemd.device-timeout=
Configure how long systemd should wait for a device to show up before
giving up on an entry from /etc/fstab. Specify a time in seconds or
explicitly append a unit such as "s", "min", "h", "ms".
Note that this option can only be used in /etc/fstab, and will be
ignored when part of the Options= setting in a unit file.
The default device timeout is 90 seconds, so a disconnected external device with only nofail will make your boot take 90 seconds longer, unless you reconfigure the timeout as shown. Make sure not to set the timeout to 0, as this translates to an infinite timeout.
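Applying this to the entry from the question (a sketch; the 10-second timeout is arbitrary, and the x-systemd.automount variant mounts the share on first access instead of at boot, which also covers the "server booted later" case):

```
# Fail fast at boot instead of waiting the default 90 s:
10.0.0.2:/mnt/md0  /mnt/md0  nfs4  _netdev,auto,nofail,x-systemd.device-timeout=10s  0  0

# Alternative: mount on demand, whenever the path is first accessed:
10.0.0.2:/mnt/md0  /mnt/md0  nfs4  _netdev,noauto,nofail,x-systemd.automount,x-systemd.idle-timeout=10min  0  0
```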
| Debian 10 fstab - Is there an option to prevent boot sequence hanging when device does not exist? |
1,611,052,742,000 |
I'm running archlinux and have an ext3 10Tb internal harddrive that is failing to mount on boot through the fstab entry. If I use mount /dev/sdc1 /media, the mount succeeds however attempting to do a mount -a gives the result
mount: /media: wrong fs type, bad option, bad superblock on /dev/sdc1, missing codepage or helper program, or other error.
I used df -Th to confirm the filesystem and got
/dev/sdc1 ext3 9.1T 3.8T 4.9T 44% /media
I ran e2fsck to check the disk and got this
sudo e2fsck -f /dev/sdc1
e2fsck 1.45.3 (14-Jul-2019)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sdc1: 30662/305201152 files (50.0% non-contiguous), 1020563007/2441608704 blocks
Which, as far as I understand, means everything passed.
The fstab entry at the moment is
UUID=239630fb-affe-4810-a3a1-6d7c7958af86 /media ext3 permissions,defaults 0 0
#/dev/sdc1 /media autofs permissions,defaults 0 0
I've tried ext3, auto, and autofs as the file system for both entries but neither seems to help.
|
The permissions option in your /etc/fstab isn't valid for an ext3 filesystem; that is why mount -a rejects the entry while a plain mount /dev/sdc1 /media, which doesn't apply the fstab options, succeeds.
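With the invalid option dropped, the entry reduces to the following (a sketch, keeping the original dump/pass fields; setting the last field to 2 would additionally enable boot-time fsck for this non-root filesystem):

```
UUID=239630fb-affe-4810-a3a1-6d7c7958af86 /media ext4 defaults 0 0
```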
| mount /dev/sdc1 /media works but fstab fails to mount |
1,611,052,742,000 |
Right now a system uses LVM on LUKS, with only 2 lvm partitions / and /home.
If I now want to make use of tmpfs for /tmp and /var/tmp, can I just
add the necessary changes to /etc/fstab and it will work without breaking anything? Or could this cause any problems?
tmpfs /tmp tmpfs size=16G,noatime 0 0
tmpfs /var/tmp tmpfs size=1G,noatime 0 0
|
Yes, it should work - assuming you have enough RAM for that 16GiB ramdisk and all your running applications! If you only have 16GiB RAM total, it doesn't really make sense to allocate so much memory to a ramdisk that it could push running applications - or itself - partially into swap space, because that would slow your system down to a crawl.
You probably don't want to have some existing applications using - or trying to use - temporary files "hidden behind" the mount point though, so I would reboot the system in order to activate this change, rather than just mounting /tmp and /var/tmp on the running system.
Also, if there are a large amount of temporary files already there, you could stop all running services (e.g. by going into single-user mode - but that won't work if you have to connect to the machine over ssh) and then remove the contents of /tmp and /var/tmp, before rebooting, to reclaim some disk space and inodes. But do not remove the directories themselves, because they are the mount points and must exist.
| Enabling tmpfs on an already installed system |
1,611,052,742,000 |
I mounted /usr from the nvidia TX1 dev board to an external SSD connected to the board.
I am wondering how I can restore its original state without re-flashing. If I power down, disconnect the SSD, and start, there will be no /usr directory.
I was thinking of making a copy to /root/usr and updating fstab to point to that instead of the external SSD, but there has to be a better option, I just cant think of it at the moment.
If it were a regular x86 I'd just boot to a live CD and fix it, but this is an SoC with Arm, so it's not quite that easy
|
Use a bind mount of / to make the original /usr (which should probably be empty there if /usr was mounted over it before /usr was ever populated) available and copy the mounted /usr over it.
# mkdir /root/underlyingroot
# mount --bind --make-private / /root/underlyingroot
# cp -ax /usr /root/underlyingroot
# umount /root/underlyingroot
--make-private is to cancel the case where / is mounted with the shared option, which it is when running systemd. Else anything mounted (eg: automount of an inserted device etc.) between the mount and umount above will be reflected inside /root/underlyingroot and prevent the simple umount /root/underlyingroot working after.
Now that the copy is done at the final place you can edit /etc/fstab and remove the /usr mountpoint.
If nothing at all running is using /usr you might be able to also umount /usr immediately and be done. But nowadays it's hard to have things running that don't use /usr at all unless in single-user or rescue mode, and today not even always then (e.g. newer CentOS), so a reboot is probably needed anyway. You can also consider umount --lazy /usr, which would let you immediately get rid of the /usr mount and have any new accesses to /usr served from the internal storage instead; but the external drive would still be required until the next reboot.
| Unmounting /usr from an external drive [closed] |
1,611,052,742,000 |
Why does a copy operation to a directory that serves as a mountpoint not copy the data to the mounted drive?
I bought a 2 terabyte drive and mounted it in a subdirectory within my home directory.
Where /dev/sdb is my 500GB system drive and /dev/sda is my 2TB data drive:
Partition Mountpoint
/dev/sdb1 -> /
/dev/sdb3 -> /home
/dev/sdb2 -> swap
/dev/sda1 -> /home/data
This all seems to work, and even shows up in df -h properly (i.e., /dev/sda1 is mounted on /home/data [to regenerate the fstab I booted into my arch disk live environment and mounted the partitions to the folders in /mnt that I wanted to partition, running genfstab -U /mnt > /mnt/etc/fstab; it worked])
Last night I set my box to running a 650GB copy operation to /home/data. Imagine my surprise when tons of copy operations failed due to being out of diskspace.
df -h shows that /dev/sdb3 is full but /dev/sda1 is nearly empty (77MB). The mount point is functioning properly, so far as I can tell, but the copy operation put all the data in /dev/sdb3! Presumably, if I unmount the drive, the music will still be in /home/data.
Clearly there is something about mounting and fstab that I am not fully understanding.
The particular entry in fstab reads:
# /dev/sdb1
UUID=<UUID> / ext4 rw,relatime 0 1
# /dev/sdb3
UUID=<UUID> /home ext4 rw,relatime 0 2
# /dev/sda1
UUID=<UUID> /home/data ext4 rw,relatime 0 2
Before regenerating the fstab, I had a swap entry in fstab. I'm not sure why it didn't regenerate.
Update: I managed to get the output of mount:
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
dev on /dev type devtmpfs (rw,nosuid,relatime,size=4051032k,nr_inodes=1012758,mode=755)
run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
/dev/sdb1 on / type ext4 (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=44,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13569)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
configfs on /sys/kernel/config type configfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
/dev/sdb3 on /home type ext4 (rw,relatime)
/dev/sda1 on /home/data type ext4 (rw,relatime)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=811560k,mode=700,uid=1000,gid=1000)
|
You mention that your copy command is cp -r /mnt/music data/ when you're in ~.
This means that you're copying your data into /home/<username>/data, since ~ would expand to /home/username.
However, your external drive is mounted as /home/data, according to the mount output you supplied. To finish your goal, you need to do two things:
copy all the data from /home/<username>/data to the /home/data directory.
rsync -avHP /home/<username>/data/* /home/data/ (or some variation)
this moves all the data out of your home partition and into the external drive.
fix up how you want to access the drive
leave it mounted as /home/data, and just access it that way
one option would be to create a symlink in ~ to access it: ln -s /home/data ~/data
another option would be to edit your fstab to set the mountpoint for the external drive to /home/<username>/data
| Why does a copy operation to a directory that serves as a mountpoint not copy the data to the mounted drive? |
1,611,052,742,000 |
#include <fstab.h>
struct fstab *getfsent(void);
http://man7.org/linux/man-pages/man3/getfsent.3.html
getfsent reads a line from the /etc/fstab file and returns a pointer of type struct fstab *. Do I need to free it? Or is it managed by someone else? If it's managed by someone else, why isn't the return type const struct fstab *? I checked the reference above but couldn't find anything useful.
|
At least for glibc, you shouldn't. The source indicates that the pointer is to a member of an internal state struct, so it's not something you can directly free.
The docs also hint at this:
To read the entire content of the fstab file the GNU C
Library contains a set of three functions which are designed in the
usual way.
The "usual" way here being something like getpwent:
The return value may point to a static area, and may be overwritten
by subsequent calls to getpwent(), getpwnam(3), or getpwuid(3). (Do
not pass the returned pointer to free(3).)
Also, the glibc docs specifically for getfsent:
The function returns a pointer to a variable of type struct fstab.
This variable is shared by all threads and therefore this function is
not thread-safe. If an error occurred getfsent returns a NULL
pointer.
That that variable is shared is a strong indication you should not mess with memory management.
If you want to free the resources, use endfsent(), which will clear the internal state.
| Should I free the fstab pointer returned by getfsent? |
1,611,052,742,000 |
We have the following disks and there mount point:
/dev/sdb /appTdb/sdc ext4 defaults,noatime 0 0
/dev/sdc /appTdb/sdd ext4 defaults,noatime 0 0
/dev/sdd /appTdb/sde ext4 defaults,noatime 0 0
/dev/sde /appTdb/sdb ext4 defaults,noatime 0 0
We want to enable fsck on disks sdb to sde (I mean, to run fsck during boot),
so we set "1" in this fstab:
/dev/sdb /appTdb/sdc ext4 defaults,noatime 0 1
/dev/sdc /appTdb/sdd ext4 defaults,noatime 0 1
/dev/sdd /appTdb/sde ext4 defaults,noatime 0 1
/dev/sde /appTdb/sdb ext4 defaults,noatime 0 1
First question: is this correct?
Second: what other values can we set instead of "1",
for example 3 or 4, etc.? (And what does each value mean?)
|
IIRC, the numbers are just the order in which disks get scanned. So, if 1 is used for all disks, then all the disks have the same priority for scanning. If one disk fails, then the boot fails, but it could be any of the disks that causes the failure. Using, say, 2 on some of the disks will cause those disks to be scanned after the ones given a 1, e.g.
/dev/sdb /appTdb/sdc ext4 defaults,noatime 0 1
/dev/sdc /appTdb/sdd ext4 defaults,noatime 0 2
/dev/sdd /appTdb/sde ext4 defaults,noatime 0 2
/dev/sde /appTdb/sdb ext4 defaults,noatime 0 3
In this case, disk /dev/sdb will be scanned first, then /dev/sdc and /dev/sdd, and finally /dev/sde. This could make a difference in your boot sequence, for example if /dev/sdb was the boot drive. A failure there would be a problem, whereas a failure on the other drives could potentially be ignored if not critical.
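For reference, fstab(5) documents the usual convention for the sixth field: 1 for the root filesystem, 2 for other filesystems, and 0 to skip checking; filesystems sharing the same number may be checked in parallel where possible. Applied to the question's entries, that would be (a sketch; the root entry elsewhere in fstab keeps 1):

```
/dev/sdb  /appTdb/sdc  ext4  defaults,noatime  0 2
/dev/sdc  /appTdb/sdd  ext4  defaults,noatime  0 2
/dev/sdd  /appTdb/sde  ext4  defaults,noatime  0 2
/dev/sde  /appTdb/sdb  ext4  defaults,noatime  0 2
```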
Incidentally, why are your drives and mount points messed up? Usually, they match so that it is mentally easier to map mount point to device.
| How to Force fsck for all other non-root partitions |
1,611,052,742,000 |
I have a system that used to have 3 different Linux flavours running. I no longer wanted one of them so I moved and expanded with gparted and all is well, except I now have /sda3, /sda4, /sda7, /sda8 - I deleted /sda5 and /sda6 - so I have a gap in the sequence.
I've seen that gdisk offers a 'sort' function, which looks like it might work. I can perform the sort operation, and print results and I end up with a nicely sequenced bunch of GPT partitions. I am yet to be bold enough to (w)rite the changes to disk.
My concern is that I'll need to edit /etc/fstab and/or /boot/grub/grub.cfg following this, or can I simply run update-grub to fix any config file issues?
Can anyone advise?
Thanks.
|
To confirm, using 'sudo gdisk' and performing the (s)ort option works brilliantly with UUID disks under GPT disk type.
Running 'sudo grub-install /dev/sda' and then 'sudo update-grub' took care of all the tiresome '/etc/fstab' and '/boot/grub/grub.cfg' editing automatically.
Very easy overall.
| How to re-order partitions safely? Safe to use gdisk 'sort' option? Edit fstab + grub.cfg necessary? |
1,611,052,742,000 |
I have set several folders to mount at startup in fstab. This works fine.
However, I would like to be able to bypass the mounting process on some occasions.
Typically, when I know the remote folders are not available.
Is there a way to bypass the auto-mounting process at boot time?
I thank you for your help.
|
If you would like to make a partition or network share optional, you can add the mount option nofail, comma-separated with the other options you have defined.
What will happen is, the system will still attempt to mount the partition/share, but if it is not available or not accessible for whatever reason, it will silently fail and continue to boot the system.
The fstab entry would look something like this:
/dev/sdc2 /mnt/your_partition ext4 defaults,nofail 1 2
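If you maintain fstab entries from a script, appending the option can be sketched like this (hypothetical helper, not part of the answer):

```python
# Hypothetical helper: append "nofail" to an fstab options field if it is
# not already present.
def add_nofail(options):
    opts = options.split(",")
    if "nofail" not in opts:
        opts.append("nofail")
    return ",".join(opts)

print(add_nofail("defaults"))  # defaults,nofail
```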
| Bypassing auto mount at boot time |
1,611,052,742,000 |
TL;DR: custom partitions and trash are not showing in Thunar under Awesome WM.
XFCE: (screenshot)
Awesome: (screenshot)
My fstab is:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=619542b0-8ce0-4dd1-9b0b-2d6224aa4f98 swap swap defaults,noatime 0 0
UUID=c49099f0-a6d7-4732-b41d-c34a7246019c / ext4 defaults,noatime 0 1
# /dev/sdb1 (games)
UUID=01CF50ED2AA59680 /mnt/games ntfs defaults,rw,uid=1000,umask=003,x-gvfs-show 0 0
# /dev/sdb2 (data)
UUID=3F2BFCA2397DA8FB /mnt/data ntfs defaults,rw,uid=1000,umask=003,x-gvfs-show 0 0
# /dev/sda4 (docs)
UUID=56D6C95328FD7038 /mnt/docs ntfs defaults,rw,uid=1000,umask=003,x-gvfs-show 0 0
# /dev/sdb3 (extra)
UUID=57b79234-ae2a-4206-9e53-95e6a6009fd5 /mnt/extra ext4 defaults,rw,x-gvfs-show 0 1
BTW, I know that this is in some way related to gvfs, which I already have running when I log in through Awesome WM, but do I need something more? I don't understand why it doesn't work.
|
Finally I found the reason. This bug is related to LightDM starting awesome without dbus-launch. I fixed the whole problem described here by hand-editing the file /usr/share/xsessions/awesome.desktop as:
[Desktop Entry]
Name=awesome
Comment=Highly configurable framework window manager
TryExec=awesome
Exec=dbus-launch --exit-with-session --sh-syntax awesome
Type=Application
This is not a very pleasant solution, and not a good one either: since I edited that file by hand, this will get messed up when I get a new update of awesome and /usr/share/xsessions/awesome.desktop is overwritten.
Looking forward to better solutions, but for now, only for now, this is working pretty well. Trash now appears in Thunar and xfdesktop, and the x-gvfs-show partitions are working as expected.
| How to get the trash and x-gvfs-show partitions in Thunar under Awesome WM? |
1,611,052,742,000 |
I encountered an issue where my Ubuntu 17.04 VM would enter maintenance mode on every boot. However, if I pressed Ctrl-D, or exited the maintenance shell and continued the boot, the system would start up fine, with no failed jobs.
I eventually narrowed it down to my 9pfs virtual filesystem mount. The job was hanging for no discernible reason. Running it manually from maintenance mode would succeed. Enabling debugging for SystemD didn't produce any more helpful errors.
|
The solution is to edit the /etc/fstab mount options for the 9pfs mount, and append noauto,x-systemd.automount. This will delay mounting long enough to avoid whatever race condition causes the error.
Example fstab entry
4tb /mnt/4tb 9p trans=virtio,rw,noauto,x-systemd.automount 0 0
https://forums.freenas.org/index.php?threads/9p-mounts-in-linux-vm-fail-at-boot-but-succeed-moments-later.52413/
| SystemD fails to mount a Plan9 Filesystem (9pfs) when starting a VM |
1,611,052,742,000 |
Context: I want to enable a normal user to mount a certain cifs mount on his system (Debian Strech). I therefore added the following entry in /etc/fstab (note the added ,user in the options):
//server/share/ /home/user/mountpoint cifs defaults,user,uid=user,credentials=/home/user/.cifs-creds 0 0
Also, the credential file is owned by the user and is readable/writable/executable by the owner only (700).
Subsequently mounting as root works (i.e. cifs-utils are available on the system, the credential-file exists and is correctly populated)! But mounting as a user does not, resulting in the following output:
user@system: ~$ mount mountpoint
mount error(22): Invalid argument
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
/var/log/kern.log states:
Jun 4 11:36:52 system kernel: [173283.233509] CIFS: Unknown mount option "defaults"
So, somehow, when executed as a user, the defaults option can not be used to mount? When the ,defaults option is removed from the fstab entry, users can mount (without error), but that would leave out a lot of defaults (and using the defaults (other than ,user) should be possible, right?).
Hence my question:
What is the advised fstab entry to have users mount a CIFS/SAMBA share to prevent the mount error(22): Invalid argument caused by the CIFS: Unknown mount option "defaults"?
Should I simply leave out ,defaults or is there another method to do this (I tried Googling on this, but every tutorial / explanation I find seems to confirm the method used)?
|
The manual page describes defaults as referring to the defaults used for option pairs like ro/rw and suid/nosuid, when a value is not specified explicitly.
But the reason for using defaults, is when you don't have any option you want to explicitly specify. You still need some value to put in the options field, so that you can put something in the next field.
Therefore ,defaults should never be necessary.
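Applied to the entry from the question, that means simply dropping ,defaults and keeping the remaining options (same fields as in the question):

```
//server/share/ /home/user/mountpoint cifs user,uid=user,credentials=/home/user/.cifs-creds 0 0
```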
| How to prevent: `CIFS: Unknown mount option "defaults"`? [duplicate] |
1,611,052,742,000 |
Is there a method for a process (without root) to hide or mount over a path of the filesystem for itself? It shouldn't affect the actual filesystem, only the process itself and perhaps its children?
Bit of an odd use case, but I need to build something on an OSX build server as if it were a vanilla OSX machine. However, unfortunately, there are a few libs installed in /usr/local/{include,lib} which mess up the build, and I don't have root on the machine. So I would like to temporarily hide /usr/local while running configure and make.
I do not have write access to /usr/local so I cannot actually modify it.
|
No, there's no direct way to do this. You can use chflags hidden to hide things from the Finder, but that doesn't affect the command-line.
The solution would depend on the configure script. It may simply look along PATH to notice the /usr/local, but more likely it will have a hardcoded list of directories to look at — including /usr/local. To work around the former (PATH-based), you could adjust your path. For the latter, the only thing that works is to modify the configure script.
The reason why you'll see hardcoded lists is that add-ons may not use the same search paths as the rest of the system. For instance, the various BSD ports may use /usr/pkg, /usr/local, etc., and partly rely upon the packagers to set these pathnames in their build-scripts. But programs that are not built as part of the ports system have to look for things in those places, to build with little user attention.
If you want to override the default search path for OSX, start with the ld manual page, which says
Search paths
ld maintains a list of directories to search for a library or framework
to use. The default library search path is /usr/lib then /usr/local/lib.
The -L option will add a new library search path. The default framework
search path is /Library/Frameworks then /System/Library/Frameworks.
(Note: previously, /Network/Library/Frameworks was at the end of the
default path. If you need that functionality, you need to explicitly add
-F/Network/Library/Frameworks). The -F option will add a new framework
search path. The -Z option will remove the standard search paths. The
-syslibroot option will prepend a prefix to all search paths.
and you can pass options to ld by prefixing them with -Wl (and using a comma where a space is needed). For your purpose, you would write a script which
uses -Z to remove the search paths, and
add back the parts you need using -L
The clang option -v shows the details of the compiler front-end, and passing -v to ld, e.g., using -Wl,-v shows the linker's details.
Something like this, for example:
#!/bin/sh
clang -Wl,-Z -L/usr/lib -F/Library/Frameworks -F/System/Library/Frameworks "$@"
It is not documented in the clang manual, but a quick check shows that it would pass a plain -Z option to the linker. On the other hand, clang does document options (mainly for gcc-compatibility) which suppress its searches of different categories of include-directories:
-nostdinc
Do not search the standard system directories or compiler
builtin directories for include files.
-nostdlibinc
Do not search the standard system directories for include files,
but do search compiler builtin include directories.
-nobuiltininc
Do not search clang's builtin directory for include files.
It does not have an option for showing the search-path for include files, but you can infer that by running the preprocessor.
Once you've gotten the script working, then running the configure script, you would set CC to that script's name, e.g.,
./configure CC=cc-system
I've been doing this a while, but haven't needed this particular combination (see Compiler wrappers).
| Hide or mask directory for a process on OS-X |
1,611,052,742,000 |
My /etc/fstab doesn't include the disk the system booted from, basically because I made some changes and forgot to include this. The initial /boot and / directories are on different drives.
I noticed this because when I upgrade the system and grub and the kernel get updated, the changes are made to the /boot directory under /, which is not the initial boot drive.
How can I tell after boot which device was booted from and its directory?
I want to mount it in /etc/fstab as /boot and delete or rename the boot directory under / to something else.
|
The root filesystem is passed to the kernel upon boot using the root argument. So you should be able to:
cat /proc/cmdline
and then look for root=/some/path, or perhaps root=UUID=longstring. For instance, I get:
BOOT_IMAGE=/boot/kernel-genkernel-x86_64-4.4.0-sabayon root=UUID=18f3b5a1-3994-43ef-ad6d-cb4c86ff5f95 ro quiet splash
If it's a path, it should point to something recognizable (like /dev/sdb3). If it's a UUID, copy the UUID, and run:
ls -la /dev/disk/by-uuid/[paste UUID here]
That should point to a symlink, like:
lrwxrwxrwx 1 root root 10 Apr 11 22:14 /dev/disk/by-uuid/06699502-fc90-48e4-86c2-cefdaf921e41 -> ../../sda4
Which should tell you which drive it was (in my case, the 4th partition of sda, iow, /dev/sda4)
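Extracting the root= argument can also be done in a few lines of code. A sketch (the function name is mine, not from the answer), using the example cmdline shown above:

```python
# Sketch: pull the root= argument out of a kernel command line string,
# e.g. the contents of /proc/cmdline.
def root_arg(cmdline):
    for word in cmdline.split():
        if word.startswith("root="):
            return word[len("root="):]
    return None

cmdline = ("BOOT_IMAGE=/boot/kernel-genkernel-x86_64-4.4.0-sabayon "
           "root=UUID=18f3b5a1-3994-43ef-ad6d-cb4c86ff5f95 ro quiet splash")
print(root_arg(cmdline))  # UUID=18f3b5a1-3994-43ef-ad6d-cb4c86ff5f95
```

If the result starts with UUID=, the value can then be looked up under /dev/disk/by-uuid/ as shown above.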
| How to work out which drive and directory your system booted from if it is not mounted in /etc/fstab? |