I have the following problem: I have a server that needs to mount a Windows network share in order to copy a file onto it. So I added the share to fstab so that it is mounted at startup:

//192.168.1.xx/share /mnt/networkshare cifs noperm,username=user,password=****** 0 0

A script loops to copy the file onto the share like this:

while [ true ]
do
    if [ -f /path/to/the/file ]
    then
        mv /path/to/the/file /mnt/networkshare
    fi
done

The problem is that the Windows computer shuts down at night and starts in the morning. First, during this time the load on the server is 100% on one core because of the while [ true ] script. Second, sometimes the mount no longer works after the Windows computer has started up (a crontab entry, * * * * * mount -a, runs to mount everything again). The files do not get copied and the mount is not accessible on the server; it needs to be restarted.

How can I make sure that the mount is always there while the Windows computer is on? Do I maybe need to somehow umount the share? I can umount it every night, but what if the Windows computer gets restarted during the day? And how can I pause the while [ true ] script while the mount is not there, to lower the CPU load at night? Thanks for your help!
You can check the mount before attempting to move:

df | grep "/mnt/networkshare" > /dev/null
r=${?}
if [ ${r} -eq 0 ]
then
    mv /path/to/the/file /mnt/networkshare
fi

Also, adding a sleep command to the loop should mitigate your 100% CPU utilization problem. Hammering in a tight loop without a break is not a good approach.
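A minimal sketch of this suggestion, assuming mountpoint(1) from util-linux is available; the paths and the 60-second interval are placeholders taken from the question, not tested values:

```shell
#!/bin/sh
# Sketch only -- SRC/DEST and the sleep interval are assumptions; adapt them.
SRC=/path/to/the/file
DEST=/mnt/networkshare

# Succeeds only if the given directory is an active mount point.
is_mounted() {
    mountpoint -q "$1"
}

# One iteration: move the file only when the share is really mounted.
sync_once() {
    if is_mounted "$DEST" && [ -f "$SRC" ]; then
        mv "$SRC" "$DEST"
    fi
}

# In the real script, run this with a pause instead of a busy loop:
#   while true; do sync_once; sleep 60; done
```

With the sleep in place, the loop wakes once a minute instead of spinning, and the mountpoint check keeps mv from dropping files into the empty mount directory while the share is down.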
Auto-mounting a network share which is temporarily offline
1,611,052,742,000
I just made a small mistake and reformatted my swap partition. It's still formatted as a swap partition - I was fortunate not to touch anything more important. However, I notice that the UUID has changed, so it no longer matches the UUID in /etc/fstab. This doesn't cause me any immediate problems, presumably because swap is semi-redundant with modern RAM. Still, I would like to fix the problem.

First, is there a command that lets me verify my hypothesis - that my swap hasn't been activated from fstab since the UUID change? I looked at findmnt on a separate computer to see whether swap normally gets displayed - it doesn't. So what command shows you which partition, if any, is being utilised as swap?

Second, I presume I can just manually edit fstab and change the UUID it 'expects' to the new UUID. Is that the 'right' way to fix it? Perhaps there are tools for 'safe' editing of fstab entries (like there are for grub.cfg) which I should look at (even if, in my case, not much can go wrong editing manually).
In answer to your second question, there's no dedicated wrapper for the fstab file; just open it in a text editor.
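To address the first question as well: on Linux, the active swap areas can be listed with swapon --show or by reading /proc/swaps. For the edit itself, a cautious pattern is to fix the UUID in a copy first and review it before overwriting the real file. A sketch, with made-up UUIDs standing in for your old and new values:

```shell
# List what is currently being used as swap (either works on Linux):
#   swapon --show
#   cat /proc/swaps

# Hypothetical UUIDs -- substitute the stale value from /etc/fstab and the
# new one reported by 'blkid' for your swap partition.
old=11111111-2222-3333-4444-555555555555
new=99999999-8888-7777-6666-555555555555

tmp=$(mktemp)
printf 'UUID=%s none swap sw 0 0\n' "$old" > "$tmp"   # stand-in for /etc/fstab
sed -i "s/$old/$new/" "$tmp"
cat "$tmp"   # review, then copy over /etc/fstab and run 'swapon -a'
```

If the edit looks right, swapon -a afterwards activates the entry without a reboot, and swapon --show confirms it took effect.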
Fixing fstab after reformatting swap
1,611,052,742,000
I thought it would be easier for me to mount flash drives automatically if I added the following to fstab:

/dev/sd1i /mnt/usb

(sd1i is found from sysctl hw.disknames.) I rebooted the box with the USB 3.0 flash drive still inserted in the USB 3.0 port. During the boot process, the following errors were reported:

/dev/rsd1i: BAD SUPER BLOCK: MAGIC NUMBER WRONG
/dev/rsd1i: Unexpected inconsistency: Run fsck_ffs manually
The following file system had an unexpected inconsistency:
ffs: /dev/rsd1i (/mnt/usb)
Automatic file system check failed; help!
Enter pathname of shell or RETURN for sh:

I checked out the article "How to use ed to edit /etc/fstab in single user mode" (http://www.openbsdsupport.org/ed_and_fstab.html), which discusses how to use ed to modify lines but not how to delete them. Some help would be much appreciated.
You don't need to use ed unless you really want to. Once you're at the single-user prompt (just hit Enter at the "Enter pathname of shell or RETURN for sh:" prompt), do the following.

Mount the root filesystem read-write, then mount the /var and /usr filesystems (this will allow you to run vi or any other editor of your choice):

# mount -uw /
# mount /var
# mount /usr

Once those are mounted, edit /etc/fstab and remove the offending line. Then reboot:

# reboot

Your system should then restart correctly in multi-user mode.
Need to remove a line in fstab on OpenBSD
1,611,052,742,000
I am trying to mount a folder on Windows (shared with Everyone) onto a CentOS server using cifs. In /etc/fstab I have:

//192.168.x.x/DOUGSLAPTOP/hatest /mnt/fsr01 cifs guest 0 0

I have also tried this:

//192.168.x.x/DOUGSLAPTOP/hatest /mnt/fsr01 cifs users,rw,user=Ryan,pass=Mon30 0 0

When I make this change, I get this response:

mount error(110): Connection timed out
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)

I don't think this has to do with my firewall, because I turned it off temporarily. Does anyone have any other suggestions?
The best way to mount without a username and password is the sec=none option. It is a mount option of mount.cifs, so it goes in the options field of your fstab entry.
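A hedged sketch of how the question's first fstab entry might look with that option added (the share path and mount point are copied from the question; whether sec=none is accepted depends on the Windows server's security settings):

```
//192.168.x.x/DOUGSLAPTOP/hatest /mnt/fsr01 cifs guest,sec=none 0 0
```

After editing fstab, mount -a (or mount /mnt/fsr01) will attempt the mount with the new options.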
CIFS mount point times out when attempting to mount
1,611,052,742,000
I recently had a bad UPS lead to a sudden crash of several machines. One of them (running FreeBSD) didn't come back up until I replaced the power supply, but it's still not fully back. Both the BIOS and the OS complain about a disk being missing; swapping around power cables and data cables and such has convinced me that the problem is the disk itself. FreeBSD won't fully start up due to a problem mounting something from /etc/fstab:

Mounting local file systems:
mount: /fdesc: No such file or directory

If I comment out the fdesc line in fstab, everything seems to come up OK. But... that can't be good, can it? I don't know anything about fdesc besides what I've read in the past few minutes, but it seems like a low-level thing dealing with stuff like stdin and stdout, which seem important to me. There does exist /dev/fd, which does contain /dev/fd/0, /dev/fd/1, and /dev/fd/2, and brief piping experiments at the command line seem to indicate that stdin, stdout, and stderr are all working OK.

What might be the cause of it being unable to mount /fdesc? And what horrible things will happen if I just continue to run without mounting it? How might I be able to get /fdesc back?

The contents of /etc/fstab, after I commented out the fdesc line:

#Device       Mountpoint  FSType  Options  Dump  Pass#
/dev/ada0p2   /           ufs     rw       1     1
/dev/ada0p3   none        swap    sw       0     0
#/dev/fd      fdesc       fdesc   rw       0     0

Plus a couple of Samba mounts which seem to be working fine.
Your fdesc line in fstab appears to be wrong; it should be

fdesc /dev/fd fdescfs rw 0 0

As the first comment noted, the first column is the device name, which is ignored by fdescfs(5); then comes the mount point, which should be /dev/fd to make it useful. Also, the file system type is fdescfs, not fdesc. See the man page fdescfs(5) for more information.
FreeBSD can't mount fdesc?
1,611,052,742,000
I have Arch Linux ARM (Linux comp001 3.18.7-1-ARCH #1 PREEMPT Wed Feb 11 11:38:34 MST 2015 armv6l GNU/Linux) installed on a Raspberry Pi, and here is my /etc/fstab file:

#
# /etc/fstab: static file system information
#
# <file system> <dir> <type> <options> <dump> <pass>
/dev/mmcblk0p1  /boot      vfat  defaults                           0  0
/dev/mmcblk0p3  /mnt/data  vfat  noexec,rw,noatime,user,umask=022   0  2

Partition /dev/mmcblk0p3 (the microSD card's FAT32 partition) is mounted on /mnt/data with rw options, but if I list the /mnt directory, I get:

total 20
 4 drwxr-xr-x  3 root root  4096 Sep 18 13:27 .
 4 drwxr-xr-x 18 root root  4096 Jan  9 11:08 ..
12 drwxr-xr-x  3 root root 12288 Jan  1  1970 data

Why is the write permission bit not set on data?
You are confusing the rw option with the umask. The rw option merely dictates that the partition is not mounted read-only. The umask option dictates which permission bits are not set on files and directories. Your current umask of 022 sets the permission bits to 755, which translates to rwxr-xr-x. Change the umask to 000, which should give you 777, i.e. rwxrwxrwx permissions. More info on umask is available on Wikipedia.
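The arithmetic behind that can be checked directly in the shell: the effective mode is the full mode ANDed with the complement of the umask (values taken from the answer above):

```shell
# 0777 & ~umask gives the resulting permission bits.
perms=$(printf '%o' $(( 0777 & ~0022 )))   # umask=022
open=$(printf '%o' $(( 0777 & ~0000 )))    # umask=000
echo "umask 022 -> $perms, umask 000 -> $open"
# prints: umask 022 -> 755, umask 000 -> 777
```

The same masking is what the vfat driver applies when synthesizing permissions for a filesystem that cannot store them itself.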
/etc/fstab rw option is being ignored for microSD card partition in Arch Linux
1,611,052,742,000
I've got a Unix user "popolo" who is chrooted into /srv/ftp/, and I mount my two external drives via /etc/fstab in /srv/ftp, so I have /srv/ftp/dude and /srv/ftp/sweet. Popolo has access to those drives via sftp. In dude/ I have several directories: dude/music, dude/photos, dude/movies, and for some of them (like photos) I don't want popolo to have access. Is using /etc/fstab and a user chrooted via sftp the best way to do this? How can I restrict access to some directories?
Use normal Linux/Unix permissions on dude/photos to make sure that popolo can't access it. Assuming that popolo isn't the owner of those files and directories and isn't in their group, a simple

chmod -R o-rwx dude/photos

should make sure that popolo can't access those files.

An alternative would be to give popolo an empty chroot home and bind-mount only the directories that you want that user to access into that empty chroot. Assuming (again) that popolo's chroot home is now /home/popolo:

mkdir /home/popolo/music /home/popolo/movies
mount --bind /srv/ftp/dude/music /home/popolo/music
mount --bind /srv/ftp/dude/movies /home/popolo/movies

As you haven't bind-mounted dude/photos, popolo won't have access to it.
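If you go the bind-mount route, the mounts can be made persistent across reboots with fstab entries; a sketch using the hypothetical /home/popolo chroot paths from the answer:

```
/srv/ftp/dude/music   /home/popolo/music   none  bind  0  0
/srv/ftp/dude/movies  /home/popolo/movies  none  bind  0  0
```

The bind entries must come after the entry that mounts the external drive itself, so the source directories exist when they are processed.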
Limit access on external drive mounted and used by sftp
1,611,052,742,000
I always have to go to gparted and then turn my swap on. The swap space isn't used by default and if I turn the swap on, the swap is not used! How can I make the swap space be used by default at boot?
This information should be set in /etc/fstab. You want a line something like

/dev/sdb3 none swap sw 0 0

with the first item set to match your device details. Any swap lines with noauto will be ignored. I believe this configuration is fairly consistent between *nix systems; see man fstab for more info.
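To confirm the entry works without rebooting, you can activate everything listed in fstab and then inspect the active swap areas; a small sketch (swapon -a needs root, so it is commented out here):

```shell
# sudo swapon -a          # activate all swap entries from fstab right now
cat /proc/swaps           # header line plus one line per active swap area
swapon --show             # util-linux alternative with nicer columns
```

If /proc/swaps lists your partition after swapon -a, the same entry will also be activated at boot.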
Something happened to Swap; it is not used by default
1,611,052,742,000
I just did a fresh Gentoo install, but when I boot it, the root filesystem mounts as read-only. Once I log in I can remount it with mount -o remount,rw /, but it's not even recognizing my hostname. Someone on IRC told me it could be that fsck bombs for some reason, as root is always mounted ro first and then fsck remounts it rw. I found someone having the same problem and tried what he did, but it didn't work for me: Root file system is mounted read-only on boot on Gentoo Linux. This is my fstab.

EDIT: I already fixed it; it was a problem with an option not enabled in the kernel:

Pseudo Filesystems --->
    [*] Virtual memory file system support (former shm fs)
I found this solution over on SuperUser, titled "Root file system is mounted read-only on boot on Gentoo Linux", which sounds exactly like your issue. The solution was to make sure that the root service was enabled in your boot runlevel. These are the services that were suggested as needing to be started in the boot runlevel:

bootmisc consolefont device-mapper dmcrypt fsck hostname hwclock keymaps
localmount modules net.lo netmount network procfs root svscan swap sysctl
sysfs termencoding urandom
Gentoo mounting root as read only, why?
1,611,052,742,000
I have a simple embedded setup as follows: x86 target, kernel and root file system built using buildroot. Syslinux is the bootloader configured to boot with an initramfs which points to a .cpio file generated from the buildroot generated root file system. My system boots and works as I am expecting, but I am confused as to what happens with the entries contained in /etc/fstab. I would like my system to be able to mount the /var directory as a partition on an attached compact flash device for various reasons (mainly because I want to be able to store and run an application on the compact flash and additionally store log data here). Is it possible to use an initramfs and yet mount /var on device outside of RAM? If that is the case, is /etc/fstab the correct place to configure this?
Any directory path can have any (valid) volume mounted to it. Whether or not /etc/fstab is the correct place to put it depends on whether or not your embedded setup even uses it.
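Assuming the setup does process /etc/fstab (Buildroot's default BusyBox inittab typically runs mount -a at sysinit, so it usually does), an entry for /var might look like the following sketch; the device name /dev/sda1 and the filesystem type are assumptions to adjust for the actual compact flash device:

```
/dev/sda1  /var  ext4  defaults,noatime  0  2
```

Anything written to /var inside the initramfs before this mount happens will be shadowed once the partition is mounted over it, so services that log early may need to be started after local mounts complete.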
What happens with /etc/fstab when using an initramfs?
1,611,052,742,000
I just decided to delete my Windows partition and only use Linux. My old partition table was: sda1: W7 boot partition, sda2: W7 partition, sda3: Linux, sda4: start of logical partitions, sda5: swap. I deleted sda1 and sda2, and then expanded sda3. Now my partition table is: sda3: Linux, sda4: start of logical partitions, sda5: swap.

I would like to change sda3 to sda1 - how? Also, my fstab keeps showing me the old Windows partition:

# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
#Entry for /dev/sda3 :
UUID=059c7142-b4d8-4ab0-8d0f-ee460fce905e / ext4 rw,errors=remount-ro 0 1
#Entry for /dev/sdb1 :
UUID=5632BCEF32BCD569 /media/Datos ntfs-3g defaults,locale=en_US.UTF-8 0 0
#Entry for /dev/sda2 :
UUID=60D8A6E5D8A6B8A4 /media/Windows ntfs-3g defaults,locale=en_US.UTF-8 0 0
#Entry for /dev/sda5 :
UUID=53cd360a-1321-497f-8c3a-ff3adf4cf82c swap swap sw 0 0
First of all, if you have moved the beginning of the partition, chances are rather high that you can only wave the filesystem there goodbye. The reason is that the beginning of a filesystem usually contains very important data structures (usually called the superblock) without which the data in the filesystem is inaccessible. Maybe some utility exists that could move the superblock and fix the filesystem (because sector numbers, which are used for data addressing, would change, since they are counted from the beginning of the partition), but I would be very cautious about using any such thing - especially if you intended to use it on a mounted partition.

If you did it on a live system, the kernel still has the old partition table cached and will read the new one on reboot (it can reread it earlier when no partitions on the device are mounted - you can request this e.g. with hdparm -z). If you can still get the old partition boundaries (sector-exact) somewhere, I would recommend resetting the table and retrying as described below. If you don't have that information any more, there are utilities that try to find the original partition boundaries by scanning the disk for superblocks (or possibly by checking the kernel's cached data).

That said, the correct way to do the resize is:

Copy the filesystem from /dev/sda3 to /dev/sda1 - either file by file, or with a dump utility, or directly with dd if the destination is at least as big as the source. In the last case, you should extend the filesystem as described later.

Fix all important references to /dev/sda3 in the filesystem on /dev/sda1 so they point to /dev/sda1 - this includes: the bootloader configuration (where to find the kernel to boot), the kernel option root= which tells the kernel what partition to mount as /, and /etc/fstab. You must do this by hand - again, there could be a utility for that, but for this type of thing I wouldn't rely on it.
Boot from /dev/sda1. Then either extend /dev/sda1 to cover /dev/sda2 and /dev/sda3, or repartition the now-unused space spanned by those. If extending, use the utility for your filesystem to grow it at the end (for ext2/3 this would be resize2fs, for XFS xfs_growfs, etc.), and update /etc/fstab again if necessary.

Renumbering: fdisk has "fix partition order" (in the extra functionality sub-menu); gdisk has "sort partitions" (in the main menu). Afterwards, check /etc/fstab and possibly the bootloader configuration again to see whether any intervention is needed.
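The copy-and-grow step might look like the following sketch (device names are from the question; this is destructive, so double-check the devices and take a backup first, and note the dd route only works if the target partition is at least as large as the source):

```
# dd if=/dev/sda3 of=/dev/sda1 bs=1M status=progress   # raw copy of the filesystem
# e2fsck -f /dev/sda1                                  # always check before resizing
# resize2fs /dev/sda1                                  # grow ext2/3/4 to fill the partition
```

Both source and target must be unmounted while this runs, so do it from a live/rescue system rather than the installed one.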
Resetting partition numbers
1,611,052,742,000
I have copied my /var, /opt and /usr directories to a new partition and now I need to configure the fstab file. This is the new partition's content:

drwxr-xr-x  6 root root 4096 Dec 20 12:16 opt
drwxr-xr-x 10 root root 4096 Dec  8 06:52 usr
drwxr-xr-x 11 root root 4096 Dec 21 08:35 var

This is how I want to change the fstab file:

# <file system> <dir> <type> <options> <dump> <pass>
tmpfs /tmp tmpfs nodev,nosuid 0 0
UUID=00e31411-0730-9903-c038-45c4014ce600 / ext2 defaults 0 1
UUID=4bbbd587-1439-427b-9584-5b36d904f4c3 /home ext4 defaults 0 1
UUID=5a694838-c110-4eb9-9703-c490792af400 swap swap defaults 0 0
UUID=7502c4a6-f13b-40e7-ab3c-aaaa630d6b4d /var
UUID=7502c4a6-f13b-40e7-ab3c-aaaa630d6b4d /opt
UUID=7502c4a6-f13b-40e7-ab3c-aaaa630d6b4d /usr

Will mount detect the subdirectories in each partition, or should I put each directory in its own partition? Since my /home partition contains home's contents directly, and not another home directory, I think the above configuration wouldn't work, since on the new partition I have three separate directories. What do you think is the best way to do this with one partition?
No, mount does not "detect" any directories under a filesystem; that is not its purpose. If you put /var, /opt and /usr all on one partition which is not the root partition of your system, you'll need to do two things:

Mount the partition under some separate, special directory - let's say /mnt/sysdirs.

Bind-mount the directories at their proper places in the root filesystem.

So the fstab in your case should look something like this:

tmpfs /tmp tmpfs nodev,nosuid 0 0
UUID=00e31411-0730-9903-c038-45c4014ce600 / ext2 defaults 1 1
UUID=4bbbd587-1439-427b-9584-5b36d904f4c3 /home ext4 defaults 0 2
UUID=5a694838-c110-4eb9-9703-c490792af400 swap swap defaults 0 0
UUID=7502c4a6-f13b-40e7-ab3c-aaaa630d6b4d /mnt/sysdirs ext4 defaults 0 0
/mnt/sysdirs/opt /opt none bind,rw 0 0
/mnt/sysdirs/usr /usr none bind,rw 0 0
/mnt/sysdirs/var /var none bind,rw 0 0
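Once such an fstab is in place and mount -a has run, findmnt (from util-linux) can confirm that each directory really resolves to the expected source; a small sketch:

```shell
# Show where a given path is mounted from; for a bind mount the SOURCE
# column reports the underlying device plus the bound subdirectory in brackets.
findmnt --target /var -o TARGET,SOURCE,FSTYPE
```

If /var has not been bind-mounted yet, findmnt falls back to the nearest enclosing mount (the root filesystem), which makes a forgotten entry easy to spot.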
Configuring the fstab file for root directories on a different partition
1,611,052,742,000
On an embedded device based on Yocto Linux my rootfs is RO, while I have an additional partition for RW data. Now I want to automount at boot an overlay onto /etc, stored on a different partition. Here is my fstab:

/dev/mmcblk0p6 /data_local ext4 defaults,sync,noexec,rw 0 2
[...]
overlay /etc overlay defaults,lowerdir=/etc,upperdir=/data_local/overlayfs/upper/etc,workdir=/data_local/overlayfs/workdir,X-mount.mkdir,x-systemd.requires=/data_local,x-systemd.before=local-fs.target,x-systemd.before=systemd-networkd 0 0

However, this fails because the upperdir and workdir directories are missing on first boot. How can I let fstab or systemd.mount automatically create these directories?
I ended up using the overlayfs-etc.bbclass feature from Yocto instead, which is available since Yocto 4.0. Documentation: https://docs.yoctoproject.org/ref-manual/classes.html#ref-classes-overlayfs-etc

The bbclass patches the init process in /sbin/init to create the folders at runtime before mounting the overlay. See: https://git.yoctoproject.org/poky/plain/meta/files/overlayfs-etc-preinit.sh.in

Adding it to your image is really simple. Add to your machine.conf:

OVERLAYFS_ETC_MOUNT_POINT = "/data_local"
OVERLAYFS_ETC_DEVICE = "/dev/mmcblk0p6"
OVERLAYFS_ETC_FSTYPE = "ext4"
OVERLAYFS_ETC_MOUNT_OPTIONS = "defaults,sync"

Add to your image:

IMAGE_FEATURES:append = " overlayfs-etc"

Of course you must make sure your boot medium has an extra read-write mounted partition already available (in the image flashed to the SD card) - in my case mmcblk0p6.
fstab and systemd automount overlay
1,611,052,742,000
According to the manual I thought that systemd-remount-fs.service is responsible for parsing and applying /etc/fstab entries. So I tried to test it: I removed the ExecStart part (ExecStart=/lib/systemd/systemd-remount-fs) and rebooted the system. After booting and logging in, I still had the fstab entries in mount. Now I am wondering whether it's the kernel's job itself? And how can I run a job before the fstab entries get mounted (in case it is the kernel's job)?
The kernel mounts the root filesystem at the very end of its own boot sequence. It is usually mounted read-only, irrespective of whatever mount options are set in /etc/fstab. Control is then given to the init system.

As specified in the manual you linked to, systemd-remount-fs.service "ignores normal file systems and only changes the root file system (i.e. /), /usr/, and the virtual kernel API file systems such as /proc/, /sys/ or /dev/". You can also read that this service "is usually pulled in by systemd-fstab-generator".

systemd-fstab-generator is in fact what instantiates the initial mounting of filesystems according to fstab entries: it creates mount and swap units as necessary. It is therefore normal that if you inhibit the automatic execution of systemd-remount-fs.service and reboot, you still see your filesystems mounted according to the /etc/fstab entries.
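As for running a job before the fstab entries get mounted: systemd orders local mount units after local-fs-pre.target, so a unit ordered before that target runs first. A hypothetical sketch (the unit name and script path are made up for illustration):

```
[Unit]
Description=Prepare something before fstab entries are mounted
DefaultDependencies=no
Before=local-fs-pre.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/prepare-mounts.sh

[Install]
WantedBy=local-fs-pre.target
```

DefaultDependencies=no is needed because the implicit default dependencies would otherwise order the service after the local mounts it is supposed to precede.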
What parses and applies /etc/fstab entries?
1,611,052,742,000
I used to use the fstab file for mounting drives. This time I wanted to use units instead and created a .mount file. However, I wonder how I would set a file system check option and umask settings there. For example, in an fstab file you would do that by adding (just as an example):

umask=000 0 1

I'm not sure if I can just use the same options in a .mount file?
To start with, umask=000 0 1 is not a mount option; it's three separate fields, only the first of which contains mount options.

The umask=000 part is the actual option list; it can usually be used directly in systemd's Options= parameter. All options that would be passed to the filesystem work the same way as they do in fstab. The only exceptions are pseudo-options such as user or X-mount.mkdir that are meaningful to the 'mount' program rather than to the filesystem itself.

[Mount]
Options=rw,fmask=0133,dmask=022

The 0 that follows it is the "dump" field, an indicator for the ancient dump(8) backup tool. It is not used when mounting (rather, 'dump' reads fstab on its own), so there is no systemd equivalent.

The final 1 is the "fsck pass" field, used to activate fsck for this filesystem. The systemd equivalent is an explicit dependency on the fsck service instance:

[Unit]
Requires=systemd-fsck@dev-disk-by\x2dpartlabel-EFI.service
After=systemd-fsck@dev-disk-by\x2dpartlabel-EFI.service

Use systemd-escape --template=systemd-fsck@.service --path /dev/foo to conveniently generate the correct unit name for your device.

If in doubt, add an fstab entry, reload systemd, then use systemctl cat to look at the .mount unit that systemd generated for you. (And then continue just using that fstab entry.)

# echo "/dev/sdz1 /mnt/movies ext4 umask=077 0 0" >> /etc/fstab
# systemctl daemon-reload
# systemctl cat mnt-movies.mount
Mounting options with Systemd Mount Units
1,611,052,742,000
I have two read-only root partitions (say roota and rootb) on which the operating system is installed. This is for a basic A/B partition update scheme: after updating my system, these partitions are selected for boot in a round-robin fashion. I have two other partitions (say data1 and data2) and I would like to mount one of them based on the partition I boot from. So the scenario is like this: I boot from roota, and data1 is automatically mounted. I update the system by writing the updated image to rootb. I boot from rootb and data2 is automatically mounted. Again I update the system by writing the updated image to roota; I boot from roota and data1 is mounted... etc. The roota and rootb partitions are read-only (squashfs); data1 and data2 are rw partitions. How can I achieve this behavior in an elegant way on Debian 11 bullseye?
No idea what your exact configuration is, but the script will essentially be something like this:

#! /bin/bash
default=/dev/data1
root=$(mount | awk '$3 == "/" {print $1}')   # device mounted as /; verify this works for you
test "$root" = "/dev/rootb" && default=/dev/data2
mount "$default" /mnt/somewhere
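The selection logic itself can be isolated in a small function; a sketch using GPT partition labels under /dev/disk/by-partlabel, which are an assumption - you would need to label the partitions roota/rootb/data1/data2 accordingly:

```shell
# Sketch: pick the data partition matching the booted root slot.
# The by-partlabel names are assumptions -- label your partitions to match.
pick_data_partition() {
    case "$1" in
        *rootb*) echo /dev/disk/by-partlabel/data2 ;;
        *)       echo /dev/disk/by-partlabel/data1 ;;
    esac
}

# Example:
pick_data_partition /dev/disk/by-partlabel/rootb   # prints .../data2
```

Using stable by-partlabel paths instead of raw /dev/sdXN names keeps the pairing correct even if device enumeration order changes between boots.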
Dynamically select which partition to mount based on root partition
1,611,052,742,000
I recently mounted the home partition (my home partition is separate from root) of my Endeavour OS from my Garuda OS; that caused me to boot into emergency mode in Endeavour (which should have the mount point for my home partition). How should I proceed with this? Is the only way to create a user and add it to the home dir, completely wiping the partition?

I get this from running the command lsblk:

> lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 238.5G  0 disk
├─sda1   8:1    0    10M  0 part
├─sda2   8:2    0   587M  0 part /boot/efi
├─sda3   8:3    0  62.5G  0 part /
├─sda4   8:4    0 167.6G  0 part
└─sda5   8:5    0   7.8G  0 part [SWAP]

sda4 is the separate home partition. And from the command blkid:

>blkid
/dev/sda4: UUID="0c89ef83-da81-442c-892f-71b3052b571a" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="48b27dc3-d86c-3d4c-9b1e-d78021cd98a0"
/dev/sda2: LABEL_FATBOOT="BOOT" LABEL="BOOT" UUID="755B-69B5" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="f1e54924-905f-cf4c-bec1-10ab3c250c0f"
/dev/sda5: LABEL="swap" UUID="52300be2-937a-418d-bad9-5242dc99145e" TYPE="swap" PARTUUID="4c30f20f-b803-244e-ac6d-278b140c5aad"
/dev/sda3: LABEL="root" UUID="d3013930-9d00-4308-8151-554debf4459e" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="1ea51fa9-c68a-4149-b550-ae6a5ea06087"
/dev/sda1: LABEL="grub" UUID="af50eeae-7c6e-4b27-be02-4ee679a30c31" BLOCK_SIZE="1024" TYPE="ext4" PARTUUID="d369f82d-e25c-9448-8149-519b41cb3db8"

This is my /etc/fstab file:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=755B-69B5                            /boot/efi vfat  defaults,noatime            0 2
UUID=d3013930-9d00-4308-8151-554debf4459e /         ext4  defaults,noatime            0 1
UUID=0c89ef83-da81-442c-892f-71b3052b571a /home     ext4  defaults,noatime            0 2
UUID=52300be2-937a-418d-bad9-5242dc99145e swap      swap  defaults                    0 0
tmpfs                                     /tmp      tmpfs defaults,noatime,mode=1777  0 0

Edit: I might have figured out the problem. After I run mount -a, it returns nothing. Then I try lsblk again, and the mountpoint of the /home partition now appears on sda4:

> lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda      8:0    0 238.5G  0 disk
├─sda1   8:1    0    10M  0 part
├─sda2   8:2    0   587M  0 part /boot/efi
├─sda3   8:3    0  62.5G  0 part /
├─sda4   8:4    0 167.6G  0 part /home
└─sda5   8:5    0   7.8G  0 part [SWAP]

But after I do startx (since I'm using i3wm), it doesn't get me to my display. I ran fsck -l and got this problem:

[root@ifhonce /]# fsck -l
fsck from util-linux 2.38.1
fsck.ext4: Unable to resolve 'UUID=d3013930-9d00-4308-8151-554debf4459e'
It is SOLVED now. Instead of running fsck -y /dev/sda3 against my / directory, I should have checked my home partition using fsck -f -y /dev/sda4. Thanks to this I now have some knowledge of how mount points and fstab work on my machine. I had been trying hard to find the right tags and search terms on Google before I realised the boot problem I was getting was "Dependency failed for /home/$USER". Source: Manjaro Forum.
Home partition emergency mode
1,611,052,742,000
Disclaimer: I have no experience installing Ubuntu Server in UEFI boot mode.

Context: I'm setting up new servers with RAID 10 via mdadm. AFAIK, the ESP can't be RAIDed, so during Ubuntu installation I set all four disks to be used as Boot Device.

After installation finished, lsblk output looks like this (sorry, long output):

citilan@zitz:~$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINTS
loop0         7:0    0    48M  1 loop   /snap/snapd/17029
loop1         7:1    0    47M  1 loop   /snap/snapd/16292
loop2         7:2    0    62M  1 loop   /snap/core20/1587
loop3         7:3    0  79.9M  1 loop   /snap/lxd/22923
loop4         7:4    0  63.2M  1 loop   /snap/core20/1623
loop5         7:5    0   103M  1 loop   /snap/lxd/23541
sda           8:0    0   3.6T  0 disk
├─sda1        8:1    0     1G  0 part   /boot/efi
├─sda2        8:2    0    16G  0 part
│ └─md2       9:2    0    32G  0 raid10
│   └─md2p1 259:1    0    32G  0 part   [SWAP]
├─sda3        8:3    0   512M  0 part
│ └─md0       9:0    0  1020M  0 raid10
│   └─md0p1 259:0    0  1018M  0 part   /boot
└─sda4        8:4    0   3.6T  0 part
  └─md1       9:1    0   7.2T  0 raid10
    └─md1p1 259:2    0   7.2T  0 part   /
sdb           8:16   0   3.6T  0 disk
├─sdb1        8:17   0     1G  0 part   /mnt
├─sdb2        8:18   0    16G  0 part
│ └─md2       9:2    0    32G  0 raid10
│   └─md2p1 259:1    0    32G  0 part   [SWAP]
├─sdb3        8:19   0   512M  0 part
│ └─md0       9:0    0  1020M  0 raid10
│   └─md0p1 259:0    0  1018M  0 part   /boot
└─sdb4        8:20   0   3.6T  0 part
  └─md1       9:1    0   7.2T  0 raid10
    └─md1p1 259:2    0   7.2T  0 part   /
sdc           8:32   0   3.6T  0 disk
├─sdc1        8:33   0     1G  0 part
├─sdc2        8:34   0    16G  0 part
│ └─md2       9:2    0    32G  0 raid10
│   └─md2p1 259:1    0    32G  0 part   [SWAP]
├─sdc3        8:35   0   512M  0 part
│ └─md0       9:0    0  1020M  0 raid10
│   └─md0p1 259:0    0  1018M  0 part   /boot
└─sdc4        8:36   0   3.6T  0 part
  └─md1       9:1    0   7.2T  0 raid10
    └─md1p1 259:2    0   7.2T  0 part   /
sdd           8:48   0   3.6T  0 disk
├─sdd1        8:49   0     1G  0 part
├─sdd2        8:50   0    16G  0 part
│ └─md2       9:2    0    32G  0 raid10
│   └─md2p1 259:1    0    32G  0 part   [SWAP]
├─sdd3        8:51   0   512M  0 part
│ └─md0       9:0    0  1020M  0 raid10
│   └─md0p1 259:0    0  1018M  0 part   /boot
└─sdd4        8:52   0   3.6T  0 part
  └─md1       9:1    0   7.2T  0 raid10
    └─md1p1 259:2    0   7.2T  0 part   /

/boot/efi is mounted because of this entry in /etc/fstab, added by the Ubuntu installation:
root@zitz:~# grep efi /etc/fstab # /boot/efi was on /dev/sda1 during curtin installation /dev/disk/by-uuid/3FD8-AF4F /boot/efi vfat defaults 0 1 Also, this is my efibootmgr -v output: # efibootmgr -v BootCurrent: 000B Timeout: 1 seconds BootOrder: 000B,000D,000E,000F,0003,0004,0005,0006,0002,0001 Boot0001 Hard Drive BBS(HD,,0x0)/VenHw(5ce8128b-2cec-40f0-8372-80640e3dc858,0200)..GO..NO..........S.T.4.0.0.0.N.M.0.0.0.A.-.2.H.Z.1.0.0...................\.,[email protected].=.X..........A...........................>..Gd-.;.A..MQ..L. . . . . . . . . . . . .S.W.3.2.M.L.A.V........BO..NO..........S.T.4.0.0.0.N.M.0.0.0.A.-.2.H.Z.1.0.0...................\.,[email protected].=.X..........A...........................>..Gd-.;.A..MQ..L. . . . . . . . . . . . .S.W.3.2.M.L.H.5........BO..NO..........S.T.4.0.0.0.N.M.0.0.0.A.-.2.H.Z.1.0.0...................\.,[email protected].=.X..........A...........................>..Gd-.;.A..MQ..L. . . . . . . . . . . . .S.W.3.2.M.L.C.7........BO..NO..........S.T.4.0.0.0.N.M.0.0.0.A.-.2.H.Z.1.0.0...................\.,[email protected].=.X..........A...........................>..Gd-.;.A..MQ..L. . . . . . . . . . . . 
.S.W.3.2.M.L.3.4........BO Boot0002* UEFI: Built-in EFI Shell VenMedia(5023b95c-db26-429b-a648-bd47664c8012)..BO Boot0003* (B2/D0/F0) UEFI PXE IPv4 Intel(R) Ethernet Controller X550(MAC:3cecefc7f71e) PciRoot(0x0)/Pci(0x1b,0x4)/Pci(0x0,0x0)/MAC(3cecefc7f71e,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO Boot0004* (B2/D0/F1) UEFI PXE IPv4 Intel(R) Ethernet Controller X550(MAC:3cecefc7f71f) PciRoot(0x0)/Pci(0x1b,0x4)/Pci(0x0,0x1)/MAC(3cecefc7f71f,1)/IPv4(0.0.0.00.0.0.0,0,0)..BO Boot0005* (B2/D0/F0) UEFI PXE IPv6 Intel(R) Ethernet Controller X550(MAC:3cecefc7f71e) PciRoot(0x0)/Pci(0x1b,0x4)/Pci(0x0,0x0)/MAC(3cecefc7f71e,1)/IPv6([::]:<->[::]:,0,0)..BO Boot0006* (B2/D0/F1) UEFI PXE IPv6 Intel(R) Ethernet Controller X550(MAC:3cecefc7f71f) PciRoot(0x0)/Pci(0x1b,0x4)/Pci(0x0,0x1)/MAC(3cecefc7f71f,1)/IPv6([::]:<->[::]:,0,0)..BO Boot000B* ubuntu HD(1,GPT,175deae0-cf0e-4637-8fd8-c358043eebae,0x800,0x219800)/File(\EFI\UBUNTU\SHIMX64.EFI) Boot000D* ubuntu HD(1,GPT,ad3a98c7-8a50-4fe3-abae-93aec5b080a0,0x800,0x219800)/File(\EFI\UBUNTU\SHIMX64.EFI)..BO Boot000E* ubuntu HD(1,GPT,b1c20b0c-c83e-4a8e-a1b8-210d1e1c5662,0x800,0x219800)/File(\EFI\UBUNTU\SHIMX64.EFI)..BO Boot000F* ubuntu HD(1,GPT,0bb71865-f415-4d7c-bc5a-6f30dbe9872a,0x800,0x219800)/File(\EFI\ubuntu\shimx64.efi)..BO Questions: Does Ubuntu takes care of ESP sync? Given the case that I lost currently mounted /boot/efi, are backup ESP up to date? Otherwise, do I have to manually mount and sync them all? If I remove /dev/sda (3FD8-AF4F in /etc/fstab) and boot the server, it starts logged in as root user. No login prompt. Just boot the server and you are root. /boot/efi is not mounted (see question 3) How do I automatically mount one of the backup ESP on /boot/efi? What is the best practice here? Thanks
Yes, if grub-efi-amd64 is configured to use those as the ESPs, then Ubuntu will automatically sync them all. They don't need to be mounted to be synced; I myself don't have /boot/efi mounted in fstab at all. That config is handled by debconf.

The command sudo dpkg-reconfigure grub-efi-amd64 should take care of this, and debconf-show grub-efi-amd64 will show which devices grub is configured to use. If this does not work, the debconf file can be edited manually with sudo (your_favorite_editor) /var/cache/debconf/config.dat, then search for "grub-efi". I have 5 ESP partitions, so my entry there looks like this:

Name: grub-efi/install_devices
Template: grub-efi/install_devices
Value: /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1Z2M3PT-part1, /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1Z2MACF-part1, /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1Z2MXKQ-part1, /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1Z1NEB7-part1, /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1Z1NWY6-part1
Owners: grub-common, grub-efi-amd64
Flags: seen
Variables:
 CHOICES =
 RAW_CHOICES =

Name: grub-efi/install_devices_disks_changed
Template: grub-efi/install_devices_disks_changed
Value: /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1Z2M3PT-part1, /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1Z2MACF-part1, /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1Z2MXKQ-part1, /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1Z1NEB7-part1, /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1Z1NWY6-part1
Owners: grub-common, grub-efi-amd64
Flags: seen
Variables:
 CHOICES =
 RAW_CHOICES =

I'm not sure if both entries are used, but I'd change both to match all your ESP partitions. Save, then run sudo dpkg-reconfigure grub-efi-amd64, and GRUB will install to all the correct places and keep doing so going forward.

This cannot happen. Perhaps you are entering some kind of recovery prompt that you mistake for a root login? The system cannot log you in automatically as root just because an fstab entry is missing.
Not needed at all, but if you like you can just keep the 3FD8-AF4F entry there; it isn't used for anything anyway.
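If you ever do need to sync the backup ESPs by hand (question 3), the sync itself is just a recursive copy between two mounted ESPs. A minimal sketch, using temporary directories to stand in for two mounted ESPs (real device paths and the mounting step are up to you):

```shell
# Temp dirs stand in for two mounted ESPs; on a real system you would
# mount both partitions first and copy between the mount points.
primary=$(mktemp -d)
backup=$(mktemp -d)
mkdir -p "$primary/EFI/ubuntu"
echo stub > "$primary/EFI/ubuntu/grubx64.efi"   # pretend bootloader file
cp -a "$primary/." "$backup/"                   # the actual "sync"
synced=$(ls "$backup/EFI/ubuntu")
echo "$synced"
rm -rf "$primary" "$backup"
```

With debconf configured as above this copy is done for you on every GRUB update, so the manual version is only a fallback.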
Does Ubuntu installation keep multiple ESP synced? How to setup /etc/fstab to fallback mount /boot/efi?
1,611,052,742,000
I have a laptop. It has two hard drives. One is an SSD with a normal Windows 10 install on it. The other is an mSATA drive with a normal install of FreeBSD 13 on it. To install FreeBSD I removed the SSD, booted from the FreeBSD installer on a USB stick, installed FreeBSD to the mSATA drive using the auto options, then shut down my machine and put the SSD back in.

When I look at gpart show it says this:

=>        63  468862065  ada0  MBR  (224G)
          63       1985        - free -  (993K)
        2048    1124352     1  ntfs  [active]  (549M)
     1126400  466549872     2  ntfs  (222G)
   467676272        912        - free -  (456K)
   467677184    1179648     3  !39  (576M)
   468856832       5296        - free -  (2.6M)

=>        40  250069600  ada1  GPT  (119G)
          40       1024     1  freebsd-boot  (512K)
        1064        984        - free -  (492K)
        2048    4194304     2  freebsd-swap  (2.0G)
     4196352  245872640     3  freebsd-zfs  (117G)
   250068992        648        - free -  (324K)

I believe this is telling me that ada0 is my Windows 10 disk, and that ada1 is my FreeBSD disk. When I look in /etc/fstab I see this line (there are no other entries):

/dev/ada0p2  none  swap  sw  0  0

Did my method of installing FreeBSD cause an error? Is this something I need to fix? How should I fix it, and what should my /etc/fstab actually say? I'm guessing it should say /dev/ada1p2.
I'm guessing it should say /dev/ada1p2.

You surmise correctly. So long as that particular disk is plugged into that particular controller slot (all other things being equal), your system will probably see it as ada1. So yes, your swap partition on ada1 is correctly referenced as ada1p2. But if you ever change your disk configuration, the device number may change, and then your /etc/fstab may break.

Since you have a GPT partition on ada1, a better practice is to apply a GPT label to the swap partition:

# gpart modify -l bsd-swap -i 2 /dev/ada1

and then mount it in /etc/fstab using the partition name (which is fixed) instead of the device/partition number (which is variable):

/dev/gpt/bsd-swap  none  swap  sw  0  0
What should be in my /etc/fstab if I have two discs?
1,611,052,742,000
I cannot mount the swap subvolume.

-> sudo mount -av

/     : ignored
/home : already mounted
mount: /swap: mount(2) system call failed: No such file or directory.

-> fstab

# <file system>          <mount point> <type> <options>                                          <dump> <pass>
/dev/mapper/cryptsystem  /             btrfs  ssd,noatime,space_cache,compress=zstd,subvol=@     0 0
/dev/mapper/cryptsystem  /home         btrfs  ssd,noatime,space_cache,compress=zstd,subvol=@home 0 0
/dev/mapper/cryptsystem  /swap         btrfs  ssd,noatime,compress=no,subvol=@swap               0 0

-> btrfs subvolumes

ID 257 gen 427049 top level 5 path @home
ID 272 gen 427049 top level 5 path @
ID 3194 gen 425853 top level 272 path @swap
Solution: Mount the btrfs volume at /mnt (e.g. from a live ISO, using subvolid=5 so you get the volume's real top level) and then create the @swap subvolume as /mnt/@swap.

Details of the initial problem: It turns out that the btrfs subvolume @swap was not a top-level subvolume, as is needed for this mount operation. This is indicated by the "top level 272" in the subvolume list (the top-level subvolumes show "top level 5"). It is the result of creating the @swap subvolume under / while / was referring to the @ subvolume; for this reason, @swap was really created as @/@swap (don't know if that's a neologism).
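The nesting can be spotted directly in the subvolume list: any subvolume whose "top level" is not 5 lives inside another subvolume. A small awk filter over the output from the question (field positions assumed from the default btrfs subvolume list format):

```shell
# Flag subvolumes that do not sit directly below the volume's real top
# level (ID 5); these cannot be mounted as subvol=@name from the top level.
nested=$(awk '$7 != 5 { print $9 " (nested under subvolume " $7 ")" }' <<'EOF'
ID 257 gen 427049 top level 5 path @home
ID 272 gen 427049 top level 5 path @
ID 3194 gen 425853 top level 272 path @swap
EOF
)
echo "$nested"
```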
mount(2) system call failed: No such file or directory
1,620,061,643,000
What's the best way to disable swap entirely on a fleet of GNU/Linux hosts, using systemd and Ansible? For whatever reason some of my virtual machines have a swap file configured in their /etc/fstab, which gets automatically picked up at boot by systemd-fstab-generator like this:

$ cat /run/systemd/generator/swapfile.swap
# Automatically generated by systemd-fstab-generator

[Unit]
SourcePath=/etc/fstab
Documentation=man:fstab(5) man:systemd-fstab-generator(8)

[Swap]
What=/swapfile

Some services running on those machines get terminally slow when using the swap file, for various reasons, so I need to prevent them from using it. My version of systemd doesn't yet include MemorySwapMax. I'd like to avoid messing with /etc/fstab, and I don't mind having those swap files left in place.

For context, I'm using ansible-3.0.0, ansible-base-2.10.6; the machines are CentOS 7, with systemd 219.
You can let systemd do its thing by default and then just revert it with another systemd unit immediately afterwards.

If you check with systemctl show swapfile.swap, you'll see that all the unit does is run swapon on that file. When you issue swapoff manually, the swap will reappear at the next boot. Running swapoff immediately after swapon will only take a fraction of a second because there's nothing to move back from disk to memory. However you must make sure you run swapoff after swapon, and you can tell systemd to do so with After=local-fs.target.

Place your unit file as a j2 template named noswap.service.j2 in the templates/ directory for your playbook or role:

{{ ansible_managed|comment }}

[Unit]
Description=Disable swapfile
Documentation=man:swapon(8) man:systemd.swap(5)
After=local-fs.target

[Service]
Type=oneshot
User=root
ExecStart=/usr/sbin/swapoff -a

[Install]
WantedBy=default.target

Have something like this in your playbook or role:

---
- name: Your playbook
  tasks:
    - name: Write noswap systemd service config file
      template:
        src: noswap.service.j2
        dest: /etc/systemd/system/noswap.service
        owner: root
        group: root
        mode: 0644
      notify: Enable noswap service

  handlers:
    - name: Enable noswap service
      systemd:
        name: noswap
        state: started
        enabled: true
        daemon_reload: true

After the first time, the service will be started during boot, so the state: started should prevent it from being issued again every time you run ansible.
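Before rolling that out, it can be handy to audit which hosts actually have swap entries in fstab. A sketch of the check, shown here on an inline sample (on a real host, feed it /etc/fstab instead of the here-document):

```shell
# Print the source of every active (non-comment) swap entry in an fstab.
swaps=$(awk '$1 !~ /^#/ && $3 == "swap" { print $1 }' <<'EOF'
# sample /etc/fstab
UUID=aaaa-bbbb /    ext4 defaults 0 1
/swapfile      none swap sw       0 0
EOF
)
echo "$swaps"
```

Each printed path is something systemd-fstab-generator will turn into a .swap unit at boot.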
How to disable a swap file configured by systemd, via Ansible?
1,620,061,643,000
I want to reorganize my file system. I have swap allocated that I don't use. My / partition is overflowing all the time, and because of that I've kept moving big directories to a separate partition, /mnt/nvme0n1p4. It occurred to me that it might be smarter to move all those directories back to /home and mount /home from what is now /mnt/nvme0n1p4. I would also like to extend / with the space now used by /dev/nvme0n1p2. I don't do this kind of stuff every day, so I thought I should ask for some feedback on my plan.

My plan is to do the following (I added some comments in bold after I actually executed my plan):

1. copy the content of /home to /mnt/nvme0n1p4
2. copy all directories on /mnt/nvme0n1p4 that are now symlinked to from /home to their correct location /mnt/nvme0n1p4/me
3. sudo rm -rf /home/* <-- Edited after @raj's advice
4. sudo mount /dev/nvme0n1p4 /home
5. change the following line in /etc/fstab:
   UUID=aaf7e7e2-d36b-4877-b862-612d403a15da /mnt/nvme0n1p4 ext4 defaults,noatime 0 2
   to
   UUID=aaf7e7e2-d36b-4877-b862-612d403a15da /home ext4 defaults,noatime 0 2
6. back up the content of / to somewhere on /mnt/data, just in case
7. use gparted to remove [SWAP] and add its space in front of / <-- Worked fine for me
8. remove [SWAP] from /etc/fstab <-- I forgot this step initially, causing an error during booting, so I had to do this from a bootable USB.
9. finished?
some system info

me@mypc $ lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda             8:0    0 931,5G  0 disk
└─sda1          8:1    0 931,5G  0 part /mnt/data
sdb             8:16   0 111,8G  0 disk
└─sdb1          8:17   0 111,8G  0 part /opt
nvme0n1       259:0    0 931,5G  0 disk
├─nvme0n1p1   259:1    0   300M  0 part /boot/efi
├─nvme0n1p2   259:2    0    16G  0 part [SWAP]
├─nvme0n1p3   259:3    0    32G  0 part /
└─nvme0n1p4   259:4    0 883,2G  0 part /mnt/nvme0n1p4

me@mypc $ df
Filesystem      Size  Used Avail Use% Mounted on
dev              16G     0   16G   0% /dev
run              16G  1,7M   16G   1% /run
/dev/nvme0n1p3   32G   29G  1,3G  96% /
tmpfs            16G  324M   16G   3% /dev/shm
tmpfs           4,0M     0  4,0M   0% /sys/fs/cgroup
tmpfs            16G   50M   16G   1% /tmp
/dev/sdb1       110G   26G   79G  25% /opt
/dev/nvme0n1p4  869G  412G  413G  50% /mnt/nvme0n1p4
/dev/nvme0n1p1  300M  312K  300M   1% /boot/efi
/dev/sda1       916G  113G  757G  13% /mnt/data
tmpfs           3,2G   56K  3,2G   1% /run/user/1000

me@mypc $ ls /mnt/nvme0n1p4
docker  Documents  Downloads  home  lost+found  R  Repos  'VirtualBox VMs'  VMs

me@mypc $ ls -l ~/.
total 32
drwxr-xr-x 3 me me  4096  5 dec 10:38 bin
drwxr-xr-x 9 me me  4096 20 dec 21:48 CytoscapeConfiguration
lrwxrwxrwx 1 me me    10  3 nov 16:03 Data -> /mnt/data/
lrwxrwxrwx 1 me me    25  4 nov 09:55 Documents -> /mnt/nvme0n1p4/Documents/
lrwxrwxrwx 1 me me    24  8 nov 00:36 Downloads -> /mnt/nvme0n1p4/Downloads
drwxr-xr-x 3 me me  4096 10 dec 23:16 igv
drwxr-xr-x 3 me me 12288 16 feb 15:57 Pictures
lrwxrwxrwx 1 me me    16 13 nov 09:41 R -> /mnt/nvme0n1p4/R
lrwxrwxrwx 1 me me    20  9 nov 14:06 Repos -> /mnt/nvme0n1p4/Repos
drwxr-xr-x 3 me me  4096  4 nov 08:14 snap
drwxr-xr-x 4 me me  4096 14 feb 20:22 tmp
lrwxrwxrwx 1 me me     9  3 nov 16:58 Unsorted -> /mnt/tmp/

expected result

me@mypc $ lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda             8:0    0 931,5G  0 disk
└─sda1          8:1    0 931,5G  0 part /mnt/data
sdb             8:16   0 111,8G  0 disk
└─sdb1          8:17   0 111,8G  0 part /opt
nvme0n1       259:0    0 931,5G  0 disk
├─nvme0n1p1   259:1    0   300M  0 part /boot/efi
├─nvme0n1p3   259:3    0    48G  0 part /
└─nvme0n1p4   259:4    0 883,2G  0 part /home
Basically looks good, however:

In step 3, instead of rm -rf /home, better do rm -rf /home/*. You should not remove the /home directory itself, only its contents, because you need an empty /home directory to exist as a mount point. If you happen to delete the /home directory, you need to re-create it with the same ownership and permissions as the previous /home directory had.

When performing steps 3 and 4, you should be out of the /home directory, i.e. your current directory should be, for example, / or /root. It would be best to perform the whole operation logged in directly as root, if that is possible on your system; that way you won't use the /home directory at all.

I'm also not sure about extending the root partition with the space that is before that partition. (I guess that your nvme0n1p2 is located before nvme0n1p3 on the disk.) While there is no issue with extending a partition and filesystem past its current end, I'm not sure the same applies to extending it before its start. I'm not sure whether gparted/e2fsprogs is able to move the inode table and all filesystem structures backwards, towards the new start of the partition. Maybe there's someone more experienced with such changes who can answer that.
reorganizing my filesystem
1,620,061,643,000
I tried to install Void Linux as a secondary Linux on my laptop. I created a new partition for it on LVM, installed the base system using the XBPS method, copied the kernel and initramfs to my /boot partition and created /etc/fstab. It booted almost correctly, with one exception: the rootfs is read-only by default, so I need to remount it every time after booting with mount -o remount,rw /.

I tried to add the rw option to fstab explicitly, but it didn't help:

# /etc/fstab
# <fs>            <mountpoint> <type> <opts>                                    <dump/pass>
/dev/SDD/void     /            ext4   rw,defaults,relatime                      0 1
/dev/mapper/home  /home        ext4   defaults,noatime                          0 0
/dev/mapper/var   /var         ext4   defaults,relatime                         0 1
UUID=7720-4261    /boot        vfat   noatime,noauto                            0 0
tmpfs             /tmp         tmpfs  rw,nosuid,noatime,nodev,size=4G,mode=1777 0 0

# /boot/grub/grub.conf entry
insmod gzio
insmod part_gpt
insmod fat
set root='hd2,gpt2'
echo 'Loading Void Linux 5.10.8_1 ...'
linux /vmlinuz-5.10.8_1 root=/dev/SDD/void dolvm
echo 'Loading initial ramdisk ...'
initrd /early_ucode.cpio /initramfs-5.10.8_1.img

I don't have this problem with the same configuration on other Linux systems. Is it related to specific behavior of Void Linux? How do I make my rootfs read-write in this case?
I've fixed it by adding the rw kernel command line flag in the grub config:

- linux /vmlinuz-5.10.8_1 root=/dev/SDD/void dolvm
+ linux /vmlinuz-5.10.8_1 root=/dev/SDD/void dolvm rw

The /etc/fstab mount options seem to be unrelated to this issue.
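To check whether the root filesystem ended up read-only or read-write, you can look at the first mount option for / in /proc/mounts. The parsing, demonstrated on a sample line (device name is illustrative; on a live system, feed it /proc/mounts instead of the here-document):

```shell
# Extract the first mount option (ro or rw) of the root filesystem.
rootopt=$(awk '$2 == "/" { split($4, o, ","); print o[1] }' <<'EOF'
/dev/mapper/SDD-void / ext4 ro,relatime 0 0
EOF
)
echo "$rootopt"
```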
Read-only rootfs on boot
1,620,061,643,000
Forgive me if I'm in the wrong Stack; this seemed like a more general Linux thing, so I posted here. No problem if I need to take it elsewhere. Also, I'm pretty new to Linux, so please be patient.

Hardware: Raspberry Pi 3
OS: Raspbian Buster, apt-get update and upgrade applied
Application: PLEX server, NAS and networked TimeMachine target

I have a 3 TB USB disk that I formatted gpt/EXT4. The issue that I'm having is that any file copied to it is instead taking up space on the internal SD card. I have created the directory /mnt/nas and set that as the mount point for the drive at boot using fstab:

UUID=F00F00F00 /mnt/nas ext4 defaults,auto,users,rx,nofail 0 0,x-systemd.device-timeout=15

I get no errors. However, when I go to copy files I get a "no storage left" error because the files are trying to fill my SD card. I have attached a screenshot showing that after transferring a large folder, /mnt/nas has the same Free Space/Total Space as Filesystem.

What am I doing wrong that files aren't making it onto the external disk? Thanks in advance.
As @xenoid suggested, it seems you've not actually mounted the USB drive that you've connected to your RPi. Perhaps the easiest way to confirm that is to check as follows:

$ lsblk --fs
NAME        FSTYPE LABEL       UUID                                 MOUNTPOINT
sda
└─sda1      exfat  SANDISK16GB 5B00-9E5C                            /home/pi/mntThumbDrv
sdb
└─sdb1      ext4   PASSPORT2TB 86645948-d127-4991-888c-a466b7722f05 /home/pi/mntPassport
sdc
└─sdc1      ext4   SANDISK8GB  e5cb39a9-b041-4339-92f5-4172201a4b1a /home/pi/mntBackupDrv
mmcblk0
├─mmcblk0p1 vfat   boot        5DB0-971B                            /boot
└─mmcblk0p2 ext4   rootfs      060b57a8-62bd-4d48-a471-0d28466d1fbb /

You can plug your USB disk into your RPi, and then run the command as shown above. You will get a similar output. Let's decipher this:

The lsblk command lists block devices. I prefer it because it's simple to use, and easy to read. man lsblk will give you all the details. As you can see, there are 5 columns in the output. Let's look in the NAME column at the one for sdb, as this is likely to be similar to your drive. First know that the name sdb designates a device name that was assigned by the system, and is indicative of the media type. Immediately below sdb is the name of a partition; sdb1 in this case. So partitions belong to devices. A device must have at least one partition to be usable, and it may have more than one. Subsequent partitions in this case would be called sdb2, sdb3, etc.

Your USB drive (the device) should have a NAME like sdb, sdc, etc. Since you've said you created a partition and formatted it with the ext4 filesystem, you should also see a numbered partition listed immediately below the device. In the row for that partition, the FSTYPE column should show ext4. The LABEL column may contain a string of characters that was assigned, perhaps by you when you formatted the drive. I'll assume you know how to change this label if you like. The UUID column will contain a UUID that may be used in your fstab entry.
And finally, the "payoff": the MOUNTPOINT column will tell you if your drive is mounted, and where the mount point is located in your RPi's filesystem. Based on your question, I believe the MOUNTPOINT column for your USB drive partition will be empty/vacant - indicating that it is not actually mounted. If this is the case, you are writing your files to /mnt/nas/ which is just another directory in your RPi's file system - until your USB is actually mounted there! So, to answer your question: What am I doing wrong that files aren't making it onto the external disk? You have failed to mount the USB drive. You may want to first try using the mount command to mount your drive manually; for example: sudo mount /dev/sdb1 /mnt/nas Once you've done that, try writing files as before & note the difference. Then, construct an entry in your /etc/fstab following the instructions in man fstab. You may also find this "how-to" on GitHub helpful. Otherwise, or if you're still having problems, edit your question to include the output of your lsblk --fs command, and we'll go from there.
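A related safeguard for scripts that write to /mnt/nas: verify the directory really is a mount point first, so files never land on the SD card by accident. One portable check compares device numbers (this is an illustration; `mountpoint -q /mnt/nas` from util-linux does the same job):

```shell
# A directory is a mount point when its device number differs from its
# parent's; stat -c %d prints the device number (GNU coreutils).
is_mounted() {
  [ "$(stat -c %d "$1" 2>/dev/null)" != "$(stat -c %d "$1/.." 2>/dev/null)" ]
}

if is_mounted /proc; then proc_state="mounted"; else proc_state="not mounted"; fi
d=$(mktemp -d)   # an ordinary directory, not a mount point
if is_mounted "$d"; then dir_state="mounted"; else dir_state="not mounted"; fi
echo "/proc: $proc_state, plain dir: $dir_state"
rm -rf "$d"
```

Guarding the copy with such a check (`is_mounted /mnt/nas || exit 1`) makes the failure loud instead of silently filling the SD card.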
Files Written to External USB Disk are taking up Internal Storage Space
1,620,061,643,000
I want to mount all text files without execute permission, to eliminate the (Run in Terminal - Display - Run) message which appears every time I open a text file in Linux Mint. I have the following line in my fstab:

/dev/sda7 /media/myname/Programs ntfs-3g defaults,uid=1000 0 0

I tried to add umask=111, but then all files had their permissions displayed as -????????? and I lost access to all files.
MS-Windows sets the execute bit on every file (one of the reasons for its poorer security). noexec is the mount option that disables executability.

Using the umask will stop directories from being traversable, because directories need execute permission. Therefore mount with the option noexec instead.
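The directory problem is easy to see without touching fstab at all: stripping the execute bits (which is what umask=111 does at mount time) leaves directories with mode drw-rw-rw-, and a directory without x cannot be entered or traversed. A quick illustration on a throwaway directory:

```shell
# Simulate the effect of umask=111 on a directory's permission bits.
d=$(mktemp -d)
mkdir "$d/inner"
chmod 666 "$d/inner"              # what umask=111 would leave behind
mode=$(stat -c '%A' "$d/inner")   # GNU stat: symbolic permissions
echo "$mode"
chmod 755 "$d/inner"              # restore x so cleanup works as non-root
rm -rf "$d"
```

With noexec instead, files merely lose the ability to be executed while directories stay traversable.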
Removing mount default execute permission of text files
1,620,061,643,000
I have created a loop device and added it to /etc/fstab. I got its UUID from the output of the blkid command (it does print a UUID for the particular device after running mkfs.ext4 /path/to/loop). However, despite the fact that after editing /etc/fstab the command mount -a was successful, the system halted after a reboot. Instead, the following entry in /etc/fstab seems to do the job:

/path/to/loop /mountpoint ext4 loop 0 0

Why does replacing /path/to/loop with the UUID break things?
Only block devices have UUIDs (that can be found). A file is not a block device, the loop device turns it into one. So for the UUID of an image file to be found, the loop device must exist first. However, your fstab entry is a loop mount, i.e. the loop device is only created when you mount it (and immediately removed on umount), so it does not exist before you mount it (and after you umount it), and so... the UUID is not found because the loop device does not exist. For loop mounts, it's completely fine to specify the file by path. Otherwise you'd need an init script that sets up loop devices before attempting to mount them (and then get rid of the loop mount option).
mounting loop not working with UUID
1,620,061,643,000
I use Arch Linux, kernel 4.18.12-arch1-1-ARCH (November 2018). I use a SATA caddy (for a Thinkpad T400) which holds a hard drive from an old laptop. I'd like to decide between combining the contents and extending the logical volumes rootvol and lvhome, or keeping the current setup (see below). I only use the ext4 filesystem and both volumes contain data. Although this question seems to be answered here, I'm not sure what to do to prevent data loss.

So currently I boot from a LUKS-encrypted SSD, and I have a few symbolic links in $HOME pointing to directories on the lazily mounted hard drive to extend storage, which allows me to use my old $HOME on the hard drive.

NAME                 MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT UUID
sda                    8:0    0 223.6G  0 disk
└─sda1                 8:1    0 223.6G  0 part             3d17c5b4-a603-4600-9f36-c598a7da783e
  └─root             254:0    0 223.6G  0 crypt            PRGLfW-Q18M-pPu8-nr6a-tloV-SS4W-kK1ROX
    ├─matrix-swapvol 254:1    0     2G  0 lvm   [SWAP]     38e862ef-e919-4388-810f-63ce187b342c
    └─matrix-rootvol 254:2    0 221.6G  0 lvm   /          c71a8292-c678-4a53-90da-3e4bf78cedbb
sdb                    8:16   0 232.9G  0 disk
├─sdb1                 8:17   0   512M  0 part             14c635fb-6ee7-45c0-aefd-d3d7440116c0
└─sdb2                 8:18   0 232.4G  0 part             c36535d9-4098-4939-9ebe-6a2be950f3ea
  └─caddy            254:3    0 232.4G  0 crypt            kTkSk4-oemR-1fJi-4brz-OXmW-DEZk-rqF2pN
    ├─vgarch-lvswap  254:4    0     4G  0 lvm              a1932471-209e-4d47-85dc-c4ea1ce37de8
    ├─vgarch-lvroot  254:5    0    15G  0 lvm              67d37f85-c2c0-40e7-88e9-afd4a6c1c561
    └─vgarch-lvhome  254:6    0 211.2G  0 lvm              dd89d271-776a-426a-826d-9f4d7056fc6a

As can be seen, for whatever reason I decided on using LVM on LUKS twice. Note that the SSD has no /boot partition: it is decrypted with the help of a libreboot ROM image. During boot, an entry in crypttab for /dev/sdb2's UUID unlocks the hard drive using a key file in /.
Then, I use systemd's automount service to mount or unmount it whenever needed:

# /etc/fstab
# /dev/mapper/vgarch-lvhome
UUID=dd89d271-776a-426a-826d-9f4d7056fc6a /mnt/caddy ext4 rw,noatime,data=ordered,noauto,nofail,x-systemd.automount,x-systemd.device-timeout=20,x-systemd.idle-timeout=2min 0 0

I recursively changed the ownership of files in lvhome. As I have no need for lvroot and lvswap, I'll be removing them along with /dev/sdb1, which contains /boot.

So how can these be combined? Is that advisable (because of the different uses for SSD and HDD)? It is suggested to copy the contents over to the other filesystem first, but doesn't this defeat the purpose of LVM? I thought it would've been easy to grow or shrink the filesystem, but I guess I imagined features from the ZFS world.
LVM provides logical volumes, which are logical block devices, and makes it easy to grow, shrink, relocate, snapshot, etc. those block devices. You can then use these block devices any way you like... it could be a filesystem, or something else like a virtual HDD for a VM with its own partition table and everything. LVM does nothing on the filesystem level. So it's up to the filesystem to support handling those grown or shrunk block devices, or to the VM to resize their partition table. Most filesystems support growing (but sometimes not online, or not past a certain limit), but a few of them don't support shrinking. So although LVM has no qualms about shrinking the block device, you'd have to shrink the filesystem first and for some filesystems, that just isn't possible. Merging contents of two separate filesystems is usually not supported. So yes, in some cases, you have to copy files the old-fashioned way. And then abandon/remove the LV those files were on, and use the freed space to expand the LV and grow the filesystem you copied the files to. So how can these be combined? Is that advisable? (because of different uses for SSD and HDD) I would not create a block device that is backed half by SSD and half by HDD. I like to keep these separate. It might make sense in some other situations, e.g. you can do a SSD-HDD-RAID1 where the HDD is set to write-mostly, which means all reads will normally be served by SSD as it's faster. However with dropping SSD prices, that setup is less common as you can just use two SSD for regular RAID1 instead.
Can one combine logical volumes from different groups without copying the contents over?
1,620,061,643,000
With the latest update of KDE, I am seeing these errors:

Sep 26 23:07:30 desktop sddm-greeter[709]: inotify_add_watch(/etc/fstab) failed: (Permission denied)
Sep 26 23:08:18 desktop kdeinit5[819]: inotify_add_watch(/etc/fstab) failed: (Permission denied)
Sep 26 23:08:19 desktop kgpg[878]: Error loading text-to-speech plug-in "flite"
Sep 26 23:08:19 desktop org_kde_powerdevil[897]: inotify_add_watch(/etc/fstab) failed: (Permission denied)
Sep 26 23:08:23 desktop plasmashell[856]: inotify_add_watch(/etc/fstab) failed: (Permission denied)

My /etc/fstab permissions are:

-rw-r----- 1 root root 7182 Jun 26 21:51 /etc/fstab

Is that not correct?
No, that's not correct. /etc/fstab is supposed to be world readable. A LOT of programs depend on this, and it's world readable on every standard Linux distribution. You're not supposed to put credentials or anything else that's actually sensitive in this file (see Does /etc/fstab need to be world readable? for how to avoid this), and hiding anything else in it would just be security-through-obscurity, which isn't actually secure at all.
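So the fix is simply to restore the conventional mode with sudo chmod 644 /etc/fstab. What mode 0644 looks like, demonstrated on a scratch file:

```shell
# Show the permission bits of mode 0644, the conventional /etc/fstab mode.
f=$(mktemp)
chmod 644 "$f"
perms=$(stat -c '%a %A' "$f")   # GNU stat: octal and symbolic forms
echo "$perms"
rm -f "$f"
```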
inotify_add_watch(/etc/fstab) failed: (Permission denied)
1,620,061,643,000
My /etc/fstab has only two lines: root partition and debugfs, while /etc/mtab has much more, in addition to these two, like (sysfs, proc, udev, devpts, tmpfs, cgroup, ...). Where do the additional mount points come from?
Those mounts are often performed by the initramfs/initrd scripts or other early-boot system initialization scripts, or on distributions that are fully using systemd, by .mount systemd unit files executed by either the real systemd or by the mini-systemd environment within the initramfs.

For example, Debian 9 has the following .mount units by default:

/lib/systemd/system/dev-hugepages.mount
/lib/systemd/system/dev-mqueue.mount
/lib/systemd/system/proc-fs-nfsd.mount
/lib/systemd/system/proc-sys-fs-binfmt_misc.mount
/lib/systemd/system/run-rpc_pipefs.mount
/lib/systemd/system/sys-fs-fuse-connections.mount
/lib/systemd/system/sys-kernel-config.mount
/lib/systemd/system/sys-kernel-debug.mount
Why do I have mounted partitions that do not appear in /etc/fstab?
1,620,061,643,000
I have to use an ext4 image file on btrfs, because Dropbox requires ext4 as the file system. In the fstab mount options I've set async, but I'm not sure about this. What are the pros and cons of the async and sync flags for a disk image? Which one is preferable? I personally think that it is better to let the host file system (btrfs in my case) handle sync by itself, so the sync option is better. Am I right?
I personally think that it is better to let the host FS (btrfs in my case) handle sync by itself, so the sync option is better. Am I right?

If I understand you correctly, then no :-). (But it's not entirely clear, and maybe you meant to write that the async option is better.)

The loopback device (used for mounting filesystem images) respects sync requests, effectively converting them to fsync(). Then the underlying filesystem will convert these back to sync requests on the block device, or whatever. So even for a filesystem image, adding the sync option will make all writes fully synchronous (and hence slower). Unless you have another reason, you can mount without these options and let it default to async. All fsync() requests inside the mounted image will be respected as normal.
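For reference, a loop-mounted image needs no sync/async option at all, since async is the default. An illustrative fstab line (the paths here are placeholders, not your actual ones):

```
/path/to/dropbox.img  /home/user/Dropbox  ext4  loop,defaults  0 0
```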
sync vs async for images
1,620,061,643,000
I'm on Windows 7 and I run a Debian VM under VirtualBox. On my Windows 7 host I have the c:\temp folder that I want to share, so I added it. I had a hard time permanently mounting that folder with VirtualBox (had to install Guest Additions, etc.), but that is now solved. What I needed to do was edit the /etc/fstab file and add the following line:

temp /home/my_usr_name/an_existing_folder vboxsf defaults,_netdev 0 0

Now, after each boot, my folder is correctly mounted. But here is my question: what and where is temp? Because, from my understanding, the line I added in fstab does the following:

mount -t vboxsf temp /home/my_usr_name/an_existing_folder

Right? And as a matter of fact it works; my folder is correctly mounted when I use it. But normally, with mount, you use a device, like /dev/cdrom for example. Here I can't find that temp anywhere on my Debian VM. I understand that this is the name I gave when configuring the shared folder of the VM, but how is it handled by VBox, and how does Debian find it?

Note: For some reason, I don't have any folder like /media/sf_sharedfolder. I don't care and I don't want to solve that.
with mount you use a device? No. One provides a source, a.k.a. a "what" (alongside a "where" and a "vfstype", and some options). That does not have to be a block device name. It is something that only has meaning in conjunction with the type of filesystem. In the case of vboxsf mounts, the "what" is the name of the shared folder that you have specified in the VirtualBox configuration utility. The driver installed into Linux by the VirtualBox Guest Utilities knows how to reference that name, using a private communications channel between the guest environment and the host. It is not a network path. It is not something that you can locate in the guest operating system's filesystem.
How are VirtualBox shared folders handled in Debian?
1,620,061,643,000
I recently attached an SSD to my system, with the old HDD in the DVD drive bay, and set up the mount options in the /etc/fstab file. The entries are as follows.

For the SSD:

/dev/sda2 /home/arun/SSD/ auto rw,user,nodev,nofail,x-gvfs-show 0 0

For the HDD:

/dev/sdb2 /home/arun/HDD/ auto rw,user,nodev,nofail,x-gvfs-show 0 0

The difference is that the HDD does not give me any write permissions unless I am operating as root.

What I tried:

using 'sudo chmod 777 /home//HDD' to change the permissions. The command passes correctly but nothing reflects in action (root permission is still needed); the file permissions do not get a 'w' when I do 'ls -l'.
using 'sudo chown /home//HDD', error: operation not permitted.

I was able to write to this HDD before I cleaned both my disks and installed the operating system Ubuntu 16.04 LTS. Both commands were run recursively and non-recursively.

Can this be due to the fact that I installed it in the DVD drive bay? Do I need to change something in the BIOS settings?
Run this command as root:

chown -R arun:arun /home/arun/HDD

and then try to write something to it.
Identical fstab options end up with different permission
1,620,061,643,000
I have a custom, non-default Ubuntu installation, where I use my other Linux distro's boot manager (rEFInd). As such, I don't want Ubuntu to see my EFI partition, on the principle that it has no business with what's there (which has already saved my ass last night when I did an rm -rf /*...). However, because I'm using btrfs as my file system, my /boot directory has to be in a UEFI-readable partition, like the EFI partition. So my solution to this conundrum is to shadow bind mount a subdirectory of the EFI partition, esp:\EFI\ubuntu, onto /boot through commands like these:

mount /dev/sdb2 /boot
mount --bind /boot/EFI/ubuntu /boot

This works perfectly. Ubuntu has access to a /boot partition that it can freely drop its vmlinuz and initramfs into, and my boot manager automatically detects the installation. Booting and updating work as expected. The only caveat, so far, is that I needed to use the commands to mount /boot. So like any responsible sysadmin, I made an fstab entry:

UUID=XXXX-XXXX    /boot  vfat  rw,relatime  0 0
/boot/EFI/ubuntu  /boot  none  bind         0 0

Despite being analogous to the commands above, on boot the entire EFI partition remains mounted. The second line, performing the shadow bind mount, does not seem to execute. Is there a way to make this work in fstab, and if not, what would be a reliable way to perform the bind mount as quickly as possible after the initial mount?
As @RamanSailopal suggested, the answer was (of course) in dmesg. The root of the problem is that systemd creates unit files from fstab entries, and for whatever reason they must have a filename that maps to the mountpoint. In other words, multiple mounts per mountpoint are disallowed.

I worked around this by creating a systemd service file that injects itself as a dependency of local-fs.target, by all means acting like a regular systemd mount unit. After installing the file, run systemctl daemon-reload followed by systemctl enable boot-shadow-mount.service so it takes effect on every boot.

/etc/systemd/system/boot-shadow-mount.service:

# Performs the shadow bind mount to hide the ESP at /boot
# and instead expose the ubuntu subdirectory.
[Unit]
Description=/boot shadow bind mount
Requires=boot.mount
Conflicts=umount.target

[Service]
Type=oneshot
ExecStart=/bin/mount --bind /boot/EFI/ubuntu /boot
ExecStop=/bin/umount /boot
RemainAfterExit=True

[Install]
RequiredBy=local-fs.target
Shadow bind mount in fstab
1,620,061,643,000
I have a partition mounted to /home and want to mount another partition as $HOME/Steam. The /home partition is encrypted and only mounted at login (not by fstab btw), while the Steam partition is not and fstab will mount it directly at boot. When I log in, the home partition will be mounted over it and hide its content. It will appear to be empty. How do I tell fstab to wait for the other partition?
You can't, sorry.

The encrypted filesystem is mounted by something like pam_mount or pam_ecryptfs. This happens after the boot process. This mount unit won't be part of the boot "transaction", and therefore ordering dependencies on it will have no effect on boot.

The best you can do is mount the partition somewhere outside your home, and then create a symbolic link to it from inside your home (ln -s /mnt/Steam $HOME/Steam). If you want the Steam filesystem to be more private, make the real mount point something like /mnt/$USER/Steam, and set permissions on /mnt/$USER using chmod o-rwx.
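The layout can be simulated with temporary directories: the partition's mount point lives outside the home, and only a symlink sits inside it, so the late home mount cannot hide the data. (All names below are stand-ins for the real paths.)

```shell
home=$(mktemp -d)    # stands in for $HOME after pam mounts it over /home
mnt=$(mktemp -d)     # stands in for /mnt
mkdir "$mnt/Steam"   # stands in for the mounted Steam partition
echo game > "$mnt/Steam/data"
ln -s "$mnt/Steam" "$home/Steam"   # the symlink the answer suggests
content=$(cat "$home/Steam/data")
echo "$content"
rm -rf "$home" "$mnt"
```

Since the symlink is an ordinary entry in the home filesystem, it survives being mounted over and resolves correctly once the home is in place.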
fstab mount nested folders in order
1,620,061,643,000
I successfully mounted a directory to another path: [michael@vps ~]$ mkdir /home/michael/devicefilexxx [michael@vps ~]$ mkdir /home/michael/mountpointxxx [michael@vps ~]$ sudo mount --bind /home/michael/devicefilexxx /home/michael/mountpointxxx I see how it looks: [michael@vps ~]$ cat /etc/mtab | grep xxx /dev/mapper/centos-root /home/michael/mountpointxxx xfs rw,relatime,attr2,inode64,noquota 0 0 Well, the mount point looks correct, but not the device. I specified the device as /home/michael/devicefilexxx, not /dev/mapper/centos-root. So I look a little deeper: [michael@vps ~]$ mount sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,size=1009596k,nr_inodes=252399,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset) cgroup on 
/sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/mapper/centos-root on / type xfs (rw,relatime,attr2,inode64,noquota) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=34,pgrp=1,timeout=300,minproto=5,maxproto=5,direct) mqueue on /dev/mqueue type mqueue (rw,relatime) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) /dev/sda1 on /boot type xfs (rw,relatime,attr2,inode64,noquota) tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=204060k,mode=700,uid=1000,gid=1000) /dev/mapper/centos-root on /home/michael/mountpointxxx type xfs (rw,relatime,attr2,inode64,noquota) Hmm, two devices at the same mount point? So, I look at my /etc/fstab: [michael@vps ~]$ cat /etc/fstab # # /etc/fstab # Created by anaconda on Fri Apr 8 14:15:42 2016 # # Accessible filesystems, by reference, are maintained under '/dev/disk' # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info # /dev/mapper/centos-root / xfs defaults 1 1 UUID=362355d4-e5da-44de-bf5c-5ce92cf43888 /boot xfs defaults 1 2 /dev/mapper/centos-swap swap swap defaults 0 0 If I wish to make the mount persistent after the machine is rebooted, surely I wouldn't want to add the following to /etc/fstab: /dev/mapper/centos-root /home/michael/mountpointxxx xfs rw,relatime,attr2,inode64,noquota 0 0 Maybe the following, but I hesitate to do so as it differs from what /etc/mtab told me: /home/michael/devicefilexxx /home/michael/mountpointxxx xfs rw,relatime,attr2,inode64,noquota 0 0 How does one permanently mount a directory/file?
Also, please explain how /dev/mapper/centos-root can have two mount points which are obviously different so must be mounted to different devices. EDIT. Backup info: [michael@vps ~]$ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 80G 0 disk ├─sda1 8:1 0 500M 0 part /boot └─sda2 8:2 0 79.5G 0 part ├─centos-swap 253:0 0 2G 0 lvm [SWAP] └─centos-root 253:1 0 77.5G 0 lvm / sr0 11:0 1 1024M 0 rom [michael@lsblk ~]$
Let me try to explain /dev/mapper/centos-root: you're using what in Linux is called Logical Volume Management. This acts like a wrapper around your filesystem, making it easy to adjust when compared to normal partitions. You have three main mount points, as seen from your /etc/fstab: /, swap and /boot. So boot stuff is in /boot, and swap space in swap. Every other part of your filesystem is found in /, and this is mapped to /dev/mapper/centos-root. Mine is: NAME FSTYPE LABEL UUID MOUNTPOINT sda |-sda1 xfs f86877f2-5099-483f-a56b-24a772cf4863 /boot `-sda2 LVM2_member uw2D4k-IsO3-0u2N-dKLz-utuC-tDn8-zwtaDT |-centos-root xfs e3faa70d-fc88-4951-8122-789e21a519f7 / |-centos-swap swap 95eaf3bb-7b78-418d-b14d-74206d89b3d9 [SWAP] |-centos-var xfs c35276a4-f8e2-4982-91fe-b0cd205601ff /var `-centos-home xfs c09e81c2-32e9-4ebd-a59b-caf57971a069 /home As you can see I use the same names as yours, but I also created other partitions to map to different areas of my CentOS. And my /etc/fstab: /dev/mapper/centos-root / xfs defaults 0 0 UUID=f86877f2-5099-483f-a56b-24a772cf4863 /boot xfs defaults 0 0 /dev/mapper/centos-home /home xfs defaults 0 0 /dev/mapper/centos-var /var xfs defaults 0 0 /dev/mapper/centos-swap swap swap defaults 0 0 So long story short, the mount you're creating is in the / root partition, hence it will be mapped to /dev/mapper/centos-root. That's the way it ought to be. To permanently mount those folders add this line to your /etc/fstab file. Of course, make a backup of the original in case you make a mistake. /home/michael/devicefilexxx /home/michael/mountpointxxx none bind 0 0 To see a more detailed mount point schema use the command: findmnt
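A side note on the "two devices at one mount point" confusion: /etc/mtab only records the backing block device, but /proc/self/mountinfo also records which subtree of that device was bound (its fourth field, per proc(5)). A minimal sketch of pulling that field out — the sample line below is hypothetical, modeled on the bind mount from the question:

```shell
# Print the bound subtree (mountinfo field 4) for a given mount point.
# Reads mountinfo-format lines on stdin; field layout is from proc(5).
bind_source() {
  awk -v mp="$1" '$5 == mp { print $4 }'
}

# Hypothetical mountinfo line matching the bind mount from the question:
sample='98 59 253:1 /home/michael/devicefilexxx /home/michael/mountpointxxx rw,relatime shared:1 - xfs /dev/mapper/centos-root rw'

printf '%s\n' "$sample" | bind_source /home/michael/mountpointxxx
# → /home/michael/devicefilexxx
```

On the real system you would run `grep mountpointxxx /proc/self/mountinfo` and look at the same field to see which directory of centos-root was bound.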
Permanently mounting a directory with LVM
1,620,061,643,000
I've got a Drobo in three partitions on Linux Mint, and it periodically drops off the filesystem, losing its mount points. Upon return it disregards /etc/fstab and mounts as a new device under /media--as if I'd inserted a new USB stick. AFAICT, the fstab declarations are correct--they work manually--but maybe I've missed a key element: # drobo mount points UUID="d4af52ec-7734-4a43-91cf-ccea799b130e" /mnt/d1 ext3 rw,user 0 2 UUID="599456dd-3e9e-4f56-aa8e-957191099c6b" /mnt/d2 ext3 rw,user 0 2 UUID="94a0b9bf-6ae3-45cf-9a66-da228da64660" /mnt/d3 ext3 rw,user 0 2 The Drobo exits uncleanly, creating a ton of false duplicates. The only hardware is one internal drive and the Drobo. gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,user=zed) /dev/sde2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev) /dev/sdf2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev) /dev/sdg2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev) /dev/sdd2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdc2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev) /dev/sdb2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev) /dev/sdh2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev) /dev/sdi2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdk2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdj2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdn2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdm2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdl2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdo2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdp2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdq2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdt2 on /mnt/d3 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sds2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdr2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdz2 on /mnt/d3 type ext3 
(rw,noexec,nosuid,nodev,user=zed) /dev/sdy2 on /mnt/d2 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdx2 on /mnt/d1 type ext3 (rw,noexec,nosuid,nodev,user=zed) /dev/sdu2 on /media/zed/drobo1 type ext3 (rw,nosuid,nodev,uhelper=udisks2) /dev/sdw2 on /media/zed/drobo3 type ext3 (rw,nosuid,nodev,uhelper=udisks2) /dev/sdv2 on /media/zed/drobo2 type ext3 (rw,nosuid,nodev,uhelper=udisks2) When I (manually) unmount and re-mount, it follows the fstab declarations without issue. I never need to first type umount /mnt/d*. I don't need to be root to re-mount. The manual un-mount command works quickly. The first re-mount command takes a few seconds and the Drobo spins back up (this I expect is the Drobo allowing the drives to sleep, but the Drobo itself is still on the filesystem). The second and third mount commands always happen as quickly as I can type them. 0 [08:57:46 zed@linnicks doc 124] umount /media/zed/drobo* 0 [08:57:51 zed@linnicks doc 125] mount /mnt/d3 0 [08:57:56 zed@linnicks doc 126] mount /mnt/d2 0 [08:57:59 zed@linnicks doc 127] mount /mnt/d1 0 [08:58:01 zed@linnicks doc 128] Did I miss something obvious? My main concern is why /etc/fstab is disregarded, though I might be better advised to find the root cause for the dropoffs in the first place**. Just now it occurred to me that cron could umount and remount, but that's even more of a band-aid. It's easy to blame a 2008 Drobo for an occasional glitch. It seems completely random. The Drobo will work fine for a week or three and then simply be in the wrong place. It's always all three partitions. I've had less than stellar luck with other Drobos, so I'm quick to blame the drobo for the dropoffs--maybe I'm being too hasty there. It's certainly worth noting that my OS theoretically should recognize the hardware and not try and define it as three new devices each time. I don't think the Drobo is merely entering sleep mode, because I can go a day or two without using it and step right back into it. 
**This ambiguity may be a cause for deeper concern from a back-up-your-stuff perspective, but I'm planning a better and more traditional RAID that will serve as additional backup. Everything on "RealRaid" will be triplicated to Drobo, so when either one dies, I replace it and move on. On that note if anyone has found a specific device (Qnap, Lacie...) to be highly satisfying at the consumer (possibly even prosumer) level, lemmeno. I'm probably thinking in the 15-30TB range.
My main concern is why /etc/fstab is disregarded ... The manual mount immediately put them right back where they should be The auto-mounting you refer to is performed by udisks. As you desire, it's supposed to defer to the entry in /etc/fstab, if there is one. But if there isn't one, it mounts under /media. It sounds like udisks gets confused by the failed (but still existing) mounts... I would call this a bug in udisks. If you are interested in seeing it improved then please report it to the project :). Udisks has actually been tested with device removal, as this is something real users do :). If udisks mounts a filesystem itself, and the device is removed, it attempts to unmount the filesystem and clean up. This unmount occurs regardless of whether a mount point is specified manually in /etc/fstab. However, udisks does not unmount automatically if the device was mounted "manually", using /sbin/mount. Hence, your scenario would not necessarily have been noticed when developers of udisks did their initial coding/testing. Note that manually running mount /dev/sdu2 behaves differently to the automount that happens when the "new" device is plugged in. /sbin/mount does not call in to udisks. (udisks might be implemented in terms of /sbin/mount though).
Drobo filesystem ignores /etc/fstab, automounts in the wrong place after connection is interrupted
1,620,061,643,000
For over a year, I've been able to back up numerous Windows servers using Ubuntu Server 16.04, but this all stopped working on Tuesday May 9th 2017. Here's how I'm mounting these Windows file systems using fstab: sudo nano /etc/fstab \\192.168.1.1\c$ /mnt/win2012r2 cifs credentials=/home/user/.smb,iocharset=utf8,sec=ntlm 0 0 \\192.168.1.2\d$ /mnt/win2008r2 cifs credentials=/home/user/.smb,iocharset=utf8,sec=ntlm 0 0 \\192.168.1.3\c$ /mnt/win2012 cifs credentials=/home/user/.smb,iocharset=utf8,sec=ntlm 0 0 \\192.168.1.4\d$ /mnt/win2008 cifs credentials=/home/user/.smb,iocharset=utf8,sec=ntlm 0 0 The /home/user/.smb file contains only this: username=administrator2 password=s3cr3tPW domain=company1 After a reboot, if I attempt to do a mount command, it shows that all of these servers' drives are already mounted to the Linux file system: sudo mount -a --verbose -vvv /mnt/win2012r2 : already mounted /mnt/win2008r2 : already mounted /mnt/win2012 : already mounted /mnt/win2008 : already mounted However, if I try to list the directory where these mount points are, it takes forever and eventually says these hosts are down: ls /mnt ls: cannot access 'win2012r2': Host is down ls: cannot access 'win2008r2': Host is down ls: cannot access 'win2012': Host is down ls: cannot access 'win2008': Host is down The above is essentially the same error that I also see in my cron rsync logs: failed: Host is down (112) Again, this all started on Tuesday May 9th 2017. And it is not just happening on this one network; it's the same story at a completely different company where I'm using the same method for backup. Lastly, no settings have been changed recently on these backup servers. I don't even recall explicitly doing any updates between May 8th and 9th.
Temporary hack. I have encountered the same error when mounting from the command line. sudo mount -t cifs //ls2/jc /mnt/ls2 -o username=jc I did not get an error, "Host is down", until I tried to access both the share directory /mnt/ls2 AND /mnt. ls /mnt/ls2 ls /mnt I then unmounted the share sudo umount /mnt/ls2 then remounted using the very same command as before sudo mount -t cifs //ls2/jc /mnt/ls2 -o username=jc. Everything worked. Important note: The share at //ls2/jc is not on a Microsoft box, but on Ubuntu 14 server updated current running smbd Version 4.3.11-Ubuntu. and uname -a output: Linux ls2 4.4.0-75-generic #96~14.04.1-Ubuntu SMP Thu Apr 20 11:06:56 UTC 2017 i686 i686 i686 GNU/Linux Client where mount command executed uname -a output: Linux tec3 4.4.0-75-generic #96~14.04.1-Ubuntu SMP Thu Apr 20 11:06:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux Client mount version: mount from util-linux 2.20.1 (with libblkid and selinux support)
Can No Longer Mount Windows File Systems (Since May 9th 2017)
1,620,061,643,000
I have a mini home server running Debian 8.7 that during the initial installation had a 1 TB hard drive mounted to / and a 60 GB SSD mounted to /home. I would now like to remove the SSD for use in another project, but am at a loss for how exactly to do so. I would like to have my home folder, which has a bit of stuff from one account in it, essentially migrated over to the 1 TB drive. My fstab currently reads: # / was on /dev/sdb1 during installation UUID=1159719b-3f5b-482a-99c1-4dd05e9c1cc7 / ext4 errors=remount-ro 0 1 # /home was on /dev/sda1 during installation UUID=e39ea57f-7d07-4e53-8f2a-1571b23d06fe /home ext4 defaults 0 2 # swap was on /dev/sdb5 during installation UUID=2ff79462-458d-429f-9b56-8bb6540ffa32 none swap sw 0 0 sda is the 60 GB drive and sdb is the 1 TB one. Is this easy to do, or would I be better off backing everything up and setting it up again?
You could (change <editor> to you text editor of choice): sudo cp -Rp /home /home-copy sudo <editor> /etc/fstab In the editor, change: UUID=e39ea57f-7d07-4e53-8f2a-1571b23d06fe /home ext4 defaults To # UUID=e39ea57f-7d07-4e53-8f2a-1571b23d06fe /home ext4 defaults Then: sudo mv /home /home-old sudo mv /home-copy /home sudo shutdown -P now Remove the drive and reboot.
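A small sketch of the fstab edit step, done non-interactively with sed and demonstrated on a throwaway copy first (the UUID is the one from the question; work on a copy before touching the real /etc/fstab):

```shell
# Prefix '# ' to the fstab line that starts with the given UUID.
comment_fstab_entry() {
  sed -i "s|^UUID=$1|# UUID=$1|" "$2"   # GNU sed in-place edit
}

# Demo on a scratch file containing the /home entry from the question:
printf '%s\n' 'UUID=e39ea57f-7d07-4e53-8f2a-1571b23d06fe /home ext4 defaults 0 2' > /tmp/fstab.demo
comment_fstab_entry e39ea57f-7d07-4e53-8f2a-1571b23d06fe /tmp/fstab.demo
cat /tmp/fstab.demo
# → # UUID=e39ea57f-7d07-4e53-8f2a-1571b23d06fe /home ext4 defaults 0 2
```

Once the output looks right, the same function can be pointed at /etc/fstab (with sudo).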
Removing hard drive mounted to /home
1,620,061,643,000
OS: Parabola GNU/Linux Libre, a GNU version of Arch. I have managed to encrypt my root partition, but I'm unsure about how to encrypt my swap partition. I know swap partitions are becoming old-fashioned and that swap files are preferred, but btrfs still does not support this. lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 223.6G 0 disk ├─sda2 8:2 0 221.1G 0 part │ └─cryptroot 254:0 0 221.1G 0 crypt / ├─sda3 8:3 0 2G 0 part │ └─cryptswap 254:1 0 2G 0 crypt └─sda1 8:1 0 512M 0 part /boot /etc/fstab # /dev/mapper/cryptroot UUID=0126cb9b-d3aa-4f05-a39a-71682fa847bb / btrfs rw,relatime,ssd,space_cache,subvolid=5,subvol=/ 0 0 # /dev/sda1 UUID=6F37-84A2 /boot vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro 0 2 # /dev/mapper/cryptswap UUID=aef00636-0183-48d1-ab87-8f6653a30dd8 none swap defaults 0 0 /boot/loader/entries/parabola.conf title Parabola GNU/Linux-libre linux /vmlinuz-linux-libre initrd /initramfs-linux-libre.img options rd.luks.uuid=c6b69115-15c6-4561-9691-fc4a05ac9622 rd.luks.name=c6b69115-15c6-4561-9691-fc4a05ac9622=cryptroot rd.luks.options=quiet rw root=/dev/mapper/cryptroot /etc/crypttab # crypttab: mappings for encrypted partitions # # Each mapped device will be created in /dev/mapper, so your /etc/fstab # should use the /dev/mapper/<name> paths for encrypted devices. # # The Parabola specific syntax has been deprecated, see crypttab(5) for the # new supported syntax. # # NOTE: Do not list your root (/) partition here, it must be set up # beforehand by the initramfs (/etc/mkinitcpio.conf). # <name> <device> <password> <options> cryptswap /dev/disk/by-id/ata-PH4-CE240_511160905070017677-part3 /dev/urandom swap journalctl -b Dec 22 23:35:54 MyComputer mkswap[341]: Setting up swapspace version 1, size = 2 GiB (2147459072 bytes) Dec 22 23:35:54 MyComputer mkswap[341]: no label, UUID=c965e98e-b011-4e40-aef3-bb84d58d7a08 Dec 22 23:35:54 MyComputer systemd[1]: Started Cryptography Setup for swap.
Dec 22 23:35:54 MyComputer systemd[1]: Reached target Encrypted Volumes. Dec 22 23:35:54 MyComputer systemd[1]: Found device /dev/mapper/swap. Dec 22 23:37:23 MyComputer systemd[1]: dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device: Job dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device/start timed out. Dec 22 23:37:23 MyComputer systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device. Dec 22 23:37:23 MyComputer systemd[1]: Dependency failed for /dev/disk/by-uuid/aef00636-0183-48d1-ab87-8f6653a30dd8. Dec 22 23:37:23 MyComputer systemd[1]: Dependency failed for Swap. Dec 22 23:37:23 MyComputer systemd[1]: swap.target: Job swap.target/start failed with result 'dependency'. Dec 22 23:37:23 MyComputer systemd[1]: dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.swap: Job dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.swap/start failed with result 'dependency'. Dec 22 23:37:23 MyComputer systemd[1]: dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device: Job dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device/start failed with result 'timeout'. Dec 22 23:37:23 MyComputer systemd[1]: Mounting Temporary Directory... Dec 22 23:37:23 MyComputer systemd[1]: Mounted Temporary Directory. Dec 22 23:37:23 MyComputer systemd[1]: Reached target Local File Systems. Dec 22 23:37:23 MyComputer systemd[1]: Starting Create Volatile Files and Directories... Dec 22 23:37:23 MyComputer systemd[1]: Started Create Volatile Files and Directories. Dec 22 23:37:23 MyComputer systemd[1]: Starting Update UTMP about System Boot/Shutdown... Dec 22 23:37:23 MyComputer systemd[1]: Started Update UTMP about System Boot/Shutdown. Dec 22 23:37:23 MyComputer systemd[1]: Reached target System Initialization. Dec 22 23:37:23 MyComputer systemd[1]: Started Daily Cleanup of Temporary Directories. 
Dec 22 23:37:23 MyComputer systemd[1]: Started Daily verification of password and group files. Dec 22 23:37:23 MyComputer systemd[1]: Listening on D-Bus System Message Bus Socket. Dec 22 23:37:23 MyComputer systemd[1]: Reached target Sockets. Dec 22 23:37:23 MyComputer systemd[1]: Reached target Basic System. Dec 22 23:37:23 MyComputer systemd[1]: Starting Save/Restore Sound Card State... Dec 22 23:37:23 MyComputer systemd[1]: Starting dhcpcd on enp4s0... Dec 22 23:37:23 MyComputer systemd[1]: Starting Login Service... Dec 22 23:37:23 MyComputer systemd[1]: Started D-Bus System Message Bus. ... Dec 24 00:00:09 MyComputer systemd[1]: Started Update man-db cache. Dec 24 00:01:36 MyComputer systemd[1]: dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device: Job dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device/start timed out. Dec 24 00:01:36 MyComputer systemd[1]: Timed out waiting for device dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device. Dec 24 00:01:36 MyComputer systemd[1]: Dependency failed for /dev/disk/by-uuid/aef00636-0183-48d1-ab87-8f6653a30dd8. Dec 24 00:01:36 MyComputer systemd[1]: dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.swap: Job dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.swap/start failed with result 'dependency'. Dec 24 00:01:36 MyComputer systemd[1]: dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device: Job dev-disk-by\x2duuid-aef00636\x2d0183\x2d48d1\x2dab87\x2d8f6653a30dd8.device/start failed with result 'timeout'. [Update] New Information has come to light. Looks like what should have been the encrypted swap partition is not recognized. 
[Update] I've tried the following with the same result as above: parted rm 3 mkpart primary ext2 -2GiB 100% (Ignore) quit dd if=/dev/urandom of=/dev/sda3 bs=1M cryptsetup -v -y luksFormat /dev/sda3 YES cryptsetup open /dev/sda3 cryptswap mkswap /dev/mapper/cryptswap swapon /dev/mapper/cryptswap [Update] Encrypting the partition like above on the Live MATE version of Parabola returns an error. 1 root@parabolaiso / # cryptsetup -y -v luksFormat /dev/sda3 --debug :( # cryptsetup 1.7.3 processing "cryptsetup -y -v luksFormat /dev/sda3 --debug" # Running command luksFormat. # Locking memory. # Installing SIGINT/SIGTERM handler. # Unblocking interruption on signal. WARNING! ======== This will overwrite data on /dev/sda3 irrevocably. Are you sure? (Type uppercase yes): YES # Allocating crypt device /dev/sda3 context. # Trying to open and read device /dev/sda3 with direct-io. # Initialising device-mapper backend library. # Timeout set to 0 miliseconds. # Iteration time set to 2000 milliseconds. # Interactive passphrase entry requested. Enter passphrase: Verify passphrase: # Formatting device /dev/sda3 as type LUKS1. # Crypto backend (gcrypt 1.7.5) initialized in cryptsetup library version 1.7.3. # Detected kernel Linux 4.8.6-gnu-1 x86_64. # Topology: IO (512/0), offset = 0; Required alignment is 1048576 bytes. # Checking if cipher aes-xts-plain64 is usable. # Userspace crypto wrapper cannot use aes-xts-plain64 (-95). # Using dmcrypt to access keyslot area. # Calculated device size is 1 sectors (RW), offset 0. # dm version [ opencount flush ] [16384] (*1) # dm versions [ opencount flush ] [16384] (*1) # Device-mapper backend running with UDEV support enabled. # DM-UUID is CRYPT-TEMP-temporary-cryptsetup-10670 # dm versions [ opencount flush ] [16384] (*1) # Device-mapper backend running with UDEV support enabled. 
# Udev cookie 0xd4d2344 (semid 65536) created # Udev cookie 0xd4d2344 (semid 65536) incremented to 1 # Udev cookie 0xd4d2344 (semid 65536) incremented to 2 # Udev cookie 0xd4d2344 (semid 65536) assigned to CREATE task(0) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES (0xe) # dm create temporary-cryptsetup-10670 CRYPT-TEMP-temporary-cryptsetup-10670 [ opencount flush ] [16384] (*1) # dm reload temporary-cryptsetup-10670 [ opencount flush readonly ] [16384] (*1) device-mapper: reload ioctl on temporary-cryptsetup-10670 failed: Invalid argument # Udev cookie 0xd4d2344 (semid 65536) decremented to 1 # Udev cookie 0xd4d2344 (semid 65536) incremented to 2 # Udev cookie 0xd4d2344 (semid 65536) assigned to REMOVE task(2) with flags DISABLE_SUBSYSTEM_RULES DISABLE_DISK_RULES DISABLE_OTHER_RULES (0xe) # dm remove temporary-cryptsetup-10670 [ opencount flush readonly ] [16384] (*1) # temporary-cryptsetup-10670: Stacking NODE_DEL [verify_udev] # Udev cookie 0xd4d2344 (semid 65536) decremented to 0 # Udev cookie 0xd4d2344 (semid 65536) waiting for zero # Udev cookie 0xd4d2344 (semid 65536) destroyed # temporary-cryptsetup-10670: Processing NODE_DEL [verify_udev] # dm versions [ opencount flush ] [16384] (*1) # Device-mapper backend running with UDEV support enabled. Failed to setup dm-crypt key mapping for device /dev/sda3. Check that kernel supports aes-xts-plain64 cipher (check syslog for more info). # Releasing crypt device /dev/sda3 context. # Releasing device-mapper backend. # Unlocking memory. Command failed with code 5: Input/output error [Update] I actually solved it by using systemd-swap (better than nothing) instead and I'll wait for btrfs to support real swap.
It would be simpler to make one encrypted container and set up both / and swap on that with LVM. Like this: sda1 boot sda2 LUKS-crypt LVM root-LV swap-LV Then you only need one key to open it, letting you skip crypttab altogether.
Timed out error waiting for encrypted swap device
1,620,061,643,000
I am booting from my HDD not my SSD and that is a good thing. Can I just comment out the line (line 3) containing the /boot/efi until such time I actually change my mind and want to boot from this SSD? Did they put that there just in case? Can I make it go away until that case becomes true? OS is ubuntu 14.04LTS. Here is my /etc/fstab /dev/mapper/ubuntu--vg-root / ext4 errors=remount-ro 0 1 UUID=9b4fb887-5dd8-413c-b0b0-dd3c803cf4ab /boot ext2 defaults 0 2 UUID=69A1-BD52 /boot/efi vfat umask=0077 0 1 /dev/mapper/ubuntu--vg-swap_1 none swap sw 0 0 /dev/nvme0n1 /mnt/fastssd auto nosuid,nodev,nofail,x-gvfs-show 0 0 /mnt/fastssd/100GiB.swap none swap sw 0 0
The /boot/efi partition usually contains the instance of Grub that will be loaded if you are doing a UEFI boot. The other option is bios, which does not use grub-efi. I would make sure that you are not actually booting to EFI first before you remove that mount. Usually you can check in the BIOS and see if the drive you are booting to is listed as an EFI drive or just a normal drive. Also, make a note of the drive and partition number for your root fs, just in case you have to do a manual grub boot.
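One quick way to do that check from the running system, without rebooting into the firmware setup: the kernel creates /sys/firmware/efi only when the current boot went through UEFI. A small sketch:

```shell
# If /sys/firmware/efi exists, the current boot was UEFI; otherwise legacy BIOS.
if [ -d /sys/firmware/efi ]; then
  boot_mode=UEFI
else
  boot_mode=BIOS
fi
echo "Booted in $boot_mode mode"
```

If this reports BIOS, the /boot/efi mount is not on the current boot path, which supports commenting the line out.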
I do not boot from my ssd. Is it safe to comment out /boot/efi line in file /etc/fstab?
1,620,061,643,000
I bought a UPS this week and it came with the WinPower software that lets my PC (Linux Mint 18 XFCE) communicate with the UPS, monitor it, and receive a shutdown signal in case the UPS battery is very low. The issue is that the software added the following line to my /etc/fstab file: usbfs /proc/bus/usb usbfs defaults 0 0 Once I restart the PC it shows a message in the console saying that there is a problem and that I must execute journalctl -xb; it requests my root password and says that I can execute systemctl default or systemctl reboot, but neither systemctl command fixes the issue, and I don't understand the output of the journalctl command. Once I go to /etc/fstab and comment out the /proc/bus/usb line I can reboot normally into my graphical environment. I have almost zero knowledge about fstab, so I don't know how all those parameters affect the system, nor how I can modify that line to keep both the software and my graphical environment working.
The usbfs (USB filesystem) was removed completely from the kernel in kernel version 3.5. Similar files are available under /dev/bus/usb and /sys/bus/usb. You will need a newer version of the WinPower software that works with more recent kernels. Maybe try the one available from their website.
/proc/bus/usb in /etc/fstab prevents my PC from starting the graphic session
1,620,061,643,000
I am trying to auto-mount a remote resource through sshfs, but it is not working for me. I have read all of this, this, this and this before asking, to see if I could find the solution for my issue, but it didn't work. So here is what I have done so far: Added the following line to /etc/fstab: <username>@remote_host_ip_address:remote_path host_path fuse.sshfs delay_connect,_netdev,user,idmap=user,transform_symlinks,identityfile=/home/<username>/.ssh/id_rsa,allow_other,default_permissions,uid=1000,gid=1000 0 0 I have ssh'd into the remote host once so it gets added to the /home/<username>/.ssh/known_hosts file. I have checked afterwards and the remote host is there. I have run the command sudo mount -a When I change directory and check the local path, something goes wrong, since I cannot cd into it. What am I doing wrong here? What am I missing?
Rather than using system-wide /etc/fstab, I suggest using afuse. It's mentioned in passing in the Arch wiki you link, but it's also included in Fedora. This runs in your user session and can therefore either use ssh-agent or prompt for a password. It also will only mount on demand, and can be configured to unmount after a timeout, which is particularly valuable if your network isn't perfectly solid. afuse -o intr -o timeout=300 \ -o mount_template='sshfs -o intr -o follow_symlinks -o reconnect <username>@<remote_host_ip_address>:<remote_path>:%r %m' \ -o unmount_template='fusermount -u -z %m' \ ~/<localmount> ... making sure to replace the <things in brackets> with your local options. The afuse docs give a few other options that you can use - I like -o populate_root_command, but it's not necessary. There are a number of different ways to run this automatically on login; it depends on your desktop environment, but basically you'd have to add the afuse line to autostart like any other such command.
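One generic way to run that automatically at login (a sketch — the file names, host, and paths below are assumptions; adapt the afuse command to your own share) is an XDG autostart entry that launches a small wrapper script, which most desktop environments honor:

```
# ~/bin/afuse-start  (hypothetical wrapper script; mark it executable)
#!/bin/sh
exec afuse -o intr -o timeout=300 \
  -o mount_template='sshfs -o intr -o follow_symlinks -o reconnect user@host:%r %m' \
  -o unmount_template='fusermount -u -z %m' \
  "$HOME/remote"

# ~/.config/autostart/afuse.desktop  (hypothetical autostart entry)
[Desktop Entry]
Type=Application
Name=afuse on-demand sshfs
Exec=/home/youruser/bin/afuse-start
```

Putting the long command in a wrapper script avoids quoting problems in the desktop entry's Exec line.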
How to automount sshfs? [duplicate]
1,620,061,643,000
I already read some advice in other topics that I should probably just change 'errors=remount-ro' to ignore it, however I am interested in why this message showed up in the first place. Did it find some errors? Is it an indication of some other problems? The only thing I know is that I downloaded an .iso from the interwebz on that day, however I did not try to mount it or touch it at all, and then I got this message when starting my laptop the next day. Here's my fstab: # / was on /dev/sda5 during installation UUID=497875d4-0e1e-4ddd-bb92-66a7da7b93c1 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda6 during installation UUID=8baa7dbb-c8f0-4779-ac6b-fee56fa4bce6 none swap sw 0 0 /dev/sdc /mnt/sdc auto nosuid,nodev,nofail,x-gvfs-show 0 0 Here's my blkid: /dev/sda1: LABEL="System Reserved" UUID="BC64B04B64B00A62" TYPE="ntfs" /dev/sda2: UUID="A204D87704D85041" TYPE="ntfs" /dev/sda5: UUID="497875d4-0e1e-4ddd-bb92-66a7da7b93c1" TYPE="ext4" /dev/sda6: UUID="8baa7dbb-c8f0-4779-ac6b-fee56fa4bce6" TYPE="swap" /dev/zram0: UUID="86a4c525-55f2-45a2-a101-ece5950aa5da" TYPE="swap" /dev/zram1: UUID="958e5520-0395-4cfe-b334-159b7872f46c" TYPE="swap" /dev/zram2: UUID="b6142f83-db6d-4c70-b115-53908b7b7be1" TYPE="swap" /dev/zram3: UUID="287a2ce9-62f0-4203-8364-74783f2ad1bd" TYPE="swap" The main question: is it safe for me to change 'remount-ro' to 'continue' (or find some other way to ignore the issue)?
Commenting out the UUID=8baa... and /dev/sdc... lines helped, since they referred to a previously connected phone that was no longer attached.
Press S to skip mount... why did it show up all of a sudden? [closed]
1,620,061,643,000
Upon installation, I have created an extra partition and mounted it as /data. The partition is visible, but I get a Permission denied error when trying to create a file or directory in it. Doing it with sudo does work. I am using ext4 filesystem. I have tried deleting the partition, then creating it again and setting up fstab to use a new partition. That changed nothing. How do I make the extra partition behave normally, e.g. be writable by users?
This should fix your problem: sudo chown -R $USER:adm /data chmod 0775 /data This will give you and all users in the adm group read and write access. All other users not in the adm group have only read access. The group adm is one of the default groups for all users in Ubuntu. For another distro, you could check which groups are assigned to new users by default and use one of those. Alternatively, you could create a new group (e.g. data) and add the users that should get access to data to that group. If you want all users to have access to data, irrespective of the group they are in, then the chmod line should look like this: chmod 0777 /data
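The effect of the mode bits can be sanity-checked on a scratch directory before applying them to /data (a sketch; on the real system you would use sudo and the /data path):

```shell
# Demonstrate 0775: owner and group get rwx, everyone else gets r-x.
mkdir -p /tmp/data.demo
chmod 0775 /tmp/data.demo
stat -c '%a' /tmp/data.demo   # GNU stat: print permissions in octal
# → 775
```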
Permission denied when writing a file
1,620,061,643,000
I'm trying to mount my NAS server to my raspberry pi server, but without any luck. Mounting a NAS is pretty new to me. my fstab: //10.0.0.15/volume1/pie /home/nas/ cifs username=pie,password=pieserver,workgroup=WORKGROUP But when I try to run mount -a then I get this error: root@pi2:/home/pi# mount -a Retrying with upper case share name mount error(6): No such device or address Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) I can reach the NAS server without problems from my desktop, but the pi can't seem to find it. What can I do? I've tried searching the net for solutions, and tried most of them, but most of them are just different strings to insert into /etc/fstab. But again, I had no luck.
Problem found and fixed: the directory path on the NAS server was wrong. I had used the path for the admin user, which is the full path, but since pie only has access to the pi folder, the mount just connects directly to that folder.
mount error(6): No such device or address
1,620,061,643,000
To set up an svn repository (running on Arch Linux) I would like to use my NAS to store the repository. I can only mount it with CIFS (smb). At first there was an issue where the httpd user could not write to the file system, which I solved by adding the options rm,file_mode=0777,dir_mode=777. The next error message that appeared when trying to commit something to the repository was Can't set permission on ..., which comes from the fact that there are no permissions that can be set, because it is not a Unix file system. Now I am wondering whether it would be possible to mount the share such that it is owned by httpd already, with the permissions already set correctly. So my question is now: what file_mode and dir_mode must I choose for svn to accept it? Is it possible at all? And how would I mount a CIFS share as a specific user? All information I could find on this topic so far also mentioned the umask option, which my OS does not want to accept because it has been replaced with file_mode and dir_mode, if I understand correctly. The fall-back option would of course be to not use the NAS but a normal disk and sync to the NAS with a job. What further options do you need? I am using the latest (and updated) ARM version of Arch Linux and installed the Apache svn as described here.
The solution in my case was to mount the disk with following fstab option //server/share /mnt cifs username=USER,password=PSWD,rw,file_mode=0755,dir_mode=0755,uid=http 0 0 The important thing seems to be the uid=http option.
Use CIFS share mounted in fstab for apache svn
1,620,061,643,000
I need to add 100MB of swap space to my machine. I was trying to use a logical volume. # lvcreate --name lv_swap2 --size 100M vg # mkswap /dev/vg/lv_swap2 # swapon /dev/vg/lv_swap2 # vi /etc/fstab /dev/vg/lv_swap2 swap defaults 0 0 It doesn't work.
You wrote everything right but missed a field: in the fstab line, both the mount-point field and the filesystem-type field must be swap (swap swap defaults). # lvcreate --name lv_swap2 --size 100M vg # mkswap /dev/vg/lv_swap2 # swapon /dev/vg/lv_swap2 # vi /etc/fstab /dev/vg/lv_swap2 swap swap defaults 0 0 Now it should work.
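A missing field like this is easy to overlook. A quick sanity check such as the following sketch flags any non-comment fstab entry that does not have exactly six fields (an assumption of mine: fstab(5) technically allows the last two fields to be omitted, but on a conventionally written file they are present):

```shell
# check_fstab FILE: print every non-comment entry that lacks 6 fields
# and exit nonzero if any was found.
check_fstab() {
    awk 'NF && $1 !~ /^#/ && NF != 6 { print "bad entry: " $0; bad = 1 }
         END { exit bad }' "$1"
}
```

Run against the original file, it would have printed the five-field swap line before the reboot-and-wonder cycle began.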
How to add 100MB of swap space as a logical volume in CentOS?
1,620,061,643,000
I'm running nginx, PHP-FPM and MySQL on Debian Wheezy. I've set up chroot jails (with debootstrap) for each individual virtual host in /srv/. Everything is working like one would expect, but after each reboot I had to manually mount --bind /proc /srv/chrootjail/proc and mount --bind /run/mysqld /srv/chrootjail/run/mysqld. This is why I added the following lines to /etc/fstab: /proc /srv/chrootjail/proc none rw,bind 0 0 /run/mysqld /srv/chrootjail/run/mysqld none rw,bind 0 0 /srv/chrootjail/proc gets mounted properly, but /srv/chrootjail/run/mysqld does not and I can't find the reason why. /srv/chrootjail/run/mysqld just remains empty, although there are files in /run/mysqld. However, mount -a fixes the problem. For obvious reasons this is not the solution I was hoping for. Does anyone see what I'm doing wrong here?
I have very limited knowledge of what mount --bind really does, but I think I might have figured out why I'm facing this problem with /run/mysqld in particular. I've just noticed /run (previously /var/run) is a tmpfs and thus gets emptied during a reboot. So my guess is that /run/mysqld doesn't exist yet when /etc/fstab gets parsed. It's the init script /etc/init.d/mysql that checks for /run/mysqld and creates it if needed with this line: # Could be removed during boot test -e /run/mysqld || install -m 755 -o mysql -g root -d /run/mysqld As a workaround I simply added a mount -a after that line. I guess I could also have it create the mysqld folder somewhere other than inside /run (or /var/run). However, if nobody can tell me a better way to do it, I'll stick to this workaround. Thanks for your time!
How do I properly bind directories inside chroot jails using fstab?
1,620,061,643,000
I'm on Debian Jessie and I have an external USB drive with NTFS. I plugged it into my Raspberry Pi, which then spontaneously restarted (probably the power consumption was too high for the adapter I'm using). Since then I cannot access my USB drive anymore. I tried to fix it on my regular computer with sudo ntfsfix /dev/sdb1 but it would only tell me Volume is corrupt. You should run chkdsk. I got hold of a Windows computer, but it cannot detect the drive either. Here's some more information: $ ll /dev/sd* > brw-rw---- 1 root disk 8, 0 Oct 28 12:07 /dev/sda > brw-rw---- 1 root disk 8, 1 Oct 28 12:07 /dev/sda1 > brw-rw---- 1 root disk 8, 2 Oct 28 12:07 /dev/sda2 > brw-rw---- 1 root disk 8, 5 Oct 28 12:07 /dev/sda5 > brw-rw---- 1 root disk 8, 16 Oct 28 12:16 /dev/sdb > brw-rw---- 1 root disk 8, 18 Oct 28 12:16 /dev/sdb2 > brw-rw---- 1 root disk 8, 19 Oct 28 12:16 /dev/sdb3 $ sudo fdisk -l > Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors > Units: sectors of 1 * 512 = 512 bytes > Sector size (logical/physical): 512 bytes / 512 bytes > I/O size (minimum/optimal): 512 bytes / 512 bytes > Disklabel type: dos > Disk identifier: 0x0007f3b4 > > Device Boot Start End Sectors Size Id Type > /dev/sda1 * 2048 472016895 472014848 225.1G 83 Linux > /dev/sda2 472018942 488396799 16377858 7.8G 5 Extended > /dev/sda5 472018944 488396799 16377856 7.8G 82 Linux swap / Solaris > Disk /dev/sdb: 1.8 TiB, 2000365289472 bytes, 3906963456 sectors > Units: sectors of 1 * 512 = 512 bytes > Sector size (logical/physical): 512 bytes / 512 bytes > I/O size (minimum/optimal): 512 bytes / 512 bytes > Disklabel type: dos > Disk identifier: 0x6e697373 > > Device Boot Start End Sectors Size Id Type > /dev/sdb1 ? 1936269394 3772285809 1836016416 875.5G 4f QNX4.x 3rd part > /dev/sdb2 ? 1917848077 2462285169 544437093 259.6G 73 unknown > /dev/sdb3 ? 1818575915 2362751050 544175136 259.5G 2b unknown > /dev/sdb4 ? 
2844524554 2844579527 54974 26.9M 61 SpeedStor > > Partition table entries are not in disk order. $ cat /etc/fstab > # /etc/fstab: static file system information. > # > # Use 'blkid' to print the universally unique identifier for a > # device; this may be used with UUID= as a more robust way to name devices > # that works even if disks are added and removed. See fstab(5). > # > # <file system> <mount point> <type> <options> <dump> <pass> > # / was on /dev/sda1 during installation > UUID=4b0d4c23-d659-4d16-9396-b895c4964b12 / ext4 errors=remount-ro 0 1 > # swap was on /dev/sda5 during installation > UUID=2cc71c90-2d55-4f49-bdb0-b25166d77014 none swap sw 0 0 > /dev/sdb1 /media/usb0 auto rw,user,noauto 0 0 The partition should be /dev/sdb1, but as you can see it's not in /dev. Also, I don't understand why fdisk is saying its type is QNX4.x 3rd part. Any help how I can at least retrieve the files on the disk?
As can be seen by the fdisk command, the partition table was all messed up. This probably happened because the power on the drive was cut while it tried to access it. I installed testdisk, then ran sudo testdisk /dev/sdb After a quick analysis, the disk was properly recognized as being an ntfs disk with only one partition, as opposed to the four partitions suggested by fdisk. Rewriting the partition table with testdisk fixed the issue. I now have access to all files, as if nothing ever happened. Source: https://linuxacademy.com/blog/linux/ntfs-partition-repair-and-recovery-in-linux/
Cannot mount external USB drive
1,620,061,643,000
I'm running Debian Wheezy on an SSD, and in addition I have two 500GB hard disks in Intel software RAID 0 (fakeraid). Both the SSD and the RAID array have GPT partition layouts. I have set up my fstab to automatically mount one of the partitions on the RAID array, but the root filesystem is on the SSD. During boot, dmraid finds the array but does not automatically discover the partitions on it. This causes the boot fsck to fail and dumps me at a recovery shell. Running kpartx -a /dev/mapper/isw_xxx_Volume0 at the recovery shell automatically discovers the partitions and everything works great, but it's a bit irritating having to type it in every time I boot. Am I doing something wrong? Is there some way to make the partition probing automatic? Partition layout of /dev/sda (the SSD) Number Start (sector) End (sector) Size Code 1 2048 411647 200.0 MiB EF00 EFI System Partition 2 411648 117598207 55.9 GiB 0700 Debian root filesystem 3 117598208 250068991 63.2 GiB 0700 Not used yet Partition layout of /dev/mapper/isw_cddhbifacg_Volume0 (the RAID array) Number Start (sector) End (sector) Size Code 1 2048 937502719 447.0 GiB 0700 Debian extra stuff 2 937502720 976564223 18.6 GiB 8200 Swap 3 976564224 1953535999 465.9 GiB 0700 Not used yet Contents of /etc/fstab # <file system> <mount point> <type> <options> <dump> <pass> UUID=7f894df3-49f4-4119-bda9-f4734780eaab / ext4 errors=remount-ro 0 1 UUID=0B6C-A37C /boot/efi vfat defaults 0 1 /dev/mapper/isw_cddhbifacg_Volume0p1 /mnt/data ext4 defaults 0 2 /dev/mapper/isw_cddhbifacg_Volume0p2 none swap sw 0 0 /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0 /dev/sdd1 /media/usb0 auto rw,user,noauto 0 0 /dev/sde1 /media/usb1 auto rw,user,noauto 0 0 /dev/sde2 /media/usb2 auto rw,user,noauto 0 0
Solution to the original problem Install kpartx: sudo aptitude install kpartx Change these lines in /lib/udev/rules.d/60-kpartx.rules: ENV{DM_STATE}=="ACTIVE", ENV{DM_UUID}=="dmraid-*", \ RUN+="/sbin/kpartx -a -p -part /dev/$name" to this: ENV{DM_STATE}=="ACTIVE", ENV{DM_UUID}=="DMRAID-*", \ RUN+="/sbin/kpartx -a /dev/$name" Update the initramfs: sudo update-initramfs -u Restart and the partitions should have been detected properly. Alternative solution Use mdadm instead of dmraid. Set up the RAID array using the Intel configuration utility (Ctrl+I during boot), and Debian Installer 7 RC1 will detect and activate it automatically.
Automatically run kpartx during boot
1,620,061,643,000
I was running a small instance on Amazon EC2. I'm trying to migrate it to a micro as it requires very minimal processing power. One thing I just learned though, is that micro instances do not come with ephemeral storage like the other instance sizes. Here is the fstab file from the small instance. I just added the nobootwait for the /dev/sda3 line. /dev/sda1 / ext3 defaults 0 0 /dev/sdb /mnt ext3 defaults 0 0 /dev/sda3 swap swap defaults,nobootwait 0 0 none /dev/pts devpts gid=5,mode=620 0 0 none /dev/shm tmpfs defaults 0 0 none /proc proc defaults 0 0 none /sys sysfs defaults 0 0 /dev/sdf /apps/ ext3 defaults,acl 1 1 /dev/sdg /data/ ext3 defaults 1 1 Now when I launch this instance as a micro I get a warning: Mounting local filesystems: mount: special device /dev/sdb does not exist [FAILED] It still boots up fine and things seem to be working great, but is there anything I'm missing that I would need the ephemeral storage for that I'm not seeing or thinking about?
The AMI will work fine, as-is. The only reason you would need it is if your workflow/application needs it.
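If the boot-time warning is a nuisance, the ephemeral line can simply be dropped or marked non-fatal. A sketch (option names are distribution-specific: nobootwait is the Upstart/Ubuntu spelling the question already uses for sda3, nofail is the util-linux equivalent):

```
# /etc/fstab fragment: skip /dev/sdb silently when the device is absent
/dev/sdb /mnt ext3 defaults,nobootwait 0 0
```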
Do I need ephemeral storage?
1,698,408,682,000
I'm running Linux with systemd 249. The /etc/fstab entries are: # cat /etc/fstab /dev/root / auto defaults,x-systemd.growfs 1 1 proc /proc proc defaults 0 0 devpts /dev/pts devpts mode=0620,ptmxmode=0666,gid=5 0 0 tmpfs /run tmpfs mode=0755,nodev,nosuid,strictatime 0 0 tmpfs /var/volatile tmpfs defaults 0 0 /dev/mmcblk2p3 /data ext4 x-systemd.growfs 0 0 After starting, systemd says that the /data partition was grown (which is what I expect) but the / partition is not even mentioned. Running systemd-growfs / by hand grows the partition as expected. What am I missing? I just discovered an error message related to this problem: systemd-growfs[235]: Failed to open "/dev/block/179:1": No such file or directory
According to https://github.com/systemd/systemd/issues/21592 this is a known problem in systemd versions before 252. I wrote my own service calling systemd-growfs / to work around the problem.
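A minimal sketch of such a workaround unit, assuming the growfs helper lives at /usr/lib/systemd/systemd-growfs (the binary path and a sensible unit name vary by distribution, so locate the binary on your system first):

```ini
# /etc/systemd/system/growfs-root.service (hypothetical name)
[Unit]
Description=Grow the root filesystem to fill its partition
DefaultDependencies=no
After=systemd-remount-fs.service
Before=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Same command that works when run by hand:
ExecStart=/usr/lib/systemd/systemd-growfs /

[Install]
WantedBy=local-fs.target
```

Enable it with systemctl enable growfs-root.service; on systemd 252 and later the x-systemd.growfs fstab option alone should work again.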
Why does "x-systemd.growfs" in fstab not work for the rootfs?
1,698,408,682,000
I want to mount my cgroups from fstab instead of allowing systemd to mount them at /sys/fs/cgroup. I have my fstab written with targets such as /cgroup/cpu, /cgroup/blkio.. etc. When I boot up, the machine boots into emergency mode and I see messages such as: [FAILED] Failed to mount /cgroup/cpu. See 'systemctl status cgroup-cpu.mount' for details. Checking the status reports: mount: /cgroup/cpu: cgroup already mounted on /sys/fs/cgroup/systemd. So it looks like systemd is racing to mount its own cgroups before it mounts the ones I want in fstab. Is there any way to get systemd to yield to my cgroups in fstab?
After some digging, I have learned that systemd refuses to budge on where it mounts these. This page outlines which mounts can be prevented from automatically being mounted and which can't. The list specifies that /sys/fs/cgroup is always mounted automatically.
How do I use cgroups in fstab instead of default /sys/fs/cgroup from systemd?
1,698,408,682,000
I am attempting to mount a usb drive to a specific directory at boot time so that it's mapped to the same directory each time. I read this article, https://raspberrypi.stackexchange.com/questions/36824/automounting-usb-drive-on-boot, that says to add it to /etc/fstab proc /proc proc defaults 0 0 PARTUUID=bf444af9-01 /boot vfat defaults 0 2 PARTUUID=bf444af9-02 / ext4 defaults,noatime 0 1 UUID=b994a97c-027d-465e-8483-ad519866f87c /mnt/usb2 ext4 defaults,umask=000 0 0 # a swapfile is not a swap partition, no line here # use dphys-swapfile swap[on|off] for that I tried both PARTUUID and UUID, same results both times. Here's what I've tried: PARTUUID=b994a97c-027d-465e-8483-ad519866f87c /mnt/usb2 ext4 defaults,umask=000 0 0 PARTUUID=fc69e031-8593-4c67-9cf9-c364d0166117 /mnt/usb2 ext4 defaults,umask=000 0 0 UUID=b994a97c-027d-465e-8483-ad519866f87c /mnt/usb2 ext4 defaults,umask=000 0 0 UUID=fc69e031-8593-4c67-9cf9-c364d0166117 /mnt/usb2 ext4 defaults,umask=000 0 0 When I restart, it is giving this error: Cannot open access to console, the root account is locked. I got out of this by modifying the cmdline.txt and adding bash. I did a blkid to see my usb drive UUID. Here is what I got: pi@raspberrypi:~ $ sudo blkid /dev/mmcblk0p1: LABEL_FATBOOT="boot" LABEL="boot" UUID="6284-658D" TYPE="vfat" PARTUUID="bf444af9-01" /dev/mmcblk0p2: LABEL="rootfs" UUID="3a324232-335f-4617-84c3-d4889840dc93" TYPE="ext4" PARTUUID="bf444af9-02" /dev/sda2: UUID="b994a97c-027d-465e-8483-ad519866f87c" TYPE="ext4" PARTLABEL="Basic data partition" PARTUUID="fc69e031-8593-4c67-9cf9-c364d0166117" /dev/mmcblk0: PTUUID="bf444af9" PTTYPE="dos" /dev/sda1: PARTLABEL="Microsoft reserved partition" PARTUUID="4792d598-bd1e-4784-99a5-27db1f5d937b" What am I doing wrong? I cannot get this usb drive to mount at boot up to a specific directory. Any suggestions please?
TL;DR: Remove umask=000 from your fstab entry. This is not a valid mount option for an ext4 filesystem. The umask option is only available on filesystems like FAT and NTFS that do not support Unix permissions. Additional details: The error you're getting indicates that system startup failed, but root isn't allowed to log in with a password, so systemd won't start a recovery shell. First step would be to boot with init=/bin/bash added to the kernel command line (which it sounds like you've already done) to boot into a root shell, and then run passwd root to set a root password. Then reboot, and you should be allowed to log in to a recovery shell that you can use to debug. Once you're logged in to the recovery shell, you can inspect the logs to see what failed. journalctl -u mnt-usb2.mount and journalctl -b are probably going to be the most useful things to look at. You can also try mounting manually with mount /mnt/usb2. In your case, before removing the umask option, this should result in an error like this: mount: /mnt/usb2: wrong fs type, bad option, bad superblock on /dev/sda2, missing codepage or helper program, or other error. Remove umask=000 from the fstab entry, and try manually mounting again. Most likely it will work. I'd recommend you add nofail to the options for your USB filesystem. This will allow your system to boot normally if the filesystem can't be mounted for any reason. (You can also leave out defaults if you'd like. This is only necessary if you have no other options.) So in summary, here's what I suggest you put in /etc/fstab: UUID=b994a97c-027d-465e-8483-ad519866f87c /mnt/usb2 ext4 nofail 0 0
Mounting USB on boot causes error on boot on Pi4
1,698,408,682,000
I just upgraded my server to Debian Buster (Raspbian). However, when I now boot, my USB hard drives aren't mounting. I see something like the following on my splash screen: mount: /media/PiHDD: can't find UUID=<string> If I manually sudo mount -a, then all hard drives are mounted The following is /etc/fstab: proc /proc proc defaults 0 0 /dev/mmcblk0p1 /boot vfat defaults 0 0 /dev/mmcblk0p2 / ext4 defaults,noatime 0 0 UUID=<string> /media/PiHDD ext4 defaults,noatime 0 0 UUID=<string2> /media/PiHDD2 ext4 defaults,noatime 0 0 ... which worked fine before the update to Buster. I've also tried identifying the hard drives using PARTUUID or LABEL, based on the output of blkid, but these also fail on boot with can't find LABEL, etc. I'm not using systemd (PID 1 is init, and file /sbin/init gives an executable). /sbin/init --version gives SysV init version: 2.93. I've updated to the latest (testing) kernel 4.19.57-v7+. On boot, I think my system is seeing the USB devices before it tries to mount them. I can see New USB device found before the mounting fails. I also see Attached SCSI disk after the device is found, but I'm not sure if it's before or after the failed mounting. This is all in /var/log/syslog, but for some reason the mount… can't find UUID errors that I see on boot are not in any file in /var/log. How can I get my system to automatically mount my USB hard drives on boot? Here are the contents of /etc/inittab. # /etc/inittab: init(8) configuration. # $Id: inittab,v 1.91 2002/01/25 13:35:21 miquels Exp $ # The default runlevel. id:2:initdefault: # Boot-time system configuration/initialization script. # This is run first except when booting in emergency (-b) mode. si::sysinit:/etc/init.d/rcS # What to do in single-user mode. ~~:S:wait:/sbin/sulogin # /etc/init.d executes the S and K scripts upon change # of runlevel. # # Runlevel 0 is halt. # Runlevel 1 is single-user. # Runlevels 2-5 are multi-user. # Runlevel 6 is reboot. 
l0:0:wait:/etc/init.d/rc 0 l1:1:wait:/etc/init.d/rc 1 l2:2:wait:/etc/init.d/rc 2 l3:3:wait:/etc/init.d/rc 3 l4:4:wait:/etc/init.d/rc 4 l5:5:wait:/etc/init.d/rc 5 l6:6:wait:/etc/init.d/rc 6 # Normally not reached, but fallthrough in case of emergency. z6:6:respawn:/sbin/sulogin # What to do when CTRL-ALT-DEL is pressed. ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now # Action on special keypress (ALT-UpArrow). #kb::kbrequest:/bin/echo "Keyboard Request--edit /etc/inittab to let this work." # What to do when the power fails/returns. pf::powerwait:/etc/init.d/powerfail start pn::powerfailnow:/etc/init.d/powerfail now po::powerokwait:/etc/init.d/powerfail stop # /sbin/getty invocations for the runlevels. # # The "id" field MUST be the same as the last # characters of the device (after "tty"). # # Format: # <id>:<runlevels>:<action>:<process> # # Note that on most Debian systems tty7 is used by the X Window System, # so if you want to add more getty's go ahead but skip tty7 if you run X. # 1:2345:respawn:/sbin/getty --noclear 38400 tty1 2:23:respawn:/sbin/getty 38400 tty2 3:23:respawn:/sbin/getty 38400 tty3 4:23:respawn:/sbin/getty 38400 tty4 5:23:respawn:/sbin/getty 38400 tty5 6:23:respawn:/sbin/getty 38400 tty6 # Example how to put a getty on a serial line (for a terminal) # #T0:23:respawn:/sbin/getty -L ttyS0 9600 vt100 #T1:23:respawn:/sbin/getty -L ttyS1 9600 vt100 # Example how to put a getty on a modem line. # #T3:23:respawn:/sbin/mgetty -x0 -s 57600 ttyS3 #Spawn a getty on Raspberry Pi serial line T0:23:respawn:/sbin/getty -L ttyAMA0 115200 vt100
It looks like SysV is now poorly maintained. I moved to systemd and, without any configuration change, my drives are now mounted on boot as expected. FWIW on Debian/Raspbian, I just did sudo apt-get purge sysvinit-core, which also automatically installed libnss-systemd and systemd-sysv.
mount -a works, but fails at boot with "can't find UUID"
1,698,408,682,000
I'm working with the DataDomain server in our Commvault solutions. Anytime this server is rebooted, the network disk in use for the solution does not remount. Right now this means we need to stop some processes then run: boostfs mount -d datastore.company.com -s Commvault /cvdisk I didn't see a way to mount with fstab, so I tried following a guide to run scripts at boot. Now I have two scripts. The first: cat /etc/systemd/system/remountboostfs.service [Unit] Description=Remount boostfs commvault drive After=network.target [Service] Type=simple ExecStart=/usr/local/sbin/cvdiskmount.sh RemainAfterExit=yes [Install] Wantedby=default.target That calls the second, which should handle the mounting: cat /usr/local/sbin/cvdiskmount.sh #!/bin/bash (cd /opt/emc/boostfs/bin && ./boostfs mount -d datastore.company.com -s Commvault /cvdisk) After some trial and error I can confirm that the second script will remount the drive when run manually, and that the first service file is running but not generating anything in the messages logs (not since I fixed the script). Is there a better way to remount this file system on rebooting? What am I missing in the current scripts? Edit 1: As seen in the notes, the fstab entry works as written: aemb01p.salemstate.edu:Commvault /cvdisk boostfs defaults 0 0 This works perfectly, but when I tried to convert this to automount/autofs it doesn't appear to work. No errors, notices or logs I can find. $cat /etc/auto.master| grep cvdisk /cvdisk /etc/auto.cvdisk --timeout 120 $cat /etc/auto.cvdisk / -fstype=boostfs aemb01p.salemstate.edu:Commvault
See page 37 of the BoostFS for Linux Configuration Guide. In there, you will see a section on using mount. For your environment, the mount command would be mount -t boostfs datastore.company.com:Commvault /cvdisk. In /etc/fstab terms: datastore.company.com:Commvault /cvdisk boostfs defaults 0 0
Automatically remount boostfs drive
1,698,408,682,000
We need to add disks on more than 100 Red Hat machines, and therefore also update /etc/fstab on each machine. The problem is that some machines are configured with UUIDs and others with ordinary dev paths in fstab, so I want to make a bash script that identifies the fstab configuration as follows: in case fstab is configured with UUIDs, add new UUID lines for each additional disk; in case fstab is configured with ordinary dev paths, add new dev lines for each additional disk. So my question is: what is the best approach to identify whether fstab is configured with UUIDs or ordinary dev paths? Remark: this does not include the OS disks; we are talking here only about additional disks in HW machines. Here is an example of a Linux machine with a UUID configuration in fstab: /dev/mapper/vg00-loov_root / xfs defaults 0 0 UUID=7de1dc5c-b605-4a6f-bdf1-f1e869f6ffb9 /boot xfs defaults 0 0 /dev/mapper/vg00-loov_var /var xfs defaults 0 0 /dev/mapper/vg00-loov_swap swap swap defaults 0 0 UUID="fcb73644-4ad3-4b19-85f8-dbb9ed53a871" /data/sdb ext4 defaults,noatime 0 0 UUID="5f56c1d6-266f-4ea2-a8f7-df06f08e01c0" /data/sdc ext4 defaults,noatime 0 0 UUID="4c908671-4045-41e8-a396-a5198978e3ac" /data/sdd ext4 defaults,noatime 0 0 UUID="d44fe62a-72dc-4674-91ac-5a1962797e22" /data/sde ext4 defaults,noatime 0 0 UUID="ee3d8fa8-e000-4abb-a26c-da99499e630c" /data/sdf ext4 defaults,noatime 0 0 UUID="61e9e16f-eb49-4c97-aaf0-0ed2dc3f3007" /data/sdg ext4 defaults,noatime 0 0 UUID="ada12394-0e0b-4657-a148-d85548d7bc75" /data/sdh ext4 defaults,noatime 0 0
You may have to define more precisely what counts as UUID configuration. If it is enough that a single volume is mounted via UUID then you can simply use if grep -q '^\s*UUID=' /etc/fstab; then : else : fi
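As a sketch, that test wraps naturally into a function the deployment script can branch on (the function name and the "uuid"/"dev" outputs are my own convention, not anything standard):

```shell
# fstab_style FILE: print "uuid" if any entry in FILE mounts by UUID,
# otherwise print "dev".
fstab_style() {
    if grep -Eq '^[[:space:]]*UUID=' "$1"; then
        echo uuid
    else
        echo dev
    fi
}
```

In the uuid case the script can then look up each new disk with blkid -s UUID -o value /dev/sdX before appending the line; in the dev case it appends the /dev path directly.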
how to identify UUID conf or ordinary dev conf from fstab
1,698,408,682,000
I have an internal backup drive (backup1) with fstab entry to mounts to /mnt/backup. Occasionally, I unplug this drive temporarily, connect another drive (backup2) and do a secondary backup. Once done, I'll remove backup2 drive, plug in backup1 drive, that brings my system to its usual state. My backup scripts are hard-coded to /mnt/backup, so I can connect any drive I want, mount it to /mnt/backup and backup my data. Today I unplugged backup1, connected backup2 booted ubuntu. fstab looks for backup1's UUID, not connected, but nofail flag is set, so it just skips mounting /mnt/backup. No fstab entry for backup2. It's connected as /dev/sdc but no partitions mounted. Good. I tried to mount it by mount --verbose /dev/sdc1 /mnt/backup I get a response saying that sdc1 is successfully mounted to /mnt/backup, but it's not actually. mount and lsblk don't show this mount, /mnt/backup is blank. However, if I mount sdc1 to some other directory, like /tmp/backup, it indeed mounts. I'm only unable to mount the new drive's partition to /mnt/backup. My questions: Is the system preventing me from mounting to /mnt/backup because fstab has entry for some other partition for that mount point? If so, how do I mount anything at /mnt/backup irrespective of what's defined in fstab? ubuntu 16.04, linux 4.4.0-97
This turned out to be a temporary issue. I was able to mount the drive in question to /mnt/backup after sometime. I couldn't attribute the resolution to any specific action from my side: I didn't alter fstab, didn't restart the computer. It should be a bug, if I'm able to reproduce the same situation, I'll try to collect more diagnostic details and add here. So, No, system shouldn't prevent mounting a drive to a mount point just because it is defined in fstab. But this does happen sometimes. Restarting the system might help. As blocking mountpoints defined in fstab is not the standard behavior, one should be able to mount any block device to any mount point normally.
fstab blocks mountpoint and prevents external drive from mounting
1,698,408,682,000
I'm trying to create an fstab entry for /dev/fd0 so that user can mount a floppy formatted either with VFAT or ext32. The simple fstab entry /dev/fd0 /mnt/floppy auto noauto,user,sync,gid=users,umask=000 0 2 can only mount DOS floppies. If I change the entry to /dev/fd0 /mnt/floppy ext2 noauto,user,sync 0 2 then I can only mount a floppy with ext2 filesystem. Obviously, I can issue a root mount command with appropriate -t option and mount either floppies. Is there a way to mount floppy as user with the simple command mount /mnt/floppy for floppies with either VFAT or ext2 filesystem?
From man 8 mount on Linux: If no -t option is given, or if the auto type is specified, mount will try to guess the desired type. Mount uses the blkid library for guessing the filesystem type; if that does not turn up anything that looks familiar, mount will try to read the file /etc/filesystems, or, if that does not exist, /proc/filesystems. All of the filesystem types listed there will be tried, except for those that are labeled nodev (e.g. devpts, proc and nfs). If /etc/filesystems ends in a line with a single *, mount will read /proc/filesystems afterwards. While trying, all filesystem types will be mounted with the mount option silent. So just create a file /etc/filesystems containing something like this: ext4 ext3 ext2 vfat msdos ntfs iso9660 ufs xfs Add more filetypes if you need. Then you can use type auto in fstab.
floppy fstab entry for vfat and ext
1,490,340,173,000
I have 3 nfs mounts that used to work but no longer work from fstab, though they do work on other servers. Also, if I mount manually from said server, it works: mount Server:/backup01 /backup01 but in fstab, with flags, it is not working: Server:/nas/stage /u00/stage nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,vers=3,timeo=600,actimeo=0 0 0 Server:/backup /u00/backup nfs rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,nfsvers=3,timeo=600 0 0 Server:/backup01 /backup01 nfs rw,bg,hard,intr,rsize=32768,wsize=32768,tcp,noac,nfsvers=3,timeo=600 0 0 Also note it fails silently. Any ideas would be much appreciated. Update: I went through each option and found the issue is with the tcp option. I'm not sure why yet, as that option works on all other servers. Will keep looking, but if someone can save me some time and shed any light on why the tcp option wouldn't be working, that would be great :)
I found the issue was with the TCP switch and found out that the backup server had been rebuilt and they didnt bind the ports, so all nfs connections were defaulting to UDP, as MOUNTD had picked up a blocked port. If you dont bind the ports when the machine is rebooted it will change the port for the below nfs components. To Bind the Ports; Uncomment or add these lines to /etc/sysconfig/nfs with the ports you want to use: RQUOTAD_PORT=875 LOCKD_TCPPORT=32803 LOCKD_UDPPORT=32769 MOUNTD_PORT=892 STATD_PORT=662 dont forget too restart nfs service.
Mount works manually but not in fstab
1,490,340,173,000
I have exported a handful of shares on my Synology - e.g. /volume2/Home_Data/Downloads On my CentOS7 box I would like to mount this and have it available for all users of the system. This works fine when I mounted to /mnt/nfs/ /etc/fstab entry diskstation.davis.local:/volume2/Home_Data/ /mnt/nfs/ nfs4 user,nfsvers=4,nosuid,bg,noexec 0 0 However, I need it mounted to /mnt/nfs/downloads. When mounted here only root has the share mounted, other users cannot see it. /etc/fstab entry diskstation.davis.local:/volume2/Home_Data/ /mnt/nfs/downloads nfs4 user,nfsvers=4,nosuid,bg,noexec 0 0 I thought it could be a perms issue, but the perms on /mnt/nfs & /mnt/nfs/downloads are the same. Perms: /mnt/: total 4 drwxr-xr-x. 4 root root 26 Dec 15 12:28 . dr-xr-xr-x. 17 root root 4096 Dec 15 12:02 .. drwx------. 6 root root 64 Dec 15 12:38 nfs drwx------. 2 root root 6 Dec 3 11:30 tmp /mnt/nfs/: total 0 drwx------. 6 root root 64 Dec 15 12:38 . drwxr-xr-x. 4 root root 26 Dec 15 12:28 .. drwx------. 3 root root 18 Dec 15 12:37 downloads Any ideas what I can try?
Go into the Synology control panel and make sure the NFS share option is checked. Then, prior to mounting it in CentOS, do a chmod -R 777 /mnt to make everything under /mnt read-write-execute for all. I have a few Synology boxes NFS-mounted to my Linux systems and they work well. This is for NFSv3. And if you cannot get it to work from the Synology's web-browser interface, open up an SSH connection to the Synology (e.g. with putty.exe). From there you can view the Synology operating system, which is Linux-based and will look very familiar, and you can dive deeper into how the NFS server is working on the Synology box.
NFS share mounting issue
1,490,340,173,000
I'm trying to mount my external hard drive automatically on my server running Ubuntu 14.04. I've tried editing /etc/fstab directly, I've tried GNOME Disks and I've tried ssbmount. None of them work after my server automatically resets every morning. My fstab file looks like this: # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> # / was on /dev/sda1 during installation UUID=e08eb43c-11ba-40e1-bc9d-122c364960d7 / ext4 errors=remount-ro 0 1 /swapfile none swap sw 0 0 #/dev/sdb2 /mnt/ext/drive01 ntfs-3g errors=remount-ro 0 1 UUID=764A847D4A843C3D /mnt/ext/drive01 ntfs-3g nosuid,nodev,nofail,x-gvfs-show 0 0 The bottom most entry is the one relevant to the hard drive. It just simply won't auto-mount on startup. I have to go into it, unmount it and then remount it; otherwise I get the error Failed to mount "Backup Drive". Error when getting information for file /mnt/ext/drive01: Transport endpoint is not connected" If anyone could help clear up the problem that'd be great.
To verify that you have your /etc/fstab set up correctly, the mount point exists, you're not relying on a disk manager to mount the drive for you, and you have ntfs-3g installed, can you try rebooting the server, then mount it with this command? sudo mount /mnt/ext/drive01
Auto-mounting external hard drive not working
1,490,340,173,000
I'm on Ubuntu server 11.10 (x86) and I basically ran into this problem as this: https://serverfault.com/questions/56588/unmount-a-nfs-mount-where-the-nfs-server-has-disappeared The umount command didn't work, so I tried to just use the ol' reboot. Now the machine gets an error when booting: FS-Cache:netfs 'nfs' registered for caching <two minutes later> init: udevtrigger post-stop process (345) terminated with status 1 I tried booting in recovery mode so I could just comment out the bad NFS mount in my /etc/fstab file, but I wasn't able to write the changes. I basically just want to get the machine booting properly again so that I can erase the bad mount point, either order is fine. This assumes that the FS-Cache error is due to the bad NFS mount, and that might be a poor assumption. What are my options here?
I got into recovery mode and ran mount -o remount,rw / which allowed me to get rid of my bad NFS mounts in /etc/fstab (and I prevented further issues by adding the intr option to the NFS entries), but then my boot (in non-recovery) seemingly got worse -- I got no output on the screen after selecting normal boot from the GRUB menu! I then added nomodeset to the GRUB_CMDLINE_LINUX_DEFAULT boot option list and everything worked. I have no idea how those two were related, but that's what fixed the issue.
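For reference, an NFS fstab entry along those lines might look like this sketch (server, export path, and mount point are placeholders; note that intr/nointr have been no-ops since kernel 2.6.25, and nofail additionally keeps an absent server from dragging boot into recovery):

```
server:/export /mnt/nfs nfs rw,bg,intr,nofail 0 0
```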
Is a bad NFS mount preventing a clean boot?
1,490,340,173,000
I have a problem automounting my NTFS partition I use for storing Data (/dev/sdb2). I've tried adding an entry to /etc/fstab but it doesn't work. /etc/fstab # <file system> <mount point> <type> <options> <dump> <pass> # / was on /dev/sda1 during installation UUID=741be459-4010-4e6f-9ff3-928759f37131 / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during installation UUID=0e7000d3-12fd-4900-a37e-0705f93fa3ce none swap sw 0 0 /dev/sdb2 /media/Data auto rw,user,auto 0 0 I kinda cannot see where the problem is; I also tried with the UUID but it failed. OS: CrunchBang 11 Waldorf The reason I don't add it to the ~/.config/openbox/autostart is that it seems to run after some XDG stuff which I don't know what it is, so my media directories (music, videos, pictures...), which are links to /media/Data/, aren't recognized (I think because /dev/sdb2 isn't mounted at this time). EDIT: I've just encountered a weird thing which happens after I've added the fstab entry: Unprivileged user can not mount NTFS block devices using the external FUSE It happens when I try to mount the drive
Found a working way! Here is my line in fstab: device_uuid /media/Data ntfs-3g defaults,windows_names,locale=en_US.utf8 0 0 Doesn't show it in devices but mounts correctly. Also the mount folder must exist!
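The device_uuid placeholder comes from blkid; run it against the data partition and copy the UUID value (the UUID shown below is just a made-up example):

```
$ sudo blkid /dev/sdb2
/dev/sdb2: LABEL="Data" UUID="01CD2ABB65E17DE0" TYPE="ntfs"
```

With that, the fstab line becomes UUID=01CD2ABB65E17DE0 /media/Data ntfs-3g defaults,windows_names,locale=en_US.utf8 0 0 (substituting your real UUID).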
How to properly automount partitions? fstab
1,490,340,173,000
I am building Linux from Scratch to put on a usb drive but don’t know if Linux always calls the drive being booted from /dev/sda or not. I have two disks in the system, my SSD which is called /dev/sda in my Arch install, and my USB drive which is called /dev/sdb. Should my /etc/fstab file look something like this: # <device> <dir> <type> <options> <dump> <fsck> /dev/sda1 / ext4 noatime 0 1 /dev/sda2 none swap defaults 0 0 /dev/sda3 /home ext4 noatime 0 2 or something like this: # <device> <dir> <type> <options> <dump> <fsck> /dev/sdb1 / ext4 noatime 0 1 /dev/sdb2 none swap defaults 0 0 /dev/sdb3 /home ext4 noatime 0 2
You should not use sda or sdb. While in practice it is likely that the internal disk will be recognized first and become sda, you don't know for sure. You may also come across a computer with two internal disks, and then sdb will be wrong. To identify your USB drive, use either the UUID or the label of the partition you want to use. It will be something like /dev/disk/by-uuid/12345678-1234-1234-1234-123456789abc or /dev/disk/by-label/usb-drive The UUID is a random value, it should be unique. If you use the label, make sure to use a unique name.
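Putting that together, the question's fstab for the USB drive could use UUID= references like the sketch below (the UUIDs here are placeholders; substitute the real values reported by blkid for each partition):

```
# <device>                                   <dir>  <type> <options> <dump> <fsck>
UUID=12345678-1234-1234-1234-123456789abc    /      ext4   noatime   0      1
UUID=23456789-2345-2345-2345-23456789abcd    none   swap   defaults  0      0
UUID=34567890-3456-3456-3456-3456789abcde    /home  ext4   noatime   0      2
```

This way the entries stay correct no matter what device name the USB drive receives on a given machine.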
Should I use /dev/sda or /dev/sdb in fstab when booting from USB?
1,490,340,173,000
I've created a remote mounted drive by adding this to my /etc/fstab: \\192.x.x.x\web /mnt/web cifs username=X,password=X,domain=X and mounting it with sudo mount /mnt/web (which works perfectly!) The problem is that I can only mount the drive as root. Running mount /mnt/web (without sudo) results in the error mount: only root can mount \192.x.x.x\web on /mnt/web I read this guide that suggests the following syntax //192.168.1.100/data /media/corpnet cifs username=johnny,domain=sealab,noauto,rw,users 0 0 When I change my entry to use this syntax like this: \\192.x.x.x\web /mnt/web cifs username=X,password=X,domain=X,noauto,rw,users 0 0 and run mount /mnt/web I get mount.cifs: permission denied: no match for /mnt/web found in /etc/fstab I then read this question along with its highest-voted answer, but the same error appears. I have checked that my web folder in the /mnt directory has CHMOD 775, which should be ok. What could be wrong?
UPDATE (see the discussion in the comments): You are typing \\ instead of //. For Linux you must use // even if the network file system is running inside Windows. The old post: You are writing mount /mnt/web, but the directory you wrote in /etc/fstab was /media/corpnet, so you need to write /mnt/web in /etc/fstab... So change /media/corpnet //192.168.1.100/data /media/corpnet cifs username=johnny,domain=sealab,noauto,rw,users 0 0 To /mnt/web: //192.168.1.100/data /mnt/web cifs username=johnny,domain=sealab,noauto,rw,users 0 0 Or if you can't edit fstab change your command to mount /media/corpnet (and you must create this directory too) Good luck, and if that works, please select this as the correct answer.
Mounting a remote drive with cifs
1,490,340,173,000
I'm trying to append to etc/fstab with the following command and getting an error. Bash command: sudo echo "swapfile none swap sw 0 0" >> etc/fstab Error: -bash: etc/fstab: No such file or directory Then when I try to check the existence of etc/fstab with ls -l, it is there. But why did I get an error the first time? Btw, I'm using Ubuntu server on my virtual machine.
Two issues here The file path needs the leading / here, so it should be /etc/fstab You'll then find that despite your sudo command you get a "permission denied" error. This is answered at Redirecting stdout to a file you don't have write permission on
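Note that even with the path fixed, sudo echo "..." >> /etc/fstab still fails: the >> redirection is performed by your own unprivileged shell before sudo ever runs. The usual workaround is to let tee do the writing. The sketch below demonstrates the pattern on a scratch file so it runs without root; for the real file you would pipe into sudo tee -a /etc/fstab instead:

```shell
# For the real file, the command would be:
#   echo "/swapfile none swap sw 0 0" | sudo tee -a /etc/fstab
# Demonstrated here on a temporary file so no root access is needed:
fstab_copy=$(mktemp)
echo "/swapfile none swap sw 0 0" | tee -a "$fstab_copy" > /dev/null
tail -n 1 "$fstab_copy"   # the line is now appended
rm -f "$fstab_copy"
```

tee runs under sudo and therefore has the privileges to open the file for appending, while the shell only pipes data into it.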
No such file or directory. But it exists [closed]
1,490,340,173,000
I am trying to set up a multi-boot on my machine with Ubuntu (my original OS, on /dev/sda2), Kali Linux and Debian. However, I got stuck halfway through my installation of Debian, and since Ubuntu took a lot of time to boot, I followed the steps of this post to make the boot process faster. But when I rebooted my machine, Ubuntu would only boot in emergency mode... The only thing I was able to notice was that in my /etc/fstab the line associated with my Ubuntu partition was gone. I would gladly post the contents of my fstab here but I don't know how to copy it from the emergency mode to here (I am using my Kali Linux on /dev/sda5 to write this post). Maybe there is a way to restore my fstab, to begin with? Edit 1 Here is the content of my /etc/fstab: # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5).
# # <file system> <mount point> <type> <options> <dump> <pass> # /boot/efi was on /dev/sda1 during installation UUID=95B2-5AED /boot/efi vfat umask=0077 0 1 # /home was on /dev/sda3 during installation UUID=69d6623e-0bcc-4cef-8b25-e46c98210d44 /home ext4 defaults 0 2 # swap was on /dev/sda4 during installation UUID=a8ee0943-0cd9-4dba-b018-ca00fc450e5d none swap sw 0 0 And here is thd result of blkid | grep UUID: /dev/sda1: UUID="95B2-5AED" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="f3ead83c-a7ca-453b-8317-a854080d37fc" /dev/sda2: UUID="7d4d2f18-146c-4d56-b5f3-0dc605eeb9e0" TYPE="ext4" PARTLABEL="Ubuntu" PARTUUID="94d6c9bd-30da-4abf-a784-41e20992fdd4" /dev/sda3: UUID="69d6623e-0bcc-4cef-8b25-e46c98210d44" TYPE="ext4" PARTLABEL="Home" PARTUUID="dd1299b6-adb1-45c0-99a6-94e922f4964b" /dev/sda4: UUID="a8ee0943-0cd9-4dba-b018-ca00fc450e5d" TYPE="swap" PARTUUID="228fa2d0-8b0c-4562-bb5a-ebb73bb00f04" /dev/sda5: UUID="489b70a2-db82-4b0c-bebd-cf19a403ade1" TYPE="ext4" PARTUUID="48ba997c-e595-45c1-93c0-b97e4f7ffbf5" /dev/sda6: UUID="9068da24-6073-45dc-a18e-29634daa3910" TYPE="ext4" PARTUUID="9033f352-349f-4cee-94bf-c686f462adea" Edit 2 I ran the e2fsck command on my Ubuntu, home and Debian partitions, and now instead of booting into emergency mode, Ubuntu starts to launch normally, but freezes after some time loading.
Since your Kali installation is working, you can use it to access your Ubuntu installation in a chroot. To do this, run the following commands as root: mkdir /ubunturoot mount /dev/sda2 /ubunturoot mount -o bind /dev /ubunturoot/dev mount -o bind /dev/pts /ubunturoot/dev/pts mount -o bind /proc /ubunturoot/proc mount -o bind /sys /ubunturoot/sys chroot /ubunturoot Now your command prompt window (note: this particular shell only!) should be accessing your Ubuntu root filesystem just as if you had logged onto Ubuntu and become root in Ubuntu. Take a look and ensure everything is as it should be. If your Ubuntu /etc/fstab is in error, now you can edit it. Once that is fixed, first make sure the /boot/efi filesystem is mounted in your Ubuntu chroot: mount /boot/efi Then run ls /lib/modules to see one or more directories named with kernel version numbers. Use update-initramfs -u -k <kernel version number> to update the initramfs file of the respective Ubuntu kernel. (Since you are now really running Kali's kernel, you must explicitly specify the version number of Ubuntu's kernel: trying to update the default kernel would result in an error message since Ubuntu's and Kali's kernel versions are unlikely to match.) Then check /etc/default/grub for boot options mentioning filesystem UUIDs or other things that may have changed on your OS installations. Fix as necessary, then run update-grub to update the configuration file of Ubuntu's GRUB bootloader. Once you've fixed all the problems you've found, undo the temporary chroot environment manually: umount /boot/efi exit # out of the chroot environment, back to Kali native view of the filesystem umount /ubunturoot/sys umount /ubunturoot/proc umount /ubunturoot/dev/pts umount /ubunturoot/dev umount /ubunturoot rmdir /ubunturoot
Boot in emergency mode, incomplete /etc/fstab
1,490,340,173,000
I have read this question, but it discusses syslog and my question is about journald. Can I mount /var/log/journal as tmpfs using fstab, or will journald be run (and therefore maybe write to the directory) before the kernel reads fstab?
If you don't want journald's logs stored on disk then use the Storage=volatile setting in /etc/systemd/journald.conf - there's no need to mess around with mounting /var/log/journal as tmpfs. From man journald.conf: Storage= Controls where to store journal data. One of "volatile", "persistent", "auto" and "none". If "volatile", journal log data will be stored only in memory, i.e. below the /run/log/journal hierarchy (which is created if needed). If "persistent", data will be stored preferably on disk, i.e. below the /var/log/journal hierarchy (which is created if needed), with a fallback to /run/log/journal (which is created if needed), during early boot and if the disk is not writable. "auto" behaves like "persistent" if the /var/log/journal directory exists, and "volatile" otherwise (the existence of the directory controls the storage mode). "none" turns off all storage, all log data received will be dropped (but forwarding to other targets, such as the console, the kernel log buffer, or a syslog socket will still work). Defaults to "auto" in the default journal namespace, and "persistent" in all others.
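A minimal example of the change (the RuntimeMaxUse line is optional and simply caps how much RAM the volatile journal may use); restart the daemon afterwards with systemctl restart systemd-journald:

```
# /etc/systemd/journald.conf
[Journal]
Storage=volatile
RuntimeMaxUse=64M
```

With Storage=volatile the logs live under /run/log/journal, which is already a tmpfs, so no fstab entry is needed at all.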
Can I mount `/var/log/journal` as `tmpfs`?
1,490,340,173,000
I have made an sshfs mount as follows (33 is the uid of the www-data). Listing the folders and files shows them as having read and execute access for www-data. But when my application running as www-data tries to access the mount I get a permission error. /etc/fstab example.com:/remote/folder/ /local/folder fuse.sshfs ro,uid=33,gid=33 0 0 I get the same permission error if I do the following: sudo -u www-data python >>> import os >>> os.listdir('/local/folder')
You need to add the allow_other permission to your sshfs mount declaration. Otherwise only the user who performs the mount can access it, even if the file permissions are correct. /etc/fstab example.com:/remote/folder/ /local/folder fuse.sshfs ro,uid=33,gid=33,allow_other 0 0 Source: sshfs mount, sudo gets permission denied (similar issue with root instead of www-data)
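One caveat: FUSE only honours allow_other for non-root mounters if user_allow_other is enabled in /etc/fuse.conf (mounts performed by root, such as fstab entries processed at boot, are unaffected):

```
# /etc/fuse.conf
user_allow_other
```

Without this, a mount attempted by a regular user with allow_other in its options is rejected outright.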
Why can't www-data access an sshfs mount?
1,490,340,173,000
I am trying to:
mount my 2TB external USB hard drive as my home directory at /home/peter
ensure that the home directory is owned by me (not root)
do all this automatically at bootup
Currently:
my drive is formatted to ext4
my drive is empty
I am running debian 7
I can reformat to another filesystem type if necessary, but I want to use the full 2TB on the drive. The following fstab line mounts the drive incorrectly owned by root: UUID=xxxx /home/peter ext4 nodev,nosuid 0 2 How can I mount the drive so that it is owned by peter (that's my login user on the PC)?
the solution was simply to chown the home directory after the mount took place: $ chown peter:peter /home/peter while using the following fstab settings: UUID=xxxx /home/peter ext4 defaults 0 2 this hadn't worked before with other fstab settings, but now /home/peter remains owned by peter each time i restart (previously root kept taking ownership of this directory on restart).
fstab mount drive as my /home
1,490,340,173,000
I have a folder under /mnt/ with drwxrwxrwx permissions, owned by root:root. I then mount a USB drive (exFAT) to this folder and it becomes drwxr-xr-x The issue is that now I cannot scp to that folder via WinSCP since there is no permission for group to write to the folder, and I am unable to scp as root user. I am mounting the drive via fstab with the following: /dev/sdb2 /mnt/USB exfat defaults,dmask=0000,umask=0000,rw 0 0 How do I either: 1) Give permission to group write or 2) Mount it as a non-root user so that that user can write? I've attempted chown and chmod to no avail. Chown, even when run as root, returns Operation not permitted I am able to write to the mount as root user when in SSH (such as mkdir), so the mount is writable, but only by root.
ExFAT filesystems don't support Unix permissions. The Unix permissions are set at mount time. The ownership/permissions of the mountpoint (/mnt/USB) has nothing to do with whatever gets mounted over it. It's just a placeholder in the file tree. To fix it now, try: sudo mount -o remount,umask=0,dmask=0,fmask=0,uid=$(id -u),gid=$(id -g) /dev/sdb2 /mnt/USB Update your /etc/fstab entry to add the fmask=0 and uid= and gid= options. You'll have to hard-code your UID and GID, with the values from id -u;id -g.
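Assuming your user's UID and GID are both 1000 (check with id -u and id -g), the resulting /etc/fstab entry would look like this:

```
/dev/sdb2  /mnt/USB  exfat  rw,uid=1000,gid=1000,umask=0,dmask=0,fmask=0  0  0
```

Since exFAT has no on-disk permissions, these options define the ownership and mode of every file for the lifetime of the mount; chown/chmod on individual files will keep failing by design.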
Have drwxrwxrwx permissions on folder, but after mounting to it it becomes drwxr-xr-x which disallows members of the group to write. How do I fix it?
1,490,340,173,000
I have built a Debian linux machine. I have captured an 'image' of the entire hard drive in the form of a compressed tar file. I then unpack this onto another machine, setting up grub so that it can boot. The problem I encounter is that the disk UUID differs from that from the original and therefore, the disk is mounted as read-only. I can fix this afterwards by configuring the correct UUID in /etc/fstab However, I would like to avoid this. What can I do on the original machine BEFORE I capture an 'image' of it, so that when unpacked to another machine this problem is avoided? Update: Based on useful commentary to one of the answers, I should clarify that the initial machine and subsequent clone/s will only ever have a single disk.
What can I do on the original machine BEFORE I capture an 'image' of it, so that when unpacked to another machine this problem is avoided? instead of the disk reference being mounted by-uuid, do it mounted by-label or mounted by-name # for example when mounted by-name it would look like this in /etc/fstab /dev/sda3 / # and when mounted by-label it would look like this in /etc/fstab LABEL=some_name / Doing it by-label would work in a new system with no extra work needed. Know that doing it by-name will only work in a new system if it is the only disk in the system such that it guarantees its reference to be sda. When there are other disks in the system, or whenever other hardware is treated as /dev/sd?, you cannot rely on your cloned disk always being sda, and that's where the problem with by-name is. If all your /etc/fstab and grub references have /dev/sda but your disk comes in as /dev/sdb, well then boot device not found. You have to know your Linux system a little bit, RHEL/CentOS, SLES/SUSE, Ubuntu, or whatever else, and find where all the disk references are. It's not just /etc/fstab. The Grub boot loader is most likely to be the other place, as it is the most popular among Linux distributions. Story: I used to use SLES 11 which used ELILO... which was an alternative to grub, which I thought was great... bring back ELILO ! ... but for that I only needed to worry about one other file besides /etc/fstab, which was elilo.conf. Once your newly cloned disk is booted and running, then among your various tasks of setting up that new system (hostname, ip address, etc) just update /etc/fstab and the grub files to go back to mount by-uuid Also know that for mount by-label it is on you to guarantee no other disks use the same label. Seems simple enough, but it is easy to forget; for example I always label the /boot partition simply boot and my / partition simply root.
If I go and clone disks then try to have two of those disks connected and try to boot, which does the system choose when more than one partition has the same label? So by-label can bite you if you're forgetful... the system can easily boot and work but you will not be running on the disk you think you are. Look under /dev/disk/ and you will see by-id/ by-label/ by-partlabel/ by-partuuid/ by-path/ by-uuid/ use that for reference, it should be very clear to understand. ls -l /dev/disk/by-label lrwxrwxrwx. 1 root root 10 Mar 2 15:46 boot -> ../../sdc2 lrwxrwxrwx. 1 root root 10 Mar 2 15:46 data -> ../../sda1 lrwxrwxrwx. 1 root root 10 Mar 2 15:46 root -> ../../sdc3 lrwxrwxrwx. 1 root root 10 Mar 2 15:46 scratch -> ../../sdb1 # Explanation of these 4 listings: when installing Linux I always do /boot ==> /dev/sda2 labeled as 'boot' / ==> /dev/sdc3 labeled as 'root' my other disks here on this specific system I labeled as data and scratch.
How can I prevent disk UUID mismatch when cloning a machine?
1,490,340,173,000
I have entered the following command: sudo mount UUID=17F30CD71ED138A1 -o gid=1000,uid=1000 ~/sandisk_external_drive It worked great, but now I'd like to mount it on boot, therefore I have added an entry into the table: /etc/fstab. the following entry is: UUID=17F30CD71ED138A1 /home/<my-user>/sandisk_external_drive nfs uid=1000,gid=1000,defaults 0 0 BUT I keep getting: > mount.nfs: no mount point provided usage: mount.nfs remotetarget dir > [-rvVwfnsh] [-o nfsoptions] options: > -r Mount file system readonly > -v Verbose > -V Print version > -w Mount file system read-write > -f Fake mount, do not actually mount > -n Do not update /etc/mtab > -s Tolerate sloppy mount options rather than fail > -h Print this help > nfsoptions Refer to mount.nfs(8) or nfs(5) What's wrong with this line?? I have tried multiple times and it fails. I think the reason for the problem is the options fourth column which for some reason is not configured correctly.
You are mounting it as an nfs filesystem, even though you know it's a local drive. Just replace nfs with ntfs (the correct drive format, i.e. filesystem type) in your fstab.
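With that fix, and keeping the question's uid/gid options, the entry would read something like the sketch below (adding nofail is an assumption worth making for an external drive, so a missing disk doesn't stall the boot; the <my-user> placeholder is from the question):

```
UUID=17F30CD71ED138A1  /home/<my-user>/sandisk_external_drive  ntfs  uid=1000,gid=1000,nofail  0  0
```

The mount.nfs usage error disappeared because the type column now selects the right mount helper.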
Mounting an external SSD with user privileges using fstab
1,490,340,173,000
Sometimes we notice conflicts in the /etc/fstab file, as in the following example where /dev/sdg appears twice and /data/sdb appears twice: # # /etc/fstab # Created by anaconda on Wed Nov 9 13:26:03 2016 # # Accessible filesystems, by reference, are maintained under '/dev/disk' # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info # /dev/mapper/vg00-OS-linux_root / xfs defaults 0 0 UUID=cc749f07-ad72-49e8-ab19-ec6532f5e9 /boot xfs defaults 0 0 /dev/mapper/vg00-OS-linux_var /var xfs defaults 0 0 /dev/mapper/vg00-OS-linux_swap swap swap defaults 0 0 /dev/sdc /data/sdc ext4 defaults,noatime 0 0 /dev/sdb /data/sdb ext4 defaults,noatime 0 0 /dev/sde /data/sde ext4 defaults,noatime 0 0 /dev/sdf /data/sdf ext4 defaults,noatime 0 0 /dev/sdd /data/sdd ext4 defaults,noatime 0 0 /dev/sdg /data/sdb ext4 defaults,noatime 0 0 /dev/sdg /data/sdg ext4 defaults,noatime 0 0 /dev/sdh /data/sdh ext4 defaults,noatime 0 0 /dev/sdi /data/sdi ext4 defaults,noatime 0 0 /dev/sdj /data/sdj ext4 defaults,noatime 0 0 /dev/sdk /data/sdk ext4 defaults,noatime 0 0 /dev/sdl /data/sdl ext4 defaults,noatime 0 0 We want to create a simple verification that finds conflicts in the first field or the second field of the fstab file. What is the best syntax for this purpose? The verification should find duplicate words in the first field or in the second field (the syntax should be as short as possible). Expected output: fail / ok (and in case of fail, it should print all duplicated words from the first field / second field).
awk '!/^#/ && NF {
    if (seendev[$1]++) { print; ++rc }
    if (seenmnt[$2]++) { print; ++rc }
}
END { exit rc }' /etc/fstab

The above awk script prints any line whose column 1 (device) or column 2 (mount point) has already been seen, and exits with a non-zero return code if any duplicates are found. (The NF guard skips blank lines, which would otherwise be reported as duplicates of each other.)
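If you want the literal fail / ok verdict the question asks for, a short wrapper around sort | uniq -d works too. A sketch (note it pools both fields into one duplicate check, which is fine for fstab since device specs and mount points never share names; the check_fstab name is just illustrative):

```shell
# Print "ok", or "fail" followed by every word duplicated in field 1 or
# field 2 of the given fstab-style file; exit non-zero on conflicts.
check_fstab() {
    dups=$(awk '!/^#/ && NF { print $1; print $2 }' "$1" | sort | uniq -d)
    if [ -n "$dups" ]; then
        printf 'fail\n%s\n' "$dups"
        return 1
    fi
    echo ok
}
```

Calling check_fstab /etc/fstab prints the verdict and returns non-zero on conflicts, so it can gate the rest of a script.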
how to find conflicts in fstab file
1,490,340,173,000
I have a Virtual Machine setup on azure. I see basic disk was getting full so I attached disk to it and mounted /home/user/mydata folder to the new disk. I forgot to add configuration in fstab. My VM got restarted recently and after restart, I did manual mount but disk is not freeing after mount command. /dev/sda1 29G 28G 0 100% / none 4.0K 0 4.0K 0% /sys/fs/cgroup udev 3.4G 8.0K 3.4G 1% /dev tmpfs 697M 396K 697M 1% /run none 5.0M 8.0K 5.0M 1% /run/lock none 3.5G 0 3.5G 0% /run/shm none 100M 0 100M 0% /run/user none 64K 0 64K 0% /etc/network/interfaces.dynamic.d overflow 1.0M 0 1.0M 0% /tmp /dev/sdb1 281G 63M 267G 1% /mnt /dev/sdc1 1007G 118G 838G 13% /home/user/mydata
Mounting a disk over /home/user/mydata does NOT remove anything from the existing /home/user/mydata. It just 'covers up' the directory with the other disk. If you want to reclaim the disk space from /home/user/mydata, you need to manually delete/move those files to the new disk before mounting.
Disk space full not changing after mount folder on other disk
1,490,340,173,000
I have a dual boot system (Windows 10/Archlinux) and I have created a NFTS partition which is mounted at startup via /etc/fstab so I can access it from both OS'es. The fstab file shows that the partition is mounted with read and write permissions (rw) with user_id=0 and group=0, both values related to the root user, followed by the option allow_other which lets my regular user access the mounted file system. When files or folders are created by the regular user (non root) into the mounted partition those are created as if they were owned/created by root as shown by ls -l command. Even if I try to use chmod, the permissions are unaffected and no errors are shown. I've also tried changing in /etc/fstab both user_id and group_id to 1000, corresponding to the non-root user and reloading entries with sudo mount -av. After that, I created a file in the mounted partition but it keeps showing the root as the user owner. I suspect the issue could be the fstab configuration, but I'm not sure. Next I'll share some info related to the configuration of the partition inside the fstab file and the before mentioned commands and its outputs: /etc/fstab/ UUID=B23A2CB93A2C7C8B /mnt/Contenido ntfs-3g rw,nosuid,nodev,user_id=0,group_id=0,allow_other,blksize=4096 0 0 $ cd /mnt/Contenido $ whoami > joao $ touch random_file $ ls -l > -rwxrwxrwx 1 root root 0 Feb 19 16:45 random_file $ sudo chmod -v 700 random_file > mode of 'random_file' changed from 0777 (rwxrwxrwx) to 0700 (rwx------) $ ls -l > -rwxrwxrwx 1 root root 0 Feb 19 16:45 random_file >
As Chris Davies said in the comments section, the answer is in man ntfs-3g, in the Access Handling and Security section, to be precise. I could do ntfs-3g -o uid=1000,gid=1000,umask=700 /dev/nvme0n1p6 /mnt/Contenido/ to get my regular user as the owner. I don't know if it is worth mentioning, but it is important to rebuild ntfs-3g with integrated FUSE support so the regular user can mount the file system, as explained in the Arch Wiki.
Files created by user in a mounted partition show root as owner
1,490,340,173,000
I am having an issue where I can mount my disks manually using the mount command. I than add the disks to fstab. Once I restart: sda1 points at the correct mount point (/mnt/da), but the rest do not. Please help, I am out of ideas. Server setup: 2 x nvme drives in software raid1 10x 16tb drives without raid, stand alone disks (had to remove raid from these after initial setup) OS: Debian 12 xfs file system Using UUID, I get from the blkid command, to add the devices to fstab Tried: Manually mounted the disks Tried mounting one disk at a time Tried adding one disk at a time to fstab and reloading Tried adding all the disks to fstab and reloading df -h root@data7 ~ # df -h Filesystem Size Used Avail Use% Mounted on udev 63G 0 63G 0% /dev tmpfs 13G 896K 13G 1% /run /dev/md2 875G 1013M 829G 1% / tmpfs 63G 0 63G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock /dev/md1 989M 66M 873M 7% /boot /dev/sdb1 15T 104G 15T 1% /mnt/db /dev/sdd1 15T 104G 15T 1% /mnt/dc /dev/sda1 15T 104G 15T 1% /mnt/da tmpfs 13G 0 13G 0% /run/user/0 lsblk root@data7 ~ # lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 14.6T 0 disk └─sda1 8:1 0 14.6T 0 part /mnt/da sdb 8:16 0 14.6T 0 disk └─sdb1 8:17 0 14.6T 0 part /mnt/db sdc 8:32 0 14.6T 0 disk └─sdc1 8:33 0 14.6T 0 part sdd 8:48 0 14.6T 0 disk └─sdd1 8:49 0 14.6T 0 part /mnt/dc sde 8:64 0 14.6T 0 disk └─sde1 8:65 0 14.6T 0 part sdf 8:80 0 14.6T 0 disk └─sdf1 8:81 0 14.6T 0 part sdg 8:96 0 14.6T 0 disk └─sdg1 8:97 0 14.6T 0 part sdh 8:112 0 14.6T 0 disk └─sdh1 8:113 0 14.6T 0 part sdi 8:128 0 14.6T 0 disk └─sdi1 8:129 0 14.6T 0 part sdj 8:144 0 14.6T 0 disk └─sdj1 8:145 0 14.6T 0 part sdk 8:160 0 57.7G 0 disk nvme0n1 259:0 0 894.3G 0 disk ├─nvme0n1p1 259:1 0 4G 0 part │ └─md0 9:0 0 4G 0 raid1 [SWAP] ├─nvme0n1p2 259:2 0 1G 0 part │ └─md1 9:1 0 1022M 0 raid1 /boot └─nvme0n1p3 259:3 0 889.3G 0 part └─md2 9:2 0 889.1G 0 raid1 / nvme1n1 259:4 0 894.3G 0 disk ├─nvme1n1p1 259:5 0 4G 0 part │ └─md0 9:0 0 4G 0 raid1 [SWAP] ├─nvme1n1p2 259:6 0 1G 0 part 
│ └─md1 9:1 0 1022M 0 raid1 /boot └─nvme1n1p3 259:7 0 889.3G 0 part └─md2 9:2 0 889.1G 0 raid1 / blkid GETTING UUID root@data7 ~ # blkid | g sda /dev/sda1: UUID="cea5e8d9-1ddf-4502-a609-3a17af37082c" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="d0a89050-f533-7245-877e-5006d974516c" root@data7 ~ # blkid | g sdb /dev/sdb1: UUID="e3ae1145-d37b-41d7-ac1f-5c6a646bd5ed" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="da1f85c1-9054-d147-a9b8-0965020b4d67" root@data7 ~ # blkid | g sdc /dev/sdc1: UUID="47bf4e70-ec50-4369-88c2-9dfd8dd5d422" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="f458a5c3-6b6c-054d-a069-27930dcb02f2" fstab /etc/fstab - WAS CORRECT BEFORE RESTART .. CHANGED IT 5+ TIMES AND BREAKS AFTER EVERY RESTART proc /proc proc defaults 0 0 # /dev/md/0 UUID=e2f568f6-846b-4657-b88d-3c8108d5600c none swap sw 0 0 # /dev/md/1 UUID=a5868f6d-b7e5-43b1-ab81-4770a543d83a /boot ext3 defaults 0 0 # /dev/md/2 UUID=612c81e1-94e4-415e-863f-6dfcbe127dee / ext4 defaults 0 0 # /dev/sda1 UUID=cea5e8d9-1ddf-4502-a609-3a17af37082c /mnt/da xfs defaults 0 2 # /dev/sdb1 UUID=e3ae1145-d37b-41d7-ac1f-5c6a646bd5ed /mnt/db xfs defaults 0 2 # /dev/sdc1 UUID=2b28f001-d9a0-4759-8f29-4bf45a18aeb6 /mnt/dc xfs defaults 0 2 My Other Servers (in response to the comment that device names dont persist across reboots) root@data2:~$ df -h Filesystem Size Used Avail Use% Mounted on udev 126G 0 126G 0% /dev tmpfs 26G 1.3G 24G 6% /run /dev/sda3 5.5T 4.9T 246G 96% / tmpfs 126G 0 126G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 126G 0 126G 0% /sys/fs/cgroup /dev/sda2 923M 79M 781M 10% /boot /dev/sde1 5.5T 5.0T 236G 96% /mnt/de /dev/sdf1 5.5T 4.1T 1.2T 79% /mnt/df /dev/sdd1 5.5T 4.7T 468G 92% /mnt/dd /dev/sdc1 5.5T 5.1T 49G 100% /mnt/dc /dev/sdb1 5.5T 5.1T 74G 99% /mnt/db tmpfs 26G 0 26G 0% /run/user/1000 tmpfs 26G 0 26G 0% /run/user/0 root@data3:~# df -h Filesystem Size Used Avail Use% Mounted on udev 126G 0 126G 0% /dev tmpfs 26G 2.5G 23G 10% /run /dev/sda3 5.5T 4.2T 1.1T 81% / tmpfs 126G 0 126G 0% /dev/shm tmpfs 5.0M 0 
5.0M 0% /run/lock tmpfs 126G 0 126G 0% /sys/fs/cgroup /dev/sda2 923M 79M 781M 10% /boot /dev/sdc1 11T 11T 660G 95% /mnt/df /dev/sdf1 11T 9.2T 1.8T 84% /mnt/de /dev/sdd1 11T 9.8T 1.2T 90% /mnt/dc /dev/sde1 11T 11T 191G 99% /mnt/dd /dev/sdb1 11T 11T 855G 93% /mnt/db tmpfs 26G 0 26G 0% /run/user/1001 tmpfs 26G 0 26G 0% /run/user/0 root@data4:~# df -h Filesystem Size Used Avail Use% Mounted on udev 126G 0 126G 0% /dev tmpfs 26G 2.5G 23G 10% /run /dev/sda3 11T 11T 249G 98% / tmpfs 126G 0 126G 0% /dev/shm tmpfs 5.0M 0 5.0M 0% /run/lock tmpfs 126G 0 126G 0% /sys/fs/cgroup /dev/sda2 923M 80M 781M 10% /boot /dev/sdc1 11T 9.3T 1.7T 85% /mnt/dc /dev/sdd1 11T 9.0T 2.0T 82% /mnt/dd /dev/sdb1 11T 9.8T 1.2T 90% /mnt/db tmpfs 26G 0 26G 0% /run/user/1002 tmpfs 26G 0 26G 0% /run/user/1000 tmpfs 26G 0 26G 0% /run/user/1005 tmpfs 26G 0 26G 0% /run/user/0
The device names are not guaranteed to be consistent across reboots... you are using the wrong vocabulary. There are 6 ways to mount disks in Linux, as shown under /dev/disk; this is using RHEL-7.9... by-id/ by-label/ by-partlabel/ by-partuuid/ by-path/ by-uuid/ by-id is a scsi identifier or wwn (world-wide-number) identifier. The unreliable (or inconsistent) way is mounting by device name, which for example is doing just /dev/sdb1 /data in /etc/fstab; that is what you had been doing and are incorrectly referring to as device IDs. The IDs are consistent, the device names (which are sda, sdb, sdc and so on) are not. You will see that everything under those six /dev/disk/ folders are links that point up and above to the device name (/dev/sda2 for example). As mentioned in the comments, device names get mapped out after boot in the order the devices are recognized. Adding a new disk connected by SATA cable does not put it at the end of the sda, sdb, sdc... list. It often gets put at the front as sda and then everything shifts down, which is how the inconsistency comes about. Simply swap two disks and the SATA ports they are connected to on the motherboard - same issue. by-uuid is very consistent - hence the name universally unique id. by-id, as in the scsi id or wwn id, should also be very reliable; you will often find the wwn on the label on the disk. by-label should be reliable, up until you do something like label multiple disks (partitions actually) with the same label name. I think by-path is inconsistent, for the reason that if disks get connected to different SATA/SAS ports then that is now a different path.
Disks mount points inconsistent after each reboot. Using UUID in fstab
1,490,340,173,000
When Linux boots, does it first read fstab and mount everything from it, or does it start systemd before that? I expect that fstab comes first, but I didn't know how to confirm it. So, even if you know the answer, please tell me where you learned it yourself so that I can inform myself better before coming to this forum. Particularly, I want to mount tmpfs on /var/log and, as I could deduce, all these logs are accessed and written after systemd started some services. I want to be sure it is mounted before any program tries to access it. I know this could be understood as a duplicate of this question, but there I kind of repurposed it so, in absence of better ideas, I simply asked again. This time with a clear statement.
When Linux boots, does it first read fstab and mount everything from it, or does it start systemd before that? Systemd is what mounts everything from it. Linux on its own doesn't know what fstab is; it lets the init system handle the entire system bringup. Usually the init system will start essential services first, followed by fstab, followed by the rest of the system. Systemd, however, does most things in parallel – it has several broad stages, but mainly relies on services specifying explicit dependencies on what they actually need. For example, services and mounts may actually be started in parallel. But if a service defines that it needs /var/log, it's guaranteed to be started only after /var/log is mounted. I want to be sure it is mounted before any program tries to access it. If you want to be sure, tell systemd to make sure. It's a dependency-based system and you can literally tell it that service A depends on mount B. So if one of your services requires this location, add Requires= and After= to its service unit accordingly – either for the specific mount, or for the "target" that groups all local fstab entries. [Unit] [Unit] Requires=sys-log.mount Requires=local-fs.target After=sys-log.mount After=local-fs.target (I assume you meant /var/log, though, not /sys/log? There is no /sys/log within the sysfs. And if your /sys is something else than the sysfs, then you shouldn't be asking this question...)
Which comes first, fstab or /var/log?
1,682,801,791,000
When I use lsof as a regular user, I get the following warning:

lsof: WARNING: can't stat() tmpfs file system /home/testuser/.cache

testuser is another user on my system, and my own user has no access to the tmpfs filesystem mounted at /home/testuser/.cache. I suspect lsof found in /etc/fstab (or in /proc/mounts) that this tmpfs exists, tries to search it, and fails for lack of permissions on the other user's home:

$ grep /home/testuser/.cache /proc/mounts
tmpfs /home/testuser/.cache tmpfs rw,nosuid,nodev,noexec,noatime,size=4194304k,mode=700,uid=1001,gid=1001 0 0

Anyway, how can I suppress these warnings, tell lsof not to search the paths of other users, or do something else that would get rid of this warning?
You can disable warnings with -w: lsof -w
lsof: WARNING: can't stat() tmpfs file system
1,682,801,791,000
I have an inotify-based service that backs up my LAN's git directory to Dropbox. I tried keeping the git directory in Dropbox, but I have multiple git clients, so I often ended up with error files there. In this early stage of development, this is a fairly busy and chatty system service that I want to log to a RAM drive. I don't want to use /tmp because other applications depend on having space there. To create the RAM drive, in my fstab I have this:

tmpfs /mnt/ram tmpfs nodev,nosuid,noexec,nodiratime,size=1024M 0 0

I need to be sure that the RAM drive is mounted before the backup service starts, so I want to put a condition on the service that delays its start. I see suggestions that people use the corresponding *.mount unit as a precondition, but I don't see any file in /lib/systemd/system that gives me the name of the unit I need. How can I identify this mount? Is there another approach?
On Arch, at least, systemd mount units generated from /etc/fstab are placed under /run/systemd/generator. For example, on my system, given the listing below, I can add this to my service file:

[Unit]
Description=backup logging to temp
After=mnt-ram.mount

ls -la /run/systemd/generator
total 32
drwxr-xr-x  5 root root 260 Jun 20 17:01 .
drwxr-xr-x 22 root root 580 Jun 21 04:40 ..
-rw-r--r--  1 root root 362 Jun 20 17:01 -.mount
-rw-r--r--  1 root root 516 Jun 20 17:01 boot.mount
drwxr-xr-x  2 root root 120 Jun 20 17:01 local-fs.target.requires
drwxr-xr-x  2 root root  80 Jun 20 17:01 local-fs.target.wants
-rw-r--r--  1 root root 168 Jun 20 17:01 mnt-3T.automount
-rw-r--r--  1 root root 515 Jun 20 17:01 mnt-3T.mount
-rw-r--r--  1 root root 168 Jun 20 17:01 mnt-4T.automount
-rw-r--r--  1 root root 515 Jun 20 17:01 mnt-4T.mount
-rw-r--r--  1 root root 260 Jun 20 17:01 mnt-ram.mount
-rw-r--r--  1 root root 349 Jun 20 17:01 mnt-sda.mount
drwxr-xr-x  2 root root  80 Jun 20 17:01 remote-fs.target.requires
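As a cross-check, the unit name for a given mount point can be derived mechanically: strip the leading slash and turn the remaining slashes into dashes (systemd-escape -p --suffix=mount handles the general case, including special characters). A minimal shell sketch of the simple case, with a helper name I made up:

```shell
# map a mount point to its systemd mount unit name (simple paths only;
# prefer `systemd-escape -p --suffix=mount <path>` when in doubt)
path_to_unit() {
  printf '%s.mount\n' "$(printf '%s' "${1#/}" | tr '/' '-')"
}

path_to_unit /mnt/ram    # prints: mnt-ram.mount
```

This confirms that /mnt/ram corresponds to mnt-ram.mount, the name used in the After= line above.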
systemd service to start after mount of ram drive
1,682,801,791,000
I've tried to mount my new partitions by editing /etc/fstab, but an error appears. When I do it with the mount command instead, all works fine and I can mount them. What's wrong?

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-XVpMjvfuIwUMG9eeZN2E09sODMkxF3I8j6u3WkZegGllXAx08ZPZROjo66HKfnG8 / ext4 defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/2d747eec-1c31-4c12-849c-efe362e3245e /boot ext4 defaults 0 1
/swap.img none swap sw 0 0
UUID=a6c59d0e-37a7-4532-b843-6025dabef69f /mnt/sdb1 ext4 default 0 2
UUID=b12d193a-6d04-4cbb-a8da-d8405b38dae0 /mnt/sdb2 btrfs default 0 2
UUID=55165d2b-f3b5-46b2-af04-7366861c82b6 /mnt/sdb3 xfs default 0 2
UUID=1B74-0C7D /mnt/sdb4 vfat default 0 2

user@ubuntu2:~$ sudo mount -a
mount: /mnt/sdb1: wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error.
mount: /mnt/sdb2: wrong fs type, bad option, bad superblock on /dev/sdb2, missing codepage or helper program, or other error.
mount: /mnt/sdb3: wrong fs type, bad option, bad superblock on /dev/sdb3, missing codepage or helper program, or other error.
mount: /mnt/sdb4: wrong fs type, bad option, bad superblock on /dev/sdb4, missing codepage or helper program, or other error.

user@ubuntu2:~$ sudo lsblk -f
sdb
├─sdb1 ext4  a6c59d0e-37a7-4532-b843-6025dabef69f
├─sdb2 btrfs b12d193a-6d04-4cbb-a8da-d8405b38dae0
├─sdb3 xfs   55165d2b-f3b5-46b2-af04-7366861c82b6
└─sdb4 vfat  1B74-0C7D
You have a typo in the options: it's defaults for the default set of mount options, not default. There is no such option as default, so mount fails because of it. Note, if you see a similar error from mount in the future, you should always check the log; the kernel will print additional information. In this case you should see something like:

xfs: Unknown parameter 'default'
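With the typo fixed, the four entries from the question would read:

```
UUID=a6c59d0e-37a7-4532-b843-6025dabef69f /mnt/sdb1 ext4  defaults 0 2
UUID=b12d193a-6d04-4cbb-a8da-d8405b38dae0 /mnt/sdb2 btrfs defaults 0 2
UUID=55165d2b-f3b5-46b2-af04-7366861c82b6 /mnt/sdb3 xfs   defaults 0 2
UUID=1B74-0C7D                            /mnt/sdb4 vfat  defaults 0 2
```

After saving, sudo mount -a should mount all four without errors.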
I can't mount partitions via /etc/fstab, the error appears
1,682,801,791,000
Let’s give some context to the issue:

There is a /foo/bar directory in read-write mode.
There is a /bar bind mount that points to /foo/bar.
In /foo/bar there is a baz directory that has to be in read-only mode (both as /foo/bar/baz and as /bar/baz).

In order to make /foo/bar/baz read-only I do this additional bind:

$ sudo mount -o bind,ro /foo/bar/baz /foo/bar/baz
$ sudo touch /foo/bar/baz/test
touch: cannot touch '/foo/bar/baz/test': Read-only file system
$ mount | grep bar
/dev/vda1 on /bar type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /foo/bar/baz type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /bar/baz type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)

P.S. There are mounts for both /foo/bar/baz and /bar/baz, but the latter is not read-only; /bar/baz is writable:

$ sudo touch /bar/baz/test
$ echo $?
0

Trying to make another bind:

$ sudo mount -o bind,ro /bar/baz /bar/baz
$ mount | grep bar
/dev/vda1 on /bar type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /foo/bar/baz type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /bar/baz type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /bar/baz type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /foo/bar/baz type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /foo/bar/baz type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /bar/baz type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)

What really confuses me: why are there 3 identical mounts for /bar/baz now?
There was one, and after a single bind I get three:

/dev/vda1 on /bar/baz type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /bar/baz type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /bar/baz type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)

Two are read-write and one read-only. Which one takes priority? It seems the read-only one:

$ sudo touch /bar/baz/test
touch: cannot touch '/bar/baz/test': Read-only file system

But it's not the last one; the read-only one is in the middle. And why are there now 3 mounts for /foo/bar/baz? There was just the read-only one and now I have 3:

/dev/vda1 on /foo/bar/baz type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /foo/bar/baz type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /foo/bar/baz type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)

One read-only and two read-write. Which takes priority? Turns out not the read-only one, since it's writable:

$ sudo touch /foo/bar/baz/test
$ echo $?
0

Why does this happen? And how do I avoid it? What is the correct way to make both /foo/bar/baz and /bar/baz read-only at the same time?
The explanation part is perfectly covered in aviro’s answer: https://unix.stackexchange.com/a/689950/513617

I found a good solution that doesn’t add any extra read-write mounts: the remount option.

mount -o bind /foo/bar /bar
mount -o bind,ro /foo/bar/baz /foo/bar/baz

Then take the existing read-write /bar/baz propagated bind and remount it:

mount -o bind,ro,remount /foo/bar/baz /bar/baz

After that you get only these:

$ mount | grep bar
/dev/vda1 on /bar type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /foo/bar/baz type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/vda1 on /bar/baz type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)
Linux: How to preserve read-only mode for layered bind mounts
1,682,801,791,000
I am on Linux Mint 20, and I used the Disks GUI to mount my four NTFS partitions at their mount points. They are mounted indeed, and I can create or delete files (no rubbish bin though, complete deletion only), but everything is owned by root, and I'm not able to change the permissions. I have searched about this and found similar cases, with a lot of different suggested solutions. Being a beginner, and those hard drives being precious (I'm backing them up at the moment), I don't dare try all these solutions without making sure they apply to my case. Here is the content of my /etc/fstab regarding those four hard drives:

/dev/disk/by-uuid/9A50DD2D50DD10BD /mnt/foo auto nosuid,nodev,nofail,x-gvfs-show 0 0
/dev/disk/by-uuid/7EEE2DE6EE2D9803 /mnt/bar auto nosuid,nodev,nofail,x-gvfs-show 0 0
/dev/disk/by-uuid/26E81EF7256571FE /mnt/baz auto nosuid,nodev,nofail,x-gvfs-show 0 0
/dev/disk/by-uuid/52EEB3D0EEB3AA9D /mnt/qux auto nosuid,nodev,nofail,x-gvfs-show 0 0

What should I change to make sure I own those mounted directories and their content? Many thanks
If you want yourself specifically to be the owner, try replacing nosuid with gid=<group id>,uid=<user id>. If you're the only user, the gid and uid are both likely to be 1000; you can check both by calling id. NTFS doesn't have directly UNIX-compatible permissions, so when you mount an NTFS partition, I believe it takes on the permissions of whoever mounted it. If you sudo mount (or, in this case, have the kernel mount them from /etc/fstab), the owner gets set to root unless you specify otherwise.
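Assuming id reports uid and gid 1000, the first entry from the question would become something like this (the other three follow the same pattern):

```
/dev/disk/by-uuid/9A50DD2D50DD10BD /mnt/foo auto uid=1000,gid=1000,nodev,nofail,x-gvfs-show 0 0
```

After editing, remount with sudo mount -o remount /mnt/foo (or sudo umount /mnt/foo && sudo mount /mnt/foo) for the new ownership to take effect.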
Mounted partition has the wrong owner
1,682,801,791,000
I mounted a new ext4 storage volume to my server after I had already installed an application that primarily uses /home. This application needs to take advantage of the additional storage, so I want to remount the volume so that it's used by the /home directory. Can anyone confirm my steps below?

umount -v /mnt/volume_nyc1_01
# Edit /etc/fstab
# Replace the second field of the mountpoint's entry with /home
mount -av

I appreciate the feedback. I'm asking in hopes of avoiding messing up my system by overlooking important considerations.
I assume that /mnt/volume_nyc1_01 is the mountpoint for the new ext4 volume, and that there is a line in /etc/fstab mounting this volume on /mnt/volume_nyc1_01. The steps you mention are not technically wrong, but if you follow them you'll end up with an empty /home directory: since it's a new ext4 filesystem, only lost+found will be there. The steps I follow in such cases are:

1. Stop any service, daemon or app using /home. lsof | grep "/home" can help you with this.
2. Leave /etc/fstab as is for the time being and copy all data from /home to /mnt/volume_nyc1_01. I'd use this command: sudo rsync -aHAXS /home/* /mnt/volume_nyc1_01
3. After everything is successfully copied, proceed with the steps you described. The new volume will be mounted as /home and will include all your data.

If everything is up and running, at some point in the future you could unmount the new volume from the /home mountpoint, delete the files in the /home directory (which will still be there), and reclaim space on your first volume. Be cautious: /home is still a directory on the first volume, which is also used as the mountpoint for the second volume. So deleting files from the /home directory without deleting files from the second volume is possible, provided you umount the second volume first. If all this seems complicated, just leave the old files there.
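Before switching fstab over, it's worth verifying that the copy is complete. A throwaway demonstration of the idea using temporary directories (in the real scenario the two paths would be /home and /mnt/volume_nyc1_01, and the copy step would be the rsync command above):

```shell
# build a tiny stand-in for /home, copy it, then verify the two trees match
mkdir -p /tmp/home_demo/user /tmp/vol_demo
echo data > /tmp/home_demo/user/file
cp -a /tmp/home_demo/. /tmp/vol_demo/          # stand-in for the rsync step
diff -r /tmp/home_demo /tmp/vol_demo && echo "trees match"
```

diff -r exits non-zero and lists any differing or missing file, so an empty diff plus "trees match" means it is safe to proceed with the fstab change.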
Remounting /home in a new volume
1,682,801,791,000
I’m a semi-experienced Linux admin who’s trying to figure out how to automount an external hard drive on my Linux box (version 2.6.16.13-4-smp; it’s an older box, I know). I can manually mount the drive just fine:

me@linux:/> mount /dev/sdc1 /media/Seagate
me@linux:/>

meaning I mount the device located at /dev/sdc1 on the directory /media/Seagate. This works just great. When I do this, I see the following:

me@linux:/> df -h
Filesystem Size Used Avail Use% Mounted on
...other stuff...
/dev/sdc1 917G 13G 858G 2% /media/Seagate
me@linux:/>

Trouble is, I need the machine to do this automatically whenever it reboots. I’ve Googled around and learned a bit about the /etc/fstab file. To that end, I’ve added this line at the end of my /etc/fstab:

/dev/sdc1 /media/Seagate ext3 defaults 0 2

The previous admin had left a commented-out line in /etc/fstab which once worked. I cloned it, edited it for my external HD, and then let ‘er rip. The “ext3” comes from that line; I’m not sure what it does. The “defaults” and “0 2” were suggested as the simplest implementation by a few tutorials I found online. So when I rebooted my machine with the above line in /etc/fstab, the machine did not successfully reboot. When I checked the monitor, there were a number of error messages, including:

Waiting for /dev/sdc1
error on stat() /dev/sdc1: No such file or directory
fsck.ext3: No such file or directory while trying to open /dev/sdc1
/dev/sdc1: The superblock could not be read or does not describe a correct ext2

A photo of the full monitor screen is below. I’m not sure what’s going on here, but it looks like my Linux box tries to mount the external HD, the HD is not available, so the Linux box does not successfully boot? But the HD is plugged in at the time of reboot. If I remove the one line I added and reboot again, the system comes up fine, but then I have to manually mount the HD. So… any idea what’s going on here? Thank you.
Check your /etc/fstab file. The last number on each line is fs_passno. If that is set to a non-zero value (your line ends with 2), then a successful boot requires that fsck run and complete successfully on the given device. With that /dev/sdc1 line in your /etc/fstab, the device needs to be present during boot; otherwise boot will be halted, as shown by your picture. Change that final number to 0 to allow the /dev/sdc1 line to be present in fstab without the USB device being plugged in, or remove that line entirely from fstab when the USB device is not plugged in, or have the USB device plugged in and able to pass fsck if you are going to keep a non-zero value at the end of the line. I'm not sure whether the value of fs_passno has different effects across Linux distributions, other than 0 meaning don't fsck. Also, you are mounting by name, with /dev/sdc1 in the first column of fstab. I strongly recommend mounting by any other means, preferably by UUID. Mounting by name is not robust: your external disk might be sdc now but can easily become something else after any hardware change, causing other problems.
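Putting both suggestions together (reference the disk by UUID, set fs_passno to 0), the fstab line could look like this. The UUID here is a placeholder; find the real one with blkid or ls -l /dev/disk/by-uuid/:

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /media/Seagate ext3 defaults 0 0
```

With fs_passno 0, a missing drive no longer triggers the boot-time fsck that halted the machine, and the by-UUID reference survives the device being renumbered away from sdc.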
fstab prevents successful reboot, how to automate mounting external HD?
1,682,801,791,000
I'm trying to set up an sshfs mount in fstab for persistent mounting of a network directory that has to be accessed via an SSH tunnel. My .ssh/config looks like this:

Host A
    Hostname outer.server
    User <user>
    IdentityFile /home/<user>/.ssh/id_rsa
    ForwardAgent yes

Host B
    Hostname inner.server
    User <user>
    IdentityFile /home/<user>/.ssh/id_rsa
    ProxyCommand ssh -q A "nc %h %p"
    ForwardAgent yes

This works fine:

sshfs B:/home/<user>/ /mnt/B

This fstab entry does not:

sshfs#B:/home/<user>/ /mnt/B fuse.sshfs defaults,idmap=user,allow_other,reconnect,_netdev,users 0 0

and when mount -a is run after updating fstab to put the changes into effect, it returns:

read: Connection reset by peer

I'd welcome any suggestions as to why the fstab version might not be working.
Root does not see your per-user ssh configuration file. You need to place it in root's home directory (/root/.ssh/config) or in the system-wide configuration file /etc/ssh/ssh_config, assuming the authentication key is not encrypted (does not have a passphrase). You can also save a lot of trouble by throwing away the netcat and using the -W switch for I/O forwarding directly in SSH, or, if you have a new enough OpenSSH, you can use just the ProxyJump option (see the manual for details). Also remove the ForwardAgent yes: you do not need it for anything, and it just exposes your private keys to the server.
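Putting that advice together, a root-readable config (e.g. /root/.ssh/config) using ProxyJump instead of the nc ProxyCommand might look like the sketch below. Host and user names are carried over from the question; it assumes the key is accessible to root (copied to /root/.ssh/ here) and has no passphrase:

```
Host A
    Hostname outer.server
    User <user>
    IdentityFile /root/.ssh/id_rsa

Host B
    Hostname inner.server
    User <user>
    IdentityFile /root/.ssh/id_rsa
    ProxyJump A
```

With this in place, the fstab entry can keep referring to B, since mount (running as root) will now resolve the alias.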
sshfs in fstab connection reset with ssh tunnel in ~/.ssh/conf when 'manual' command works fine
1,682,801,791,000
When I shut down or restart the machine (a VM), it won't log in to the system. I get the following error and then a continuous black screen. This is my fstab; I don't know why uid=1000 is wrong. The user with uid=1000 is the first user I created when I installed Ubuntu; its username is "laura".

# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=9097fce3-f6e5-4708-9460-f40cf134d868 / ext4 auto,rw,nosuid,errors=remount-ro 0 1
# /backup was on /dev/sda4 during installation
UUID=a7f3aa62-0c91-491d-b2c6-237c6376f526 /backup ext4 auto,rw,uid=1000,noexec,defaults 0 2
# /home was on /dev/sda3 during installation
UUID=2892df69-b043-4087-bfe9-dc8acd17bfc5 /home ext4 auto,rwdefaults 0 2
# swap was on /dev/sda2 during installation
UUID=7cf3a7d1-f9a3-484c-8a34-2ccd5df333d4 none swap sw 0 0
The filesystem to be mounted on /backup is declared as type ext4, and there is no uid= mount option for ext4 filesystems. Either the filesystem is not ext4, in which case the 3rd field of the fstab entry needs to be changed to the correct filesystem type, or the uid= option needs to be removed. (Note also that the /home line has auto,rwdefaults, which is missing a comma: it should be auto,rw,defaults.)
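If the filesystem really is ext4, the fixed /backup line is the question's entry minus the invalid option:

```
UUID=a7f3aa62-0c91-491d-b2c6-237c6376f526 /backup ext4 auto,rw,noexec,defaults 0 2
```

Ownership of an ext4 tree is stored in the filesystem itself, so after mounting, the equivalent of uid=1000 is a one-time chown of the mountpoint contents to the intended user.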
Can't enter in Ubuntu because fstab error
1,682,801,791,000
Today we noticed a very strange issue. The following partition (from /etc/fstab) is defined with "2" at the end of the line, which means that fsck should be activated during boot. Since the data on that partition is 24T, a reboot should take a couple of hours or more, but the reboot actually takes 4 minutes!

From /etc/fstab:

UUID=7eab43c-41ba-1331-8ab7-a538326a5b8e /BD_APP xfs rw,noatime,inode64,allocsize=16m 1 2

How do we explain this strange thing? Why was fsck not run during boot?
According to man fsck.xfs:

XFS is a journaling filesystem and performs recovery at mount(8) time if necessary, so fsck.xfs simply exits with a zero exit status.

As the man page suggests, if you want an actual check, try xfs_repair. There is no need to fsck, as recovery is performed at mount time.
fsck not performed during boot in spite fstab configured correctly
1,682,801,791,000
I have an Ubuntu server. Via cloud-init I create partitions. When I restart my server, it won't come up again. I am sure I am missing one command to tell the system which partition should be used for booting. Before partitioning, sda1 was the boot disk and used an MBR.

root@source ~ # cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=3f234dd2-63e6-4676-8ef3-0cde83e52484 / ext4 discard,errors=remount-ro 0 1
/dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0

root@source ~ # parted -l
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 20.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 20.5GB 20.5GB primary ext4 boot

root@source ~ # fdisk -l
Disk /dev/sda: 19.1 GiB, 20480786432 bytes, 40001536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x02d71cad
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 40001502 39999455 19.1G 83 Linux

After partitioning, sda1 should stay the boot disk and should use GPT. But when I call parted -l or fdisk -l, the boot flags don't show up:
root@source ~ # parted -l
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 20.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 5121MB 5120MB ext4
2 5121MB 20.5GB 15.4GB xfs

root@source ~ # fdisk -l
Disk /dev/sda: 19.1 GiB, 20480786432 bytes, 40001536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8D6B03D7-1A3B-4BFC-8F8F-64EEF049CB9E
Device Start End Sectors Size Type
/dev/sda1 2048 10002431 10000384 4.8G Linux filesystem
/dev/sda2 10002432 40001502 29999071 14.3G Linux filesystem

root@source ~ # cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=3f234dd2-63e6-4676-8ef3-0cde83e52484 / ext4 discard,errors=remount-ro 0 1
/dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
/dev/sda1 / auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
/dev/sda2 /data_disk auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2

Here is my cloud-config, which works:

#cloud-config
resize_rootfs: false
disk_setup:
  /dev/sda:
    table_type: 'gpt'
    layout:
      - 25
      - 75
    overwrite: true
fs_setup:
  - label: root_fs
    filesystem: 'ext4'
    device: /dev/sda
    partition: sda1
    overwrite: true
  - label: data_disk
    filesystem: 'xfs'
    device: /dev/sda
    partition: sda2
    overwrite: true
runcmd:
  - [ partx, --update, /dev/sda ]
  - [ partprobe ]                          # afaik partx and partprobe do the same
  - [ parted, /dev/sda, set, 1, on, boot ] # <<-- set boot flag here
  - [ mkfs.xfs, /dev/sda2 ]                # format second partition with xfs
mounts:
  - ["/dev/sda1", "/"]           # mount boot disk on /
  - ["/dev/sda2", "/data_disk"]  # mount data_disk

What am I missing? Do I have to tell fstab something more?
I see you have changed the partitioning type from MBR to GPT. Is your firmware in legacy/CSM/BIOS mode, or did you also change the firmware type to UEFI? In any case, you will need to reinstall your bootloader. If you are using BIOS mode (not UEFI), you will need to add a GRUB BIOS boot partition, because the sectors that were used for storing GRUB stage 1.5 are now occupied by the GPT. If you are using UEFI firmware, you will need to add a FAT-formatted EFI System Partition (ESP) for the firmware to boot from.
How to boot after partitioning
1,682,801,791,000
In my Fedora system I have an additional HDD with a partition mounted as /media/dilnix/data, which contains most of my huge files sorted in folders like "Music", "Downloads", "Video" etc. Those folders are the targets of symlinks in my home folder, like /home/dilnix/@Video to /media/dilnix/data/Video, /home/dilnix/@Downloads to /media/dilnix/data/Downloads, etc. The last two entries of my fstab are the following:

UUID=355ba039-6126-4c36-ba6a-8ff4f2ee79e8 /media/dilnix/data ext4 defaults,noatime,user 1 2
UUID=24dd893c-07dd-4f52-85c5-066773f74c0f /home ext4 defaults,noatime 1 2

The problem is that when I try to run some application or script from the "Downloads" folder (or deeper), I get an error like the following:

bash: ./mktool: permission denied

Permissions of the files, for example the script I used:

[dilnix@localhost mktool-master]$ ll -Z
загалом 36
drwx------. 3 dilnix dilnix unconfined_u:object_r:user_home_t:s0  4096 чер  8  2015 .
drwxrwxr-x. 3 dilnix dilnix unconfined_u:object_r:user_home_t:s0  4096 січ 16 11:38 ..
-rwxr-xr-x. 1 dilnix dilnix unconfined_u:object_r:user_home_t:s0 18448 чер  8  2015 mktool
-rw-rw-r--. 1 dilnix dilnix unconfined_u:object_r:user_home_t:s0   612 чер  8  2015 README.md
drwx------. 2 dilnix dilnix unconfined_u:object_r:user_home_t:s0  4096 чер  8  2015 tools
[dilnix@localhost mktool-master]$ getfacl mktool
# file: mktool
# owner: dilnix
# group: dilnix
user::rwx
group::r-x
other::r-x

What am I missing in my configuration to make my additional folders work as part of my home? I tried temporarily disabling SELinux, but that's not the reason, because the error continues to appear.
From man mount, the user mount option implies noexec:

user
    Allow an ordinary user to mount the filesystem. The name of the mounting user is written to the mtab file (or to the private libmount file in /run/mount on systems without a regular mtab) so that this same user can unmount the filesystem again. This option implies the options noexec, nosuid, and nodev (unless overridden by subsequent options, as in the option line user,exec,dev,suid).

So you could remove the user option, or change the mount options to something like defaults,noatime,user,exec,suid.
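Since later options override earlier ones, keeping user but adding exec after it restores the ability to run binaries. The data-partition line from the question would become:

```
UUID=355ba039-6126-4c36-ba6a-8ff4f2ee79e8 /media/dilnix/data ext4 defaults,noatime,user,exec 1 2
```

After remounting (sudo mount -o remount /media/dilnix/data), ./mktool should run again through the symlinked path.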
Why permission denied from folder that is a symlink of a home's subfolder?
1,682,801,791,000
I have several external hard drives that I want to mount at the same point: /media/ext_hd. So I have this in my fstab:

# EXTERNAL HDS
LABEL=Elements /media/ext_hd ntfs-3g defaults,user,noauto 0 0
LABEL=olddata /media/ext_hd auto rw,user,noauto 0 0
LABEL=Seagate%202T /media/ext_hd auto rw,user,noauto 0 0
UUID=335F-0049 /media/ext_hd auto rw,user,noauto 0 0

I would like to just type "mount /media/ext_hd" and have mount find which label or UUID matches whatever is currently connected, and mount that. Instead, it balks that the label "Elements" (the first entry) can't be found. Mount doesn't appear to search for a best match. Am I missing something? That would seem like a useful feature.
As far as I'm aware, mount doesn't scan past the first match. One thing you could do (should consider?) is to set up udev rules that create the same symlink for all your NTFS disks under /dev; then a single line in fstab will do for any/all of them.
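A sketch of that udev approach (the rule file name and symlink name are made up; the labels are from the question): each disk, whatever its label, gets the same /dev/ext_hd symlink, and a single fstab line mounts that symlink.

```
# /etc/udev/rules.d/99-ext-hd.rules
SUBSYSTEM=="block", ENV{ID_FS_LABEL}=="Elements", SYMLINK+="ext_hd"
SUBSYSTEM=="block", ENV{ID_FS_LABEL}=="olddata",  SYMLINK+="ext_hd"

# /etc/fstab replacement for the four lines
/dev/ext_hd /media/ext_hd auto rw,user,noauto 0 0
```

Labels containing spaces (like "Seagate 2T") are exposed with escapes in ID_FS_LABEL_ENC, so that disk would need a rule matching the encoded form; run sudo udevadm control --reload and replug the drive to test.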
Will mount search fstab for a best match?
1,682,801,791,000
I have both Ubuntu and Arch installed on my computer, and want to change the labels of the / and /home partitions (four in total) to make it clear which is which. Can this potentially break anything? The only thing I can think of is /etc/fstab; this shouldn't be an issue in my case, since it defines partitions by UUID, not label.
Hard drives usually don't have labels; it's filesystems that do. Here are the main places where a filesystem label is likely to come up:

In /etc/fstab.

In your bootloader configuration (e.g. /boot/grub/grub.cfg). If your GRUB configuration is automatically generated, run update-grub after changing your labels and verify that the result is what you wanted.

Mostly for removable devices: in the configuration of automounting tools (in custom udev rules, as a directory name under /media or /run/media/user_name (if not created on the fly), in /etc/pmount.*, in /etc/auto.misc and files referenced from /etc/auto.master, etc.).
Is there any danger in changing the labels of my hard drives?
1,682,801,791,000
I have an fstab with a read-only root fs and also a read-write /var mounted on a USB reader with a µSD card in it. Sometimes at boot time the system fails to mount /var. It looks like the system cannot find the partition on the µSD. My best guess is that the USB reader might be failing or not being enumerated in time. In this case, the system goes to emergency mode. If I reboot, it will just boot fine and /var will be mounted fine. I was wondering if there is a way to force a reboot if any of the mount points in fstab fail to mount, instead of going to emergency mode. I looked at fstab options and systemd mount options, but I couldn't find anything. BTW, I do not want to use nofail, because I need /var to be mounted.

/etc/fstab:
PARTUUID=00e91e3a-01 /boot vfat defaults,ro 0 2
PARTUUID=00e91e3a-02 / ext4 defaults,noatime,ro 0 1
PARTUUID=90ddf375-01 /var btrfs defaults,x-systemd.mount-timeout=30s,x-systemd.device-timeout=30s 0 0

Boot error photo:
The emergency shell is started by the emergency.service unit. If you want different behavior, you can override the ExecStart value for this unit by placing an override file in /etc/systemd/system/emergency.service.d. E.g. something like:

mkdir -p /etc/systemd/system/emergency.service.d
cat > /etc/systemd/system/emergency.service.d/override.conf <<EOF
[Service]
ExecStartPre=
ExecStart=
ExecStart=/usr/bin/systemctl reboot
EOF

(We're overriding ExecStartPre here because the default behavior is to wait for plymouth, the boot UI, to exit; but if we're going to reboot, there's no point in doing that, since we won't be interacting with anybody at the console.)
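A possible alternative, sketched here as an untested assumption: instead of changing the global emergency behavior, attach OnFailure= to the mount unit generated for /var (var.mount) via a drop-in, so that only this particular mount failing triggers a reboot:

```
# /etc/systemd/system/var.mount.d/override.conf
[Unit]
OnFailure=reboot.target
```

OnFailure= is a generic [Unit] directive, so it is accepted on mount units too; whether it fires before the default emergency handling takes over is worth verifying on the target system.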
Force reboot if `fstab` mount fails instead of going to emergency mode
1,682,801,791,000
My /etc/fstab:

UUID=12345abcdef /data xfs defaults 0 0

I thought the 0 0 part, or one of those fields, meant skip disk checking, so that during boot, if the disk wasn't there, the OS would be OK with it and continue. In RHEL 8.8 I have that entry in /etc/fstab, but I have manually removed that disk from the system. Red Hat first does the systemd wait on whatever for 1m30s, then drops to "hit Ctrl-D for maintenance or enter root password". I enter the root password, comment out that entry in /etc/fstab, and reboot, and things are fine. Is there a way to configure RHEL 8 so it doesn't pause for the minute and 30 seconds and does not drop to maintenance? Can it just print a boot message, finish booting, and not delay for more than 2 seconds?
Yeah, add ,nofail to mount options.
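Applied to the entry from the question, it's a one-word change. A demonstration on a throwaway copy (the real edit goes into /etc/fstab with an editor, followed by systemctl daemon-reload):

```shell
# add nofail so a missing /data device no longer blocks boot or drops to maintenance
printf 'UUID=12345abcdef /data xfs defaults 0 0\n' > /tmp/fstab.demo
sed -i 's|^\(UUID=12345abcdef[[:space:]]*/data[[:space:]]*xfs[[:space:]]*defaults\)|\1,nofail|' /tmp/fstab.demo
cat /tmp/fstab.demo
```

With nofail, systemd no longer treats the mount as required for boot, so the 1m30s device wait and the maintenance prompt both go away when the disk is absent.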
RHEL 8 hangs on boot missing disk in fstab
1,682,801,791,000
When configuring an automount via fstab, I made a mistake defining the filesystem type: instead of ext4 I configured it as ntfs. I assume that whenever I copied files to the share, they actually went to my system drive instead of the configured hard drive. I noticed it when my system drive was full, just after I had started copying to my brand-new high-capacity hard drive. So I freed up some space and reconfigured fstab, and the new hard drive now works as expected. The problem I have now is that the system still somehow assumes the old files are occupying space. In the Disk Usage Analyzer, the main view tells me my drive has 420 GB used, but when I run the analysis with the tool, it only shows that about 150 GB are used. The difference correlates in size with the data on the other hard drive, which I copied while fstab was configured wrong. Now I am unable to find the data occupying space on my hard drive, so that I can free the space up again. How can I fix this?
Unmount the filesystem in question (the one you made the mistake with) and check under the mountpoint. With high probability the files are there, in the mountpoint directory itself. They use disk space, but while the mount is active you can't see them (they are effectively hidden beneath the mount).
data occupying space but cannot be found
1,682,801,791,000
How should I configure fstab to mount a specific file only? The file is in a directory that contains other directories and files besides the file I intend to mount.
You don't mount files, you mount filesystems on directories. The one exception is a bind mount, which can graft a single file onto another existing file: mount --bind /path/to/source/file /path/to/target/file, or in fstab: /path/to/source/file /path/to/target/file none bind 0 0 (the target file must already exist). Otherwise, you can mount the whole filesystem containing the file somewhere else and use e.g. a symbolic link (as created using ln -s) to "alias" it into the place you want to have it. (This still sounds like a bit of a "strange" problem; maybe really ask about the problem you're solving by trying to mount a single file, or a directory without subdirectories!)
How to configure fstab to mount a specific file only? [duplicate]
1,682,801,791,000
I just installed the newest version of Manjaro ARM on an SD card for my Raspberry Pi 4. Now I am trying to permanently mount two directories located on my Synology NAS (via the network, of course) using fstab. On all my other systems (including older Manjaro versions), this works with the very same two lines in my fstab (here just one for example):

//192.168.1.61/inventory /mnt/DS_216/inventory cifs users,vers=3.0,credentials=/mnt/DS_216/216credentials,uid=1000,gid=1000,workgroup=WORKGROUP,noauto,nofail,x-systemd.automount,x-systemd.device-timeout=0,_netdev 0 0

However, this time it didn't seem to work at first, until I found out that it actually did work, but with one major problem: it delays pretty much exactly 1 minute and 40 seconds before it actually mounts, although everything else is already loaded. When I open a terminal just after boot, I see just a black screen, and it takes the said amount of time until the username@hostname$ line appears. If I press Ctrl+C in this timespan, it appears instantly. Looking at the journalctl output, you can kind of see what is happening, although it doesn't make a lot of sense: it says that the network couldn't be configured, but if I ping anything just after boot, it works fine. Also, shouldn't all of the stuff that is happening here, according to journalctl, come AFTER fstab has been processed fully? As I said, this never happened on older Manjaro versions. What am I missing here? Did I misunderstand the journalctl output? Or is there some delay hardcoded into systemd that I don't know about? Thanks in advance to anyone who can help me. Here is the relevant part of the journalctl output:

Jun 03 19:49:54 Raspi4 systemd[1]: mnt-DS_216-FamilyTransfer.automount: Got automount request for /mnt/DS_216/FamilyTransfer, triggered by 880 (silver)
Jun 03 19:49:55 Raspi4 guake.desktop[733]: Guake initialized
Jun 03 19:49:57 Raspi4 org.moson.matray.desktop[682]: matray started.
Jun 03 19:49:57 Raspi4 matray[682]: gtk_widget_get_scale_factor: assertion 'GTK_IS_WIDGET (widget)' failed
Jun 03 19:49:59 Raspi4 kernel: cam-dummy-reg: disabling
Jun 03 19:50:16 Raspi4 systemd[413]: Started Application launched by gsd-media-keys.
Jun 03 19:50:16 Raspi4 gsd-media-keys[909]: Sending 'toggle' message to Guake3
Jun 03 19:50:16 Raspi4 gnome-shell[478]: Window manager warning: Buggy client sent a _NET_ACTIVE_WINDOW message with a timestamp of 0 for 0x60000c
Jun 03 19:50:16 Raspi4 gnome-shell[478]: Window manager warning: Buggy client sent a _NET_ACTIVE_WINDOW message with a timestamp of 0 for 0x60000c
Jun 03 19:50:18 Raspi4 gnome-shell[478]: updates_checker.vala:71: check updates
Jun 03 19:50:18 Raspi4 kernel: logitech-hidpp-device 0003:046D:1028.0006: HID++ 1.0 device connected.
Jun 03 19:50:18 Raspi4 upowerd[562]: treated changed event as add on /sys/devices/platform/scb/fd500000.pcie/pci0000:00/0000:00:00.0/0000:01:00.0/usb1/1-1/1-1.3/1-1.3.2/1-1.3.2:1.2/0003:046D:C52B.0003/0003:046D:1028.0006/power_supply/hid>
Jun 03 19:50:19 Raspi4 guake.desktop[733]: Spawning new terminal at /home/benedikt
Jun 03 19:50:19 Raspi4 systemd[413]: Started VTE child process 950 launched by guake process 733.
Jun 03 19:50:20 Raspi4 gnome-shell[478]: updates_checker.vala:101: 0 updates found
Jun 03 19:50:20 Raspi4 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jun 03 19:50:21 Raspi4 systemd[1]: systemd-localed.service: Deactivated successfully.
Jun 03 19:50:46 Raspi4 geoclue[579]: Service not used for 60 seconds. Shutting down..
Jun 03 19:50:46 Raspi4 systemd[1]: geoclue.service: Deactivated successfully.
Jun 03 19:50:48 Raspi4 dbus-daemon[430]: [session uid=1000 pid=430] Activating via systemd: service name='org.gtk.vfs.Metadata' unit='gvfs-metadata.service' requested by ':1.11' (uid=1000 pid=478 comm="/usr/bin/gnome-shell")
Jun 03 19:50:48 Raspi4 systemd[413]: Starting Virtual filesystem metadata service...
Jun 03 19:50:48 Raspi4 dbus-daemon[430]: [session uid=1000 pid=430] Successfully activated service 'org.gtk.vfs.Metadata'
Jun 03 19:50:48 Raspi4 systemd[413]: Started Virtual filesystem metadata service.
Jun 03 19:51:34 Raspi4 systemd-networkd-wait-online[289]: Timeout occurred while waiting for network connectivity.
Jun 03 19:51:34 Raspi4 systemd[1]: systemd-networkd-wait-online.service: Main process exited, code=exited, status=1/FAILURE
Jun 03 19:51:34 Raspi4 systemd[1]: systemd-networkd-wait-online.service: Failed with result 'exit-code'.
Jun 03 19:51:34 Raspi4 systemd[1]: Failed to start Wait for Network to be Configured.
Jun 03 19:51:34 Raspi4 systemd[1]: Reached target Network is Online.
Jun 03 19:51:34 Raspi4 systemd[1]: Mounting /mnt/DS_216/FamilyTransfer...
Jun 03 19:51:34 Raspi4 systemd[1]: Starting Samba NMB Daemon...
Jun 03 19:51:34 Raspi4 kernel: FS-Cache: Netfs 'cifs' registered for caching
Jun 03 19:51:34 Raspi4 kernel: Key type cifs.spnego registered
Jun 03 19:51:34 Raspi4 kernel: Key type cifs.idmap registered
Jun 03 19:51:34 Raspi4 kernel: CIFS: Attempting to mount \\192.168.1.61\FamilyTransfer
Jun 03 19:51:34 Raspi4 systemd[1]: Mounted /mnt/DS_216/FamilyTransfer.
Somehow, NetworkManager and systemd-networkd were both running, so they were blocking each other, and therefore a two-minute timeout of systemd-networkd-wait-online had to expire before the fstab mounts were processed. This also explains why a network connection was there all the time. Disabling the networkd wait service and sticking with NetworkManager (systemctl disable systemd-networkd-wait-online.service) fixed all my problems.
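If someone wants to confirm the same diagnosis, here is a quick sketch. The excerpt file path and its contents are illustrative (taken from the journalctl lines in the question); the actual fix commands are left commented because they need root and only make sense on the affected machine:

```shell
# Save the suspect lines from `journalctl -b` to a file (sample excerpt below),
# then check whether the wait-online timeout is what delayed the mounts.
cat > /tmp/journal-excerpt.txt <<'EOF'
Jun 03 19:51:34 Raspi4 systemd-networkd-wait-online[289]: Timeout occurred while waiting for network connectivity.
Jun 03 19:51:34 Raspi4 systemd[1]: Failed to start Wait for Network to be Configured.
Jun 03 19:51:34 Raspi4 systemd[1]: Mounted /mnt/DS_216/FamilyTransfer.
EOF
grep -c 'Timeout occurred while waiting for network connectivity' /tmp/journal-excerpt.txt

# A non-zero count points at the wait-online timeout. Check whether both
# network managers are enabled at once (the underlying conflict), then
# disable the wait service for the one you do not use:
#   systemctl is-enabled NetworkManager systemd-networkd
#   sudo systemctl disable systemd-networkd-wait-online.service
```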
fstab mount is delayed
1,682,801,791,000
I've been working through these two tutorials, in the hopes of setting up a Raspberry Pi to run Owncloud and Resilio Sync in tandem. First, I installed Owncloud and gave it access to a mounted external HDD as recommended in the guide, and it works, but trying to "merge" privileges for the external HDD so that both it and Sync have read/write access to the directory has proven frustrating (i.e. it doesn't work on Sync's end). Sync is owned by the "pi" user. Owncloud is owned by the "www-data" user. I tried chown-ing Sync to run as "www-data" but that had no positive effect. Here's my current fstab entry: UUID=[UUID HERE] /mnt/ownclouddrive auto nofail,uid=33,gid=33,umask=0027,dmask=0027,noatime 0 0 Also, it appears I need to fstab the drive in order for Owncloud to work. What am I overlooking? I'm a novice to Linux, so any help is appreciated. Thank you!
Change the fstab to ...gid=www-data,umask=0007,dmask=0007... to allow group access to the drive. Read man umask. Then, add user pi to the www-data group: sudo adduser pi www-data. Read man adduser. Logout and login - groups are set up at login time.
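Concretely, the fstab line and group setup described above could look like this. The UUID is a placeholder, and the mutating commands are left commented since they need root. The printf line just demonstrates that a umask of 0007 clears only the "other" bits, leaving mode 770 (full access for owner and group):

```shell
# /etc/fstab entry (placeholder UUID) giving group www-data full access:
# UUID=xxxx-xxxx /mnt/ownclouddrive auto nofail,uid=33,gid=www-data,umask=0007,dmask=0007,noatime 0 0

# umask=0007 masks only the "other" permission bits: 0777 & ~0007 = 0770
printf '%o\n' $(( 0777 & ~0007 ))

# Add user pi to the www-data group, then log out and back in:
# sudo adduser pi www-data
```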
How to share an fstab'd external HDD between users "www-data" and "pi"?
1,682,801,791,000
Can multiple instances of Unit= exist in a systemd.path or systemd.timer unit? Or, must one instead specify multiple instances of the path or timer unit, each with a single instance of Unit=? I haven't been able to find or derive any guidance elsewhere. The former obviously is easier. The specific application is to have a path unit activate two mount units. In particular, the path unit monitors a virtual machine's log file, which is quiet until the VM runs. The mounts are of shares on the virtual machine and are defined in the host's fstab entries, each of which uses the x-systemd.requires= mount option to specify the path unit, so that the mounts don't occur until the virtual machine is running. This works well with a single share. So, the more specific questions are (a) whether the path unit knows to simply propagate the mount units as instructed, leaving the mount units to mount the shares, or gets confused and can only propagate a single mount unit; or (b) whether calling the same path unit twice in fstab creates conflicts or errors when the path unit has many Unit= directives (i.e., by re-creating all the mount points specified) or simply is an expression of a dependency. Many thanks.
man systemd.timer says:

    Unit=
        The unit to activate when this timer elapses. The argument is a unit name, whose suffix is not ".timer". If not specified, this value defaults to a service that has the same name as the timer unit, except for the suffix. (See above.) It is recommended that the unit name that is activated and the unit name of the timer unit are named identically, except for the suffix.

man systemd.path similarly says:

    Unit=
        The unit to activate when any of the configured paths changes. The argument is a unit name, whose suffix is not ".path". If not specified, this value defaults to a service that has the same name as the path unit, except for the suffix. (See above.) It is recommended that the unit name that is activated and the unit name of the path unit are named identical, except for the suffix.

Neither of these suggests that you can have multiple Unit= lines or multiple arguments per Unit= line. Even if you try it and find it works, it's not guaranteed to work in future releases of systemd, because it would be undocumented behaviour. Therefore it's safest to create a single *.path/*.timer for each unit you need to trigger, even if it means near-identical *.path or *.timer units. There are probably already several *.timer units with OnCalendar=daily on your system.

Honestly, it would be a little scary to trigger two independent services when I touch a single path. It invites race conditions. You could consider changing your service to use multiple ExecStartPre= or ExecStartPost= lines to sequence the operations, ensuring they always happen in a deterministic order.
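As a sketch of that last suggestion — sequencing both mounts inside one service instead of pointing one path unit at two mount units — something along these lines could work. The unit names, watched log path, and mount points below are hypothetical placeholders, not taken from the question:

```ini
# vm-shares.path — fires when the VM's log file appears (hypothetical path)
[Path]
PathExists=/var/log/libvirt/qemu/myvm.log
Unit=vm-shares.service

[Install]
WantedBy=multi-user.target

# vm-shares.service — mounts both shares in a deterministic order;
# the mount points are assumed to have matching /etc/fstab entries
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/mount /mnt/vm/share1
ExecStartPost=/usr/bin/mount /mnt/vm/share2
```

The alternative is simply two path units, identical except for their Unit= line, each triggering one mount unit directly.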
Multiple Instances of Unit= in Path or Timer Unit?