1,682,801,791,000 |
I have following disk structure:
sda1 : Windows
sda2 : an old Linux distribution
sda3 : a new Linux distribution
sda4 : data partition
I have grub installed and I choose the system at boot time. I have been using only the new Linux distribution on sda3, and it is working all right. I had been tinkering a little with the /etc/fstab file so that it mounts the CD-ROM and the data partition at boot time.
I recently saw that the /etc/fstab file in new Linux system (sda3) looks like this:
/dev/sda2 / ext4 errors=remount-ro 0 1 #NOTE THIS ENTRY HAS SDA2!
/dev/sda4 /media/me_user/datapart ext4 defaults 1 1
/dev/sr0 /media/cdrom0 auto ro,user,noauto,unhide 0 0
It seems that the root entry is wrong: it should have been /dev/sda3 (I must have changed it by mistake). However, the system works all right, and when I boot, the home folder is on sda3, not on sda2.
I tried removing the root entry line from /etc/fstab. Then, on booting, I am left on a terminal prompt asking me to login. I can still login but graphics do not start.
I have corrected the fstab file so that the root entry is for sda3, but I want to be clear about this issue: why was my system working all right, with the home folder on sda3, when the root entry in /etc/fstab was for sda2?
|
/etc/fstab does not directly control which filesystem is mounted as root. (Which makes sense. You have to mount a root filesystem before you can read /etc/fstab.)
The root filesystem is typically specified in the kernel command line parameters. If you run cat /proc/cmdline to inspect them, you'll probably see root=/dev/sda3 or root=UUID=<uuid of /dev/sda3>.
These parameters are generally configured in the bootloader configuration. The details here are dependent on the distribution you're using, but assuming you're using grub you'll probably find its config in /boot/grub/grub.cfg or /boot/grub2/grub.cfg. If this config is correct, then you should end up mounting the right root filesystem.
So why did your boot fail when you removed / from /etc/fstab? Part of the system start-up process remounts / with the options specified in /etc/fstab, and this is probably what failed.
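Two quick checks make this visible (findmnt is part of util-linux and should be present on most distributions); the first prints the device actually mounted as /, the second the parameters the bootloader passed:

```shell
findmnt -no SOURCE /   # the device really mounted as the root filesystem
cat /proc/cmdline      # look for the root= parameter here
```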
| System booting all right despite wrong root entry in /etc/fstab |
1,682,801,791,000 |
I originally had a Linux swap partition on my computer, which I have removed. When I then tried to boot, I would get the error
ERROR: resume: hibernation device 'UUID=f5eea.....andsoon' not found
which referred to the missing swap partition. So I commented out the line with the corresponding UUID (and which said "swap") in the /etc/fstab file (via a Live USB stick). Now my PC does in fact boot, but for a brief moment during bootup, I still get the same error message, along with the same UUID as before. I don't even know where else this UUID is stored on my computer anymore. What could be going on?
I'm on Manjaro Linux 5.8.18-1
|
The device is most probably referenced in a kernel parameter set by your bootloader.
So you probably have to update either the bootloader info or manually remove that reference from your boot configuration.
This is where Linux distributions differ a lot: Ubuntu/Debian is different, and so are systems using legacy GRUB (like SLES 11) or GRUB 2 (like CentOS 7).
According to this Arch Linux wiki article (Manjaro seems to be an Arch fork), you should check:
/etc/default/grub and the setting of GRUB_CMDLINE_LINUX_DEFAULT
After correcting the settings you should run
sudo grub-mkconfig -o /boot/grub/grub.cfg
This is very similar to changing grub2-settings on other distros.
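For example, if the stale parameter sits in GRUB_CMDLINE_LINUX_DEFAULT, a sed expression like the following strips it; the UUID shown is a made-up placeholder:

```shell
line='GRUB_CMDLINE_LINUX_DEFAULT="quiet resume=UUID=f5eea111-0000-0000-0000-000000000000 splash"'
# remove the resume=UUID=... token, leaving the rest of the line intact
printf '%s\n' "$line" | sed -E 's/ ?resume=UUID=[^" ]+//'
# -> GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
```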
| "Hibernation device not found", although I have updated /etc/fstab |
1,682,801,791,000 |
Here's the problem: I wanted to create a file share between my laptop and my pc at home so that I have access to my files from both machines. The laptop is the server (as I might need the files when on the move) and the pc is the client.
Here's my attempt at a solution: having Linux (Debian) on both machines, I decided to use NFS. Everything worked fine until I rebooted both machines and ran into a catch-22. Ideally, I wanted the server to mount the NFS share automatically, so I added a line in /etc/fstab. However, after rebooting the server, I noticed that I had to re-run exportfs -a to reload my /etc/exports, which looks like this:
/nfs pc(rw,sync,no_subtree_check)
In order to do this, though, pc had to be reachable; otherwise I got this error:
exportfs: failed to resolve pc
So, if pc has to be reachable before laptop is up, that defeats the point of having an /etc/fstab entry on pc, unless I expected very few reboots of laptop, which is not my case.
In short: pc wants laptop to be reachable to mount the NFS automatically via fstab, while laptop wants pc to be reachable in order to assign the correct permissions in exports. Is there a way to run exportfs -a at startup for laptop without having to have pc on?
|
The server shouldn't care if the client is reachable or not. It's just making a share available... whether it's used or not.
The error you are getting, "exportfs: failed to resolve pc", isn't saying it can't "reach" pc; it's saying it can't "resolve" pc. The problem is that you are referencing a specific client by hostname, but the system can't resolve that hostname into an IP address.
Apparently, this is a known bug in the way exportfs works. I haven't looked too deeply into it, but in the cases I saw (simply google the error message), it's a timing problem between when NFS starts and when DNS services become available. There are also problems with DHCP-assigned clients whose DNS records aren't updated fast enough.
Anyway, your problem is a name-resolution one. If "pc" has a static IP address, you could use the address in /etc/exports, or add "pc" to your /etc/hosts file. If either host uses DHCP, you may be out of luck.
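As a sketch of the two options, assuming a made-up static address of 192.168.1.50 for "pc":

```
# /etc/hosts on the laptop
192.168.1.50    pc

# or reference the address directly in /etc/exports
/nfs 192.168.1.50(rw,sync,no_subtree_check)
```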
On the client side ("pc"), you might want to look into using automounter (autofs) to mount the remote filesystem instead of using /etc/fstab. Automounter will only mount the filesystem when you need it, so it's unlikely to hang on a system boot if the NFS server isn't available.
Some references to the problems with exportfs and name resolution:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=860264
https://forums.centos.org/viewtopic.php?t=69981
Lots more results out there show a similar problem.
| Set up file share with automatic mount using NFS |
1,682,801,791,000 |
I have tried using 3 different identifiers to mount my partition at /dev/sdb1 to /mnt/drive2, but every time I boot it doesn't seem to work and I have to mount it manually. I am connected remotely so I don't know console output during the boot process. It is a GPT drive and has only one partition of type ext4.
Here's my /etc/fstab:
The last three lines are my attempts to mount the same partition with PARTUUID, UUID, and the /dev/sdb1 path. None of them worked.
Curiously enough, my NTFS partition mounts successfully.
I'm running Arch Linux.
|
Please run mount -a; if there is any error, it will be printed to the terminal.
Also, after booting your Linux, you can investigate mount errors from the boot process with dmesg | grep "sd[a-z]" and dmesg | grep "mount".
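Since Arch boots with systemd, each fstab line is turned into a .mount unit named after the mount point, and querying that unit usually reveals why a boot-time mount failed (the unit name below is derived from the /mnt/drive2 mount point in the question):

```shell
systemctl status mnt-drive2.mount    # state and last error of the generated unit
journalctl -b -u mnt-drive2.mount    # this boot's log messages for that unit
```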
| gpt ext4 /etc/fstab doesn't work [closed] |
1,682,801,791,000 |
I have a cifs fileshare that I use. My organisation requires that it uses Kerberos so it's mounted with sec=krb5.
This is fine, but I'd like it to mount after login automatically without having to click on the icon on Nautilus.
I can create a .desktop entry in .config/autostart/, but it seems a bit clunky. I was hoping to be able to do it in fstab, but I cannot find the right option. It's currently this:
$FILEPATH $DFS_MOUNT cifs _netdev,sec=krb5,users,rw 0 0
|
This is what works for me in Ubuntu 18.04:
I added an fstab entry for the fileshare directory on the file server:
//server.my.domain.name/directory /mount/point cifs noauto,users,_netdev,sec=krb5
Then I created a shell script with the file extension .sh in /etc/profile.d to mount the directory on login, but only for users who belong to the appropriate domain:
if [[ " $(groups) " =~ ' domain [email protected] ' ]]; then
mount /mount/point >/dev/null
fi
The paths above have been anonymized to protect the guilty ;-}
P.S. If your network takes too long to start, you may need something that takes that into account, such as:
for i in {1..30} # give up if the server isn't reachable within 30 seconds
do
    if ping -c1 server.my.domain.name &> /dev/null; then
        if [[ " $(groups) " =~ ' domain [email protected] ' ]]; then
            mount /mount/point >/dev/null # mount the share once the server responds
        fi
        break # stop retrying once the server has answered
    fi
    sleep 1 # wait a second before the next attempt
done
WARNING: This is untested; use at your own risk!
| mount in fstab with krb5 at login |
1,565,291,835,000 |
I know that a lot of questions have already been asked about emergency mode when booting a Linux distro. (Seemingly Mint, Ubuntu, Redhat all have it.) Does it have documentation? What entity does it belong to (i.e. the Linux kernel, the distribution, the library)? I am just trying to orient myself, and all the information I have been able to find is of the form "do this and it will go away."
Thanks in advance.
It looks like there is possibly more than one emergency mode that can be entered during boot. I am also interested in knowing how to tell which one is which and where to get documentation for it. In my particular case, the Linux Mint symbol appears for a while and then the message "Welcome to emergency mode! After logging in, type 'journalctl -xb' to view system logs... Give root password for maintenance." appears.
|
The most common "emergency mode" is the one entered by your boot system (e.g. GRUB, or the next stage, systemd) when the system cannot set up everything it is supposed to set up: no matching graphics driver for the hardware, a missing partition, failure to mount everything in /etc/fstab, and so on.
The way to deal with the emergency mode is dependent on the stage the system is in, and the specific errors that caused it (it is very important to read all error messages here).
The prompt might tell you which system you are in ("open" prompt: likely GRUB, asking for root password: systemd or some other init variant).
EDIT: Your emergency message prompts you for your root password. This is systemd or a similar init process talking. Please carefully examine the messages that come before this prompt to find out what the problem is.
| How can I find the documentation for "emergency mode" when attempting to boot into Linux (Mint in my case)? |
1,565,291,835,000 |
Every single command I run gives me permission denied as root; this happened shortly after I changed /etc/fstab and remounted ext4.
The only commands I appear to be able to run are echo and cd, not that that is much help; all others I have tried show the following:
bash: /bin/ls: Permission denied
bash: /bin/bash: Permission denied
bash: /bin/mount: Permission denied
bash: /bin/chmod: Permission denied
All running programs are continuing to run. I also cannot SSH in, as that gives permission denied too, so only my existing connection is working. I believe it may be caused by noexec on the root filesystem, but I am unable to run mount to fix it. Also, I would rather not restart if at all possible, and since the change is in fstab it would likely happen again.
I have exhausted all my ideas and searching; neither mount nor chmod has helped, as they are permission denied like almost every other command I have thought of.
|
echo and cd are shell builtin commands, which is why they still "run".
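Because read is a builtin too, you can still inspect files without executing any binaries; for example, this prints the mount table so you can confirm that / is indeed mounted noexec:

```shell
# prints current mounts using only shell builtins and a redirect -- no /bin/* involved
while read -r line; do echo "$line"; done < /proc/mounts
```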
To fix fstab, run the command
while read x; do echo "$x"; done < /etc/fstab
which will display the contents of /etc/fstab, then run
while read x; do echo "$x"; done > /etc/fstab
which will clobber /etc/fstab (a very bad thing) but will allow you to replace it with what you type in at the terminal.
Then proceed to type in (or if you're lucky, copy&paste) the original contents of /etc/fstab, modified so that you can execute stuff on the root filesystem again.
Terminate input with Control-D (or whatever your tty eof character is) then reboot (or reset / power-cycle) the computer.
Rebooting cleanly could prove difficult because the system will want to execute programs to do that, so you might be forced to reset or power-cycle. That may be risky if the buffer cache has not been flushed to disk; the best you can do is give it a bit of time before doing so.
| Linux permission denied |
1,565,291,835,000 |
I use my computer with Debian 9 for Java Spark development.
Spark is a Big Data API, and this kind of work uses more temporary space than the 2 GB that the Debian installation set up by default.
marc@bouleau:/data$ df -h
Sys. de fichiers Taille Utilisé Dispo Uti% Monté sur
udev 15G 0 15G 0% /dev
tmpfs 3,0G 9,7M 3,0G 1% /run
/dev/sda1 23G 9,7G 12G 45% /
tmpfs 15G 29M 15G 1% /dev/shm
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 15G 0 15G 0% /sys/fs/cgroup
/dev/sda5 9,2G 2,3G 6,4G 26% /var
/dev/sda7 1,9G 6,0M 1,7G 1% /tmp
/dev/sdb1 1,8T 85G 1,7T 5% /data
/dev/sda8 171G 77G 85G 48% /home
tmpfs 3,0G 16K 3,0G 1% /run/user/115
tmpfs 3,0G 64K 3,0G 1% /run/user/1000
But currently I can't resize it by using a mount command: the underlying filesystem isn't large enough. It appears that I have to shrink one filesystem and extend another, or give /tmp another mount point.
I've installed gparted, but using it in my GNOME session, it doesn't offer to resize anything. For /dev/sdb1, for example, where I have 1.8T, the resize options show as constants:
minimal size: 1.8T, maximal size: 1.8T
and therefore I can't change anything.
What is happening that makes gparted unable to change my filesystem sizes?
What is the simplest (and safest) way to solve my problem?
The output of lsblk is :
marc@bouleau:/data$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 238,5G 0 disk
├─sda1 8:1 0 23,3G 0 part /
├─sda5 8:5 0 9,3G 0 part /var
├─sda6 8:6 0 30G 0 part [SWAP]
├─sda7 8:7 0 1,9G 0 part /tmp
└─sda8 8:8 0 174G 0 part /home
sdb 8:16 0 1,8T 0 disk
└─sdb1 8:17 0 1,8T 0 part /data
sr0 11:0 1 1024M 0 rom
|
On Linux you cannot change the partitions of a drive while they are mounted. You will need to boot from a live USB [1], and first make more space by shrinking the swap partition from its end (it looks too big anyway unless you have more than 15 GB of RAM); otherwise shrink /home from its start. You can then expand /tmp into the empty space.
With that said, I'm not sure that expanding /tmp is the best solution to your problem. I'm not familiar with Java development or Spark, but maybe you can set a directory in /home to work as temporary space.
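One concrete way to do that without repartitioning, sketched here with made-up paths and job name: Spark's spark.local.dir property redirects its scratch files, so pointing it at the large /data disk sidesteps the small /tmp entirely:

```shell
# hypothetical paths and job jar; spark.local.dir moves Spark's temp files
mkdir -p /data/spark-tmp
spark-submit --conf spark.local.dir=/data/spark-tmp MyJob.jar
```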
| I would like to resize my /tmp partition but gparted doesn't want to change anything, nowhere |
1,565,291,835,000 |
I have a partition structure as follows:
/dev/sdc1 => Partition 1 ( My OS. [ Linux flavour ] )
/dev/sdc2 => Partition 2 ( This contains some data. )
/dev/sdc3 => Partition 3 ( This also contains some data. )
/dev/sdc4 => Partition 4 ( I want this as the deciding partition. )
I am trying to mount partition 2 or partition 3 dynamically depending on the file present in partition 4.
For example:- Partition 2 will be mounted if partition 4 consists of a file named two. Partition 3 will be mounted if partition 4 consists of a file named three.
Note:- These partitions will never be mounted together, i.e. if partition 2 is mounted, partition 3 will not be mounted until partition 2 is unmounted. Thus I can use a common directory for both partitions.
As I have systemd available on my os I can write a startup script which can read from partition 4 and mount the appropriate partition at boot and write the partition record into /etc/fstab.
But to my understanding fstab is a critical file, and if any failure happens or fstab gets corrupted, it is going to stop the system from booting.
Question:
Now what I am trying to achieve is can I add an entry in fstab which will read dynamically partition 4 and add the entry for partition 2 or partition 3 depending on the file that exists in the partition 4.
|
The solution should be systemd-based; you do NOT have to edit /etc/fstab with systemd, so why would you? You just mount the partition, depending on the factors you have outlined, and leave it at that.
I do not understand why you would want to edit /etc/fstab if systemd can mount what you need. Do note that systemd will refuse to boot if an entry in /etc/fstab is not available. This entails that on systems with systemd, /etc/fstab should only be used for boot-essential static filesystems.
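A minimal sketch of such a boot-time script, using the device names from the question (mount points and error handling are illustrative only), which a systemd service could run at startup:

```shell
#!/bin/sh
# Mount partition 2 or 3 under a common directory, chosen by a marker
# file ("two" or "three") on partition 4.
mkdir -p /mnt/decider /mnt/data
mount /dev/sdc4 /mnt/decider
if [ -e /mnt/decider/two ]; then
    mount /dev/sdc2 /mnt/data
elif [ -e /mnt/decider/three ]; then
    mount /dev/sdc3 /mnt/data
fi
umount /mnt/decider
```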
| Can fstab dynamically mount a partition by reading from a file or filename? |
1,565,291,835,000 |
We created the following mount point, with the nfsshare folder as a shared folder, on all our Linux client machines (1,872 machines in total, Red Hat 7.2):
master1:/nfs 41932800 6563840 35368960 16% /nfsshare
I was just thinking: what if, by mistake, one of our users decided to umount /nfsshare?
In that case it would cause a real problem for the application.
So, is it possible to remount automatically if for some reason the folder isn't mounted?
|
As you're using RHEL 7.x, you could use a systemd .automount unit. Just make an entry like this in /etc/fstab:
master1:/nfs /nfsshare nfs defaults,noauto,x-systemd.automount 0 0
mount option noauto disables classic-style mounting at boot time
mount option x-systemd.automount causes systemd-fstab-generator to create an .automount unit in addition to a regular .mount unit. (Note: parallelizable .mount units are the way systemd handles filesystem mounts specified in /etc/fstab, instead of a traditional single-threaded mount -a.)
Make sure the mount-point directory /nfsshare exists; in theory systemd should automatically create it if it does not exist, but right now there is a bug affecting the auto-creation of mount points.
This will auto-mount the specified filesystem on demand: whenever a user process touches /nfsshare, the NFS filesystem will be automatically and transparently mounted if it isn't already.
| is it possible mount automatically if from some reason mount folder isn't mounted |
1,565,291,835,000 |
System: Linux Mint 19.1 Cinnamon.
Disks in this question are considered external HDDs either ext4 or ntfs formatted.
I am interested in how to set up my fstab (or whatever else) to be able to unmount (umount) those external HDDs under my normal user account.
I have:
one External hard disk over USB 3.0 formatted as ext4
one External hard disk over USB 2.0 formatted as ntfs
Relevant parts of my fstab:
UUID=<the UUID of the Ext4 disk drive> /mnt/external-hdd-2tb-usb3-ext4 ext4 nosuid,nodev,nofail 0 0
UUID=<the UUID of the NTFS disk drive> /mnt/external-hdd-500gb-usb2-ntfs ntfs nosuid,nodev,nofail 0 0
|
You need to add the users option to your fstab entries.
Working example on my setup:
UUID=<the UUID of the Ext4 disk drive> /mnt/external-hdd-2tb-usb3-ext4 ext4 nosuid,nodev,nofail,users 0 0
UUID=<the UUID of the NTFS disk drive> /mnt/external-hdd-500gb-usb2-ntfs ntfs nosuid,nodev,nofail,users 0 0
This will allow you (upon reboot) to execute for example:
umount /dev/sdX1
as an ordinary user without sudo.
Additionally, on Linux Mint there is a Disks GUI where you can then even power off those drives (I stress: only once you have unmounted them!) by pressing the Power off this disk button in the top bar, on the right.
| How to set fstab to be able to umount my external HDDs under normal user account? |
1,565,291,835,000 |
I followed a tutorial to secure my /etc/fstab file. This is the part about /var and /tmp:
UUID=XXXX-XXXX-XXXX /var ext4 defaults,nodev,nosuid,noexec 1 2
UUID=ZZZZ-ZZZZ-ZZZZ /tmp ext4 defaults,nodev,nosuid,noexec 1 2
I executed the following commands to test the configuration :
touch /tmp/testFile
chmod u+s /tmp/testFile
I was expecting an error message but got nothing. Is this normal? Is it dangerous?
|
nosuid doesn’t prevent setting the bits; it means that they don’t have any effect. (That way, previously-set bits are also rendered ineffective.)
Setting the bits is only dangerous if the file system is later mounted without nosuid; but if anyone has sufficient access to set those bits on your file system, you’ve lost anyway.
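A minimal demonstration (requires root; the tmpfs mount and path are just for the test): the chmod succeeds, but on a nosuid mount the kernel ignores the bit at execution time:

```shell
mkdir -p /mnt/test
mount -t tmpfs -o nosuid tmpfs /mnt/test
cp /usr/bin/id /mnt/test/id
chmod u+s /mnt/test/id   # succeeds: the bit is stored in the inode
/mnt/test/id             # runs with your normal privileges despite the bit
```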
| nosuid doesn't prevent chmod u+s |
1,565,291,835,000 |
I have mounted a ramdrive via /etc/fstab.
And I would like to inspect the properties of the drive, such as the memory policy (e.g. bind or prefer), to make sure all properties are as expected.
How can I do so?
Thanks!
|
Executing mount without any arguments gives a list of mounted filesystems, including tmpfs, and their properties:
tmpfs on /mountpoint type tmpfs (rw,relatime)
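findmnt (also from util-linux) shows the same data in a filterable form; any tmpfs-specific options from your fstab entry, such as size= or the NUMA memory policy mpol=, appear in the options column if they were set:

```shell
findmnt -t tmpfs          # all tmpfs mounts with their options
grep tmpfs /proc/mounts   # the raw kernel view of the same information
```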
| Inspect a Ramdrive? |
1,565,291,835,000 |
We have a Raspberry Pi located at a location where it may experience frequent power loss. I'm trying to make it scan, and repair (if necessary) a filesystem every time it boots up, in case the power loss causes FS corruption. The filesystem in question is ext4, but it is NOT the root filesystem.
It seems that I can do what I want by using tune2fs -c 1 /dev/sdX#, and setting /etc/fstab's Filesystem Check Order to 2 for that partition. What I'm not sure about is what it does when it detects problems. Does this automatically fix them? Will it stop booting, and wait for someone to confirm that it should fix things?
The Pi is headless - there's no one to confirm anything.
|
You don't need to set "-c 1" on the filesystem. That means "force a full e2fsck run each mount", which would both be annoying (slow boot time), and unnecessary for ext4 with a journal. Even without a journal you don't strictly need to run a full e2fsck if the filesystem has been cleanly unmounted (it will record this into the superblock itself).
By default, if there is a check phase in /etc/fstab then e2fsck will repair the filesystem automatically. Per the e2fsck.8 man page, the default is to run with "-p", though "-y" is more aggressive in fixing problems automatically.
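So for a non-root data partition on a headless Pi, an fstab entry along these lines is usually enough; the UUID is a placeholder and the /data mount point an example, and nofail (assumed supported by your mount) keeps the boot from hanging if the drive is absent:

```
UUID=<uuid-of-data-partition> /data ext4 defaults,nofail 0 2
```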
| Does a filesystem check initiated from /etc/fstab auto-repair? |
1,565,291,835,000 |
I have an external USB drive which is formatted in FAT32. That's the output of the fdisk -l command:
/dev/sdb1 * 56 15728639 15728584 7.5G c W95 FAT32 (LBA)
I have the following entry in my /etc/fstab:
UUID=FAF0-4AE6 /media/usb vfat defaults,auto,rw,users,nofail,x-systemd.automount,x-systemd.device-timeout=1 0 0
I am mounting the drive using sudo mount -a, but then everything is owned by root:root, and I cannot change the ownership of the different directories or copy files from my internal partition to the external USB drive. It gives me:
cp: cannot create regular file ... Permission denied
Are my fstab options correct? Why can't I use my USB flash drive with regular user permissions?
|
Vfat partitions don't support file owners/groups. Thus, the Linux kernel has to fake it. By default, it makes root:root own everything. To change this, add uid=youruser,gid=yourgroup to the mount options. Then, that user and group will own everything instead.
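For example, assuming your user's numeric UID and GID are both 1000 (check with the id command), the fstab line from the question becomes:

```
UUID=FAF0-4AE6 /media/usb vfat defaults,auto,rw,users,nofail,uid=1000,gid=1000,x-systemd.automount,x-systemd.device-timeout=1 0 0
```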
| “cp: cannot create regular file” on a VFAT formatted external USB flash drive |
1,565,291,835,000 |
This one is a bit weird.
We have a closed network of about five (5) Red Hat Enterprise Linux 7 workstations in one of our development laboratories. One of the RHEL 7 machines is hosting a USB-connected DroboPro via NFS to the other machines, which mount this share on boot via /etc/fstab. Everything works great and all users can access the share, unless the machine hosting the NFS share goes down. When that machine is shut down or brought offline, the share is inaccessible (obviously), but the other machines also experience a side effect we can't explain.
If the machine hosting the share is off, and we lock the screen or reboot any of the other four (4) RHEL 7 machines, they lockup/freeze and are inaccessible until the machine hosting the NFS share is brought back online.
We've narrowed the source down to the NFS share by unmounting it on the other four (4) RHEL 7 assets and bringing down the share, which resulted in no locking/freezing.
/etc/exports > /dir/path/ 192.168.100.0/24(rw)
Any insight or recommendation for further troubleshooting would be appreciated.
Thanks.
|
Give this a try: add the following flags to your NFS mount point in /etc/fstab:
bg,intr,soft,timeo=3,retrans=3,actimeo=3,retry=3
Adjust the timeout rates accordingly, but I found this combination works best. Ensure "defaults" is not set on the NFS mount-point line, and read the NFS man pages to see exactly how these options affect your mount point.
| NFS Share Locking Workstations in Closed Network |
1,565,291,835,000 |
I have a raid10 btrfs volume. When I mount it by UUID, the mount fails and I am booted into emergency mode. When I mount it by drive letter (/dev/sdb/) the server boots fine. Why does this happen?
fstab:
/dev/sda2 /boot vfat defaults,noatime 0 2
/dev/sda3 / btrfs discard,ssd,compress=lzo,noatime 0 0
#e1ee5980-c54b-4b6e-82e2-3dbdcee1dd24 /mnt/store btrfs noatime 0 0
/dev/sdb /mnt/store btrfs noatime 0 0
gentooserver ~ # btrfs fi show
Label: none uuid: a782a62a-ffde-49b1-a680-0afeb9cdab0b
Total devices 1 FS bytes used 6.64GiB
devid 1 size 55.77GiB used 13.01GiB path /dev/sda3
Label: none uuid: e1ee5980-c54b-4b6e-82e2-3dbdcee1dd24
Total devices 10 FS bytes used 868.45GiB
devid 1 size 931.51GiB used 174.40GiB path /dev/sdb
devid 2 size 931.51GiB used 174.40GiB path /dev/sdc
devid 3 size 931.51GiB used 174.40GiB path /dev/sdd
devid 4 size 931.51GiB used 174.40GiB path /dev/sde
devid 5 size 931.51GiB used 174.40GiB path /dev/sdf
devid 6 size 931.51GiB used 174.40GiB path /dev/sdg
devid 7 size 931.51GiB used 174.40GiB path /dev/sdh
devid 8 size 931.51GiB used 174.40GiB path /dev/sdi
devid 9 size 931.51GiB used 174.40GiB path /dev/sdj
devid 10 size 931.51GiB used 174.40GiB path /dev/sdk
The actual data on the volume seems to be fine and undamaged. btrfs check returned no errors. systemctl status returned no info on the error.
|
You have the syntax wrong: a bare UUID in the first field is treated as a device path, so that line cannot work.
It should be:
UUID=e1ee5980-c54b-4b6e-82e2-3dbdcee1dd24 /mnt/store btrfs noatime 0 0
| /etc/fstab mount fails when mounted by UUID |
1,565,291,835,000 |
We have a lot of Linux working machines.
All mount points are configured in /etc/fstab
as follows:
/dev/sdc /grd/sdc ext4 defaults,noatime 0 0
/dev/sdd /grd/sdd ext4 defaults,noatime 0 0
/dev/sdb /grd/sdb ext4 defaults,noatime 0 0
/dev/sde /grd/sde ext4 defaults,noatime 0 0
/dev/sdf /grd/sdf ext4 defaults,noatime 0 0
I want to change the /etc/fstab configuration to use UUIDs instead of the current device names.
Can we reconfigure fstab to use UUIDs after the machines have been working for a long time? Is that OK, or is it too late, or risky?
example:
UUID="14314872-abd5-24e7-a850-db36fab2c6a1" /grd/sdc ext4 defaults,noatime 0 0
|
There shouldn't be any issues. If you make changes to your machine configuration (for example, add or replace disks), the device names (/dev/sdX) might change at the next boot; using UUIDs avoids this issue.
Since you use device names to name the mount points (/grd/sdX), those might no longer match the actual device names should they change for any reason.
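For the migration itself, blkid prints each device's UUID, and recent versions of util-linux can check the edited file before you reboot:

```shell
blkid /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf   # UUIDs to paste into fstab
findmnt --verify                                     # sanity-check /etc/fstab syntax
```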
| reconfigure the fstab file with UUID |
1,565,291,835,000 |
I wanted to put the mounting of my external USB storage disk into /etc/fstab so that I have it mounted READ/ONLY.
LABEL=PN /PN ext3 defaults,ro 1 3
My attempt at doing this causes the system to stop at the point in the boot process where the disks are being fsck'd, as it apparently does not see the USB drive yet at that point.
How can I make this happen?
|
Quick & dirty, omit the filesystem checking of the USB drive (changing the 3 to a 0):
LABEL=PN /PN ext3 defaults,ro 1 0
If/when you want to manually fsck the drive, unmount it.
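Alternatively, if your version of mount supports it, the nofail option tells the boot process to carry on when the device hasn't appeared yet, which also avoids the hang:

```
LABEL=PN /PN ext3 defaults,ro,nofail 0 0
```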
| RHEL6 / Centos 6 -- mounting external USB storage at boot time in /etc/fstab |
1,565,291,835,000 |
I managed to run ownCloud on my RaspberryPi 2 on Raspbian. Now I am trying to move the data directory to my NAS.
I already shared a folder on my NAS with CIFS and mounted the folder.
This allows me to access the shared directory via the command line and manipulate entries. So that works. However, when opening ownCloud in my browser, I get the following error message:
Data directory (/home/pi/Cloud/storage) is readable by other users
Please change the permissions to 0770 so that the directory cannot be listed by other users.
So I tried to adjust the permissions in the /etc/fstab file, where I mounted the shared directory. This also worked, but it changes the owner from www-data to pi, with the result that ownCloud does not run at all, since the data directory has to be owned by www-data.
I mounted the shared folder by adding the following line to the /etc/fstab file:
//<NAS-IP>/<sharedFolder> /home/pi/Cloud/storage cifs username=<my username>,password=<my password>,uid=www-data,gid=www-data,dir_mode=770,file_mode=770,umask=0007 0 0
Which results in these permissions:
drwxr-xr-x 2 pi pi 4096 Sep 2 23:15 storage
So the problem is that the data directory can be read by all users, but when I restrict the permissions, it is not owned by www-data anymore.
Does anyone have an idea how to fix this? It seems that I am so close to have ownCloud running, but I can't figure out this last step.
|
It sounds like your NAS supports unix extensions that are overriding your mount settings. The man page for mount.cifs notes that dir_mode, file_mode, uid and gid can be overridden by the server if it supports unix extensions (very likely if it is a Linux based NAS).
If this is the case you might be able to change the permissions on the folder directly. If that doesn't work try mounting with the nounix option to disable the extensions.
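If the unix extensions do turn out to be the cause, the fstab line from the question with nounix added would look like this (umask is dropped because, as far as I can tell, mount.cifs does not support a umask option):

```
//<NAS-IP>/<sharedFolder> /home/pi/Cloud/storage cifs username=<my username>,password=<my password>,uid=www-data,gid=www-data,dir_mode=0770,file_mode=0770,nounix 0 0
```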
| Cannot access mounted shared NAS directory with ownCloud |
1,565,291,835,000 |
I want to comment certain lines in fstab using sed command. The following are the lines I need to comment:
172.0.0.1:/export/project/common /nfs/share nfs4 rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp
172.0.0.1:/export/project/share1 /nfs/shares1 nfs4 rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp
I tried using this command but it didn't work:
sed -i '/172.0.0.1:/export/project/common /nfs/share nfs4 rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp /s/^/#/' /etc/fstab_test
sed -i '/172.0.0.1:/export/project/share1 /nfs/shares1 nfs4 rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp /s/^/#/' /etc/fstab_test
|
Try this,
sed -e '/[/]common/ s/^/#/' /etc/fstab
sed -e '/[/]share1/ s/^/#/' /etc/fstab
Specifying the address /[/]common/ will select only lines that contain /common.
If this works, replace -e with -i to write the changes into the file.
You can do this with awk
awk '/[/]common/{$0="#"$0} 1' /etc/fstab >/etc/fstab.tmp && mv /etc/fstab.tmp /etc/fstab
awk '/[/]share1/{$0="#"$0} 1' /etc/fstab >/etc/fstab.tmp && mv /etc/fstab.tmp /etc/fstab
Specifying /[/]common/{$0="#"$0} will select the lines containing /common and place a # at the beginning of each.
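As a side note, sed also lets you use a different address delimiter (\#...#), so the slashes in the paths need no escaping; that makes it possible to match the full export paths and handle both lines in one invocation (test on a copy of the file first):

```shell
# comment out both lines in place, matching on the exported paths
sed -i -e '\#/export/project/common# s/^/#/' \
       -e '\#/export/project/share1# s/^/#/' /etc/fstab_test
```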
| How do I comment lines in fstab using sed? |
1,565,291,835,000 |
I am trying to setup a fully bootable Arch backup by following: rsync - As a backup utility.
I am having a little trouble understanding the example "Update the fstab".
Assume we are using UUIDs and not /dev/sdaXX style fstab files. Let X-num be the UUID of the original FS partitions (num as a placeholder for partition number) and Y-num be the backup's UUIDs. Would we replace:
UUID=X-1 /boot ext2 defaults 0 2
UUID=X-2 none swap defaults 0 0
UUID=X-3 / ext4 defaults 0 1
UUID=X-4 /home ext4 defaults 0 2
with:
UUID=Y-1 /boot ext2 defaults 0 2
UUID=Y-2 none swap defaults 0 0
UUID=Y-3 / ext4 defaults 0 1
UUID=Y-4 /home ext4 defaults 0 2
Is that correct? I don't understand how, in the article, four rows are replaced with a single row.
|
Yes, you would replace the UUIDs as you think. The backup filesystems all have unique UUIDs, just as the active ones do, so the entries you have in the bootable backup will all be unique. The article you reference presents a simplified example, with expanding it to multiple fstab entries "left as an exercise for the reader".
| Arch linux; changing fstab for bootable backups |
1,565,291,835,000 |
I have run into a problem with auto-mounting NFS exports on a RHEL 6 server. To give a brief description of my configuration and what I have tried: I'm mounting 6 NFS-exported shares from the network. Unfortunately none of the mounts in fstab come up.
The mount directories exist, and are in the fstab file.
I have verified that nfs and netfs are both running at rc3 and the network is up before netfs starts up.
The system is mounting its / (nfs root) from the same network server I am attempting to get the other shares from, so I am 100% sure the network is up and the server is reachable.
fstab is correct since 'mount -a' works as expected once the system is up.
One solution would be to create a script that runs at the end of start-up and calls mount -a, but I really do not want to do that. I have referenced some other 'solutions' found on the internet but they have not worked. Here is a common problem, but it does not apply to my case:
http://www.linuxquestions.org/questions/linux-server-73/nfs-entries-in-etc-fstab-not-mounting-on-boot-546512/
My fstab file (note I added _netdev to two for testing...):
oc:/usr/PET /usr/PET nfs hard,intr,nolock,noatime,_netdev 0 0
oc:/usr/g /oc/usr/g nfs hard,intr,nolock,noatime,_netdev 0 0
oc:/usr/lib /oc/usr/lib nfs hard,intr,nolock,noatime 0 0
oc:/usr/lib32 /oc/usr/lib32 nfs hard,intr,nolock,noatime 0 0
oc:/usr/lib64 /oc/usr/lib64 nfs hard,intr,nolock,noatime 0 0
|
It turns out that the init script for netfs has the following:
[ -f /etc/sysconfig/network ] || exit 0
That file did not exist in my RHEL 6 install, possibly because it was a very minimal install, I'm not sure. Regardless, looking at another machine, I created the file with the following:
NETWORKING=yes
HOSTNAME=localhost.localdomain
Rebooted, and everything worked as expected.
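The file can be recreated in one go with a heredoc (contents modeled on the other machine; the HOSTNAME value is illustrative, and the real target path is /etc/sysconfig/network — a local file name is used here for the sketch):

```shell
# Write the file the netfs init script checks for before doing anything
cat > network.example <<'EOF'
NETWORKING=yes
HOSTNAME=localhost.localdomain
EOF
cat network.example
```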
| NFS mounts in fstab are not mounted during startup on RHEL |
1,565,291,835,000 |
I have 4 drives on my home server: 3 external drives mounted via fstab to three individual directories in /media, and a fourth drive containing a headless Debian OS install. I want to mount a directory in my home folder on the OS drive to a directory in /media, to get easier access to the drive's storage space. Should I set it up in fstab, or use a symlink? What is the best approach?
|
ln -s /home /media/volume
Much safer option security-wise. Won't accidentally overwrite or delete important system files, for instance.
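If a symlink does not suit you (a few programs refuse to follow symlinks), the fstab route the question asks about would be a bind mount; the paths below are illustrative:

```
# /etc/fstab — bind-mount a home subdirectory under /media
/home/me/storage  /media/storage  none  bind  0 0
```

A bind mount makes the same directory tree appear in both places, with no symlink involved.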
| Best practice to mount directory in /home to directory in /media? |
1,565,291,835,000 |
Task: "You have to configure the /home filesystem's mounting method in such a way that all I/O operations will always be done in synchronous mode, without the possibility of using the SUID authorisation. The mount-option changes must be made so that they still remain in place after rebooting. Then you have to remount the filesystem so that the changes are "activated" without the need for a system reboot.
(Hinted commands and directories to be used: mount -o remount, fuser, sync, /proc/mounts, /etc/fstab)"
After long thinking I've only managed to come up with:
[root ~]# mount -o remount,sync,nosuid /dev/mapper/fedora_12345-home
(Where the "/dev/mapper/fedora_12345-home" is the file system found with the "df /home" command.)
But there was no message after this command, so I can't determine whether I did it right.
Did I do (the part of the task) right?
What other commands/modifications to files do I have to do? (And which commands should I use to confirm I've done things right?)
|
Usually mount will print an error if there is one, otherwise it prints nothing.
However if you type the mount command without any options it will print out a description of all mounted filesystems, including the mount options.
Alternatively, you could try creating a setuid binary:
[root@xxxlin01 jad87]# cp /usr/bin/passwd /home/jad87
[root@xxxlin01 jad87]# chmod u+s passwd
[root@xxxlin01 jad87]# ll passwd
-rwsr-xr-x 1 root 90328 30768 Apr 25 06:58 passwd
And seeing if it works on that filesystem. sync might be a little trickier but I would assume that if setuid restrictions are in effect, sync probably is as well.
To make them permanent you have to update /etc/fstab as well.
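For the /etc/fstab part, the /home line would carry the same two options; assuming the filesystem is ext4 (the device path is taken from the question), it could look like:

```
/dev/mapper/fedora_12345-home  /home  ext4  defaults,sync,nosuid  0 2
```

After editing, mount -o remount /home (or mount -a) picks the options up without a reboot, and you can confirm them in /proc/mounts.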
| Several specific configurations to the file system |
1,565,291,835,000 |
I was taking a truability test to assess my skills. I got the below question.
Create a puppet manifest in /root/puppet/lad.pp to mount the filesystem located in /root/files/LAD/disk.ext2 that:
will mount the device at "/mnt/LAD"
sets fstab to prevent the filesystem from being fsck'd and will prevent dump from running on it
mounts the filesystem as ext2 via loopback device
mounts the filesystem as read-writable
I just started with Puppet and I am not quite sure whether what I have is correct. I have the below file.
mount { "/mnt/LAD":
device => "/root/files/LAD/disk.ext2",
fstype => "ext2",
ensure => "mounted",
options => "-o loop",
}
When I run the above Puppet configuration, I get the following error:
err: /Stage[main]//Mount[/mnt/LAD]: Could not evaluate: Execution of '/bin/mount -o -o
loop /mnt/LAD' returned 1: [mntent]: line 13 in /etc/fstab is bad
mount: can't find /mnt/LAD in /etc/fstab or /etc/mtab
The following option works perfectly fine.
mount -o loop /root/files/LAD/disk.ext2 /mnt/LAD
Can someone point out where I am going wrong with the Puppet settings?
|
Try changing
options => "-o loop",
to
options => "loop",
The error shows mount -o -o loop, so you want to get rid of one of the -o arguments: Puppet already passes -o itself, so the options value should not include it.
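Putting the whole task together, a sketch of the manifest: the dump and pass attributes of Puppet's mount type map to the last two fstab fields, and rw covers the read-writable requirement. Treat this as a starting point, not a verified solution:

```puppet
mount { "/mnt/LAD":
  ensure  => "mounted",
  device  => "/root/files/LAD/disk.ext2",
  fstype  => "ext2",
  options => "loop,rw",
  dump    => 0,   # prevent dump from running on it
  pass    => 0,   # prevent the filesystem from being fsck'd
}
```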
| puppet mount a loopback device |
1,565,291,835,000 |
I store my data on a NTFS parition to share it with my Windows. I use the dmask,fmask option in my fstab entry to avoid rwx default permissions to everyone, for coherence, security, and also because my zsh color profile rendering is then ugly when listing.
/dev/sda5 /mnt/Data auto defaults,force,rw,uid=1000,gid=1000,dmask=027,fmask=137 0 0
The problem with f/dmask is that I can't change the related permissions of files once the filesystem is mounted. For example, I have Unix scripts stored there, and when I try a chmod +x, the mask automatically discards any changes, even as root.
I thought about removing masks and automatically executing this kind of script :
chown -R me /mnt/Data
find /mnt/Data -type f -exec chmod 640 {} \;
but it isn't very elegant and can be very slow for large numbers of small nested files and directories. Is there any way, or a mount option unknown (at least to me), to do that?
|
uid gid dmask fmask is mount's way of letting you specify owner, group and access permissions. Either you use it and accept the limitations, or you don't. There are a few options:
Skip dmask fmask and adjust your zsh color profile. This seems like the easiest option.
Keep your mount options and write a sudo-like script to temporarily remount the drive without dmask fmask, do the chmod and remount back with dmask fmask.
Imagine invoking it like:
remount-do /mnt/Data "chmod 755 /my/file"
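The option-rewriting half of such a script is plain string handling and can be sketched as below. The function name strip_masks is made up for this example; the actual remount calls need root and are only hinted at in comments:

```shell
# Remove dmask=/fmask= entries from a comma-separated mount-option string
strip_masks() {
  printf '%s\n' "$1" | tr ',' '\n' | grep -v -e '^dmask=' -e '^fmask=' | paste -sd, -
}

opts='defaults,force,rw,uid=1000,gid=1000,dmask=027,fmask=137'
strip_masks "$opts"

# A remount-do wrapper would then run, as root:
#   mount -o "remount,$(strip_masks "$opts")" /mnt/Data
#   "$@"                        # e.g. chmod 755 /mnt/Data/script.sh
#   mount -o "remount,$opts" /mnt/Data
```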
| How to automatically set permission at automount without mask |
1,565,291,835,000 |
Problem Overview
I recently upgraded my remote server contract with IONOS, increasing my hard drive space from 8GB to 80GB. I have an Ubuntu OS running bash.
I then went about extending my working partition, following a tutorial here:
https://www.ryadel.com/en/resize-extend-disk-partition-unallocated-disk-space-linux-centos-rhel-ubuntu-debian/
All was OK, I wrote a new partition map, then rebooted my system. I waited a minute or two and then attempted to ssh into my server as usual. Problem. My ssh connection hangs, until eventually exiting with a time out.
Solution Attempts
At first, I reasoned the process of rebooting after a partition map change may take some time, and this was the cause of the timeout. After several more ssh attempts, this did not seem likely.
I used a 'KVM Console' provided in my IONOS console - here, the shell sits at an (initramfs) prompt.
In attempting to diagnose the issue, I have tried the following:
Running: fsck /dev/sda1
Result: /dev/sda1: clean, 312/124672 files, 26890/124672 blocks
Running: fsck /dev/sda2
Result: fsck: error 2 (No such file or directory) while executing fsck.ext2 for /dev/sda2
Running: blkid
Result:
/dev/sda1: UUID="longString" TYPE="ext4" PARTUUID="520f1760-01"
/dev/sda2: PARTUUID="520f1760-02"
Running all of the following commands returns sh: <command name>: not found. These are:
vgdisplay -v vg00
parted -l /dev/sda
free -m
cfdisk
lvdisplay -v
fdisk /dev/sda
pvresize /dev/sda2
The output of cat /proc/partitions is:
major minor #blocks name
8 0 83886080 sda
8 1 498688 sda1
8 2 83386368 sda2
11 0 1048575 sr0
From the above, I am confused why (2) returns no such file or directory - the entry sda2 is listed under the directory dev.
The output of cat /proc/cmdline is:
BOOT_IMAGE=/vmlinuz-5.4.0-132-generic root=/dev/mapper/vg00-lv01 ro apparmor=0
After entering lvm and then vgscan -vvv, the output is:
....
Start of output not visible in terminal window due to no scrolling
....
filter caching bad /dev/loop5
Opened /dev/loop6 RO O_DIRECT
/dev/loop6: size is 0 sectors
Closed /dev/loop6
/dev/loop6: Skipping: Too small to hold a PV
filter caching bad /dev/loop6
Opened /dev/loop7 RO O_DIRECT
/dev/loop7: size is 0 sectors
Closed /dev/loop7
/dev/loop7: Skipping: Too small to hold a PV
filter caching bad /dev/loop7
Will scan 3 devices skip 0
Checking fd limit for num_devs 3 want 35 soft 1024 hard 4096
Scanning 3 devices for VG info
Scanning submitted 3 reads
Processing data from device /dev/sda 8:0 fd 4 block 0x55b511a17cd0
Scan filtering /dev/sda
/dev/sda: using cached size 167772160 sectors
/dev/sda: Skipping: Partition table signature found
filter caching bad /dev/sda
/dev/sda: Not processing filtered
Processing data from device /dev/sda1 8:1 fd 5 block 0x55b511a17d10
Scan filtering /dev/sda1
/dev/sda1: using cached size 997376 sectors
/dev/sda1: Device is a partition, using primary device sda for mpath component detection
/dev/sda1: using cached size 997376 sectors
filter caching good /dev/sda1
/dev/sda1: No lvm label detected
Processing data from device /dev/sda2 8:2 fd 6 block 0x55b511a17d50
Scan filtering /dev/sda2
/dev/sda2: using cached size 166772736 sectors
/dev/sda2: Device is a partition, using primary device sda for mpath component detection
/dev/sda2: using cached size 166772736 sectors
filter caching good /dev/sda2
Label checksum incorrect on /dev/sda2 - ignoring
/dev/sda2: No lvm label detected
Scanned devices: read errors 0 process errors 0 failed 0
Found VG info for 0 VGs
Obtaining the complete list of VGs to process
No volume groups found
Unlocking /run/lock/lvm/P_global
_undo_flock /run/lock/lvm/P_global
Dropping VG info
lvmcache has no info for vgname "#orphans_lvm2" with VGID #orphans_lvm2.
lvmcache has no info for vgname "#orphans_lvm2".
lvmcache: Initialised VG #orphans_lvm2.
Completed: vgscan -vvv
The directory /etc/lvm/backup exists and contains:
vg00
The directory /etc/lvm/archive exists and contains:
vg00_00000-1647277590.vg vg00_00001-1228658393.vg
(3) and (5) give me hope - the location seems to be recognised, what would this suggest ?
Specific Steps Before Reboot
In summary, the steps I took before rebooting my system were:
ran fdisk /dev/sda and noted the start and end points of the file systems by entering p.
Deleted the file system map by entering d and then selecting sda2 with 2
Created a new partition map by entering n. Setting the partition type to primary.
I then entered the start and end locations for the new partition, as noted in step (1).
I changed the partition type, by entering t, and selecting the 2nd partition by entering 2.
I specified the partition type to be 'Linux LVM' by entering the HEX code 8e.
Before writing to the disk, I ensured start and end points were correctly listed by entering p. The start point matched that of the original partition. The end point matched that of the disk end point.
I wrote the partition map to disk by entering w.
I reboot the system with reboot.
The result of running lvm p prior to partition map changes was:
At this point I am not sure how to proceed - I have encountered a file system issue before and was troubled at the prospect of losing all my files. Ultimately, in that case, the files were still present. From that experience I am restraining myself from assuming all is lost.
Does anyone have any suggestions, or tips to offer in terms of debugging this situation ? Please feel free to ask if you would like extra information regarding my setup.
Update
I have been able to boot into a knoppix CD on my remote server. Here, I have run fdisk -l which outputs:
Disk /dev/ram0: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram1: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram2: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram3: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram4: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram5: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram6: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram7: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram8: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram9: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram10: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram11: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram12: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram13: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram14: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/ram15: 4 MiB, 4194304 bytes, 8192 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/cloop0: 1.83 GiB, 1960312832 bytes, 3828736 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/cloop1: 9.63 GiB, 10335027200 bytes, 20185600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/zram0: 1.45 GiB, 1560817664 bytes, 381059 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sda: 80 GiB, 85899345920 bytes, 167772160 sectors
Disk model: Virtual disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x520f1760
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 999423 997376 487M 83 Linux
/dev/sda2 999424 167772159 166772736 79.5G 8e Linux LVM
I feel the final output lines, displaying the partition map for sda1 and sda2, are of interest. I believe the type of sda2 is correct as 8e (a Linux LVM), and the Start value correctly falls after the End of sda1.
Update II
Before attempting the below steps, I created a snapshot for backing up the system to its current state. I have now returned to this snapshot.
Attempting to restore from the /etc/lvm/backup/vg00 file (initramfs), first I ran pvcreate --restorefile /etc/lvm/backup/vg00 --uuid R5VWXg-jamB-5dWM-PpwY-7a49-LRz7-Vrvdl2 /dev/sda2. This returned:
WARNING: Couldn't find device with uuid `R5VWXg-jamB-5dWM-PpwY-7a49-LRz7-Vrvdl2.
Failed to clear hint file.
Physical volume "/dev/sda2" successfully created.
Then, I ran vgcfgrestore --file /etc/lvm/backup/vg00 which returned:
No command with matching syntax recognised.
Nearest similar syntax command has syntax:
vgcfgrestore -f:--file String VG
Restore VG metadata from specified file.
There seems to be an issue here.
|
You should examine the LVM VG metadata backup file /etc/lvm/backup/vg00 and find the original PV UUID of /dev/sda2 from there. It is a text file, and the PV UUID should be in a location like this: ([...] indicates some lines omitted for brevity)
[...]
vg00 {
[...]
physical_volumes {
pv0 {
id = "xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx"
device = "/dev/sda2" # Hint only
Once you know the PV UUID, you can use the backup file and the UUID to restore the PV UUID like this: (commands prefixed with lvm for use in initramfs environment; if you have extracted the VG metadata backup file from initramfs and do this in Knoppix, you can omit the lvm prefixes)
lvm pvcreate --restorefile /etc/lvm/backup/vg00 --uuid xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx /dev/sda2
Once the PV UUID is restored, you can restore the rest of VG metadata with:
lvm vgcfgrestore --file /etc/lvm/backup/vg00 vg00
After this, the VG should be good for activation:
lvm vgchange -ay vg00
If the VG activates successfully, and the filesystem within it can be mounted (with e.g. mount /dev/mapper/vg00-lvol1 /mnt), you should now be able to boot normally.
Once the system is running normally, you'll need two commands as root to achieve your original goal:
pvresize /dev/sda2
After this, pvs should indicate the sda2 PV is now successfully resized and vgs should indicate there is now plenty of unallocated space in vg00. To finally make use of it:
lvextend -r -l +100%FREE /dev/mapper/vg00-lvol1
and now df should indicate the root filesystem has plenty of free space again.
There is the command growpart (part of the cloud-guest-utils package in Debian, might be packaged separately as cloud-utils-growpart or just growpart in other distributions) which is specifically made to extend partitions safely and quickly, usually with no rebooting required.
In this specific case, the extension could have been achieved with just three commands:
growpart /dev/sda 2
pvresize /dev/sda2
lvextend -r -l +100%FREE /dev/mapper/vg00-lvol1
| Reboot into `initramfs` after altering partition table |
1,565,291,835,000 |
After I upgraded from Linux Mint 21.1 to 21.2 the newest initrd stopped working. The older one works fine. Regenerating the initrd doesn't help. On startup with the newest initrd, the error says that the UUID of my decrypted root drive does not exist, so I am guessing that the crypttab doesn't get used. I checked the crypttab; it hasn't changed.
|
If your root filesystem is encrypted, in modern Debian/Ubuntu/Mint you will need not only the cryptsetup package, but also the cryptsetup-initramfs package.
After the upgrade, your /var/cache/apt/archives/ was most likely full of downloaded packages, so the system may have run out of disk space while regenerating the initrd.
This typically causes the initrd creation to fail, possibly causing a partial initrd file to be created. Trying to boot with an incomplete initrd file could easily cause the failure you're seeing.
So first, run ls -l /boot and look at the sizes of the initrd files. If the initrd of the new kernel is significantly smaller than one for older kernels, it is probably missing some parts.
In that case, try sudo apt clean to make some space by cleaning the package cache (always a good idea after a major update, if disk space is tight), then make sure that the cryptsetup-initramfs package is installed and up to date. Then try regenerating the initrd for the new kernel again.
| crypttab seems to not been active |
1,693,563,556,000 |
My fstab is a symlink to the following fstab file:
# <file system> <mount pt> <type> <options> <dump> <pass>
/dev/root / ext2 ro,noauto 0 1
proc /proc proc defaults 0 0
/dev/mmcblk0p10 /data ext4 defaults 0 0
overlay / overlay lowerdir=/,upperdir=/data/rfs_overlay,workdir=/data/rfs_overlay_work 0 0
The overlay does not take effect using this fstab, and the mount command output does not contain any overlay line.
I tried changing the fstab line to :
# <file system> <mount pt> <type> <options> <dump> <pass>
overlay /data/rfs_overlay overlay lowerdir=/,upperdir=/data/rfs_overlay_upper,workdir=/data/rfs_overlay_work 0 0
Then I get this line from the mount command:
overlay on /data/rfs_overlay type overlay
(rw,relatime,lowerdir=/,upperdir=/data/rfs_overlay_upper,workdir=/data/rfs_overlay_work)
However, when I try to create a test.txt file in the rootfs I get the following response:
touch test.txt
touch: test.txt: Read-only file system
Note that if I remount the rootfs rw and then create the file in the rootfs, the file is created both in the rootfs and in the overlay:
mount -o remount,rw /
touch test.txt
find / -name test.txt
/data/rfs_overlay/root/test.txt
/root/test.txt
I tried the following links with no success:
askubuntu.com/questions/821733/
community.toradex.com/t/automount-overlay-for-etc/15529
wiki.archlinux.org/title/Overlay_filesystem
superuser.com/questions/1507278/
|
First, you need to understand what happens in the second try.
# <file system> <mount pt> <type> <options> <dump> <pass>
overlay /data/rfs_overlay overlay
lowerdir=/,upperdir=/data/rfs_overlay_upper,workdir=/data/rfs_overlay_work 0 0
Your lowerdir is /. The lowerdir is supposed to be static and should be read-only. You can think of this directory as the "base". Those are the initial, unchanged files.
Your upperdir is /data/rfs_overlay_upper. This is the folder that's supposed to hold the changes, or the "delta" from the lowerdir.
And your mount point in this case is /data/rfs_overlay. This means that this mount would be the result of the merge between your lowerdir (the "base" - /) and upperdir (the "delta" - /data/rfs_overlay_upper).
For instance, in your case, if you create the file /data/rfs_overlay/afile, you will see it's created in the upperdir: /data/rfs_overlay_upper/afile. That's how the lowerdir remains unchanged, and the upperdir contains the "delta" - the changes between the lowerdir and the merged folder.
In your case, you made the change in /, which is the lowerdir for /data/rfs_overlay. As I said before, the lowerdir is supposed to remain static (that's why it was r/o). You shouldn't touch the lowerdir nor the upperdir directly, only the merged mount; the kernel writes any changes to the upperdir.
So that's the explanation for what happened to you in the second try.
Regarding your initial attempt: first of all, you're trying to mount / twice, which isn't possible.
In theory you first need to mount the partition(s) that include(s) the lower, upper and work dirs. And then at the end you need to mount your merged destination path.
But in your case, I don't think it can work either way. You want your merged folder to be / (so, according to what I said, it needs to be mounted after the lower/upper/work dirs), but the root filesystem also has to be the first one to be mounted, because the rest of the mounts are mounted on top of it. That's the reason it isn't possible.
I assume your goal is to have some sort of "snapshot" of the rootfs: to keep it static and to make all the changes be written to an upper dir. If that's what you're trying to do, I suggest you use a copy-on-write filesystem that supports snapshots for your root fs, such as btrfs or ZFS. Of course you'll need to reinstall your host for that. But you cannot use overlay the way you want to.
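For contrast, an overlay whose lowerdir is an ordinary subdirectory (not /) works fine from fstab. A minimal illustrative entry, all on one line, with hypothetical paths (note that upperdir and workdir must live on the same filesystem):

```
# /etc/fstab — overlay /etc (read-only base) with a writable upper layer
overlay  /mnt/etc_merged  overlay  lowerdir=/etc,upperdir=/data/etc_upper,workdir=/data/etc_work  0 0
```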
| Chagnes done to the rootfs do not redirect to the overlay set in fstab |
1,693,563,556,000 |
//192.168.1.64/d /media/d cifs credentials=/home/nick/.smbcredentials,uid=nick,gid=media,file_mode=0777,dir_mode=0777,nofail,user 0 0
Here's my ls -l of the share once mounted:
drwxrwxrwx 2 nick media 0 Jan 20 2023 Blurays2
drwxrwxrwx 2 nick media 0 Jul 26 18:37 cache
drwxrwxrwx 2 nick media 0 Feb 5 2021 'Childrens Movies2'
drwxrwxrwx 2 nick media 0 Jan 22 2023 Movies2
drwxrwxrwx 2 nick media 0 Aug 5 22:34 temp-movies
drwxrwxrwx 2 nick media 0 Jul 11 14:08 tv_shows
and here's my share info from the server via the smb.conf file
[d]
path = /media/nick/Backup Drive/
valid users = share, nick
browsable = yes
writable = yes
read only = no
force create mode = 0666
force directory mode = 0777
Can anyone see what the heck I am missing?
|
You need to make sure that /media/nick/Backup Drive is accessible by your target users share and nick.
Provided noone else is using the server directly you can choose to relax the filesystem permissions entirely and leave Samba to manage access control:
chmod a=rwx '/media/nick/Backup Drive'
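Traversal is the usual gotcha: every directory component leading to the share path needs at least x for the accessing user, not just the final directory. A local illustration with throwaway directories:

```shell
mkdir -p demo/parent/share
chmod 700 demo/parent        # only the owner may traverse demo/parent
# any other user (e.g. Samba's "share" account) would get EACCES below it
chmod a+x demo/parent        # grant traversal to everyone
stat -c '%A' demo/parent
```

The final stat shows drwx--x--x: others can now pass through demo/parent to reach demo/parent/share, without being able to list demo/parent itself.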
| Permission Denied CIFS. Tried everything I can think of |
1,693,563,556,000 |
I have bought a new hard drive and moved my home there. I have followed this link: https://www.howtogeek.com/442101/how-to-move-your-linux-home-directory-to-another-hard-drive/.
But the new hard drive keeps getting unmounted across restarts and I always have to run the mount -a command. I can't remove the old /home entry from /etc/fstab.
It might be important to note that my first hard drive is encrypted.
The new hard drive is /dev/sda1.
blkid /dev/sda1 gives /dev/sda1: UUID="b9e91410-03ef-48d5-a8af-06837f2f0aae" UUID_SUB="285e8dea-340a-478c-9b20-cb09d4ee8485" BLOCK_SIZE="4096" TYPE="btrfs" PARTUUID="ba042bba-01".
This is my /etc/fstab file:
#
# /etc/fstab
# Created by anaconda on Thu Feb 18 09:11:13 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
UUID=621d28d8-16de-44e8-b733-1dab40daea02 / btrfs subvol=root,x-systemd.device-timeout=0 0 0
UUID=03011415-2410-443a-a923-ce1154ea8ea3 /boot ext4 defaults 1 2
UUID=3D2C-76FF /boot/efi vfat umask=0077,shortname=winnt 0 2
UUID=621d28d8-16de-44e8-b733-1dab40daea02 /home btrfs subvol=home,x-systemd.device-timeout=0 0 0
UUID=b9e91410-03ef-48d5-a8af-06837f2f0aae /home btrfs rw,seclabel,relatime,ssd,space_cache=v2,subvolid=5,subvol=/ 0 0
This is my /proc/mounts file:
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sysfs /sys sysfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
devtmpfs /dev devtmpfs rw,seclabel,nosuid,size=4096k,nr_inodes=131072,mode=755,inode64 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,seclabel,nosuid,nodev,inode64 0 0
devpts /dev/pts devpts rw,seclabel,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /run tmpfs rw,seclabel,nosuid,nodev,size=3187696k,nr_inodes=819200,mode=755,inode64 0 0
cgroup2 /sys/fs/cgroup cgroup2 rw,seclabel,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot 0 0
pstore /sys/fs/pstore pstore rw,seclabel,nosuid,nodev,noexec,relatime 0 0
efivarfs /sys/firmware/efi/efivars efivarfs rw,nosuid,nodev,noexec,relatime 0 0
bpf /sys/fs/bpf bpf rw,nosuid,nodev,noexec,relatime,mode=700 0 0
/dev/mapper/luks-76241767-67a8-4d40-aafc-25eb50a2e22a / btrfs rw,seclabel,relatime,ssd,space_cache,subvolid=257,subvol=/root 0 0
selinuxfs /sys/fs/selinux selinuxfs rw,nosuid,noexec,relatime 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=31,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=21597 0 0
mqueue /dev/mqueue mqueue rw,seclabel,nosuid,nodev,noexec,relatime 0 0
debugfs /sys/kernel/debug debugfs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,seclabel,relatime,pagesize=2M 0 0
tracefs /sys/kernel/tracing tracefs rw,seclabel,nosuid,nodev,noexec,relatime 0 0
fusectl /sys/fs/fuse/connections fusectl rw,nosuid,nodev,noexec,relatime 0 0
configfs /sys/kernel/config configfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /tmp tmpfs rw,seclabel,nosuid,nodev,nr_inodes=1048576,inode64 0 0
/dev/mapper/luks-76241767-67a8-4d40-aafc-25eb50a2e22a /home btrfs rw,seclabel,relatime,ssd,space_cache,subvolid=256,subvol=/home 0 0
/dev/loop3 /var/lib/snapd/snap/core18/2785 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop0 /var/lib/snapd/snap/core18/2721 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop8 /var/lib/snapd/snap/core22/817 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop4 /var/lib/snapd/snap/core/15511 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop5 /var/lib/snapd/snap/core20/1852 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop7 /var/lib/snapd/snap/core22/607 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop1 /var/lib/snapd/snap/core/14946 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop2 /var/lib/snapd/snap/atomify/153 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop6 /var/lib/snapd/snap/core20/1974 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop9 /var/lib/snapd/snap/gnome-3-28-1804/198 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop10 /var/lib/snapd/snap/gnome-3-38-2004/119 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop11 /var/lib/snapd/snap/gnome-3-38-2004/137 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop13 /var/lib/snapd/snap/gnome-42-2204/87 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop12 /var/lib/snapd/snap/gnome-42-2204/120 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop15 /var/lib/snapd/snap/gtk-common-themes/1535 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop17 /var/lib/snapd/snap/signal-desktop/493 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop18 /var/lib/snapd/snap/rubymine/334 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop16 /var/lib/snapd/snap/signal-desktop/428 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop14 /var/lib/snapd/snap/gtk-common-themes/1534 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop19 /var/lib/snapd/snap/rubymine/351 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop20 /var/lib/snapd/snap/snapd/17950 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop21 /var/lib/snapd/snap/snapd/19457 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop22 /var/lib/snapd/snap/xournalpp/69 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/loop23 /var/lib/snapd/snap/xournalpp/82 squashfs ro,context=system_u:object_r:snappy_snap_t:s0,nodev,relatime,errors=continue 0 0
/dev/nvme0n1p2 /boot ext4 rw,seclabel,relatime 0 0
/dev/nvme0n1p1 /boot/efi vfat rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /run/user/1000 tmpfs rw,seclabel,nosuid,nodev,relatime,size=1593844k,nr_inodes=398461,mode=700,uid=1000,gid=1000,inode64 0 0
portal /run/user/1000/doc fuse.portal rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
/dev/sda1 /home btrfs rw,seclabel,relatime,ssd,space_cache=v2,subvolid=5,subvol=/ 0 0
My /etc/mtab file is empty.
|
There was a permission issue. I have reset the security label on the home partition mount point and its directories.
After su:
restorecon -v /home /home/*
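For anyone hitting the same symptom, a hedged sketch of how to inspect the labels first and preview the fix before applying it (assumes an SELinux-enabled system with the policycoreutils tools installed; paths are from this setup):

```shell
# Show the current SELinux context of the mount point and home directories
ls -Zd /home /home/*

# Preview what restorecon would change, without modifying anything (-n = dry run)
restorecon -nv /home /home/*

# Apply the default contexts from the policy (run as root)
restorecon -v /home /home/*
```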
| Newly created home in a secondary hard drive keeps unmounting after restart, fstab does not work [closed] |
1,693,563,556,000 |
I changed my /etc/fstab from:
UUID=f1fc7345-be7a-4c6b-9559-fc6e2d445bfa / ext4 errors=remount-ro 0 1
UUID=4966-E925 /boot/efi vfat umask=0077 0 1
to this:
UUID=f1fc7345-be7a-4c6b-9559-fc6e2d445bfa / ext4 data=journal,errors=remount-ro 0 1
UUID=4966-E925 /boot/efi vfat umask=0077 0 1
Effectively, this adds the data=journal option before errors=remount-ro. The reasoning: this computer runs a fragile application 24/7, and the problem is power cuts that outlast my UPS.
Upon the next boot I was greeted by a TTY instead of the desktop. Supposing I will be able to log in, is there a way to fix this?
|
TTYs have an insanely fast keyboard repeat rate set by default; I tried to log in about 30 times before finally succeeding.
If you have numbers in your password or login name, you may want to turn the numlock on.
Issue this command, but make sure you use your drive and partition number:
sudo mount -o data=ordered,remount,rw /dev/nvme0n1p2 /
Edit your /etc/fstab so it no longer contains the data=journal part, and save it.
Reboot.
Optional, but recommended step: you may want to check your root filesystem upon boot now; if so, please refer to answers here. Just a summary:
To force an fsck on every boot on Linux Mint 18.x, use either tune2fs or the kernel command line switch fsck.mode=force, optionally combined with fsck.repair=preen or fsck.repair=yes.
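As a sketch of that optional step (the device name is the one from this thread; adjust to your own), either of these approaches forces a check at boot:

```shell
# Option 1: make e2fsck run on every boot by setting the max mount count to 1
sudo tune2fs -c 1 /dev/nvme0n1p2

# Option 2: one-off, via the kernel command line (edit the boot entry at the
# GRUB menu, or GRUB_CMDLINE_LINUX in /etc/default/grub), appending:
#   fsck.mode=force fsck.repair=preen
```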
| Fstab adding data=journal crashed my Linux' ext4 upon boot, how to fix? |
1,693,563,556,000 |
When I use this fstab syntax:
# grep raven /etc/fstab
\\raven.example.com\raven /raven cifs vers=2.0,credentials=/root/creds_smb_raven,\
uid=5000,gid=6000,file_mode=0664,dir_mode=0775 0 0
and reboot, the auto-mount fails, even though a manual re-try succeeds. I see:
# mount -t cifs
# grep -w mount.cifs /var/log/messages
Jan 27 11:59:16 myhost mount: mount.cifs: bad UNC (\raven.example.com\raven)
# mount /raven
# mount -t cifs
\\raven.example.com\raven on /raven type cifs (rw,relatime,vers=2.0,cache=strict,username=surfgeo,domain=raven,uid=5000,forceuid,gid=6000,forcegid,addr=10.27.4.22,file_mode=0664,dir_mode=0775,soft,nounix,serverino,mapposix,rsize=65536,wsize=65536,echo_interval=60,actimeo=1)
But when I double up the backslashes:
\\\\raven.example.com\\raven /raven cifs vers=2.0,credentials=/root/creds_smb_raven,uid=5000,gid=6000,file_mode=0664,dir_mode=0775 0 0
and reboot, the auto-mount succeeds but manual mounting is now broken. I get:
# mount -t cifs
\\raven.example.com\raven on /raven type cifs (rw,relatime,vers=2.0,cache=strict,username=surfgeo,domain=raven,uid=5000,forceuid,gid=6000,forcegid,addr=10.27.4.22,file_mode=0664,dir_mode=0775,soft,nounix,serverino,mapposix,rsize=65536,wsize=65536,echo_interval=60,actimeo=1)
# umount /raven
# mount /raven
mount.cifs: bad UNC (\\\\raven.example.com\\raven)
Is there one specific fstab syntax that will work both for auto-mount at boot
and manual unmount/remount after boot?
|
As noted in the comments, use forward slashes instead of backslashes:
//raven.example.com/raven /raven cifs vers=2.0,credentials=/root/creds_smb_raven,\
uid=5000,gid=6000,file_mode=0664,dir_mode=0775 0 0
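Before rebooting, you can sanity-check the corrected entry; recent util-linux versions ship `findmnt --verify`, which parses /etc/fstab and reports problems without mounting anything:

```shell
# Parse /etc/fstab and report syntax or target problems
sudo findmnt --verify

# Then try the actual mount without a reboot
sudo mount /raven
```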
| /etc/fstab syntax incompatible between auto-mount and manual mount of CIFS |
1,693,563,556,000 |
I created a partition called sdb1, formatted it as FAT32, and created 3 folders within a main folder; however, I wanted the 3 folders to have different permissions.
I tried to make the ana folder with all permissions, the marco folder with execute permissions for the user and group, and the opencloud folder with all permissions except for the group. However, the end result was that all the folders had the same, full permissions.
I don't understand what I am doing wrong.
|
First, you are using the FAT32 filesystem, which does not support Unix-style file ownerships and permissions. But because Unix-like operating systems assume that all files must have an owner, group and permissions, the vfat filesystem driver fakes it - by assigning all files and all directories in the filesystem the same permissions.
You can adjust the fake permissions created by the filesystem driver: by using the dmask mount option you can set the permissions for all directories on the filesystem, and with fmask for all regular files respectively. These options are specific to the vfat filesystem driver, and won't work with just any filesystem. The drivers for other filesystems that don't natively support Unix-style ownerships/permissions may have similar mount options, or some other ways to adapt the filesystem to Unix-like conventions.
If you need to be able to assign different permissions to different files and/or directories within a single filesystem, FAT32 (or any FAT subtype really) is a wrong filesystem type for that.
Second, you haven't really made three separate folders: you've actually mounted one filesystem (on partition /dev/sdb1) to three separate locations. So if you created a file to /data/ana, the same file would be immediately accessible at /data/marco and /data/opencloud too.
Mounting the same filesystem to multiple locations in a single system simultaneously used to be impossible until relatively recent times (roughly, about the same time the container technology was being developed; it might have been a side effect of that). As such, the vfat filesystem driver apparently cannot handle multiple mounts of the same filesystem with different permission options. It looks like /data/ana might be the most recent mount, so it looks like the most recent set of mount options for the filesystem takes effect on all mounts (think "views") of that filesystem.
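To illustrate the mask options mentioned above: fmask and dmask are bitmasks of permission bits to *remove*, so the resulting mode is 0777 with the mask bits cleared. A hypothetical fstab entry (device, mount point and uid/gid are placeholders, not from the question):

```shell
# /etc/fstab - one set of fake permissions for the whole FAT32 filesystem:
#   directories: 0777 & ~0022 = 0755, regular files: 0777 & ~0133 = 0644
# /dev/sdb1  /data  vfat  uid=1000,gid=1000,dmask=0022,fmask=0133  0  0

# The arithmetic, checked in the shell (leading 0 means octal):
printf 'dir mode: %o\n'  $(( 0777 & ~0022 ))
printf 'file mode: %o\n' $(( 0777 & ~0133 ))
```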
| Fstab permissions |
1,693,563,556,000 |
I'm trying to mount an external usb drive to raspberry pi 4b with debian 11 bullseye.
Whatever I've tried so far to set mount options gets ignored.
/etc/fstab
UUID="9f32de87-6800-4585-a5c5-e6a3946ba2bb" /data ext4 defaults,nofail 0 0
UUID="9f32de87-6800-4585-a5c5-e6a3946ba2bb" /data ext4 rw,suid,dev,exec,auto,nouser,async,nofail 0 0
PARTUUID=20df08a4-01 /data ext4 rw,suid,dev,exec,auto,nouser,async,nofail 0 0
systemd mount unit
root@srv:/etc/systemd/system# cat data.mount
[Unit]
Description=Mount /data with systemd
[Mount]
What=/dev/disk/by-uuid/9f32de87-6800-4585-a5c5-e6a3946ba2bb
Where=/data
Type=ext4
Options=rw,suid,dev,exec,auto,nouser,async,nofail
[Install]
WantedBy=multi-user.target
mount command
root@srv:~# mount -t ext4 -o rw,suid,dev,exec,auto,nouser,async,nofail /dev/sda1 /data
Output is always:
root@srv:~# mount -l | grep data
/dev/sda1 on /data type ext4 (rw,relatime) [data]
I know that most of those options are included in the ext4 defaults mount option, but the other options I tried are also completely ignored.
Any hints how to do this? Any constraints with USB drives here I'm missing?
Thanks
|
async, suid, dev and exec are the default states for an ext4 mount, so only the non-default options (sync, nosuid, nodev and/or noexec) may be displayed.
auto and nouser affect mainly the mount command itself, and these are also the default states for these options. Normally all /etc/fstab entries that are not specifically marked with a noauto option will be mounted if/when mount -a is executed; once the filesystem is mounted, the auto/noauto option has already fulfilled its purpose and so there is no reason for the kernel to track it.
If user was specified, the mount command would have to keep track who mounted the filesystem (classically in /etc/mtab if it's a regular file; nowadays in /run/mount/libmount instead) as only root or the user who originally mounted the filesystem will be allowed to unmount it. But with nouser, the default classic Unix behavior of "only root can mount/unmount filesystems" prevails.
Out of all these options you've specified, nofail is the only non-default one, and it also only affects the mounting process, causing it to not report an error if this filesystem cannot be mounted. Once the filesystem has been successfully mounted, the kernel has no reason to track the state of that option.
The reason the rw and relatime options are explicitly displayed is essentially historical: showing the rw/ro state explicitly is a long-standing practice, and relatime highlights the fact that the handling of the atime timestamp is not done strictly the classic Unix way. The other alternatives for relatime would be noatime (which can cause problems with e.g. the classic way to detect if you have unread email or not in /var/mail), and strictatime which would enforce the classic Unix behavior (and cause lots of mostly-unnecessary small write operations, harming SSD life and preventing disks from going into power-saving states). relatime has been the default since kernel version 2.6.30.
So your mount options are not actually getting ignored: you are just specifying a set of options that is essentially equivalent to the default way to mount the filesystem.
| mount options ignored - debian 11 bullseye on raspberry with ext. usb drive |
1,693,563,556,000 |
I am trying to mount a hard drive with an NTFS filesystem on it on boot.
It doesn't need a special location. /mnt/ or something similar should be enough.
I had already done that with another drive, but that was a while back and I can't remember how I did it. Also, I am too scared to just dive into the fstab file.
I am on the latest Arch Linux 64 bit version
The partition is called /dev/sda1 (I don't know if that's important)
|
I recommend reading the fstab article on the Arch Wiki; it's very good. But you don't really need that much: if your NTFS partition is /dev/sda1, all you need to add is this line
/dev/sda1 /mnt/<name> ntfs defaults 0 0
you don't really need anything more than that.
You can also configure fstab from a GUI using GNOME Disks. Select the NTFS partition, click on the Additional partition options icon, select Edit Mount Options, and then simply enable Mount at system startup; the rest of the fields should be pre-filled with reasonable defaults.
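A slightly more robust variant of the same entry uses the partition's UUID instead of /dev/sda1, since device names can change between boots (the UUID below is a placeholder; the mount point name is up to you):

```shell
# Find the partition's UUID
sudo blkid /dev/sda1

# Create the mount point once
sudo mkdir -p /mnt/windows

# Then in /etc/fstab (replace XXXXXXXXXXXXXXXX with the real UUID):
#   UUID=XXXXXXXXXXXXXXXX  /mnt/windows  ntfs  defaults  0  0

# Test the new entry without rebooting
sudo mount -a
```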
| Mounting NTFS hard drive on boot |
1,693,563,556,000 |
I've been following a set of steps very similar to the ones mentioned on the ArchWiki installation guide.
I've recently started playing around with BTRFS snapshots, particularly of the / subvolume.
During installation, my mount command looks something like this:
mount -o noatime,nodiratime,compress=lzo,space_cache,subvol=@ /dev/sda3 /mnt
(considering /dev/sda3 is the BTRFS partition containing all my system subvolumes).
genfstab produces an /etc/fstab in the below fashion:
# /dev/sda3
UUID=<long-uuid> / btrfs rw,noatime,nodiratime,compress=lzo,space_cache,subvolid=256,subvol=/@,subvol=@ 0 0
As you can see in the above snippet, it automatically adds the subvolid parameter and also a repeated subvol parameter (which I don't care about at this point).
The confusing part begins when I restore to a previous snapshot with commands like:
mount /dev/sda3 /mnt
mv /mnt/@ /mnt/<some-random-name>
mv /mnt/<an_old_snapshot_name> /mnt/@
it leaves the /etc/fstab file intact. That is OK in a sense, if you consider that rather than changing the file, I renamed my subvolumes so that the same name now refers to a different subvolume. What I'm confused about is the old subvolid: even though it remains unchanged, the right subvolume still gets mounted.
To summarize: do contradicting subvolid and subvol references not cause any issue while mounting volumes?
PS: I apologize for such a lengthy question and apparently also for a noob question, but couldn't find an answer by myself.
|
As mentioned in this comment on Reddit, subvolid can be safely removed to make the references consistent.
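To make the references consistent and verify what was actually mounted, a short sketch (the mount point is from the question's commands; output layout varies between btrfs-progs versions):

```shell
# List all subvolumes of the filesystem with their current IDs
sudo btrfs subvolume list /mnt

# Confirm which subvolume the running system resolved / to
findmnt -o TARGET,SOURCE,OPTIONS /
```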
| Do contradicting BTRFS subvolid's not cause trouble? |
1,693,563,556,000 |
I have a 35G mount as my root file system, and until now, it was reporting 1% usage.
I was using an SD card for my storage and today I got a new one. I'm mounting my /swap partition on that, so I decided to partition the new one with a swap and a "normal" one.
First I created an NTFS partition in case I want to use the card in Windows. I had problems, so I tried FAT, and ultimately I went to ext4.
In the process, I was modifying my /etc/fstab file and restarting, but when I got the card working, my root file system started reporting 100% usage, without me changing anything there!
Output of df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p5 35G 33G 0 100% /
...
And df -i:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/nvme0n1p5 2.2M 20K 2.2M 1% /
...
I thought maybe this is because fsck, and I did
touch /forcefsck && reboot
But that didn't solve the issue. I didn't change anything in my root file system.
Line in fstab for that:
UUID=<uuid> / ext4 defaults,noatime 0 1
...
Can someone please help me troubleshoot this?
|
Turns out during my mounting/umounting/rebooting etc., something happened.
I was mounting the SD card on /media, and I'm not sure why, but when I finished with the SD card setup, apparently the /media was created as a directory and not a mount, so it was that directory taking up space.
I didn't notice that it was a plain directory and not actually my mount 🙁, so I deleted the directory and mounted the SD card on /mount, and the issue is solved 🙂
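A general way to confirm this kind of problem, where files written into a directory are later hidden by a mount stacked on top of it, is to bind-mount the root filesystem elsewhere and measure it there (a sketch; run as root):

```shell
# Expose the root filesystem without any of the mounts stacked on top
sudo mkdir -p /tmp/rootonly
sudo mount --bind / /tmp/rootonly

# /tmp/rootonly/media now shows what is on the root fs under /media,
# even if something is currently mounted at /media
sudo du -sh /tmp/rootonly/media

sudo umount /tmp/rootonly
```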
| Root file system reported as being 100% without adding any file |
1,608,316,791,000 |
I am going for installing Gentoo for the second time. I read that multiple swap partitions can be created and their entries in /etc/fstab can be prioritised with pri mount option. Just wanted to ask whether that can be done with /home too. Like
/dev/sda1 /home ext4 defaults,pri=1 0 2 and /dev/sda2 /home ext4 defaults,pri=2 0 2. Thanks!
|
No not that way.
But there are other ways. Here are a few methods, they all do slightly different things.
Union file systems: create a layered file-system. A read-only as the base, then another overlay that is writable, and stored only the changes.
LVM, ZFS (and some other file systems): allow file-systems to span multiple partitions / disks.
Raid: allow file-systems to span multiple partitions / disks. (but for other reasons)
Symbolic linking: Allow you to make a file/sub-directory be in a different place.
Mount points in /home: Allow you to make a file/sub-directory be in a different place.
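As a hedged sketch of the last two methods (device names, paths and the user name are placeholders, not from the question):

```shell
# /etc/fstab - keep /home on one partition and mount a second one
# at a subdirectory inside it ("mount points in /home"):
#   /dev/sda1  /home        ext4  defaults  0  2
#   /dev/sda2  /home/alice  ext4  defaults  0  2

# Or mount the second partition elsewhere and symlink into /home:
#   /dev/sda2  /srv/extra   ext4  defaults  0  2
# then:
#   ln -s /srv/extra/alice /home/alice
```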
| Span /home on multiple partitions |
1,608,316,791,000 |
My ubuntu 18.04 boots into read only filesystem / and i really dont know why. I know, that a bad fstab can cause this problem, but my fstab looks okay:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
#UUID=ec9192f0-a26a-4e52-be83-084fd6599e55 / ext4 errors=remount-ro 0 1
/swapfile none swap sw 0 0
#/dev/sdb1 /home/nextcloud-storage ntfs-3g utf8,dmask=007,fmask=007,umask=007,uid=www-data,gid=www-data,noatime 0
I already commented out my /dev/sdb1 entry to check this. Also weird, and maybe related to this problem: when I uncomment the /dev/sdb1 line, /dev/sdb1 is mounted correctly into /home/nextcloud-storage (still with a read-only filesystem /); lsblk shows this, but blkid shows only my /dev/sda1 with the UUID, not /dev/sdb1.
I can use of course sudo mount -o remount,rw /dev/sda1 / to get right access, but this won't fix my problem.
Has anyone an idea how to get rid of this?
|
Uncommenting the line, as already mentioned, would surely solve the problem (and, as you confirmed, it actually did).
As this might be someone else's problem, and the title says fstab is alright, I'll add something else that I think might be relevant for people looking for the same question. I'm uncertain whether errors=remount-ro is standard on your distribution or not, but it might be related to how errors are handled by your "init" scripts. It's often the case that when the system comes up read-only there was an error, like a filesystem error, that needs to be fixed, but different distributions might handle that by different means. Regardless of how it's handled, tools like e2fsck cannot run safely on a read-write mounted filesystem, and that's why the system sometimes might fall back to read-only.
If anything like this happens, running an e2fsck might solve a filesystem issue (but beware it might result in data loss). The man pages for e2fsck contain instructions on how to proceed and the implications of each option.
Also, running a dmesg command might show you why the root ended up mounted as read-only, in the case of some hardware error.
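A sketch of those diagnostic steps (device name taken from this question; e2fsck must run against an unmounted or read-only filesystem, ideally from a live/rescue system):

```shell
# Why did the kernel remount root read-only?
dmesg | grep -iE 'ext4|i/o error|remount'

# From a rescue environment, check and repair the filesystem
# (-f forces a check even if the fs looks clean; answer prompts carefully)
e2fsck -f /dev/sda1
```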
| root is mounted as read only filesystem but /etc/fstab looks alright [closed] |
1,608,316,791,000 |
As per the title, I would like to automatically mount the OMV5 Samba shared folders via fstab; however, I have failed multiple times.
The shared folder is called Rsync_Dell and it is on a static IP.
|
Found an answer:
Create a folder in /mnt, then add the following line to /etc/fstab to mount the Samba shared folder automatically after boot:
//192.168.XXX.XXX/SourceFolder_name /mnt/TargetFolder_name cifs uid=1000,rw,suid,username=Username,password=Password 0 0
cifs is the filesystem type; in this case it means a Samba share
uid is your user ID; you can get it by running id on the command line
suid allows set-user-ID and set-group-ID bits on the share to take effect
rw stands for read and write permissions
Username and password are the credentials for the Samba shared folder
the first 0 is the dump field, which enables or disables backups by the dump utility (rarely used nowadays)
the second 0 is the fsck pass number: it tells the system in which order fsck should check filesystems, and 0 disables checking
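One refinement worth adding: /etc/fstab is world-readable, so rather than putting the password inline you can use mount.cifs's credentials= option pointing at a root-only file (the file path below is just a suggestion):

```shell
# Create the credentials file, readable by root only
sudo tee /root/.smbcredentials >/dev/null <<'EOF'
username=Username
password=Password
EOF
sudo chmod 600 /root/.smbcredentials

# Then in /etc/fstab, replace username=...,password=... with credentials=:
# //192.168.XXX.XXX/SourceFolder_name /mnt/TargetFolder_name cifs uid=1000,rw,credentials=/root/.smbcredentials 0 0
```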
| Linking samba shared folders onto FSTAB linux |
1,608,316,791,000 |
I am very new to Linux and it seems that I have created a bad issue. I wanted to install another OS on a disk already in use by my Manjaro Linux install. I formatted the disk and installed the OS, but forgot to remove its entry from the fstab file, so every time I boot the system tries to mount a device that no longer exists. More specifically, boot hangs on "A start job is running for ....." so I cannot continue, and I don't know how to remove the start job for the device.
All help appreciated.
Thanks
|
Step 1: get into the emergency mode.
Step 2: update your fstab file (typically using nano) with the correct parameters.
Step 3: restart your computer.
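A sketch of those steps from the emergency shell, plus a preventive option for the future (the root filesystem is often read-only there, so remount it first):

```shell
# In the emergency shell, remount the root filesystem read-write
mount -o remount,rw /

# Delete or comment out the line for the removed disk
nano /etc/fstab

systemctl reboot
```

For removable or optional disks, adding nofail (and optionally x-systemd.device-timeout=5s) to the fstab entry keeps a missing device from blocking boot like this in the future.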
| Remove start job for disc not in use |
1,608,316,791,000 |
An application running under RHEL 7.x wasn't able to execute atomic file movement from /tmp to /home. It turned out they are located on different partitions
# df -h /tmp/
/dev/mapper/rhel-tmp 3,9G 17M 3,6G 1% /tmp
# df -h /home/
/dev/mapper/rhel-root 7,3G 1,9G 5,1G 28% /
Am I right in thinking that if I remove the entry for /tmp from /etc/fstab and reboot the system the /tmp will be on /dev/mapper/rhel-root ?
This is a corporate RHEL virtual machine, so it's highly desired that the steps lead to the goal after the first attempt. Unfortunately I don't have RHEL installed and there don't seem to be any Docker images to experiment with.
|
I was right.
After removing the corresponding entry from /etc/fstab and rebooting, /tmp moved to /dev/mapper/rhel-root.
| Making /tmp and /home be on the same partition |
1,608,316,791,000 |
I am trying to convert this command line:
sudo mount -t cifs //192.168.20.202/torrents /media/NAS -o username=x,password=y,dir_mode=0777,file_mode=0777
Into /etc/fstab entry, but it keeps mounting without 777 level of privileges.
So far I have this:
//192.168.20.202/torrents /media/NAS cifs uid=root,gid=root,username=x,password=y,0 0
|
You wrote:
//192.168.20.202/torrents /media/NAS cifs uid=root,gid=root,username=x,password=y,0 0
But you want to:
//192.168.20.202/torrents /media/NAS cifs username=x,password=y,dir_mode=0777,file_mode=0777 0 0
| Mounting Windows shared folder with fstab |
1,608,316,791,000 |
I have been using Linux for a year and a half with 6 NTFS-formatted HDDs; I modified my fstab so I can read and write them, and I also have ntfs-3g installed.
UUID=480a3f32-e304-49b0-b322-4964349fd941 / ext4 rw,relatime,data=ordered 0 1
UUID=BAF0D1D3F0D195CB /media/ntfs/Anime ntfs-3g rw,uid=1000,umask=022 0 0
UUID=561CAEE01CAEB9FF /media/ntfs/Anime2.0 ntfs-3g rw,uid=1000,umask=022 0 0
UUID=68B283CAB2839AE8 /media/ntfs/Anime3.0 ntfs-3g rw,uid=1000,umask=022 0 0
UUID=E094004194001CA2 /media/ntfs/Anime4.0 ntfs-3g rw,uid=1000,umask=022 0 0
UUID=CAE8F43AE8F425FB /media/ntfs/Anime5.0 ntfs-3g rw,uid=1000,gid=users,umask=022 0 0
UUID=8A34984034983165 /media/ntfs/Anime6.0 ntfs-3g rw,uid=1000,gid=users,umask=022 0 0
UUID=64E6CDCBE6CD9E24 /media/ntfs/Win ntfs-3g rw,uid=1000,umask=022 0 0
UUID=AADEEA03DEE9C7A1 /media/ntfs/KK ntfs-3g rw,uid=1000,umask=022 0 0
Today I can't write to or modify the HDDs listed in fstab with ntfs-3g; the mount command returns the following info:
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
dev on /dev type devtmpfs (rw,nosuid,relatime,size=8173120k,nr_inodes=2043280,mode=755)
run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755)
/dev/sdh1 on / type ext4 (rw,relatime,data=ordered)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=35,pgrp=1,timeout=0,minproto=5,maxproto=5,direct)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/sdh3 on /media/ntfs/Win type fuseblk (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/sdb2 on /media/ntfs/Anime2.0 type fuseblk (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/sdc2 on /media/ntfs/Anime6.0 type fuseblk (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/sde2 on /media/ntfs/Anime5.0 type fuseblk (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/sda1 on /media/ntfs/Anime3.0 type fuseblk (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/sdd2 on /media/ntfs/KK type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/sdg2 on /media/ntfs/Anime type fuseblk (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
/dev/sdf2 on /media/ntfs/Anime4.0 type fuseblk (ro,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=1642420k,mode=700,uid=1000,gid=985)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=985)
As you can see, the ntfs-3g partitions are mounted as ro instead of rw.
|
man ntfs-3g
Windows hibernation and fast restarting
    On computers which can be dual-booted into Windows or Linux, Windows
    has to be fully shut down before booting into Linux, otherwise the
    NTFS file systems on internal disks may be left in an inconsistent
    state and changes made by Linux may be ignored by Windows.

    So, Windows may not be left in hibernation when starting Linux, in
    order to avoid inconsistencies. Moreover, the fast restart feature
    available on recent Windows systems has to be disabled. This can be
    achieved by issuing as an Administrator the Windows command which
    disables both hibernation and fast restarting:

    powercfg /h off

    If either Windows is hibernated or its fast restart is enabled,
    partitions on internal disks are forced to be mounted in read-only
    mode.
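If booting into Windows to run powercfg /h off isn't an option, ntfs-3g also documents a remove_hiberfile mount option that deletes the hibernation data so the volume can be mounted read-write (the hibernated Windows session is lost; use deliberately). A one-off sketch, using the Windows partition from this question:

```shell
# Discard the hibernation file and mount read-write
sudo mount -t ntfs-3g -o remove_hiberfile /dev/sdh3 /media/ntfs/Win
```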
| Unable to mount read an write partitions |
1,608,316,791,000 |
Backstory:
I have a CentOS image on VirtualBox running on my local machine. (I created a group hadoop and user hduser to run Hadoop services on it.) I created a folder on my MacOS called shared. I did the same in the VirtualBox image (under user hduser).
The directories are successfully mounted in the sense that I can see all files in shared from both machines. I can create and paste new files into shared on my local machine, but I cannot do the same in the virtual machine.
Issue:
There seems to be a write permission error, as I can see all files, but cannot write. Below is me testing to write a file in shared on the virtual machine.
[Error writing shared/test.txt: Permission Denied]
Here are the permissions, the attempt to write to shared, and my /etc/fstab (screenshots omitted).
I have read up on the issue and tried several things from:
Cannot mount vboxsf shared folder via /etc/fstab despite having a modules file
https://askubuntu.com/questions/365346/virtualbox-shared-folder-mount-from-fstab-fails-works-once-bootup-is-complete
|
I have managed to solve the issue. I initially followed the instructions from this github post where it is suggested to use these settings:
shared /home/hduser/shared vboxsf defaults,uid=1000,gid=1000,umask=0022 0 0
However, that didn't work in my case. Instead it is either:
shared /home/hduser/shared vboxsf uid=1001,defaults 0 0
or
shared /home/hduser/shared vboxsf uid=1000,defaults 0 0
| No write permissions after shared directory mount from fstab |
1,608,316,791,000 |
I'm trying to mount at boot a HFS+ core partition in Linux Ubuntu. I followed some other questions here and I came up with this mount command, which seems to properly work, providing read/write access to the partition:
sudo mount -t hfsplus -o force,rw,sizelimit=$((935960064*512)) /dev/sda2 mount
Now I would like it to be mounted at boot as usual, so I'm trying to add the proper line to /etc/fstab. This is what I wrote:
UUID=0e1ad81d-63b6-35cd-93f0-ea8cfbadaefe /mnt/480GB1 hfsplus force,rw,sizelimit=479582773248 0 0
where:
sudo blkid /dev/sda2
/dev/sda2: UUID="0e1ad81d-63b6-35cd-93f0-ea8cfbadaefe" TYPE="hfsplus" PARTLABEL="480GB1" PARTUUID="276b2d0a-193a-4325-a1aa-09dd84516b4e"
but this is the answer:
sudo mount -a
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
dmesg returns:
[10292.068918] hfsplus: invalid secondary volume header
[10292.068922] hfsplus: unable to find HFS+ superblock
Any idea what I'm doing wrong?
|
Your sizelimit values are not identical: 935960064 * 512 = 479211552768, but your fstab entry says sizelimit=479582773248. Could this be the cause?
| Adding new HFS+ partition to fstab |
1,486,823,193,000 |
I've got a second hard drive in my laptop. However, it only mounts itself when I load the GUI and click on the device in Nemo. What I'd like is for it to automount on boot.
sudo lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sdb
├─sdb2 ntfs BIOS_RVY F61C92C71C9281F3
└─sdb1 crypto_LUKS 3df2999e-9b64-46ec-b634-7986877c57f5
└─luks-3df2999e-9b64-46ec-b634-7986877c57f5 ext4 32c29f17-28fd-4288-8680-2fc62027586a /run/media/bill/32c29f17-28fd-4288-8680-2fc62027586a
sr0
sda
├─sda4 ntfs WinRE tools 0CDA8AEEDA8AD2FE
├─sda2
├─sda5 crypto_LUKS 28c449da-d8ba-42be-8a4e-17822270b7bd
│ └─luks-28c449da-d8ba-42be-8a4e-17822270b7bd LVM2_member LQR013-0T1K-E5QL-8sVa-94rN-C8cE-Agfbtn
│ ├─fedora-root ext4 047ddca4-cfb8-4307-9c86-a8de31c0bc68 /
│ ├─fedora-swap swap 18e032b2-eb2c-485c-97ce-b500c675dfda [SWAP]
│ └─fedora-home ext4 19caa2b4-d5a3-4c0d-bd76-c11ec303dd0c /home
├─sda3 ext4 f11b0191-49b9-41c2-a8f2-f26851442b17 /boot
└─sda1 vfat SYSTEM 1288-7285 /boot/efi
(Ignore the NTFS partitions; these are the recovery partitions from the original OEM Windows setup, kept just in case I ever want to restore it to factory.)
My fstab is:
/dev/mapper/fedora-root / ext4 defaults,x-systemd.device-timeout=0 1 1
UUID=f11b0191-49b9-41c2-a8f2-f26851442b17 /boot ext4 defaults 1 2
UUID=1288-7285 /boot/efi vfat umask=0077,shortname=winnt 0 2
/dev/mapper/fedora-home /home ext4 defaults,x-systemd.device-timeout=0 1 2
/dev/mapper/fedora-swap swap swap defaults,x-systemd.device-timeout=0 0 0
/extraswap none swap sw 0 0
and crypttab is
luks-28c449da-d8ba-42be-8a4e-17822270b7bd UUID=28c449da-d8ba-42be-8a4e-17822270b7bd none discard
luks-3df2999e-9b64-46ec-b634-7986877c57f5 UUID=3df2999e-9b64-46ec-b634-7986877c57f5 none luks
Both drives are encrypted with the same passphrase, and I only have to enter it once during boot. I tried adding the following to fstab
/dev/mapper/luks-3df2999e-9b64-46ec-b634-7986877c57f5 /run/media/bill/32c29f17-28fd-4288-8680-2fc62027586a ext4 defaults 0 2
However, it got an error on boot. My guess is that it's something to do with LVM and I need to add a reference in there?
|
As @Thomas points out in comments, the following worked:
sudo mkdir /mnt/data
Then in fstab:
/dev/mapper/luks-3df2999e-9b64-46ec-b634-7986877c57f5 /mnt/data ext4 defaults 0 2
| Secondary LUKS physical volume won't automount |
1,486,823,193,000 |
I'm mounting my NAS/NFS share to /mnt/nas on my notebook/wlan connection. This did work until last week. Mounting the NFS share with two other wired PCs (same distro) still works.
In Debian / Arch Linux forums they suggested to add more x-systemd options to the fstab
noauto
x-systemd.automount
x-systemd.requires=network-online.target
x-systemd.device-timeout=10
My current fstab looks like this
192.168.220.100:/foo/bar /mnt/nas nfs nfsvers=3,rsize=32768,wsize=32768,noauto,x-systemd.automount,x-systemd.requires=network-online.target,x-systemd.device-timeout=60 0 0
I've tried the following options in fstab x-systemd.requires= network-online.target or systemd-networkd-wait-online.service or nfs-client.target none of them worked.
journalctl error: mount[841]: mount.nfs: Network is unreachable
When I run a sudo mount /mnt/nas manually after boot, it mounts the share.
How can I have my NFS share mounted automatically after boot?
|
If you're reading this, you probably have the same problem I had.
My hint came from journalctl | grep nas:
systemd[1]: mnt-nas.automount: Got automount request for /mnt/nas, triggered by 746 (zeitgeist-datah)
I simply uninstalled the zeitgeist package (not sure why it was installed in the first place), and mounting works again.
| Mount NFS share on notebook (wlan) |
1,486,823,193,000 |
I am working on a desktop running on Ubuntu 16.04. I want to isolate the directories /var, /etc, /opt in separate partitions. Creating new partitions is fine.
At this moment, the fstab only mounts copies (say, /media/var, /media/etc, /media/opt) on the newly created partitions, so as to interfere with the ordinary course of things minimally.
I am aware of this other post Recommended fstab settings and of the Ubuntu fstab summary which only provides general information.
At the point of editing the /etc/fstab file, I became aware of the importance of setting an appropriate mount option field (the fourth field, indicated as <options>).
The naive evidence is:
Choosing defaults as a mount option makes the rebooting of Ubuntu stall. After logging in, the greeter does not move on to the password request for the encrypted file system.
On the contrary, if I copycat the option nodev, nosuid from the option already set for /home (indeed residing on an own partition), I do manage to access my desktop manager as usual.
However, I don't want to presume that this will be the best option when the new partitions have the real /var, /etc, /opt directories mounted on them. For example, the mount options for the current / directory are errors=remount-ro. This option may well be suitable for any subdirectory moved out to an independent partition. I wish to avoid guesswork though.
The question is: what are the mount options for standalone /var, /etc and /opt such that the system performs like when they are subdirectories of /?
|
You can use the same mount options for standalone parts of the system such as /var, /opt, etc. Using defaults is not the cause of your problem.
Your description is not precise enough to identify what went wrong in one attempt and why the other attempt succeeded. However, there's one thing you mention that's doomed to failure: /etc belongs on the root partition. It contains /etc/fstab as well as the scripts that would trigger the mounting of the other partitions. You must leave /etc on the root filesystem.
Splitting off /var, /usr and /opt is generally not useful, but not harmful either. Splitting off some specific parts of /var can make sense, for example split off /var/mail on a mail server, split off /var/log on a server that has a lot of important logs, etc.
You can use nodev everywhere except /dev. A system partition should generally not have nosuid, but it can make sense for some parts of /var.
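For instance, entries for a split-off /var along those lines might look like this (the device names are placeholders; pass number 2 so the partitions are checked after the root filesystem):

```
/dev/sdb1  /var      ext4  defaults,nodev         0 2
/dev/sdb2  /var/log  ext4  defaults,nodev,nosuid  0 2
```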
| fstab mount options for /etc /opt /var partitions |
1,486,823,193,000 |
I'm on a Raspberry with DietPI distro and I cannot mount a NTFS hard drive at boot.
This is the fstab file:
#Internal Drives---------------------------------------------------
proc /proc proc defaults 0 0
/dev/mmcblk0p1 /boot vfat defaults,noatime 0 2
/dev/mmcblk0p2 / ext4 defaults,noatime 0 1
tmpfs /tmp tmpfs noatime,nodev,nosuid,mode=1777 0 0
tmpfs /var/log tmpfs defaults,size=20m,noatime,nodev,nosuid,mode=1777 0 0
tmpfs /DietPi tmpfs defaults,size=10m,noatime,nodev,nosuid,mode=1777 0 0
UID=4E1AEA7B1AEA6007 /mnt/hdd ntfs-3g uid=1000,gid=1000,umask=007 0 0
The last line is the drive I want to mount at boot (the UID is correct).
The strange thing is that if I manually run mount -a or mount /dev/sda1 /mnt/hdd it works and I can see the contents of the drive in the /mnt/hdd directory.
Also, this is dmesg | tail
~# dmesg | tail
[ 9.507925] sd 0:0:0:0: [sda] Write Protect is off
[ 9.519623] sd 0:0:0:0: [sda] Mode Sense: 47 00 10 08
[ 9.520422] sd 0:0:0:0: [sda] No Caching mode page found
[ 9.532854] sd 0:0:0:0: [sda] Assuming drive cache: write through
[ 9.616554] random: nonblocking pool is initialized
[ 9.620081] sda: sda1
[ 9.638842] sd 0:0:0:0: [sda] Attached SCSI disk
[ 10.968120] smsc95xx 1-1.1:1.0 eth0: hardware isn't capable of remote wakeup
[ 12.556564] smsc95xx 1-1.1:1.0 eth0: link up, 100Mbps, full-duplex, lpa 0xC5E1
[ 22.488053] Adding 102396k swap on /var/swap. Priority:-1 extents:1 across:102396k SSFS
UPDATE: strange output if I just run mount: /dev/sda1 is not listed!
/dev/root on / type ext4 (rw,noatime,data=ordered)
devtmpfs on /dev type devtmpfs (rw,relatime,size=469756k,nr_inodes=117439,mode=755)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=94812k,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=189620k)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
/dev/mmcblk0p1 on /boot type vfat (rw,noatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noatime)
tmpfs on /var/log type tmpfs (rw,nosuid,nodev,noatime,size=20480k)
tmpfs on /DietPi type tmpfs (rw,nosuid,nodev,noatime,size=10240k)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
UPDATE 2: this is blkid:
~# blkid
/dev/mmcblk0p1: SEC_TYPE="msdos" LABEL="PISD" UUID="CB99-4C7E" TYPE="vfat"
/dev/mmcblk0p2: UUID="1263ae8d-aaf3-41b6-9ac0-03e7fecb5d6a" TYPE="ext4"
/dev/sda1: LABEL="PileOfPi" UUID="4E1AEA7B1AEA6007" TYPE="ntfs"
Is there an error somewhere?
|
This issue can occur if your USB HDD is not fully powered up before the filesystem mounting from /etc/fstab completes.
Resolution: Add a boot delay to cmdline.txt
The default /etc/fstab used in DietPi will automatically mount a single connected ext4/ntfs drive from /dev/sda1 to /mnt/usb_1.
Modifying the fstab entry to UUID is not required, unless you plan to have more than 1 USB drive.
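As a sketch of that resolution, the helper below appends rootdelay=10 to the kernel command line if no rootdelay= parameter is present yet. The delay value and the /boot/cmdline.txt path are assumptions to adjust for your setup.

```shell
# add_rootdelay FILE: add "rootdelay=10" to the single-line kernel
# command line in FILE, unless a rootdelay= parameter is already there.
add_rootdelay() {
    # cmdline.txt must stay one line, so extend line 1 in place
    grep -q 'rootdelay=' "$1" || sed -i '1s/$/ rootdelay=10/' "$1"
}
# e.g.: add_rootdelay /boot/cmdline.txt   (then reboot)
```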
| External HDD not mounted at boot, but fstab seems ok |
1,486,823,193,000 |
During boot, some/many mount points in /etc/fstab are not mounted.
The /etc/mtab file contains these mount points - my understanding is that the system believes the filesystems are already mounted.
Modifying my /etc/rc.d/init.d/mountfs script (taken from LFS) with the line
grep -v root /proc/mounts > /etc/mtab
before the call to the command below (I added the v and the # to get some output; the system claims the filesystems are already mounted)
mount -av -O no_netdev # > /dev/null
allows the system to appropriately mount the filesystems.
However, in this case, the filesystems are not correctly unmounted on shutdown (by the same script taking the stop argument). The error on shutdown relates to the root filesystem.
df returns output which shows the state of my filesystems, so is easy to check. mount outputs all of the expected mountpoints, even if they are not mounted (ie, without the modification to /etc/rc.d/init.d/mountfs)
Issuing commands such as mount /mountpoint/in/fstab successfully mounts the point, even if it is already in /etc/mtab (presumably this is because mount -a checks mtab, and mount <specific point> does not?)
What's going wrong?
My /etc/fstab:
# device mount-point fs-type options dump fsck-order
# Core mount points
proc /proc proc nosuid,noexec,nodev 0 0
sysfs /sys sysfs nosuid,noexec,nodev 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
tmpfs /run tmpfs defaults 0 0
devtmpfs /dev devtmpfs mode=0755,nosuid 0 0
/dev/sda8 swap swap pri=1 0 0
/dev/sda9 / ext4 defaults 1 1
/dev/sda10 /home ext4 defaults 0 2
# Additional mount points
/dev/sda6 /mnt/Ubuntu ext4 defaults 0 0
/dev/sda11 /sources ext4 defaults 0 0
# Network mounts
//software.blah.blah/path /mnt/Licensed cifs credentials=/home/<user>/.smbpasswd,ro,_netdev 0 0
|
Issuing grep -v root /proc/mounts > /etc/mtab; echo "/dev/sda9 / ext4 defaults 1 1" >> /etc/mtab fixed this problem.
The startup issue was due to the mtab file having entries not properly removed during shutdown. Once the root filesystem was added to the mtab file (after boot), the shutdown occurred properly, and then startup also works fine.
The line added to mountfs was not needed after the mtab file was correctly set.
If the computer loses power/is shutdown forcefully, this has on one occasion become broken again. Then the steps above correct the problem.
| Problems with (local) mount at boot (sysvinit) |
1,486,823,193,000 |
I am having a problem with mounting an internal SSD. For some reason The mounting point is chosen as /media/user/Data instead of, as specified in /etc/fstab, /mnt/Data. The according line in /etc/fstab goes as follows:
UUID="064ced5e-19c1-43d1-876f-3de0c115b65e" /mnt/Data ext4 users,noauto,exec,rw,async,dev 0 0
I am using Ubuntu 14.04 LTS 64bit.
(edit:)
Here goes the complete fstab file:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sdc2 during installation
UUID=9cf86df3-3d02-45d7-8078-d6ff5fc83ea6 / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/sda2 during installation
UUID=5AC7-594F /boot/efi vfat defaults 0 1
# swap was on /dev/sdc5 during installation
UUID=5460f609-5245-417a-833a-271c533db97a none swap sw 0 0
UUID="064ced5e-19c1-43d1-876f-3de0c115b65e" /mnt/Data ext4 users,noauto,exec,rw,async,dev 0 0
|
Given the simplicity of the typical /etc/fstab parser, removing the quotes around the UUID in the affected entry might help. That is, use
UUID=064ced5e-19c1-43d1-876f-3de0c115b65e /mnt/Data ...
instead of
UUID="064ced5e-19c1-43d1-876f-3de0c115b65e" /mnt/Data ...
| Wrong mount point for internal SSD |
1,486,823,193,000 |
I added three drives to the fstab file following a tutorial but only one of them mounts at startup. I tried sudo mount -a and all three drives mounted.
fstab:
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name
devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=b964dcd3-f8cf-45e4-b903-a90febb29195 / ext4
noatime,errors=remount-ro 0 0
PARTUUID=bdc5a06f-06c7-4b1c-bf9e-e8770d24ce87 /boot/efi vfat
umask=0077 0 0
UUID=B28691348690F9D5 /media/data ntfs defaults 0 0
UUID=228C52CC8C5299DD /media/data ntfs defaults 0 0
UUID=40D6A802D6A7F676 /media/data ntfs defaults 0 0
|
You are mounting all three NTFS partitions to the same mount point (/media/data). You probably want to modify the entries in your /etc/fstab like so:
UUID=B28691348690F9D5 /media/data1 ntfs defaults 0 0
UUID=228C52CC8C5299DD /media/data2 ntfs defaults 0 0
UUID=40D6A802D6A7F676 /media/data3 ntfs defaults 0 0
Don't forget to create the mount points (sudo mkdir /media/data{1..3}) before running sudo mount -a.
| Hard drives added to fstab not mounting at startup – Pop-OS 20.04 |
1,486,823,193,000 |
OS: Ubuntu 18.04
Today I noticed that some script has modified my /etc/fstab and commented out the efivars partition:
# /boot/efi was on /dev/nvme0n1p1 during installation
# UUID=1562-9EFD /boot/efi vfat umask=0077 0 1
This was resulting in an error while installing an update for grub as it couldn't find the efivars partition.
Is there any way to prevent this from happening? Can I somehow limit access to this file, or override some setting in apt so that it will never be modified again?
|
you can try chattr
chattr - change file attributes on a Linux file system
To set attribute :
chattr +i file
To unset :
chattr -i file
| Prevent post-install scripts from modifying /etc/fstab |
1,486,823,193,000 |
Background
I'd like to store music on /dev/sda1, which is a physically separate drive. The /etc/fstab contains:
# grep "music" /etc/fstab
UUID=10...92 /mnt/music ext4 defaults,user,rw 0 2
Problem
When the system restarts, the files disappear:
# ls /mnt/music/
lost+found
When unmounting the drive, the files reappear:
# umount /mnt/music
# ls /mnt/music
archives jazz logs scripts
In both cases, the directory /mnt/music is owned by the user account, not the root account.
I've also tried changing the /etc/fstab entry to:
UUID=10...92 /mnt/music ext4 rw,nosuid,nodev,noexec,relatime,user=USERNAME 0 2
Question
How do you mount a drive in Arch Linux such that it is read-write for a specific user at startup (without having to unmount the drive)?
|
Your problem is that you wrote the files to the wrong filesystem (the one containing the /mnt/music directory, rather than the one on /dev/sda1).
To fix this, get the admin to move the mount elsewhere, and move the files from /mnt/music into the mounted filesystem.
user-only version
Alternatively, as an ordinary user, umount the filesystem and cd into /mnt/music.
Then mount the filesystem. Now, /mnt/music is your filesystem and . is the underlying (shadowed) directory - confirm that, e.g. using ls.
Now put the files where you wanted them: mv * /mnt/music.
sync for good luck, and you're done.
To ensure that the filesystem gets mounted at boot time, get auto added to its mount options in /etc/fstab.
| Why do my files only appear when the drive is unmounted? |
1,486,823,193,000 |
Okay so I was messing around in the fstab file under /etc/ and I made a new partition which I could play around with, this partitions name is /dev/sda2.
So when I was messing around I did this configuration on my /dev/sda2 partition. Once I was done, I rebooted the system and it didn't load the system and it only allowed me to configure the system via a Command Line interface terminal with no GUI or anything.
<file system> <mount point> <type> <options> <dump> <pass>
#/data on /dev/sda2
UUID=910d5659-9fe1-43d5-bff6-738459fcdbd /home/r00t/Document/mount-point ext4 relatime,ro,owner,errors=remount-rw 0 2
When mounting /dev/sda2 I was playing around with the options column, by adding different options such as relatime,ro,errors=remount-rw so I think that may be a cause of the problem.
|
The cause is errors=remount-rw; the acceptable values are:
errors={continue|remount-ro|panic}
This is what ext4 man page says EXT4(5) :
The ext4 filesystem is an advanced level of the ext3 filesystem which incorporates scalability and reliability enhancements for
supporting large filesystem.
The options [list of options ...], errors, data_err ... are backwardly compatible with ext3 or ext2.
Also EXT2(5) says :
errors={continue|remount-ro|panic}
Define the behavior when an error is encountered. (Either ignore errors and just mark the filesystem erroneous and continue, or remount the filesystem read-only, or panic and halt the system.)
| Fstab Configuration Help |
1,486,823,193,000 |
I use the following command to mount an NFS share from another system on Debian:
sudo mount hypercube.home:/volume1/ /mnt/hypercube/
But this does not persist after boot. I believe I need to add something to fstab but the syntax is probably different and I'm not sure what's the correct syntax for NFS shares.
How can I find out the correct mount syntax and parameters to mount this share automatically at boot on my Debian system?
|
Add a line to /etc/fstab:
hypercube.home:/volume1 /mnt/hypercube nfs4 defaults,_netdev 0 0
If the target system isn’t always on, you might like to check the options described in Debian NFS wait too long when the other Debian is turned off in addition to the above.
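For that always-off case, one variant (option values here are illustrative, adjust to taste) uses systemd's automounter so the mount is only attempted on first access rather than at boot:

```
hypercube.home:/volume1 /mnt/hypercube nfs4 noauto,x-systemd.automount,x-systemd.idle-timeout=600,_netdev 0 0
```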
| How to automatically mount NFS share from another system automatically at boot? |
1,486,823,193,000 |
at every boot of my system I should check the file "/etc/fstab":
# UNCONFIGURED FSTAB FOR BASE SYSTEM
#/dev/disk/by-partlabel/userdata /userdata ext4 defaults 0 2
#/dev/disk/by-partlabel/oem /oem ext4 defaults 0 2
and if the lines:
/dev/disk/by-partlabel/userdata
/dev/disk/by-partlabel/oem
start with '#' (comment), remove the comment and reboot the system
I made a simple script that removes the '#' from each line, starting from line number 2:
#!/bin/bash
# Check if the WiFi driver will be loaded
echo -e "\n\r##############################"
echo -e "# Start reading /etc/fstab #"
echo -e "##############################\n\r"
cat /etc/fstab
# Backup the original file
cp /etc/fstab /etc/fstab.bak
# Remove '#' starting from the second line
fstabConfigPath="/etc/fstab"
sed -i '2,3s/#//' "$fstabConfigPath"
Since in future, the file /etc/fstab could contain other lines, I would like to improve the script to remove the '#' only for the specified lines.
Thank you for the help,
Federico
|
Rather than pointing at numbered lines, you should ask sed to operate only on lines that match a pattern. Here's a way you could do that:
sed -ri.bak '/^#[/]dev[/]disk[/]by-partlabel[/](userdata|oem)( |\t)/ s/^#//' /etc/fstab
There's no need to copy the file; you can tell sed -i to create the backup for you by giving it the extension you want it to have.
You can also check whether the file was actually modified before rebooting:
diff -q /etc/fstab /etc/fstab.bak || reboot
Warning
Automatically editing /etc/fstab and rebooting seems a bit risky -- if something goes wrong you could find your system rendered un-bootable.
| Check & Modify /etc/fstab |
1,373,982,804,000 |
scp works well in all cases, but the Raspberry Pi is too weak to copy files efficiently in a secure environment (LAN). The theoretically possible 6.75 MB/s via 54 Mbit wireless LAN shrink down to about 1.1 MB/s.
Is there a way to copy files remotely without encryption?
It should be a CLI command with no dependencies on extra services (ftp, samba), or at least with a minimum of configuration. I mean, a standard tool that works quite well out-of-the-box with standard programs/services (like scp/ssh).
|
You might be looking for rcp. It performs remote execution via rsh, so you will have to rely on that, and keep in mind that all communication is insecure.
| Copy files without encryption (ssh) in local network |
1,373,982,804,000 |
I'm trying to copy a file from one of my local machines to a remote machine. Copying a file with size upto 1405 bytes works fine. When I try to scp a larger file, the file gets copied but the scp process hangs up and doesn't exit. I have to hit Ctrl-C to return back to the shell.
I have observed the same behavior with FTP as well. Any ideas about what might be causing this?
|
This definitely sounds like MTU problems (like @Konerak pointed out), this is how I would test this:
ip link set eth0 mtu 1400
This temporarily sets the allowed size for network packets to 1400 on the network interface eth0 (you might need to adjust the name). Your system will then split all packets above this size before sending it on to the network. If this fixes the scp command, you need to find the problem within the network or make this ugly fix permanent ;)
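To confirm the diagnosis before touching the interface, you can probe with ping -M do (don't-fragment) and a payload of the candidate MTU minus 28 bytes (20 bytes IP header plus 8 bytes ICMP header). A small, hypothetical helper makes the arithmetic explicit:

```shell
# Payload size for an ICMP echo producing a packet of exactly MTU bytes:
# subtract 20 bytes of IP header and 8 bytes of ICMP header.
mtu_probe_payload() {
    echo $(( $1 - 28 ))
}
# e.g.: ping -c 3 -M do -s "$(mtu_probe_payload 1400)" remote.host
# If that gets through but a 1500-byte probe does not, 1400 is a safe MTU.
```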
| Why does SCP hang on copying files larger than 1405 bytes? [duplicate] |
1,373,982,804,000 |
I'm trying to upload all the text files within the current folder via FTP to a server location using curl. I tried the following line:
curl -T "{file1.txt, file2.txt}" ftp://XXX --user YYY
where XXX is the server's IP address and YYY is the username and password.
I'm able to transfer file1.txt to the server successfully, but it complains about the second file saying 'Can't open 'file_name'!'
I swapped the file names and it worked for file2.txt and not file1.txt. Seems like I've got the syntax wrong, but this is what the manual says?
Also, ideally I would be able to do something like this:
curl -T *.txt ftp://XXX --user YYY
because I won't always know the names of the txt files in the current folder or the number of files to be transferred.
I'm of the opinion I may have to write a bash script that collects the output of ls *.txt into an array and put it into the multiple-files-format required by curl.
I've not done bash scripting before - is this the simplest way to achieve this?
|
Your first command should work without whitespaces:
curl -T "{file1.txt,file2.txt}" ftp://XXX/ --user YYY
Also note the trailing "/" in the URLs above.
This is curl's manual entry about option "-T":
-T, --upload-file
This transfers the specified local file to the remote URL. If there is no file part in the specified URL, Curl will append the local file name. NOTE that you must use a trailing / on the last directory to really prove to Curl that there is no file name or curl will think that your last directory name is the remote file name to use. That will most likely cause the upload operation to fail. If this is used on an HTTP(S) server, the PUT command will be used.
Use the file name "-" (a single dash) to use stdin instead of a given file. Alternately, the file name "." (a single period) may be specified instead of
"-" to use stdin in non-blocking mode to allow reading server output while stdin is being uploaded.
You can specify one -T for each URL on the command line. Each -T + URL pair specifies what to upload and to where. curl also supports "globbing" of the -T
argument, meaning that you can upload multiple files to a single URL by using the same URL globbing style supported in the URL, like this:
curl -T "{file1,file2}" http://www.uploadtothissite.com
or even
curl -T "img[1-1000].png" ftp://ftp.picturemania.com/upload/
"*.txt" expansion does not work because curl supports only the same syntax as for URLs:
You can specify multiple URLs or parts of URLs by writing part sets within braces as in:
http://site.{one,two,three}.com
or you can get sequences of alphanumeric series by using [] as in:
ftp://ftp.numericals.com/file[1-100].txt
ftp://ftp.numericals.com/file[001-100].txt (with leading zeros)
ftp://ftp.letters.com/file[a-z].txt
[...]
When using [] or {} sequences when invoked from a command line prompt, you probably have to put the full URL within double quotes to avoid the shell from interfering with it. This also goes for other characters treated special, like for example '&', '?' and '*'.
But you could use the "normal" shell globbing like this:
curl -T "{$(echo *.txt | tr ' ' ',')}" ftp://XXX/ --user YYY
(The last example may not work in all shells or with any kind of exotic file names.)
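Where that one-liner is too fragile, a plain shell loop uploads one file per curl invocation and handles any file name. This is a sketch: XXX and YYY stand for the server and credentials as in the question, and CURL is overridable purely so the loop can be dry-run.

```shell
# Upload every .txt in the current directory, one curl call per file,
# so spaces and other odd characters in names are harmless.
upload_txts() {
    for f in ./*.txt; do
        [ -e "$f" ] || continue   # glob matched nothing: skip the literal
        "${CURL:-curl}" -T "$f" "ftp://XXX/" --user YYY || return 1
    done
}
# e.g.: upload_txts          (or CURL=echo upload_txts for a dry run)
```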
| Uploading multiple files via FTP using curl |
1,373,982,804,000 |
How do I check which FTP (Passive or Active) is running?
By default, passive FTP is running in linux, but how do I check?
|
I found the answer as below.
In passive mode we can run the ls command, but in active mode (pasv_enable=NO in vsftpd.conf) we have to manually disable passive mode by typing the passive command; only then will ls be accepted, otherwise it gives a 550 Permission denied error. See below:
ftp> passive
Passive mode on.
ftp> ls
550 Permission denied.
Passive mode refused.
ftp> passive
Passive mode off.
ftp> ls
200 PORT command successful. Consider using PASV.
150 Here comes the directory listing.
-rw-rw-r-- 1 503 503 0 Jan 11 2013 files1
-rw-rw-r-- 1 503 503 0 Jan 11 2013 files10
-rw-rw-r-- 1 503 503 0 Jan 11 2013 files2
-rw-rw-r-- 1 503 503 0 Jan 11 2013 files3
-rw-rw-r-- 1 503 503 0 Jan 11 2013 files4
-rw-rw-r-- 1 503 503 0 Jan 11 2013 files5
-rw-rw-r-- 1 503 503 0 Jan 11 2013 files6
-rw-rw-r-- 1 503 503 0 Jan 11 2013 files7
-rw-rw-r-- 1 503 503 0 Jan 11 2013 files8
-rw-rw-r-- 1 503 503 0 Jan 11 2013 files9
-rw-r--r-- 1 0 0 10240 Jan 11 2013 test.tar
226 Directory send OK.
ftp>
The ls listing that we asked for comes back from port 20 on the server to a high-port connection on the client. Port 21 on the server is not used to send back the results of the ls command.
The above is extracted from http://www.markus-gattol.name/ws/vsftpd.html
| How to check the Passive and Active FTP |
1,373,982,804,000 |
I love linux because I get control over my system. But I do herald from the school of mac, where things are simple, beautiful, and powerful. I like it that way, as opposed to having lots of knobs and levers and everything.
Does anyone know of a strong FTP client for linux that is in the vein of Panic's Transmit?
It's my choice FTP software in Mac OS, but I doubt I need to convince anyone here that I don't want to DEVELOP web apps in OS X. It's a pain, imo.
Currently I use FileZilla. It works fine. But its UI is a mess, imho.
|
Since you're using Gnome on Ubuntu, why not use the default file manager (Nautilus)?
Under Ubuntu 10.04, choose “Connect to Server” in the Places menu, select “Public FTP” or “FTP (with login)” as the service type, enter the server name and other parameters (you can define bookmarks in this dialog box too), and voilà.
| FTP client with a good GUI? |
1,373,982,804,000 |
Moving a tried-and-true vsftpd configuration onto a new server with Fedora 16, I ran into a problem. All seems to go as it should, but user authentication fails. I cannot find any entry in any log that indicates what happened.
Here is the full config file:
anonymous_enable=NO
local_enable=YES
write_enable=YES
local_umask=022
dirmessage_enable=YES
xferlog_enable=YES
connect_from_port_20=YES
xferlog_file=/var/log/vsftpd.log
xferlog_std_format=YES
idle_session_timeout=0
data_connection_timeout=0
nopriv_user=ftpsecure
connect_from_port_20=YES
listen=YES
chroot_local_user=YES
chroot_list_enable=NO
ls_recurse_enable=YES
listen_ipv6=NO
pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES
FTP challenges me for a username and password, I provide them, Login Incorrect. I have verified, this user is able to login from ssh. Something is screwed up with pam_service.
Anonymous (if changed to allowed) seems to work well.
SELinux is disabled.
Ftpsecure appears to be configured fine... I am at a complete loss!
Here are the log files I examined with no success:
/var/log/messages
/var/log/xferlog #empty
/var/log/vsftpd.log #empty
/var/log/secure
Found something in /var/log/audit/audit.log:
type=USER_AUTH msg=audit(1335632253.332:18486): user pid=19528 uid=0 auid=4294967295 ses=4294967295 msg='op=PAM:authentication acct="kate" exe="/usr/sbin/vsftpd" hostname=ip68-5-219-23.oc.oc.cox.net addr=68.5.219.23 terminal=ftp res=failed'
Perhaps I should look at /var/log/wtf-is-wrong.help :-)
Further info:
/etc/pam.d/vsftpd looks like this:
#%PAM-1.0
session optional pam_keyinit.so force revoke
auth required pam_listfile.so item=user sense=deny file=/etc/vsftpd/ftpusers onerr=succeed
auth required pam_shells.so
auth include password-auth
account include password-auth
session required pam_loginuid.so
session include password-auth
|
Whew. I solved the problem. It amounts to a config bug within /etc/pam.d/vsftpd.
Because ssh sessions succeeded while ftp sessions failed, I went to
/etc/pam.d/vsftpd, removed everything that was there and instead placed the contents of ./sshd to match the rules precisely. All worked!
By method of elimination, I found that the offending line was:
auth required pam_shells.so
Removing it allows me to proceed.
Turns out, "pam_shells is a PAM module that only allows access to the system if the user's shell is listed in /etc/shells." I looked there and sure enough, no bash, no nothing. This is a bug in the vsftpd configuration in my opinion, as nowhere in the documentation does it have you editing /etc/shells. Thus the default installation and instructions do not work as stated.
I'll go find where I can submit the bug now.
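You can check for this failure mode up front: pam_shells only consults /etc/shells, so compare the user's login shell against it. A sketch (SHELLS_FILE is overridable purely so the helper is easy to exercise against a scratch file):

```shell
# shell_ok USER: succeed iff USER's login shell is listed in /etc/shells,
# which is the only thing pam_shells looks at.
shell_ok() {
    s=$(getent passwd "$1" | cut -d: -f7)
    [ -n "$s" ] && grep -qx "$s" "${SHELLS_FILE:-/etc/shells}"
}
# e.g.: shell_ok kate && echo "pam_shells will let kate in"
```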
| vsftpd fails pam authentication |
1,373,982,804,000 |
I have a firewall (csf) that lets you to separately allow incoming and outgoing TCP ports. My question is, why would anyone want to have any outgoing ports closed?
I understand that by default you might want to have all ports closed for incoming connections. From there, if you are running an HTTP server you might want to open port 80. If you want to run an FTP server (in active mode) you might want to open port 21. But if it's set up for passive FTP mode, a bunch of ports will be necessary to receive data connections from FTP clients... and so on for additional services. But that's all. The rest of ports not concerned with a particular service that the server provides, and especially if you are mostly a client computer, must be closed.
But what about outgoing connections? Is there any security gain in having destination ports closed for outbound connections? I ask this because at first I thought that a very similar policy of closing all ports as for incoming connections could apply. But then I realised that when acting as a client in passive FTP mode, for instance, random high ports try to connect to the FTP server. Therefore by blocking these high ports in the client side you are effectively disabling passive FTP in that client, which is annoying. I'm tempted to just allow everything outgoing, but I'm concerned that this might be a security threat.
Is this the case? Is it a bad idea, or has it noticeable drawbacks just opening all (or many) ports only for outgoing connections to facilitate services such as passive FTP?
|
There can be many reasons why someone might want to have outgoing ports closed. Here are some that I have applied to various servers at various times:
The machine is in a corporate environment where only outbound web traffic is permitted, and that via a proxy. All other ports are closed because they are not needed.
The machine is running a webserver with executable code (think PHP, Ruby, Python, Perl, etc.) As part of a mitigation against possible code flaws, only expected outbound services are allowed.
A service or application running on the machine attempts to connect to a remote resource but the server administrator does not want it to do so.
Good security practice: what is not explicitly permitted should be denied.
| What's the point of firewalling outgoing connections? |
1,373,982,804,000 |
ftp ftp://bapte:[email protected]
And I got
ftp: ftp://bapte:[email protected]: Name or service not known
I tried a bunch of different things. I looked at manual. No luck
I tried
ftp -user username password ftp.backupte4.rsyncbackup.info
as said in manual
ftp: u: unknown option
All I need is a sample that works.
The manual said
http://linux.about.com/od/commands/l/blcmdl1_ftp.htm
user user-name [password ] [account ]
Identify yourself to the remote FTP server. If the password is not specified and the server requires it, ftp will prompt the user for it (after disabling local echo). If an account field is not specified, and the FTP server requires it, the user will be prompted for it. If an account field is specified, an account command will be relayed to the remote server after the login sequence is completed if the remote server did not require it for logging in. Unless ftp is invoked with auto-login disabled, this process is done automatically on initial connection to the FTP server.
So what exactly I should put?
|
you can't exactly do it with a command line option, but what you can do is redirect stdin like so:
$ ftp -n ftp.backupte4.rsyncbackup.info << EOF
> quote USER bapte
> quote PASS b2p7Ua2
> put somefile <-- this is the command you want to execute
> quit
> EOF
or you can put it in a script:
#!/bin/sh
ftp -n ftp.backupte4.rsyncbackup.info << EOF
quote USER bapte
quote PASS b2p7Ua2
put somefile
quit
EOF
Finally you could use lftp:
lftp -u bapte,b2p7Ua2 -e "your command;quit" ftp.backupte4.rsyncbackup.info
http://lftp.yar.ru/lftp-man.html
| How to specify username password for ftp |
1,373,982,804,000 |
I need to upload a directory with a rather complicated tree (lots of subdirectories, etc.) by FTP. I am unable to compress this directory, since I do not have any access to the destination apart from FTP - e.g. no tar. Since this is over a very long distance (USA => Australia), latency is quite high.
Following the advice in How to FTP multiple folders to another server using mput in Unix?, I am currently using ncftp to perform the transfer with mput -r. Unfortunately, this seems to transfer a single file at a time, wasting a lot of the available bandwidth on communication overhead.
Is there any way I can parallelise this process, i.e. upload multiple files from this directory at the same time? Of course, I could manually split it and execute mput -r on each chunk, but that's a tedious process.
A CLI method is heavily preferred, as the client machine is actually a headless server accessed via SSH.
|
lftp would do this with the command mirror -R -P 20 localpath: mirror syncs between locations, -R uses the remote server as the destination, and -P runs 20 parallel transfers at once.
As explained in man lftp:
mirror [OPTS] [source [target]]
Mirror specified source directory to local target directory. If target
directory ends with a slash, the source base name is appended to target
directory name. Source and/or target can be URLs pointing to directo‐
ries.
-R, --reverse reverse mirror (put files)
-P, --parallel[=N] download N files in parallel
| How can I parallelise the upload of a directory by FTP? |
1,373,982,804,000 |
I would like to backup some of my very important data on a remote machine.
Currently I'm just saving it to my local machine by using this command: tar -cvjf ~/backups/Backup.tar.bz2 ~/importantfiles/*
I would prefer not using another command to transfer it to the remote machine, meaning I would like to just have this command upgraded so it can transfer the data to the remote machine.
This is designed to be in a script later that is supposed to run on its own, meaning any type of required user input would completely mess it up!
Something like
tar -cvjf sftp://user:password@host/Backup.tar.bz2 ~/importantfiles/*
tar -cvjf ftp://user:password@host/Backup.tar.bz2 ~/importantfiles/*
would be perfect! (No pipes (etc.), just one command!)
|
For SSH:
tar czf - . | ssh remote "( cd /somewhere ; cat > file.tar.gz )"
For SFTP:
outfile=/tmp/test.tar.gz
tar cvf $outfile . && echo "put $outfile" | sftp remote:/tmp/
Connecting to remote...
Changing to: /tmp/
sftp> put /tmp/test.tar.gz
Uploading /tmp/test.tar.gz to /tmp/test.tar.gz
/tmp/test.tar.gz
Another SFTP:
outfile=/tmp/test.tar.gz
sftp -b /dev/stdin remote >/dev/null 2>&1 << EOF
cd /tmp
put $outfile
bye
EOF
echo $?
0
| How to make tar save the archive on a remote machine using sftp or ftp? |
1,373,982,804,000 |
Why do I get kicked out of a FTP session once I run a command? It seems that once I successfully login into a server is get the following after running a command such as "ls" (I've enclosed the error portion in the "[ERROR]" tags):
allen92@allen92-VirtualBox:~/Videos$ ftp -n ftp.FreeBSD.org
Connected to ftp.FreeBSD.org.
220 beastie.tdk.net FTP server (Version 6.00LS) ready.
ftp> user
(username) anonymous
331 Guest login ok, send your email address as password.
Password:
230 Guest login ok, access restrictions apply.
Remote system type is UNIX.
Using binary mode to transfer files.
ftp> ls
[ERROR]
421 Service not available, remote server has closed connection
[ERROR]
ftp>
This seems to happen on any remote FTP server. Everything works fine when I login onto the local machine and run FTP commands. If in fact the "421" error is a generic error, is there any way to find out the source of the problem? Any leads on this would be appreciated. I haven't been able to find any support on this particular issue. Anybody with an similar problem please share your thoughts.
NOTE: I have VSFTPD installed.
|
There is most likely a NAT-firewall between you and the servers showing the symptom. (NAT-firewalls hide a whole network behind a single IP-number).
The problem is that ftp wants to send the data resulting from the command in a new, separate TCP/IP connection and that cannot go through the firewall because it needs to go from the server to you, and you are hidden behind the firewall which has no clue that the data is intended for your machine. When the FTP protocol was designed, many modern devices like the NAT-router (which became necessary when there were more devices than available IP-addresses) had not been invented yet.
Use the pasv command (may be called something different in your client) to change to a passive connection where data connections go from you to the server.
See http://slacksite.com/other/ftp.html for a more detailed explanation.
| Why do I get kicked out of a FTP session once I run a command? |
1,373,982,804,000 |
When SCP'ing to my Fedora server, a user keeps getting errors about not being able to modify file timestamps ("set time: operation not permitted"). The user is not the owner of the file, but we cannot chown files to this user for security reasons. The user can sudo, but since this is happening via an SCP/FTP client, there's no way to do that either. And finally, we don't want to have to give this user root access, just to allow him to use a synchronization like rsync or WinSCP that needs to set timestamps.
The user is part of a group with full rw permissions on all relevant files and dirs. Any thoughts on how to grant user permission to touch -t these specific files without chowning them to him?
Further Info This all has to do with enabling PHP development in a single-developer scenario (ie: without SCM). I'm trying to work with Eclipse or NetBeans on a local copy of the PHP-based (WordPress) site, while allowing the user to "instantly" preview his changes on the development server. The user will be working remotely. So far, all attempts at automatic synchronization have failed: even WinSCP in "watch folder" mode, where it monitors a local folder and attempts to upload any changes to the remote directory, errors out because it always tries to set the date/timestamp.
The user does have sudo access, but I have been told that it's really not a good idea to work under 'root', so I have been unwilling to just log in as root to do this work. Besides, it ought not to be necessary. I would want some other, non-superuser to be able to do the same thing - using their account information, establish an FTP connection and be able to work remotely via sync. So the solution needs to work for someone without root access.
What staggers me is how much difficulty I'm having. All these softwares (NetBeans, Eclipse, WinSCP) are designed to allow synchronization, and they all try to write the timestamp. So it must be possible. WinSCP has the option to turn off "set timestamp", but this option becomes unavailable (always "on") when you select monitor/synchronize folder. So it's got to be something that is fairly standard.
Given that I'm a complete idiot when it comes to Linux, and I'm the dev "server admin" I can only assume it's something idiotic that I'm doing or that I have (mis)configured.
Summary In a nutshell, I want any users that have group r/w access to a directory, to be able to change the timestamp on files in that directory via SCP.
|
Why it doesn't work
When you attempt to change the modification time of a file with touch, or more generally with the underlying system call utime, there are two cases.
You are attempting to set the file's modification time to a specific time. This requires that you are the owner of the file. (Technically speaking, the process's effective user ID must be the owner of the file.²)
You are attempting to set the file's modification time to the current time. This works if and only if you have permission to write to the file. The reason for this exception is that you could achieve the same effect anyway by overwriting an existing byte of the file with the same value¹.
Why this typically doesn't matter
When you copy files with ftp, scp, rsync, etc., the copy creates a new file that's owned by whoever did the copy. So the copier has the permission to set the file's times.
With rsync, you won't be able to set the time of existing directories: they'll be set to the time when a file was last synchronized in them. In most cases, this doesn't matter. You can tell rsync not to bother with directory times by passing --omit-dir-times (-O).
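A quick local demonstration of that flag; a sketch assuming rsync is installed, run here between two temporary local directories, though the same flags apply unchanged over SSH:

```shell
# -a preserves file times/permissions; -O skips setting directory times,
# which is the part that needs ownership of existing directories.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/sub"
echo hello > "$src/sub/file.txt"
rsync -a -O "$src/" "$dst/"
cat "$dst/sub/file.txt"
```

Over SSH the invocation would look like `rsync -a -O /local/site/ user@server:/var/www/site/`.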
With version control systems, revision dates are stored inside files; the metadata on the files is mostly irrelevant.
Solutions
This all has to do with enabling PHP development in a single-developer scenario (ie: without SCM).
Ok, stop right there. Just because there's a single developer doesn't mean you shouldn't use SCM. You should be using SCM. Have the developer check in a file, and give him a way to press a “deploy” button to check out the files from SCM into the live directory.
There is absolutely no technical reason why you shouldn't be using SCM, but there may be a human reason. If the person working on these files styles himself “developer”, he should be using SCM. But if this is a non-technical person pushing documents in, SCM might be too complicated. So go on pushing the files over FTP or SSH. There are three ways this can work.
Do you really need to synchronize times? As indicated above, rsync has an option to not synchronize times. Scp doesn't unless you tell it to. I don't know WinSCP but it probably can too.
Continue doing what you're doing, just ignore messages about times. The files are still being copied. This isn't a good option, because ignoring errors is always risky. But it is technically possible.
If you need flexibility in populating the files owned by the apache user, then the usual approach would be to allow the user SSH access as apache. The easy approach is to have the user create an SSH private key and add the corresponding public key to ~apache/.ssh/authorized_keys. This means the user will be able to run arbitrary commands as the apache user. Since you're ok with giving the user sudo rights anyway, it doesn't matter in your case. It's possible, but not so easy, to put more restrictions (you need a separate user database entry with a different name, the same user ID, a restricted shell and a chroot jail; details in a separate question, though this may already be covered on this site or on Server Fault).
¹ Or, for an empty file, write a byte and then truncate.
² Barring additional complications, but none that I know of applies here.
| User can't touch -t |
1,373,982,804,000 |
Computer A (assumed that ip is 44.44.44.44)can ftp the host 130.89.148.12.
ftp 130.89.148.12
Connected to 130.89.148.12.
220 ftp.debian.org FTP server
Name (130.89.148.12:debian8): anonymous
331 Please specify the password.
Password:
230 Login successful.
Remote system type is UNIX.
Using binary mode to transfer files.
Computer B (my local pc) can not ftp the host 130.89.148.12.
Let's build a ssh tunnel with ssh command this way:
ssh -f -N -L localhost:2121:130.89.148.12:21 [email protected]
The ssh tunnel between my local pc and Computer A (44.44.44.44) was connected after password to login into 44.44.44.44.
Then to input the command on my local pc console:
ftp localhost:2121
ftp: localhost:2121: Name or service not known
What is the matter with my ssh tunnel?
Thanks to chexum, the right ftp command is ftp localhost 2121
But new problem.
|
Your approach is not taking into account that, contrary to other common protocols, FTP uses both port 20 and port 21 over TCP by default.
The term passive refers to a mode in which the client opens the data connection to the server; it is slightly better behaved than the initial implementations.
Here is a link:
http://www.slacksite.com/other/ftp.html
Port 20/TCP is used for data, and port 21/TCP for commands.
In Unix, privileged ports (< 1024) can only be bound by root.
So either you do:
sudo ssh -f -N -L 20:130.89.148.12:20 -L 21:130.89.148.12:21 [email protected]
This way you do not give any extra port, and only use it with
ftp -p localhost
or if you do not have root:
ssh -f -N -L 2120:130.89.148.12:20 -L 2121:130.89.148.12:21 [email protected]
and then use:
ftp -p -P 2121 localhost
From man ftp http://linux.die.net/man/1/ftp
-p passive mode
-P port
or if with a version of ftp that does not support -P (Debian 9/Ubuntu 16.04):
ftp -p localhost 2121
I will also leave a link to "SSH tunnels local and remote port forwarding explained"
http://blog.trackets.com/2014/05/17/ssh-tunnel-local-and-remote-port-forwarding-explained-with-examples.html
Lastly, I would advise on not using root in the remote system for ssh connections. root is a very powerful account, and should only be reserved for system administration.
Furthermore, in many modern Linuxes ssh remote login as root comes disabled by default.
Why is root login via SSH so bad that everyone advises to disable it?
| Why doesn't FTP work through my ssh tunnel? |
1,373,982,804,000 |
I'd like to give temporary SFTP access to a support guy. How do I create an SFTP user? And how can I delete it once the job is done?
Also, how do I specify a home directory for them? Can I prevent them from accessing certain subdirectories within their home directory?
We use CentOS 6.3 and fzSftp
|
Non-chroot access
If you don't have a FTP server setup, and you trust the user that will be logging in, not to go poking around your server too much, I'd be inclined to give them an account to SFTP into the system instead.
The CentOS wiki maintains a simple howto titled: Simple SFTP setup that makes this pretty pain free.
I say it's pain free because you literally just have to make the account, make sure that the firewall allows SSH traffic, make sure the SSH service is running, and you're pretty much done.
If sshd isn't already running:
$ /etc/init.d/sshd start
To add a user:
$ sudo useradd userX
$ sudo passwd userX
... set the password ...
When you're done with the account:
$ sudo userdel -r userX
Chroot access
If on the other hand you want to limit this user to a designated directory, the SFTP server included with SSH (openssh) provides a configuration that makes this easy to enable too. It's a bit more work but not too much. The steps are covered here in this tutorial titled: How to Setup Chroot SFTP in Linux (Allow Only SFTP, not SSH).
Make these changes to your /etc/ssh/sshd_config file.
Subsystem sftp internal-sftp
## You want to put only certain users (i.e users who belongs to sftpusers group) in the chroot jail environment. Add the following lines at the end of /etc/ssh/sshd_config
Match Group sftpusers
ChrootDirectory /sftp/%u
ForceCommand internal-sftp
Now you'll need to make the chrooted directory tree where this user will get locked into.
$ sudo mkdir -p /sftp/userX/{incoming,outgoing}
$ sudo chown userX:sftpusers /sftp/userX/{incoming,outgoing}
(Run the chown after creating the userX account below.)
Permissions should look like the following:
$ ls -ld /sftp/userX/{incoming,outgoing}
drwxr-xr-x 2 userX sftpusers 4096 Dec 28 23:49 /sftp/userX/incoming
drwxr-xr-x 2 userX sftpusers 4096 Dec 28 23:49 /sftp/userX/outgoing
The top level directories should look like this:
$ ls -ld /sftp /sftp/userX
drwxr-xr-x 3 root root 4096 Dec 28 23:49 /sftp
drwxr-xr-x 3 root root 4096 Dec 28 23:49 /sftp/userX
Don't forget to restart the sshd server:
$ sudo service sshd restart
Now create the userX account:
$ sudo useradd -g sftpusers -d /incoming -s /sbin/nologin userX
$ sudo passwd userX
... set password ...
You can check that the account was created correctly:
$ grep userX /etc/passwd
userX:x:500:500::/incoming:/sbin/nologin
When you're done with the account, delete it in the same way above:
$ sudo userdel -r userX
...and don't forget to remove the configuration file changes we made above, then restart sshd to make them active once more.
| How can I create an SFTP user in CentOS? |
1,373,982,804,000 |
I want to duplicate a directory on an FTP server I'm connected to from my Mac via the command-line
Let's say I have a directory named file. I want to have files2 with all of file's subdirectories and files, in the same parent directory as the original. What would be the simplest way to achieve this?
EDIT:
With mget and mput you could download all files and upload them again into a different folder, but this is definitely NOT what I want/need (I started this question trying to avoid duplicating with this download/upload method from the desktop client)
|
What you have is not a unix command line, what you have is an FTP session. FTP is designed primarily to upload and download files, it's not designed for general file management, and it doesn't let you run arbitrary commands on the server. In particular, as far as I know, there is no way to trigger a file copy on the server: all you can do is download the file then upload it under a different name.
Some servers support extensions to the FTP protocol, and it's remotely possible that one of these extensions lets you copy remote files. Try help site or remotehelp to see what extensions the server supports.
If you want a unix command line, you need remote shell access, via rsh (remote shell) or more commonly in the 21st century ssh (secure shell). If this is a web host, check if it provides ssh access. Otherwise, contact the system administrator. But don't be surprised if the answer is no: command line access would be a security breach in some multi-user setups, so there may be a legitimate reason why it's not offered.
| Easiest way to duplicate directory over FTP |
1,373,982,804,000 |
I want to automate a call to ftp in a shell script. If I type
$ftp somehost.domain.com
I am prompted for a username and password. I want to give that username and password as part of the call to ftp. The man page for ftp says I can issue a user command at the ftp prompt -- but I want to login to ftp all in one go. Is that possible? I don't see anything in the flags for ftp. I see that the -s option gives me the option of specifying some ftp commands once I have the ftp prompt -- but I need to give the user name to get to the prompt...
|
Use a .netrc file in your home directory.
The content is:
# machine <hostname> login <username> password <password>
machine ftp.example.com login myuser password $ecret
If this is something you're doing programmatically, write the .netrc before connecting, delete it when you're done.
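A hedged sketch of doing exactly that programmatically (hostname and credentials are placeholders): ftp will not use a password from .netrc if the file is readable by others, so restrict it to the owner; using a scratch HOME keeps your real ~/.netrc untouched.

```shell
# Generate a throwaway .netrc with placeholder credentials.
scratch=$(mktemp -d)
cat > "$scratch/.netrc" <<'EOF'
machine ftp.example.com login myuser password $ecret
EOF
# ftp ignores a world-readable .netrc password, so lock it down.
chmod 600 "$scratch/.netrc"
# Point the client at it without touching your real ~/.netrc:
#   HOME="$scratch" ftp ftp.example.com
ls -l "$scratch/.netrc"
```

When the transfer finishes, `rm -rf "$scratch"` removes the credentials again.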
| How to specify username and password in ftp command? |
1,373,982,804,000 |
With lftp, when I do ls I get the listing of the files on the FTP server, with their date. However, the timezone is not displayed.
On my local machine, I can do ls -l --time-style=full-iso to see the timezone, but this command doesn't work with lftp.
Generally speaking, does the FTP protocol allow for server timezone discovery?
When I do a file listing (ls), how can I see which timezone the date is supposed to be?
|
http://ohse.de/uwe/ftpcopy/faq.html#timestamp
The FTP protocol, misdesigned as it is, doesn't include time zone information. This means client programs have to guess what the time zone of the server is. At least my programs aren't good in guessing, so they don't even try.
ftpcopy simply assumes UTC (GMT, Greenwich Mean Time).
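If you adopt the same assumption (server times are UTC), GNU date can convert a listing timestamp to your local zone; a sketch with a made-up timestamp, assuming GNU date's -d syntax:

```shell
# Treat a server-reported time as UTC and convert it.
server_time='2015-01-03 16:21:00'            # as shown in the listing
epoch=$(TZ=UTC date -d "$server_time" +%s)   # seconds since the epoch
echo "$epoch"
date -d "@$epoch"                            # same instant in your local zone
```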
| How to get the timezone of a FTP server? |
1,373,982,804,000 |
I'm on Arch. When pacman tries to download a package from an ftp server, It fails with the error message
Protocol rsync not supported or disabled in libcurl
This has been bugging me for a little while now, but I can't remember what I did to cause it (Pacman just downloads from an http server instead, so I've been able to ignore it). I have both rsync and libcurl installed, and they apparently played well together before. I can't find any libcurl or curl config files and found no mention of rsync in the man page. How can I go about enabling rsync?
|
libcurl does not support the rsync protocol.
From the libcurl FAQ: Section 3.21
3.21 Protocol xxx not supported or disabled in libcurl
When passing on a URL to curl to use, it may respond that the particular
protocol is not supported or disabled. The particular way this error message
is phrased is because curl doesn't make a distinction internally of whether
a particular protocol is not supported (i.e. never got any code added that
knows how to speak that protocol) or if it was explicitly disabled. curl can
be built to only support a given set of protocols, and the rest would then
be disabled or not supported.
Note that this error will also occur if you pass a wrongly spelled protocol
part as in "htpt://example.com" or as in the less evident case if you prefix
the protocol part with a space as in " http://example.com/".
libcurl doesn't know the rsync protocol at all, not even a hint. BUT, since it was designed to 'guess' the protocol from the designator in a URL, trying to use rsync://blah.blah will give you the error you see, since it guesses you meant 'rsync', but it doesn't know that one, so it returns the error.
It'll give you the same error if you tried lornix://blah.blah, I doubt I'm a file transfer protocol either. (If I am, please let me know!)
Libcurl does support an impressive set of protocols, but rsync isn't one of them.
| How do I enable rsync in libcurl? |
1,373,982,804,000 |
I'm attempting to download a year's worth of data from an NOAA FTP Server using wget (or ncftpget). However, it takes way longer than it should due to FTP's overhead (I think). For instance, this command
time wget -nv -m ftp://ftp:[email protected]/pub/data/noaa/2015 -O /weather/noaa/2015
Or similarly, via ncftpget
ncftpget -R -T -v ftp.ncdc.noaa.gov /weather/noaa/ /pub/data/noaa/2015
Yields a result of 53 minutes to transfer 30M!
FINISHED --2015-01-03 16:21:41--
Total wall clock time: 53m 32s
Downloaded: 12615 files, 30M in 7m 8s (72.6 KB/s)
real 53m32.447s
user 0m2.858s
sys 0m8.744s
When I watch this transfer, each individual file transfers quite quickly (500kb/sec) but the process of downloading 12,000 relatively small files incurs an enormous amount of overhead and slows the entire process down.
My Questions:
Am I assessing the situation correctly? I realize it's hard to tell without knowing the servers but does FTP really suck this much when transferring tons of small files?
Are there any tweaks to wget or ncftpget to enable them to play nicer with the remote FTP server? Or perhaps some kind of parallelism?
|
Here's how I ended up solving this, using the advice from others. The NOAA in this case has both an FTP and an HTTP resource for this, so I wrote a script that does the following:
ncftpls to get a list of files
sed to complete the filepaths to a full list of http files
aria2c to quickly download them all
Example script:
# generate file list
ncftpls ftp://path/to/ftp/resources > /tmp/remote_files.txt
# append the full path, use http
sed -i -e 's/^/http:\/\/www1\.website\.gov\/pub\/data\//' /tmp/remote_files.txt
# download using aria2c
aria2c -i /tmp/remote_files.txt -d /filestore/2015
This runs much faster and is probably kinder to the NOAA's servers. There's probably even a clever way to get rid of that middle step, but I haven't found it yet.
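On dropping that middle step: the listing can be piped straight through sed into aria2c, which reads URIs from stdin when given -i - (check your aria2c version's man page). A hedged sketch; the here-document stands in for real ncftpls output so the transform itself can be run anywhere, and hostnames/paths are the same placeholders as above:

```shell
# Simulated ncftpls output; in practice replace the here-document with:
#   ncftpls ftp://path/to/ftp/resources
urls=$(sed -e 's|^|http://www1.website.gov/pub/data/|' <<'EOF'
noaa/2015/010010-99999-2015.gz
noaa/2015/010014-99999-2015.gz
EOF
)
printf '%s\n' "$urls"
# Full pipeline, no temp file:
#   ncftpls ftp://... | sed -e 's|^|http://www1.website.gov/pub/data/|' | aria2c -i - -d /filestore/2015
```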
| Speeding up Recursive FTP |
1,373,982,804,000 |
People around the net are all yelling how insecure it is to have writable root FTP directory, if you configure your FTP server with the chroot option (vsftpd won't even run).
I'm missing the explanation: why is it bad?
Could someone expand a little bit more on that topic and explain what are the dangers, how a chroot directory writable by unprivileged users can be exploited?
|
The attack here is commonly known as the "Roaring Beast" attack; you can read more about it in these bulletins:
https://www.auscert.org.au/bulletins/15286/
https://www.auscert.org.au/bulletins/15526/
In order to use the chroot(2) function, the FTP server must have root privileges. Later, the unprivileged client requests the creation of files within /etc (or /lib) within that chrooted server process. These directories usually contain dynamically loaded libraries and configuration for system libraries like the DNS resolver, user/group name discovery, etc. The client-created files are not in the real /etc/ and /lib directories on the system -- but within the chroot, these client-created files are real.
So the malicious client connects to an FTP server which chroots their process, they create the necessary /lib and /etc directories/files within that chroot, upload a malicious copy of some dynamic libraries, and then ask the server to perform some action that triggers the use of their new dynamic libraries (usually just a directory listing, which leads to using the system functions for user/group discovery, etc). The server process loads those malicious libraries, and because the server might still have root privileges, that malicious library code can then have extra access to do whatever it wants.
Note that /etc and /lib are not the only directories to watch; the issue is more about the assumptions made by system libraries about their file locations in general. Thus different platforms may have other directories to guard.
ProFTPD, for example, now bars the creation of such /etc/ and /lib directories when chrooted, to mitigate such attacks.
| What are the dangers of having writable chroot directory for FTP? |
1,373,982,804,000 |
I have multiple users on a server. They upload and download their files through FTP. Sometimes some heavy transfer causes high load on the server. I am wondering, if there is any way to limit the ftp speed to avoid high load.
Any help would be much appreciated.
|
I found a way to limit ftp speed:
In the /etc/proftpd.conf insert this line:
TransferRate RETR,STOR,APPE,STOU 2000
This will limit FTP transfers to 2000 KB/s (roughly 2 megabytes per second); TransferRate takes its value in kilobytes per second.
After changing the file you should restart the proftpd service:
/etc/init.d/proftpd restart
| How to limit ftp speed |
1,373,982,804,000 |
Oh! With my slow net connection, I am badly stuck. I was uploading a video file from the local box to the remote one via ftp, but the connection failed. I know there is a command named reget to resume a download, but is there any command to resume an upload?
If no then I am hit.
|
I always use the lftp client which has the ability to resume a download that either died midstream or that I want to cancel and later restart.
I usually use the command like so:
$ lftp -e "mirror -c /download/<dir> /local/<dir>" -u user -p <port> ftp.server.com
What else?
This tool's name is a bit misleading; it can handle either FTP or SFTP.
ftp
$ lftp -e "mirror -c /download/<dir> /local/<dir>" -u user ftp://ftp.server.com
sftp
$ lftp -e "mirror -c /download/<dir> /local/<dir>" -u user sftp://sftp.server.com
Mirroring Links
From time to time you might encounter an issue with mirroring directories that contain symlinks; to work around this issue you can add this option to your lftp command:
set ftp:list-options -L
For eg:
$ lftp -e "set ftp:list-options -L; mirror -c /download/<dir> /local/<dir>" \
-u user ftp://ftp.server.com
References
lftp man page
Re: [lftp] Mirror not detecting change in remote symlinked file
| Is there any ftp command to resume upload? |
1,373,982,804,000 |
When I did the command :
wget -r ftp://user:[email protected]/
It's missing any sub-sub-directories. Does recursive FTP have a limit?
|
How many levels deep are you getting? If you need more than 5, you need to provide the -l option.
man wget
-r
--recursive
Turn on recursive retrieving. The default maximum depth is 5.
-l depth
--level=depth
Specify recursion maximum depth level depth.
-m
--mirror
Turn on options suitable for mirroring.
This option turns on recursion and time-stamping,
sets infinite recursion depth and keeps FTP directory listings.
It is currently equivalent to ‘-r -N -l inf --no-remove-listing’.
| Why doesn't wget -r get all FTP subdirectories? |
1,373,982,804,000 |
From the command line, I want to download a file from a FTP server. Once the download completes, I want the file to be deleted on the server. Is there any way to do this?
Originally I considered wget, but there is no particular reason why to use that specifically. Any tool would be fine as long as it runs on Linux.
|
with curl:
curl ftp://example.com/ -X 'DELE myfile.zip' --user username:password
| How can I download a file from a FTP server, then automatically delete it from the server once the download completes? |
1,373,982,804,000 |
I have user that have a symlink to somewhere in the computer like this :
# ls -ltr /home/guirec0
total 4
lrwxrwxrwx 1 root root 24 Jan 9 17:56 int -> /disk2/clients/optik/int
drwxr-xr-x 2 guirec0 guirec0 4096 Jan 9 18:13 blabla
I use sftp to connect to this user. I have this setup in /etc/ssh/sshd_config :
Subsystem sftp internal-sftp
Match Group sftpgroup
ChrootDirectory %h
ForceCommand internal-sftp
X11Forwarding no
AllowTcpForwarding no
So the root is changed and /disk2/clients/optik/int is not the same for root and for guirec0.
Is there a way to allow access /disk2/clients/optik/int for guirec0?
The goal of chrooting is to restrict access of the users.
|
Use bind mount instead of symlink:
rm /home/guirec0/int
mkdir /home/guirec0/int
mount --bind /disk2/clients/optik/int /home/guirec0/int
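To make the bind mount survive a reboot, it can also go in /etc/fstab; a sketch using the paths from the question (see fstab(5) on your system for the exact syntax):

```
/disk2/clients/optik/int  /home/guirec0/int  none  bind  0  0
```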
| allow access to a symlink when chrooted on the home directory |
1,373,982,804,000 |
Currently I'm running this command:
curlftpfs user_name:password@hostname ~/mnt/sitename
It mounts the contents of the main FTP dir on the server to ~/mnt/sitename, but on the server I need to open the public_html directory every time.
Is it possible to mount /public_html directory from server directly to the mountpoint?
|
You could specify the path on the FTP server after hostname part in the original command of curlftpfs.
For example, you could have your command as,
curlftpfs user_name:password@hostname:/var/www/public_html ~/mnt/sitename
References
https://askubuntu.com/a/323215
https://askubuntu.com/a/200812
| Is it possible to mount a subdirectory in ftp server via curlftpfs |
1,373,982,804,000 |
Is there a way to shebang-ify ftp and write small FTP scripts?
For example:
#!/usr/bin/ftp
open 192.168.1.1
put *.gz
quit
Any thoughts?
|
Not with the ftp programs I've run into, as they expect a script on their standard input but a shebang would pass the script name on their command line.
You can use a here document to pass a script to ftp through a shell wrapper.
#!/bin/sh
ftp <<EOF
open 192.168.1.1
put *.gz
EOF
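One detail worth knowing about the here-document approach: with an unquoted delimiter the shell expands variables before ftp sees the script, so the wrapper is easy to parameterize. A small sketch (host and filename are placeholders; the script text is captured here just to show what ftp would receive):

```shell
HOST=192.168.1.1
FILE=backup.gz
# Unquoted EOF => $HOST and $FILE are expanded; quote it ('EOF') to pass
# the text through literally instead.
script=$(cat <<EOF
open $HOST
put $FILE
quit
EOF
)
printf '%s\n' "$script"
```

When running for real, pipe the same here-document straight into ftp instead of cat.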
Lftp accepts a script name passed as an argument.
#!/usr/bin/lftp -f
open 192.168.1.1
put *.gz
Ncftp comes with two tools ncftpget and ncftpput for simple batches of gets or puts.
Zsh includes an FTP module. Using a proper shell rather than a straight FTP script has the advantage that you can react to failures.
#!/bin/zsh
autoload -Uz zfinit && zfinit
zfopen 192.168.1.1
zfput *.gz
Of course there are plenty of other languages you could use: Perl, Python, Ruby, etc.
Another approach is to mount the FTP server as a directory, and then use cp (or rsync or other tools) to copy files. There are many FUSE filesystems for FTP access, principally CurlFtpFS and LftpFS.
Note that if you were planning to use authentication (likely if you're uploading), and you have control over the server, you'd be better off with SSH access. It's more secure and more flexible. To copy files over SSH, you can use scp or sftp, or rsync for efficient synchronization (if some of the files may already be there), or Unison (for bidirectional synchronization), or mount with SshFS.
| Scripting FTP transfers |
1,373,982,804,000 |
Is there a way to sum up the disk usage of a certain directory while in ftp?
I was trying to create a script that checks the disk usage of the current directory and prints out the free space for the home directory.
Example:
ftp> cd /home/directory/
drw-rw-rw- 1 user group 0 Nov 16 /directory
drw-rw-rw- 1 user group 0 Nov 16 next/directory
drw-rw-rw- 1 user group 0 Nov 16 next/next/directory
For some reason, I can't see any size for the directories. But inside them are files whose usage I need to check, so I want to get something like this:
total disk usage for /home/directory = "some count"
total disk usage for /next/directory = "some count"
total disk usage for /../directory = "some count"
|
You could use Perl. From http://aplawrence.com/Unixart/perlnetftp.html:
#!/usr/bin/perl
my $param = $ARGV[0];
# required modules
use Net::FTP;
use File::Listing qw(parse_dir);
sub getRecursiveDirListing
{
# create a new instance of the FTP connection
my $ftp = Net::FTP->new("ftpserver", Debug=>0) or die("Cannot connect $!");
# login to the server
$ftp->login("username","password") or die("Login failed $!");
# create an array to hold directories, it should be a local variable
local @dirs = ();
# directory parameter passed to the sub-routine
my $dir = $_[0];
# if the directory was passed onto the sub-routine, change the remote directory
$ftp->cwd($dir) if($dir);
# get the file listing
@ls = $ftp->ls('-lR');
# the current working directory on the remote server
my $cur_dir = $ftp->pwd();
my $totsize = 0;
my $i = 0;
my @arr = parse_dir(\@ls);
my $arrcnt = scalar(@arr);
if ($arrcnt == 0) {
print "$cur_dir 0\n";
$ftp->quit();
exit 1;
}
else {
# parse and loop through the directory listing
foreach my $file (parse_dir(\@ls))
{
$i++;
my($name, $type, $size, $mtime, $mode) = @$file;
$totsize = $totsize + $size if ($type eq 'f');
print "$cur_dir $totsize\n" if ($i == $arrcnt);
# recursive call to get the entries in the entry, and get an array of return values
# @xx = getRecursiveDirListing ("$cur_dir/$name") if ($type eq 'd');
}
# close the FTP connection
$ftp->quit();
}
# merge the array returned from the recursive call with the current directory listing
# return (@dirs,@xx);
}
@y = getRecursiveDirListing ("$param");
To run it:
$ ./getSize.pl <directory>
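A lighter-weight alternative is to capture the recursive listing once and total the size column locally. A sketch using awk on ls -lR style output (assuming the usual layout with the size in the fifth field; the sample lines are made up):

```shell
# Sum the size field (5th column) of regular-file lines in an ls -lR style listing.
sum_listing() {
  awk '$1 ~ /^-/ { total += $5 } END { print total + 0 }'
}

# Canned example; in practice, pipe in the saved output of "ls -lR".
printf '%s\n' \
  '-rw-rw-rw- 1 user group 120 Nov 16 file1' \
  '-rw-rw-rw- 1 user group  80 Nov 16 file2' \
  'drw-rw-rw- 1 user group   0 Nov 16 somedir' \
  | sum_listing    # prints 200
```

Directory lines (leading d) are skipped, so only regular files are counted.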
| How to check disk usage in ftp? |
1,373,982,804,000 |
When I log in over SSH while forwarding my local port 21 (FTP), with the command:
ssh -R 2101:localhost:21 [email protected] -p 8288
After successfully logging in, I sent this command in the SSH:
ftp ikiw@localhost -p 2101
The command runs normally and I log in to FTP successfully, but when I try to list the available files with ls or dir, I get this error:
ftp: Can't connect to '::1:27394': Connection refused
What is wrong? Does it seem like FTP creates a new port randomly when I run the ls command?
I want to forward my local FTP to my SSH/VPS, and run FTP from my SSH/VPS to my local machine normally. Can someone help me and provide a solution? Thank you very much! :D
|
FTP is a horrible protocol. Yes, it uses multiple ports; there's the control port and then each data transfer (ls or get and so on) opens a second new random port.
Worse, depending on whether you're doing PASV or active mode FTP, the server could try to initiate the connection.
FTP isn't easy to handle with forwarding like this. Since you have ssh connectivity, can't you use sftp? That's an FTP-like protocol that's built directly into ssh so no need to port forward.
| Cannot do "ls" in FTP while port forwarding to SSH |
1,373,982,804,000 |
I would like to do the same thing as with SSH, where you can save the server in the config file.
I would also like to save my username and password, so that it is not prompted each time I connect.
I use the ftp command.
|
From man ftp on my CentOS system:
If auto-login is enabled, ftp will check the .netrc (see below)
file in the user’s home directory for an entry describing an account on the remote machine. If no entry exists, ftp will prompt for the
remote machine login name (default is the user identity on the local machine), and, if necessary, prompt for a password and an account
with which to login.
Example: ~/.netrc
machine ftp.freebsd.org
login anonymous
password [email protected]
machine myownmachine
login useraccount
password xyz
More on .netrc file in the man page:
THE .netrc FILE
The .netrc file contains login and initialization information used by the auto-login process. It resides in the user’s home directory. The
following tokens are recognized; they may be separated by spaces, tabs, or new-lines:
machine name
Identify a remote machine name. The auto-login process searches the .netrc file for a machine token that matches the remote machine
specified on the ftp command line or as an open command argument. Once a match is made, the subsequent .netrc tokens are processed,
stopping when the end of file is reached or another machine or a default token is encountered.
default
This is the same as machine name except that default matches any name. There can be only one default token, and it must be after all
machine tokens. This is normally used as:
default login anonymous password user@site
thereby giving the user automatic anonymous ftp login to machines not specified in .netrc. This can be overridden by using the -n flag
to disable auto-login.
login name
Identify a user on the remote machine. If this token is present, the auto-login process will initiate a login using the specified name.
password string
Supply a password. If this token is present, the auto-login process will supply the specified string if the remote server requires a
password as part of the login process. Note that if this token is present in the .netrc file for any user other than anonymous, ftp will
abort the auto-login process if the .netrc is readable by anyone besides the user.
account string
Supply an additional account password. If this token is present, the auto-login process will supply the specified string if the remote
server requires an additional account password, or the auto-login process will initiate an ACCT command if it does not.
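Putting those tokens together: a sketch that creates the file and tightens its permissions (machine name and credentials are the placeholder values from the example above), since ftp aborts auto-login when .netrc is readable by anyone besides the user:

```shell
# Append a .netrc entry; replace the host and credentials with real values.
cat >> "$HOME/.netrc" <<'EOF'
machine myownmachine
login useraccount
password xyz
EOF
# Owner-only permissions, or ftp will refuse to use the stored password.
chmod 600 "$HOME/.netrc"
```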
| Use configuration file for ftp with auto login enabled upon initial connection |
1,373,982,804,000 |
I don't think it's possible but still I would like to ask if there is any command to transfer a remote file from one directory to another at an FTP prompt.
In more detail: I run ftp to a remote machine. I am at the ftp prompt, in the directory /a. By mistake, I have uploaded a file (via put) to this directory, but that's the wrong directory. I want to move the file from this directory to /a/b on the remote machine. Can I do this from the FTP prompt?
I have checked and I cannot telnet to that machine. If this is impossible in FTP, is there another way I can move the file and avoid having to transfer it again?
|
I think it depends more on the client that you're using. Take a look at the lftp client. There's a good tutorial on using it, titled Unix: Flexibly moving files with lftp.
If you look through the help for lftp you'll notice the command mv.
$ lftp
lftp :~> help
!<shell-command> (commands) alias [<name> [<value>]] attach [PID]
bookmark [SUBCMD] cache [SUBCMD] cat [-b] <files> cd <rdir>
chmod [OPTS] mode file... close [-a] [re]cls [opts] [path/][pattern] debug [<level>|off] [-o <file>]
du [options] <dirs> exit [<code>|bg] get [OPTS] <rfile> [-o <lfile>] glob [OPTS] <cmd> <args> help [<cmd>]
history -w file|-r file|-c|-l [cnt] jobs [-v] kill all|<job_no> lcd <ldir>
lftp [OPTS] <site> ln [-s] <file1> <file2> ls [<args>] mget [OPTS] <files>
mirror [OPTS] [remote [local]] mkdir [-p] <dirs> module name [args] more <files>
mput [OPTS] <files> mrm <files> mv <file1> <file2> [re]nlist [<args>]
open [OPTS] <site> pget [OPTS] <rfile> [-o <lfile>] put [OPTS] <lfile> [-o <rfile>] pwd [-p]
queue [OPTS] [<cmd>] quote <cmd> repeat [OPTS] [delay] [command] rm [-r] [-f] <files>
rmdir [-f] <dirs> scache [<session_no>] set [OPT] [<var> [<val>]] site <site-cmd> source <file>
torrent [-O <dir>] <file|URL>... user <user|URL> [<pass>] wait [<jobno>] zcat <files> zmore <files>
| Move a remote file at an FTP prompt |
1,373,982,804,000 |
I have recently been working extensively with a remote system over an FTP connection. The session timeout is so short that I have to re-login quite often, so I need a way to create a custom command/shell script that logs in to the FTP server with just one word. The question is how to do it.
e.g.
~$ ftp domainname.com
...
Name (domainname.com): MyName
...
Password: xxxx
...
ftp>
|
Usually, ftp command line clients support the configuration file ~/.netrc where you can configure credentials for remote systems, e.g.:
machine legacy.system.example.org
login juser
password keins
When you ftp legacy.system.example.org then you don't have to retype this information anymore.
If you need to do more automation, you can script ftp via piping commands into it, e.g.:
$ cat pushit.sh
# complex logic to set
# EXAMPLE_FILE=
ftp <<EOF
prompt
mput $EXAMPLE_FILE
quit
EOF
Sure, if the system does not support ssh, it probably does not support ftps either - but you can try it (e.g. via ftp-ssl) if you need to secure your connection.
LFTP
An alternative to one of the plain ftp commands is to use lftp, since it provides several features to automated login and command execution.
Example:
$ lftp -e 'source ~/login.lftp'
$ cat login.lftp
open sftp://juser:[email protected]
cd /path/to/favorite/dir
Note that this example shows automated password authentication to an SFTP server, which is not supported by the standard OpenSSH sftp client.
The option -e instructs lftp to execute the commands at startup and stay interactive.
Such an lftp script might also source other scripts, automatically disconnect from the server, etc.
In contrast, with -c or -f lftp directly exits after executing the commands specified as argument or read from the specified file.
| Is there a way to write a script to do ftp login so I don't have to type things over and over again? |
1,373,982,804,000 |
Browsing through FTP help (i.e. ftp> ?), showed me a command name literal, description of which is
ftp> ? literal
literal Send arbitrary ftp command
I tried to do some trial and error, and following is the terminal output. All returned 500 Unknown command.
ftp> literal check
500 Unknown command.
ftp> literal bye
500 Unknown command.
ftp> literal
Command line to send check
500 Unknown command.
ftp> literal
Command line to send ascii
500 Unknown command.
I would like to know more about this literal command and in what scenarios is it helpful?
Edit
Note: I am connecting a unix machine from windows 7 command prompt. And I see both literal and quote in ftp help.
|
FTP has quite a few commands, and the client maps some of these to a more user-friendly text interface.
For example, if you use ftp -v (depending on your ftp client, the one I use needs ftp -vd), you'll notice something like the following (---> shows what is sent to the server):
$ ftp -vd ftp.debian.org
Connected to ftp.debian.org.
220 ftp.debian.org FTP server
Name (ftp.debian.org:user): anonymous
---> USER anonymous
331 Please specify the password.
Password:
---> PASS XXXX
230 Login successful.
[...]
ftp> cd debian
---> CWD debian
250 Directory successfully changed.
That is, your convenient cd calls get mapped to CWD commands.
Some FTP clients allow you to send verbatim FTP commands to the server; in yours it is done with literal (my ftp uses quote):
ftp> quote CWD ..
---> CWD ..
250 Directory successfully changed.
Useful? Indeed, it allows you to interact with your FTP server in ways the client doesn't know about. Maybe your client doesn't implement SITE commands, then you could still use literal SITE [...] to have the server do what you want. Things like FXP can be done with any FTP client using handcrafted commands, too (albeit quite inconveniently). Also, for experimenting with FTP, it's more comfortable to have the login process handled by the FTP client and use literal commands afterwards (compared to using telnet/netcat only).
However, what the server understands obviously depends on your server:
ftp> quote foobar
---> foobar
500 Unknown command.
| What is the use of literal command in ftp? |
1,373,982,804,000 |
I have been wasting more than an hour on this now and I think this should be really simple...
I have an azure website that allows me to connect and deploy to it using sftp. I can connect to it fine using FileZilla with the following settings:
Host: The host given by azure portal
Port: Empty
Protocol: FTP - File Transfer Protocol
Encryption: Require implicit FTP over TLS
Logon Type: Normal
User: The username given by Azure portal
Password: The password given by Azure portal.
I don't want to connect to it using FileZilla though. I want to move files over using the command line. I have been trying to use sftp, ftp and scp all without success. In the end they all fail with the following:
$ sftp -v -oPort=990 [email protected]
OpenSSH_7.9p1, OpenSSL 1.0.2r 26 Feb 2019
debug1: Reading configuration data /home/rg/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 17: Applying options for *
debug1: Connecting to xxxxxxx.azurewebsites.windows.net [xxx.xxx.xxx.xxx] port 990.
debug1: Connection established.
debug1: identity file /home/rg/.ssh/id_rsa type 0
debug1: identity file /home/rg/.ssh/id_rsa-cert type -1
debug1: identity file /home/rg/.ssh/id_dsa type -1
debug1: identity file /home/rg/.ssh/id_dsa-cert type -1
debug1: identity file /home/rg/.ssh/id_ecdsa type -1
debug1: identity file /home/rg/.ssh/id_ecdsa-cert type -1
debug1: identity file /home/rg/.ssh/id_ed25519 type -1
debug1: identity file /home/rg/.ssh/id_ed25519-cert type -1
debug1: identity file /home/rg/.ssh/id_xmss type -1
debug1: identity file /home/rg/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_7.9
ssh_exchange_identification: Connection closed by remote host
Connection closed.
Connection closed
I have verified that the OpenSSL version in use supports TLS 1.2. Nor is the host already present in known_hosts with a different fingerprint.
I hope somebody can help me here.
|
FTP (over TLS) is not SFTP. They are unrelated protocols: SFTP runs over SSH, while FTPS is the plain FTP protocol wrapped in TLS.
If you can connect using FTP with FileZilla, you have to use a command-line FTP client, not an SFTP client. Note that not all command-line FTP clients support TLS encryption; lftp does (via ftps:// URLs) when compiled with OpenSSL or GNU TLS support.
| Connect to "FTP over TLS" with sftp |
1,452,599,967,000 |
I want to write an lftp script that will download files from a server every 15 minutes.
Can someone advise how I can do this?
Thanks
|
First: Create a script. You can call it whatever you want. I will call it downloader.sh.
#!/bin/bash
PROTOCOL="ftp"
URL="server.example.com"
LOCALDIR="/home/user/downloads"
REMOTEDIR="dir/remote/server/"
USER="user"
PASS="password"
REGEX="*.txt"
LOG="/home/user/script.log"
cd $LOCALDIR
if [ ! $? -eq 0 ]; then
echo "$(date "+%d/%m/%Y-%T") Cant cd to $LOCALDIR. Please make sure this local directory is valid" >> $LOG
exit 1
fi
lftp $PROTOCOL://$URL <<- DOWNLOAD
user $USER "$PASS"
cd $REMOTEDIR
mget -E $REGEX
DOWNLOAD
if [ ! $? -eq 0 ]; then
echo "$(date "+%d/%m/%Y-%T") Cant download files. Make sure the credentials and server information are correct" >> $LOG
fi
Second: Add it to crontab. To execute it every 15 minutes (at minutes 0, 15, 30 and 45):
45,30,15,00 * * * * /home/user/downloader.sh >/dev/null 2>&1
Equivalently, using step syntax (*/15 also fires at minutes 0, 15, 30 and 45):
*/15 * * * * /home/user/downloader.sh >/dev/null 2>&1
Explaining the variables:
PROTOCOL - What protocol to use. lftp supports a good range of them: ftp, ftps, http, https, hftp, fish, sftp and file. https and ftps require lftp to be compiled with OpenSSL or GNU TLS support.
URL- Name or IP of the server. You can even add :PORT at the end if your server doesn't use the default port of the protocol being used.
LOCALDIR - Where to save the files.
REMOTEDIR - Where to cd on the remote server to get the files.
USER and PASSWORD - ftp credentials.
REGEX - Regular expression to filter files to download. It can be useful if you want to download only files of a determined extension, for example. Use * if you want to download everything.
LOG - Logfile location.
Explaining some code logic:
1. - if
if [ ! $? -eq 0 ]; then
fi
The $? variable is a special bash variable that holds the exit status of the last command. Commands return zero on success, so testing -eq 0 (equal to zero) negated by the leading ! in an if is enough to see whether cd or lftp had issues during execution. If you want a better log of what happened, you will have to crawl through those commands' documentation.
2. - heredocs
lftp $PROTOCOL://$URL <<- DOWNLOAD
DOWNLOAD
Bash heredocs. It's a way to say "feed this command with this input". I've named the limit string DOWNLOAD, so everything between <<- DOWNLOAD and DOWNLOAD will be input to lftp. You will see examples on the internet with the << symbol, but I prefer the <<- version since it allows the body to be indented (leading tab characters are stripped).
3. - lftp commands
user $USER "$PASS"
cd $REMOTEDIR
mget -E $REGEX
These are internal commands of lftp that mean, respectively: authenticate with the $USER login and "$PASS" password, change to $REMOTEDIR, and bulk-download anything matching $REGEX. You can learn them by simply typing lftp and, as soon as the lftp shell opens, typing ? and pressing Enter, or ? lftp-command-you-want and pressing Enter. Example:
[root@host ~]# lftp
lftp :~> ?
!<shell-command> (commands) alias [<name> [<value>]]
attach [PID] bookmark [SUBCMD] cache [SUBCMD]
cat [-b] <files> cd <rdir> chmod [OPTS] mode file...
close [-a] [re]cls [opts] [path/][pattern] debug [<level>|off] [-o <file>]
du [options] <dirs> exit [<code>|bg] get [OPTS] <rfile> [-o <lfile>]
glob [OPTS] <cmd> <args> help [<cmd>] history -w file|-r file|-c|-l [cnt]
jobs [-v] [<job_no...>] kill all|<job_no> lcd <ldir>
lftp [OPTS] <site> ln [-s] <file1> <file2> ls [<args>]
mget [OPTS] <files> mirror [OPTS] [remote [local]] mkdir [-p] <dirs>
module name [args] more <files> mput [OPTS] <files>
mrm <files> mv <file1> <file2> [re]nlist [<args>]
open [OPTS] <site> pget [OPTS] <rfile> [-o <lfile>] put [OPTS] <lfile> [-o <rfile>]
pwd [-p] queue [OPTS] [<cmd>] quote <cmd>
repeat [OPTS] [delay] [command] rm [-r] [-f] <files> rmdir [-f] <dirs>
scache [<session_no>] set [OPT] [<var> [<val>]] site <site-cmd>
source <file> torrent [-O <dir>] <file|URL>... user <user|URL> [<pass>]
wait [<jobno>] zcat <files> zmore <files>
lftp :~> ? mget
Usage: mget [OPTS] <files>
Gets selected files with expanded wildcards
-c continue, resume transfer
-d create directories the same as in file names and get the
files into them instead of current directory
-E delete remote files after successful transfer
-a use ascii mode (binary is the default)
-O <base> specifies base directory or URL where files should be placed
Knowing that mget would be the right command inside lftp came from reading manpages and searching for keywords like "bulk", "multi" or "mass", and from knowing that the ftp(1) command also has an mget command, so lftp would probably have an equivalent.
Manpage: lftp(1)
| LFTP Script to Download Files |
1,452,599,967,000 |
Is there any (simple) way to deny FTP connections based on the general physical location? I plan to use FTP as a simple cloud storage for me and my friends. I use an odroid c2 (similar to raspberry pi but uses arm64 architecture) running Debian 8 with proftpd and ufw as my firewall. Ftp server runs on a non-standard port which I prefer not to mention here. I want to do this to increase the security of my server.
|
Use PAM with the pam_geoip module:
This PAM module provides GeoIP checking for logins. The user can be
allowed or denied based on the location of the originating IP address.
This is similar to pam_access(8), but uses a GeoIP City or GeoIP
Country database instead of host name / IP matching.
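A sketch of the wiring for proftpd, assuming it is built with PAM support; the module path and the geoip.conf syntax vary, so check pam_geoip(8) on your system:

```
# /etc/pam.d/proftpd: add before the other auth lines
auth  requisite  pam_geoip.so

# /etc/security/geoip.conf: allow only selected countries
# (service action location; exact syntax per pam_geoip(8))
proftpd  allow  DE
proftpd  deny   *
```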
| Limit FTP connections by area |
1,452,599,967,000 |
Without SSL, FTP works fine over a stateful Firewall, like netfilter (iptables) + the nf_conntrack_ftp kernel module like this:
# modprobe nf_conntrack_ftp
# iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# iptables -A INPUT -p tcp --dport 21 -j ACCEPT
The problem is that, when SSL is used, the FTP connection tracking module cannot work because it is unable to spy on the session to discover the session-port chosen for data exchange. It is thus unable to open that port dynamically.
Is there a proper way to make a SSL-enabled FTP server work, without disabling the firewall?
For information, I use vsftpd with the ssl_enable=YES configuration option.
|
There are several modes with SSL and FTP:
Implicit SSL, that is SSL from start (usually port 990) and never plain text. In this case you get no clear text information at the firewall about the dynamic data ports and thus cannot restrict communication to only these ports.
Explicit SSL with "AUTH TLS" command before login to enable SSL but without CCC after login to disable SSL. Here you have the same problem as with implicit SSL, that is you cannot read which data ports are in use.
Explicit SSL as before but with CCC command after login. In this case the login is protected by SSL, but the rest of the control connection uses plain text. The data transfer can still be protected by SSL. You must enable this mode at the client, like with ftp:ssl-use-ccc with lftp. There is no way to enforce this mode at the ftp server.
If you cannot get the exact data ports because the relevant commands are encrypted you could at least make the firewall a bit less restrictive:
In active mode ftp the server will originate the data connections from port 20 so you can have an iptables rule allowing these connections, i.e. something like
iptables -A OUTPUT -p tcp --sport 20 -j ACCEPT and additionally accept established connections.
In passive mode ftp you could restrict the port range offered by vsftpd with pasv_max_port and pasv_min_port settings and add a matching rule like iptables -A INPUT -p tcp --dport min_port:max_port -j ACCEPT. This is not very restrictive but at least more restrictive than disabling the firewall.
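A sketch of that combination for vsftpd (the 50000-50100 range is an arbitrary example):

```
# /etc/vsftpd.conf: restrict the passive data-port range
pasv_min_port=50000
pasv_max_port=50100
```

with a matching firewall rule such as iptables -A INPUT -p tcp --dport 50000:50100 -j ACCEPT.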
| Proper way to handle FTP over SSL with restrictive firewall rules? |
1,452,599,967,000 |
I have a question concerning permissions.
I'm running lighttpd and a ftp server.
I want to add a ftp user that is able to upload files to /var/www, which then are viewable in a browser.
What is the safest way to set this up (apart from not using ftp)?
|
# add the ftp user to the ftp group
usermod -a -G ftp user
# give the ftp group ownership of the web root
chown -R :ftp /var/www/html
# and let the group write to it
chmod -R g+w /var/www/html
| File permissions issue with webserver and ftp server |
1,452,599,967,000 |
I installed vsftpd and was in the process of configuring it. When I sent the vsftpd server stop command:
sudo service vsftpd stop
I received:
stop: Unknown instance
So I went ahead and uninstalled it and rebooted the system
sudo apt-get remove --purge vsftpd
when I 'stop' vsftpd now it says:
vsftpd: unrecognized service
If I try 'uninstalling' vsftpd its says:
Package vsftpd is not installed, so not removed
Issue: But I can still connect to my server using an FTP client
I can't quite believe there is a zombie(?) process that survives even a reboot. Can someone please throw light on this?
System Configuration:
Ubuntu 12.04.1 Server LTS as Guest VM on Windows 7 Host VM
TCP dump as requested:
Command: sudo tcpdump port 22 >tcpdump.log
Action taken: Used WinSCP to SFTP into the Guest OS Server
tcpdumplog:
17:00:42.745423 IP Brown.home.54199 > ubuntu-12.home.ssh: Flags [P.], seq 4076673955:4076673991, ack 3552872727, win 17520, length 36
17:00:42.745442 IP Brown.home.54199 > ubuntu-12.home.ssh: Flags [F.], seq 36, ack 1, win 17520, length 0
17:00:42.746192 IP ubuntu-12.home.ssh > Brown.home.54199: Flags [F.], seq 1, ack 37, win 16616, length 0
17:00:42.746406 IP Brown.home.54199 > ubuntu-12.home.ssh: Flags [.], ack 2, win 17520, length 0
17:00:50.181085 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [S], seq 8389211, win 8192, options [mss 1460,nop,nop,sackOK], length 0
17:00:50.181112 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [S.], seq 1127786298, ack 8389212, win 14600, options [mss 1460,nop,nop,sackOK], length 0
17:00:50.181262 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [.], ack 1, win 17520, length 0
17:00:50.186862 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 1:40, ack 1, win 14600, length 39
17:00:50.187152 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 1:31, ack 40, win 17481, length 30
17:00:50.187282 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [.], ack 31, win 14600, length 0
17:00:50.187476 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 31:639, ack 40, win 17481, length 608
17:00:50.187485 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [.], ack 639, win 15808, length 0
17:00:50.188653 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 40:1024, ack 639, win 15808, length 984
17:00:50.188900 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 639:655, ack 1024, win 16497, length 16
17:00:50.190537 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 1024:1304, ack 655, win 15808, length 280
17:00:50.240004 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 655:927, ack 1304, win 16217, length 272
17:00:50.254190 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 1304:2152, ack 927, win 17024, length 848
17:00:50.312380 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 927:943, ack 2152, win 17520, length 16
17:00:50.351847 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [.], ack 943, win 17024, length 0
17:00:50.352298 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 943:995, ack 2152, win 17520, length 52
17:00:50.352316 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [.], ack 995, win 17024, length 0
17:00:50.352579 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 2152:2204, ack 995, win 17024, length 52
17:00:50.361499 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 995:1063, ack 2204, win 17468, length 68
17:00:50.388593 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 2204:2272, ack 1063, win 17024, length 68
17:00:50.590761 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [.], ack 2272, win 17400, length 0
17:00:52.960712 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 1063:1147, ack 2272, win 17400, length 84
17:00:52.999659 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [.], ack 1147, win 17024, length 0
17:00:53.037972 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 2272:2308, ack 1147, win 17024, length 36
17:00:53.038482 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 1147:1215, ack 2308, win 17364, length 68
17:00:53.038510 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [.], ack 1215, win 17024, length 0
17:00:53.271416 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 2308:2360, ack 1215, win 17024, length 52
17:00:53.271628 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 1215:1299, ack 2360, win 17312, length 84
17:00:53.271661 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [.], ack 1299, win 17024, length 0
17:00:53.271864 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 1299:1367, ack 2360, win 17312, length 68
17:00:53.271872 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [.], ack 1367, win 17024, length 0
17:00:53.272369 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 2360:2448, ack 1367, win 17024, length 88
17:00:53.275151 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 1367:1419, ack 2448, win 17224, length 52
17:00:53.275347 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 2448:2628, ack 1419, win 17024, length 180
17:00:53.279576 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 1419:1471, ack 2628, win 17044, length 52
17:00:53.279717 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 2628:2728, ack 1471, win 17024, length 100
17:00:53.280194 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 1471:1539, ack 2728, win 16944, length 68
17:00:53.280339 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 2728:2796, ack 1539, win 17024, length 68
17:00:53.280546 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 1539:1607, ack 2796, win 16876, length 68
17:00:53.280869 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 2796:3504, ack 1607, win 17024, length 708
17:00:53.281105 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 1607:1675, ack 3504, win 16168, length 68
17:00:53.281218 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 3504:3588, ack 1675, win 17024, length 84
17:00:53.281416 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 1675:1743, ack 3588, win 16084, length 68
17:00:53.281543 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [P.], seq 3588:3656, ack 1743, win 17024, length 68
17:00:53.480952 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [.], ack 3656, win 17520, length 0
17:00:56.881662 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [P.], seq 1743:1779, ack 3656, win 17520, length 36
17:00:56.881688 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [F.], seq 1779, ack 3656, win 17520, length 0
17:00:56.881908 IP ubuntu-12.home.ssh > Brown.home.54223: Flags [F.], seq 3656, ack 1780, win 17024, length 0
17:00:56.882061 IP Brown.home.54223 > ubuntu-12.home.ssh: Flags [.], ack 3657, win 17520, length 0
Please let me know in case you require any additional information
|
SFTP is not FTP. It is the sftp subsystem of ssh, handled by the sshd daemon, not by vsftpd or any FTP server. It runs on the ssh TCP port (22), not the FTP port 21 (well, FTP commands are on 21 while data connections are on arbitrary ports; those multiple connections are one of the many reasons why SFTP is so much better than FTP).
ss -lp sport = :22
or
ss -lp sport = :ssh
would show you that sshd is handling the connections there.
If you want to disable SFTP but retain ssh access (though that would make little sense unless users land with a restricted shell on that machine), you have to disable sftp in sshd_config by commenting out the Subsystem sftp... line.
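A sketch of that change (the sftp-server path varies by distribution; reload sshd afterwards):

```
# /etc/ssh/sshd_config: comment out to disable the sftp subsystem
#Subsystem sftp /usr/lib/openssh/sftp-server
```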
| I uninstalled vsftpd, but I can still connect with sftp |
1,452,599,967,000 |
I'm setting up an FTP server. I want to be able to log in myself and do whatever I want, but when I give others credentials to connect to my server, they should be restricted to their home directory.
I've created the user with:
adduser username
passwd username
mkdir /home/user_dir
usermod -d /home/user_dir
I then enabled chroot_local_user=YES and chroot_list_enable, created a file and put my username in it so that I still have access to the entire machine.
If I ftp as myself, I can login and do whatever I want, but I can't login as this new user. (I get 530 Login Incorrect).
Info:
Linux Mint
Using vsftpd
I can login to a shell as the new user (su newUser... password)
Also - if I remove myself from the chroot list, I get 500 OOPS: refusing to run with writeable root inside chroot. I'm assuming this error will be the same for the new user; how do I give them restricted access if I get this error when restricting their access?
Fixed - forgot about writeable chroot
|
Look at your /etc/passwd file.
Find your user and check the shell field (example: ttr:x:501:501::/home/username/ttr:/sbin/nologin).
Add this shell (/bin/false or /sbin/nologin) to your /etc/shells.
After that, check your connection and try again.
If it still does not work, back up the config file before making a change:
sudo cp /etc/vsftpd.conf /etc/vsftpd.conf.back
and then edit vsftpd.conf (with vi or nano)
nano /etc/vsftpd.conf
Then make the following change
pam_service_name=ftp
Save your change and restart the ftp server (if you use nano hit CTRL+O & enter to save then CTRL+X to exit)
sudo service vsftpd restart
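For reference, a sketch of the chroot-related vsftpd.conf settings from the question; allow_writeable_chroot (available in newer vsftpd versions) is what addresses the "500 OOPS: refusing to run with writeable root inside chroot" error:

```
# /etc/vsftpd.conf: chroot settings (sketch)
chroot_local_user=YES
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd.chroot_list
# avoids the writeable-root error in newer vsftpd versions
allow_writeable_chroot=YES
```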
| Added a user with adduser, but can't login with that user through FTP |