| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,642,817,338,000 |
The title was a question in an exam I had recently.
I could not find the answer afterwards in the slides (nor on the web).
The course slides only describe that the parent process holds the PIDs of its child processes, not how it received them.
My guess is that transmission of the IDs is directly done with the fork command or afterwards through signals.
|
My guess is that transmission of the IDs is directly done with the fork command or afterwards through signals.
It’s the former: fork() returns the child PID to the parent. See Why does fork sometimes return parent and sometimes child? for more detail (and man 2 fork of course, and the POSIX definition).
A process can find its parent’s PID using the getppid() system call (also defined by POSIX).
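This is easy to see from the shell, which itself uses fork() to start jobs. A minimal sketch (bash assumed): $! is the child PID the shell received back from fork(), and $PPID is the shell's wrapper around getppid():

```shell
# Start a child in the background; the parent learned this PID from the
# return value of fork(), which the shell exposes as $!
sleep 1 &
child_pid=$!
echo "parent $$ was told its child is $child_pid"
wait "$child_pid"

# A child finds its parent with getppid(); the shell exposes that as $PPID
bash -c 'echo $PPID' > /tmp/ppid_seen_by_child
parent_seen_by_child=$(cat /tmp/ppid_seen_by_child)
echo "a fresh child reports parent pid $parent_seen_by_child"
```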
| How does a parent process know the process IDs of the child processes it started? |
1,417,719,191,000 |
I use a FUSE filesystem with no problems as my own user, but root can't access my FUSE mounts. Instead, any command gives Permission denied. How can I give root the permission to read these mounts?
~/top$ sudo ls -l
total 12
drwxr-xr-x 2 yonran yonran 4096 2011-07-25 18:50 bar
drwxr-xr-x 2 yonran yonran 4096 2011-07-25 18:50 foo
drwxr-xr-x 2 yonran yonran 4096 2011-07-25 18:50 normal-directory
~/top$ fuse-zip foo.zip foo
~/top$ unionfs-fuse ~/Pictures bar
My user, yonran, can read it fine:
~/top$ ls -l
total 8
drwxr-xr-x 1 yonran yonran 4096 2011-07-25 18:12 bar
drwxr-xr-x 2 yonran yonran 0 2011-07-25 18:51 foo
drwxr-xr-x 2 yonran yonran 4096 2011-07-25 18:50 normal-directory
~/top$ ls bar/
Photos
But root can't read either FUSE directory:
~/top$ sudo ls -l
ls: cannot access foo: Permission denied
ls: cannot access bar: Permission denied
total 4
d????????? ? ? ? ? ? bar
d????????? ? ? ? ? ? foo
drwxr-xr-x 2 yonran yonran 4096 2011-07-25 18:50 normal-directory
~/top$ sudo ls bar/
ls: cannot access bar/: Permission denied
I'm running Ubuntu 10.04: I always install any update from Canonical.
$ uname -a
Linux mochi 2.6.32-33-generic #70-Ubuntu SMP Thu Jul 7 21:13:52 UTC 2011 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 10.04.3 LTS
Release: 10.04
Codename: lucid
Edit: removed the implication that root used to be able to access the mounts. Come to think of it, maybe my scripts never tried to access the directory as root.
|
It's the way fuse works.
If you want to allow access to root or other users, you have to add:
user_allow_other
in /etc/fuse.conf and mount your fuse filesystem with allow_other or allow_root as options.
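A sketch of the two pieces, reusing the mounts from the question (editing /etc/fuse.conf requires root; allow_root would limit the exception to root only):

```shell
# /etc/fuse.conf must contain this line, uncommented:
#   user_allow_other

# then remount with the extra option, e.g.:
fuse-zip -o allow_other foo.zip foo
unionfs-fuse -o allow_other ~/Pictures bar
```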
| Why does root get Permission denied when accessing FUSE directory? |
1,417,719,191,000 |
I know some filesystems present themselves through Fuse and I was wondering about the pros and cons to this approach.
|
I'm not sure whether you mean real, on-disk filesystems or any filesystem. I've never seen a normal filesystem use FUSE, although I suppose it's possible. The main benefit of FUSE is that it lets you present something to applications (or the user) that looks like a filesystem, but really just calls functions within your application when the user tries to do things like list the files in a directory or create a new file. Plan 9 is well known for trying to make everything accessible through the filesystem, and the /proc pseudo-filesystem comes from it; FUSE is a way for applications to easily follow that pattern.
For example, here's a screenshot of a (very featureless) FUSE filesystem that gives access to SE site data:
Naturally, none of those files actually exists; when ls asked for the list of files in the directory, FUSE called a function in my program which did an API request to this site to load information about user 73 (me); cat trying to read from display_name and website_url called more functions that returned the cached data from memory, without anything actually existing on disk.
| What are the benefits and downsides to use FuseFS filesystems? |
1,417,719,191,000 |
I try to sshfs mount a remote dir, but the mounted files are not writable. I have run out of ideas or ways to debug this. Is there anything I should check on the remote server?
I am on Xubuntu 14.04. I mount a remote dir of an Ubuntu 14.04 server.
local $ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04.3 LTS
Release: 14.04
Codename: trusty
I changed the /etc/fuse.conf
local $ sudo cat /etc/fuse.conf
# /etc/fuse.conf - Configuration file for Filesystem in Userspace (FUSE)
# Set the maximum number of FUSE mounts allowed to non-root users.
# The default is 1000.
#mount_max = 1000
# Allow non-root users to specify the allow_other or allow_root mount options.
user_allow_other
And my user is in the fuse group
local $ sudo grep fuse /etc/group
fuse:x:105:MY_LOCAL_USERNAME
And I mount the remote dir with (tried with/without combinations of sudo, default_permissions, allow_other):
local $ sudo sshfs -o allow_other,default_permissions -o IdentityFile=/path/to/ssh_key REMOTE_USERNAME@REMOTE_HOST:/remote/dir/path/ /mnt/LOCAL_DIR_NAME/
The REMOTE_USERNAME has write permissions to the dir/files (on the remote server).
I tried the above command without sudo, default_permissions, and in all cases I get:
local $ ls -al /mnt/LOCAL_DIR_NAME/a_file
-rw-rw-r-- 1 699 699 1513 Aug 12 16:08 /mnt/LOCAL_DIR_NAME/a_file
local $ test -w /mnt/LOCAL_DIR_NAME/a_file && echo "Writable" || echo "Not Writable"
Not Writable
Clarification 0
In response to user3188445's comment:
$ whoami
LOCAL_USER
$ cd
$ mkdir test_mnt
$ sshfs -o allow_other,default_permissions -o IdentityFile=/path/to/ssh_key REMOTE_USERNAME@REMOTE_HOST:/remote/dir/path/ test_mnt/
$ ls test_mnt/
I see the contents of the dir correctly
$ ls -al test_mnt/
total 216
drwxr-xr-x 1 699 699 4096 Aug 12 16:42 .
drwxr----- 58 LOCAL_USER LOCAL_USER 4096 Aug 17 15:46 ..
-rw-r--r-- 1 699 699 2557 Jul 30 16:48 sample_file
drwxr-xr-x 1 699 699 4096 Aug 11 17:25 sample_dir
$ touch test_mnt/new_file
touch: cannot touch ‘test_mnt/new_file’: Permission denied
# extra info: SSH to the remote host and check file permissions
$ ssh REMOTE_USERNAME@REMOTE_HOST
# on remote host
$ ls -al /remote/dir/path/
lrwxrwxrwx 1 root root 18 Jul 30 13:48 /remote/dir/path/ -> /srv/path/path/path/
$ cd /remote/dir/path/
$ ls -al
total 216
drwxr-xr-x 26 REMOTE_USERNAME REMOTE_USERNAME 4096 Aug 12 13:42 .
drwxr-xr-x 4 root root 4096 Jul 30 14:37 ..
-rw-r--r-- 1 REMOTE_USERNAME REMOTE_USERNAME 2557 Jul 30 13:48 sample_file
drwxr-xr-x 2 REMOTE_USERNAME REMOTE_USERNAME 4096 Aug 11 14:25 sample_dir
|
The question was answered in a linux mailing list; I post a translated answer here for completeness.
Solution
The solution is to not use both of the options default_permissions and allow_other when mounting (which I didn't try in my original experiments).
Explanation
The problem is quite simple. When you use the default_permissions option with fusermount, permission checking on the fuse mount is handled by the kernel and not by fuse.
This means that the REMOTE_USER's uid/gid aren't mapped to the LOCAL_USER (sshfs.c IDMAP_NONE). It works like a plain nfs filesystem without id mapping.
So it makes sense that access is denied when the uid/gid numbers don't match.
If you add the allow_other option, the directory is writable only by a local user with uid 699, if such a user exists.
From fuse's man:
'default_permissions'
By default FUSE doesn't check file access permissions, the
filesystem is free to implement its access policy or leave it to
the underlying file access mechanism (e.g. in case of network
filesystems). This option enables permission checking, restricting
access based on file mode. It is usually useful together with the
'allow_other' mount option.
'allow_other'
This option overrides the security measure restricting file access
to the user mounting the filesystem. This option is by default only
allowed to root, but this restriction can be removed with a
(userspace) configuration option.
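Applied to the mount from the question, that means keeping allow_other (so other local users can still reach the files) and dropping default_permissions:

```shell
sshfs -o allow_other -o IdentityFile=/path/to/ssh_key \
    REMOTE_USERNAME@REMOTE_HOST:/remote/dir/path/ /mnt/LOCAL_DIR_NAME/
```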
| Mount with sshfs and write file permissions |
1,417,719,191,000 |
I have the following on my /etc/fuse.conf file:
# Set the maximum number of FUSE mounts allowed to non-root users.
# The default is 1000.
#
#mount_max = 1000
# Allow non-root users to specify the 'allow_other' or 'allow_root'
# mount options.
#
user_allow_other
But when I try to mount a remote path with the option allow_other:
> sshfs name@server:/remote/path /local/path -o allow_other
I get:
fusermount: failed to open /etc/fuse.conf: Permission denied
fusermount: option allow_other only allowed if 'user_allow_other' is set in /etc/fuse.conf
I have triple checked and the option user_allow_other is uncommented in my fuse.conf, as I copied above.
I have also executed sudo adduser my_user_name fuse (not sure if this is needed though), but I still get the same problem.
Why is it not parsing the /etc/fuse.conf file correctly?
|
A better solution might be to add the user to the fuse group, i.e.:
addgroup <username> fuse
| Unable to use -o allow_other with sshfs (option enabled in fuse.conf) |
1,417,719,191,000 |
when I run mount, I can see my hard drive mount as fuseblk.
/dev/sdb1 on /media/ecarroll/hd type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)
However, fuseblk doesn't tell me what filesystem is on my device. I found it using gparted but I want to know how to find the fs using the command line utilities.
|
I found the answer provided in the comments by Don Crissti to be the best:
lsblk -no name,fstype
This shows me exactly what I want, and I don't have to unmount the device:
mmcblk0
└─mmcblk0p1 exfat
See also the lsblk man page.
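findmnt, also from util-linux, answers the same question for a single mount point without unmounting anything; a sketch using the root filesystem as a stand-in for the fuseblk mount:

```shell
# Print only the filesystem type of whatever is mounted at the given point
fstype=$(findmnt -n -o FSTYPE /)
echo "filesystem type at /: $fstype"
```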
| How do I find out what filesystem FUSE is using? |
1,417,719,191,000 |
I have a directory that is always going to be storing text files that are rarely (think weekly) used. Naturally this is a great place to use compression. However, rather than having to use tar every time I want to access a file, I would love it if I could "mount a compressed folder".
Let's assume the folder is called mydir.
Ideally the following should be true:
Items copied/moved/deleted/read in mydir without programs needing to know that the directory is compressed
When a file from mydir is read by a program, only that file is decompressed, not the entire directory.
The directory should be always available. (maybe mounted on boot or login)
|
If read-only access is acceptable, then SquashFS is a good choice.
However, it sounds like you want to be able to do in place updating as well. Btrfs may be an option for you. It is still considered somewhat experimental, but it does support transparent file compression, and is available to try in most distros.
The other approach is to do this in userspace, via FUSE. The most plausible of the options here is probably fusecompress.
| On the Fly Compression for a Directory |
1,417,719,191,000 |
UPDATE
Please correct me if I'm wrong: For working on my computer, with a GNU/Linux Distribution named Debian, I know two
ways to enter a command, start an application, open a file, etc.:
a Command Line Interface where I enter text
a Graphical User Interface [a.k.a GUI]: an interface which provides "windows", symbols etc.
There is something going by the name "Window Manager". As I use GNU/Linux, I work on the X Window System [as far as I know].
Original Posting
Situation: I disabled automount in /etc/fstab for USB sticks [e.g. /dev/sdb1]. Mounting now requires being root, or at least a sudo entry, on the command line, but not in a window manager (!).
I do not mean automount; I mean that "clicking on the symbol" in a window manager opens the device in the GUI without any questions, whereas on the CLI one must be root.
Question: How does mounting in a GUI work "under the hood"? Is there a configfile for window managers in general or does one have to set this individually?
I do understand and use the mount command; I think I understand how to read and configure /etc/fstab, and I know where to look up what the entries there and in /etc/mtab mean.
|
This is my understanding of the situation, but I'm not an expert so it is less technical than the other answers. This is what I understand after using these systems for many years, I have not studied them in any detail.
There are three main players here and between them they manage the mounts:
FUSE: This is at the center of everything, as described in its wikipedia page:
Filesystem in Userspace (FUSE) is an operating system mechanism for Unix-like computer operating systems that lets non-privileged users create their own file systems without editing kernel code. This is achieved by running file system code in user space while the FUSE module provides only a "bridge" to the actual kernel interfaces.
So, basically, this is what allows unprivileged users to mount filesystems.
gvfs : In the Gnome family of desktop environments (which includes Gnome, Mate, Cinnamon), this is (among other things) a daemon that will automatically mount newly connected drives. It does so via FUSE. I believe (but may well be wrong) the equivalent for the KDE family is called KIO.
The main processes of gvfs are (taken from man gvfs):
gvfsd - the main gvfs daemon
gvfs-fuse-daemon - mounts gvfs as a fuse filesystem
gvfsd-metadata - writes gvfs metadata
udev : This is a system that detects new devices and allows you to run scripts/commands when they are connected. For example, it is udev that detects a new screen and can mirror your desktop on it:
udev is a device manager for the Linux kernel. Primarily, it manages device nodes in /dev. It is the successor of devfs and hotplug, which means that it handles the /dev directory and all user space actions when adding/removing devices, including firmware load.
Specifically, gvfs seems to work through gvfs-udisks2-volume-monitor which is a udisks-based volume monitor. udisks itself however, relies on udev (see man 7 udisks).
So, basically (read "horrible simplification") what happens is that when you connect your drive, udev detects it and alerts the gvfs daemon which will then mount it as a FUSE device.
FUSE and udev will be the same for all desktop environments, what changes is the DE daemon that monitors udev and mounts the drive as a FUSE filesystem.
| How does mounting on the GUI work "under the hood" |
1,417,719,191,000 |
I'm trying to compile a C program and it tells me
user@cu-cs-vm:~/Downloads/pa5$ make
gcc -c -g -Wall -Wextra `pkg-config fuse --cflags` fusehello.c
Package fuse was not found in the pkg-config search path.
Perhaps you should add the directory containing `fuse.pc'
to the PKG_CONFIG_PATH environment variable
No package 'fuse' found
fusehello.c:21:18: fatal error: fuse.h: No such file or directory
compilation terminated.
When I try to install fuse, it just tells me it's already installed.
sudo apt-get install fuse
I looked in /usr/lib/pkgconfig and fuse.pc wasn't there. Should it be there?
|
I don't know which distro you are using, but you probably need to install libfuse-dev. The fuse header files are missing.
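On Debian/Ubuntu-style systems (an assumption; the development package is named differently on other distros) that would be:

```shell
sudo apt-get install libfuse-dev
# afterwards pkg-config should be able to resolve the package:
pkg-config fuse --cflags --libs
```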
| fuse is installed but compiler is saying "no package 'fuse' found" |
1,417,719,191,000 |
What is the syntax of the LoggedFS configuration file?
The official documentation only had usage instructions for the loggedfs command and a configuration file example. Ok, it's XML, but what are all the possible tags and attributes and what do they mean?
|
I poked through Config.cpp, the file responsible for parsing the configuration. The example configuration actually does a pretty good job of capturing the available options -- there aren't very many.
When I refer to "the example output" below, I'm talking about this line (pulled at random from the sample page):
17:29:35 (src/loggedfs.cpp:136) getattr /var/ {SUCCESS} [ pid = 8700 kded [kdeinit] uid = 1000 ]
The root tag is <loggedFS>. It has two optional attributes:
logEnabled is a string -- "true" means it should actually output log info; anything else disables all logging. Defaults to "true", since that's kind of the whole point of the program
printProcessName is a string -- "true" means the log output will include the process name, anything else means it won't. Defaults to "true". In the example output, kded [kdeinit] is the process name
The only child nodes it cares about are <include> and <exclude>. In the example they group those under <includes> and <excludes> blocks, but those are ignored by the parser (as are any other nodes except <include> and <exclude>).
Naturally, <include> rules cause it to output the log line if they match, while <exclude> lines cause it not to. In the event of overlap, <exclude> overrides <include>. Normally you need at least one <include> rule to match for an event to be logged, but an exception is if there are 0 <include> rules -- then all events are logged, even if there are matching <exclude> lines.
Both <include> and <exclude> take the same attributes:
extension is a regular expression that is matched against the absolute path of the file that was accessed/modified/whatever (extension is a rather poor name, but I guess that's the common usage). For example, if you touch /mnt/loggedfs/some/file, the regular expression in extension would need to (partial) match /mnt/loggedfs/some/file
uid is a string that contains either an integer or *. The rule only matches a given operation if the owner of the process that caused the operation has the specified user ID (* naturally means any user ID matches). In the example output, 1000 is the uid
action is the specific type of operation performed on the filesystem. In the example output, getattr is the action. The possible actions are:
access
chmod
chown
getattr
link
mkdir
mkfifo
mknod
open
open-readonly
open-readwrite
open-writeonly
read
readdir
readlink
rename
rmdir
statfs
symlink
truncate
unlink
utime
utimens
write
retname is a regular expression. If the return code of the actual filesystem operation performed by LoggedFS is 0, the regular expression is matched against the string SUCCESS. A non-zero return code causes it to match against FAILURE. Those are the only possible values, so most likely you're either going to hardcode SUCCESS, FAILURE, or use .* if you want both. In the example output, SUCCESS is the retname
Unlike with the <loggedFS> attributes, these have no defaults. Also, while the parser will recognize unknown attributes and error out, it does not detect missing attributes, so if you forget an attribute it will use uninitialized memory.
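Putting it together, a configuration using only the tags and attributes described above might look like this (the paths and uid are hypothetical, and every attribute is spelled out because the parser does not fill in missing ones):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<loggedFS logEnabled="true" printProcessName="true">
  <includes>
    <!-- log writes and deletions by uid 1000, whether they succeed or fail -->
    <include extension=".*" uid="1000" action="write" retname=".*"/>
    <include extension=".*" uid="1000" action="unlink" retname=".*"/>
  </includes>
  <excludes>
    <!-- ...but not writes under a scratch directory -->
    <exclude extension="^/mnt/loggedfs/tmp/.*" uid="1000" action="write" retname=".*"/>
  </excludes>
</loggedFS>
```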
| LoggedFS configuration file syntax |
1,417,719,191,000 |
Say I have a folder which is completely readable by my user. I want it to be mounted to my home folder. I can't use a symlink because I want the files to be exposed at ~/ and I want other programs "not to know" that it's a mount. Is there a fuse program that implements this? The issue with mount is that it requires root privileges.
I'd also appreciate, if I'd be able to mount this directory, not on boot, but on demand - that's because my ~/ is encrypted unless I'm logged in.
|
You could set up /etc/fstab once, with the following entry:
/path/to/original/dir /path/to/bind/dir none bind,rw,user,noauto 0 0
The mount options specify the following things, in order:
bind indicates that this entry is a bind mount.
rw specifies that the entry will be mounted in read-write mode.
user allows any non-root user to mount the filesystem.
noauto specifies that this entry should not be automatically mounted with mount -a and at boot time.
You'll need to set this up once using root privileges. Once the entry is in place, you can perform the mount as a non-root user. Simply run mount /path/to/bind/dir.
A couple of points to note:
With the user option, only the same user account that originally mounted the filesystem can perform the unmount. If multiple users are involved, you can look at the users option instead. See man 8 mount for details.
The user option implies three other options: noexec (do not permit execution of binaries), nosuid (do not honor setuid/setgid bits) and nodev (do not interpret devices). If you want to restore any of these functionality, append the corresponding option to the end of the list. For example, bind,rw,user,noauto,exec. Keep in mind that there are security implications for these options. See man 8 mount for details.
| How to mount a local directory without root |
1,417,719,191,000 |
mkdir ~/mnt/2letter
echo PASSWORD | sshfs -o password_stdin www-data@localhost:/var/www/sites/2letter ~/mnt/2letter -o sshfs_sync,cache=no,password_stdin
After this:
$ ls -ld ~/mnt/2letter/
drwxr-xr-x 1 www-data www-data 4096 Jan 28 21:29 /home/porton/mnt/2letter/
I need to access /home/porton/mnt/2letter/ under my UID (porton) not as www-data, because I am not allowed by file system permissions to modify www-data owner files, but need to edit them.
Moreover, it seems to have worked with the correct UID on older versions of Linux. Why doesn't it work now?
|
Try chucking in the following two options:
-o idmap=user,uid=<YOUR UID>
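Spelled out against the mount from the question (idmap=user maps the remote account's files to your uid; uid=/gid= set the ownership reported locally):

```shell
sshfs -o idmap=user,uid=$(id -u),gid=$(id -g) \
    www-data@localhost:/var/www/sites/2letter ~/mnt/2letter
```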
| UID/GID with sshfs of Linux FUSE |
1,417,719,191,000 |
How can I get the process ID of the driver of a FUSE filesystem?
For example, I currently have two SSHFS filesystems mounted on a Linux machine:
$ grep sshfs /proc/mounts
host:dir1 /home/gilles/net/dir1 fuse.sshfs rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
host:dir2 /home/gilles/net/dir2 fuse.sshfs rw,nosuid,nodev,relatime,user_id=1000,group_id=1000 0 0
$ pidof sshfs
15031 15007
How can I know which of 15007 and 15031 is dir1 and which is dir2? Ideally I'd like to automate that, i.e. run somecommand /home/gilles/net/dir1 and have it display 15007 (or 15031, or “not a FUSE mount point”, as appropriate).
Note that I'm looking for a generic answer, not an answer that's specific to SSHFS, like tracking which host and port the sshfs processes are connected to, and what files the server-side process has open — which might not even be possible at all due to connection sharing.
I'm primarily interested in a Linux answer, but a generic answer that works on all systems that support FUSE would be ideal.
Why I want to know: to trace its operation, to kill it in case of problems, etc.
|
I don't think it's possible. Here's why. I took the naive approach, which was to add the pid of the process opening /dev/fuse to the meta data that fuse creates at mount time, struct fuse_conn. I then used that information to display a pid= field in the mount command. The patch is really simple:
diff --git a/fs/fuse/fuse_i.h b/fs/fuse/fuse_i.h
index 7354dc1..32b05ca 100644
--- a/fs/fuse/fuse_i.h
+++ b/fs/fuse/fuse_i.h
@@ -402,6 +402,9 @@ struct fuse_conn {
/** The group id for this mount */
kgid_t group_id;
+ /** The pid mounting process */
+ pid_t pid;
+
/** The fuse mount flags for this mount */
unsigned flags;
diff --git a/fs/fuse/inode.c b/fs/fuse/inode.c
index e8799c1..23a27be 100644
--- a/fs/fuse/inode.c
+++ b/fs/fuse/inode.c
@@ -554,6 +554,7 @@ static int fuse_show_options(struct seq_file *m, struct dentry *root)
struct super_block *sb = root->d_sb;
struct fuse_conn *fc = get_fuse_conn_super(sb);
+ seq_printf(m, ",pid=%u", fc->pid);
seq_printf(m, ",user_id=%u", from_kuid_munged(&init_user_ns, fc->user_id));
seq_printf(m, ",group_id=%u", from_kgid_munged(&init_user_ns, fc->group_id));
if (fc->flags & FUSE_DEFAULT_PERMISSIONS)
@@ -1042,6 +1043,7 @@ static int fuse_fill_super(struct super_block *sb, void *data, int silent)
fc->release = fuse_free_conn;
fc->flags = d.flags;
+ fc->pid = current->pid;
fc->user_id = d.user_id;
fc->group_id = d.group_id;
fc->max_read = max_t(unsigned, 4096, d.max_read);
I booted the kernel, mounted sshfs, ran mount:
[email protected]:/tmp on /root/tmp type fuse.sshfs (rw,nosuid,nodev,relatime,pid=1549,user_id=0,group_id=0)
Success? Unfortunately, not:
root 1552 0.0 0.0 45152 332 ? Ssl 13:39 0:00 sshfs [email protected]:/tmp tmp
Then I realized: the remaining sshfs process is a child of the one that created the mount. It inherited the fd. As fuse is implemented, we could have a multitude of processes inheriting the fd. The fd could even be passed around over UNIX sockets, completely outside the original process tree.
We can obtain the 'who owns this TCP port' information, because sockets have this meta data, and simply parsing /proc tells us that information. Unfortunately, the fuse fd is a regular fd on /dev/fuse. Unless that fd somehow becomes special, I don't see how this can be implemented.
| Find what process implements a FUSE filesystem |
1,417,719,191,000 |
I have an embarrassingly parallel process that creates a huge amount of nearly (but not completely) identical files. Is there a way to archive the files "on the fly", so that the data does not consume more space than necessary?
The process itself accepts command-line parameters and prints the name of each file created to stdout. I'm invoking it with parallel --gnu which takes care of distributing input (which comes from another process) and collecting output:
arg_generating_process | parallel --gnu my_process | magic_otf_compressor
SIMPLE EXAMPLE for the first part of the pipe in bash:
for ((f = 0; f < 100000; f++)); do touch "$f"; echo "$f"; done
What would magic_otf_compressor look like? It's supposed to treat each input line as a file name, copy each file into a compressed .tar archive (the same archive for all files processed!) and then delete it. (Actually, it should be enough to print the name of each processed file; another | parallel --gnu rm could take care of deleting the files.)
Is there any such tool? I'm not considering compressing each file individually, this would waste far too much space. I have looked into archivemount (will keep file system in memory -> impossible, my files are too large and too many) and avfs (couldn't get it to work together with FUSE). What have I missed?
I'm just one step away from hacking such a tool myself, but somebody must have done it before...
EDIT: Essentially I think I'm looking for a stdin front-end for libtar (as opposed to the command-line front-end tar that reads arguments from, well, the command line).
|
It seems tar wants to know all the file names upfront. So it is less on-the-fly and more after-the-fly. cpio does not seem to have that problem:
| cpio -vo 2>&1 > >(gzip > /tmp/arc.cpio.gz) | parallel rm
| Virtual write-only file system for storing files in archive |
1,417,719,191,000 |
Say I created a FUSE filesystem called foo and mounted it at /mnt/foo.
If I have a user called myuser that is running vi to open a file in /mnt/foo, what FUSE methods or data structure contains the info about the user and process? I'd want the actual name of the user/group and process, or the RUID and PID.
I've been staring at this
but I can't find the information I mention from the doxygen documentation.
|
During the call to a fuse operation you can call fuse_get_context() to get the current calling user id, group id, process id, and umask in a fuse_context struct.
struct fuse_context {
struct fuse *fuse;
uid_t uid;
gid_t gid;
pid_t pid;
void *private_data;
mode_t umask;
};
Here's a doc and bsd man page mentioning this function.
If you're using the low-level API you need to use fuse_req_ctx and pass in the fuse_req_t that was passed to the current function; see this thread. fuse_req_ctx returns a pointer to a fuse_ctx struct which holds the uid, gid, pid, and umask of the invoking process.
| In FUSE, how do I get the information about the user and the process that is trying to read/write in the virtual file system? [closed] |
1,417,719,191,000 |
I'm trying to mount various SD cards automatically with udev rules. I started with these rules, solved a problem with the help of this question, and now I have the following situation:
ext4 and vfat formatted devices work perfectly, but when I plug in an exfat or an NTFS formatted disk I get the following line in mount:
/dev/sda1 on /media/GoPro type fuseblk (rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096)
And the directory listing looks like this:
$ ls -l /media/
ls: cannot access '/media/GoPro': Transport endpoint is not connected
total 0
d????????? ? ? ? ? ? GoPro
I can't do anything under that mountpoint, not even as root:
$ sudo ls -l /media/GoPro
ls: cannot access '/media/GoPro': Transport endpoint is not connected
The only problems I can find from other people with the error message Transport endpoint is not connected seem to happen after a disk wasn't unmounted properly. But I have the problem while the disk is mounted.
My current udev rules look like this:
KERNEL!="sd[a-z][0-9]", GOTO="media_by_label_auto_mount_end"
ACTION=="add", PROGRAM!="/sbin/blkid %N", GOTO="media_by_label_auto_mount_end"
# Do not mount devices already mounted somewhere else to avoid entries for all your local partitions in /media
ACTION=="add", PROGRAM=="/bin/grep -q ' /dev/%k ' /proc/self/mountinfo", GOTO="media_by_label_auto_mount_end"
# Global mount options
ACTION=="add", ENV{mount_options}="noatime"
# Filesystem-specific mount options
ACTION=="add", PROGRAM=="/sbin/blkid -o value -s TYPE %E{device}", RESULT=="vfat|ntfs", ENV{mount_options}="%E{mount_options},utf8,uid=1000,gid=100,umask=002"
ACTION=="add", PROGRAM=="/sbin/blkid -o value -s TYPE %E{device}", RESULT=="exfat", ENV{mount_options}="%E{mount_options},utf8,allow_other,umask=002,uid=1000,gid=1000"
# Get label if present, otherwise assign one
ENV{ID_FS_LABEL}!="", ENV{dir_name}="%E{ID_FS_LABEL}"
ENV{ID_FS_LABEL}=="", ENV{dir_name}="usbhd-%k"
# Mount the device
ACTION=="add", ENV{dir_name}!="", RUN+="/bin/mkdir -p '/media/%E{dir_name}'", RUN+="/bin/mount -o %E{mount_options} /dev/%k '/media/%E{dir_name}'"
# Clean up after removal
ACTION=="remove", ENV{dir_name}!="", RUN+="/bin/umount -l '/media/%E{dir_name}'"
ACTION=="remove", ENV{dir_name}!="", RUN+="/bin/rmdir '/media/%E{dir_name}'"
# Exit
LABEL="media_by_label_auto_mount_end"
I tried using user_id and group_id instead of uid and gid but to no avail.
Mounting the device manually works fine:
$ sudo mount -o noatime,utf8,allow_other,umask=002,uid=1000,gid=1000 /dev/sdb1 /media/GoPro/
FUSE exfat 1.2.5
$ ls -l /media/
total 132
drwxrwxr-x 1 pi pi 131072 Jan 1 1970 GoPro
|
TL;DR: udev and fuse are not really compatible
After noticing that this problem not only occurs with exfat but also with NTFS formatted devices I started looking specifically for problems with udev and fuse.
Some comments about the combination I found:
I think that the fuse process is being killed. You cannot start long-lived processes from a udev rule, this should be handled by systemd.
(from Debian-devel)
Warning: To mount removable drives, do not call mount from udev rules. In case of FUSE filesystems, you will get Transport endpoint not connected errors. Instead, you could use udisks that handles automount correctly or to make mount work inside udev rules, copy /usr/lib/systemd/system/systemd-udevd.service to /etc/systemd/system/systemd-udevd.service and replace MountFlags=slave to MountFlags=shared.[3] Keep in mind though that udev is not intended to invoke long-running processes.
(from ArchWiki)
And there are more.
I ended up using the scripts and configuration files from this answer. It works perfectly with all filesystem types. I wish I had found this earlier, it would have spared me a couple of days of debugging, trial and error.
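For completeness, the MountFlags change from the ArchWiki quote can also be applied as a systemd drop-in rather than copying the whole unit file (a sketch; the udisks route is still the cleaner one):

```ini
# /etc/systemd/system/systemd-udevd.service.d/mountflags.conf
[Service]
MountFlags=shared
```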
| Mounting exfat with udev rules automatically |
1,417,719,191,000 |
It's a bit indirect, but it's possible to mount a partition with a disk image using mount or losetup's "offset" parameter.
I'm looking to be able to use FUSE to do the same thing in user space.
Use Case
My use case is building disk images on an autobuild server where the build job is not allowed to have root permissions, and the server should not need a custom setup for particular build jobs.
|
It's possible to do with fuse, but would probably be cleaner with custom tools.
Solution
With apt-get-able tools the following kludge is possible:
mkdir mnt
xmount --in dd --out vdi disk.img mnt
mkdir mnt2
vdfuse -f mnt/disk.vdi mnt2
mkdir mnt3
fuseext2 -o "rw" mnt2/Partition1 mnt3
Explanation
The basic idea is that fuse can be used to separate a full disk image in place into files that point to its partitions. vdfuse does this, but it is a VirtualBox tool and requires a VDI or VMDK file to work. xmount uses fuse to make a raw disk image appear as a VDI file.
Finally, once the partition file is available via vdfuse, it can be mounted with fuseext2, a FUSE ext2/3/4 driver.
It's ugly but it works completely in userspace.
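The plumbing above hides a simple fact: each partition is just a byte range of the image, at an offset recorded in the image's own partition table (this is what `mount -o offset` and `losetup --offset` consume). A minimal sketch of reading those offsets for classic MBR images, purely in userspace (GPT is not handled):

```python
import struct

SECTOR = 512

def mbr_partitions(image_path):
    """Yield (index, start_byte, size_bytes) for each MBR primary partition."""
    with open(image_path, "rb") as f:
        mbr = f.read(SECTOR)
    if mbr[510:512] != b"\x55\xaa":
        raise ValueError("no MBR boot signature")
    for i in range(4):
        # Partition table starts at byte 446; each entry is 16 bytes.
        entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
        # Entry bytes 8-11: LBA of first sector; bytes 12-15: sector count.
        start_lba, num_sectors = struct.unpack_from("<II", entry, 8)
        if num_sectors:
            yield i + 1, start_lba * SECTOR, num_sectors * SECTOR
```

With the offset in hand, `mount -o loop,offset=$START disk.img mnt` would mount a single partition, but that path needs root, which is exactly what the fuse chain above avoids.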
Update
vdfuse should be able to mount a raw image without the help of xmount, but there is a bug that causes the RAW option to be ignored.
I tracked down and fixed the bug with a patch here:
https://bugs.launchpad.net/ubuntu/+source/virtualbox-ose/+bug/1019075
| How can I mount partitions in a full disk image (i.e. image with partition table) with fuse? |
1,417,719,191,000 |
TL;DR
Attempting to format a block device on my FUSE file system fails with EPERM at the open syscall. Permissions are set to 777 and the necessary ioctls are stubbed, but no logs are printed from within the FUSE handler.
Background
I'm writing a program to create virtual disk images. One of my criteria is that it must be able to run with zero superuser access, meaning I can't mount loopback devices, change owners of files or even edit /etc/fuse.conf. For this reason, my approach ends up being fairly long-winded. Specifically, in order to format the various partitions on the disk, I would like to be able to use system tools, because that gives me a far greater range of possible file systems. This involves exposing the various partitions on the VDisk as block devices to the system. However, all the possible methods I've found have required either nbds or loopback devices. Both of which require superuser access.
Implementing FUSE myself
However, implementing block devices in FUSE is not only possible, but supported. Unfortunately, I wasn't able to find much documentation on the matter and since I'm doing all this in Rust, the documentation world for this is even more scarce.
I've implemented the following FUSE methods:
init
lookup
getattr
open
read
write
readdir
ioctl
BLKGETSIZE
BLKFLSBUF
BLKSSZGET
I can list the contents of the file system and get directory/file information. I'm deliberately ignoring methods which create or modify resources, as this is done through the build process.
The error
As mentioned, I get a permission denied (EPERM) error. Running strace on the mkfs call shows that it's the open call to the block device that fails on the kernel side. Full strace result.
execve("/usr/sbin/mkfs.fat", ["mkfs.fat", "out/partitions/EFI"], 0x7ffd42f64ab8 /* 76 vars */) = 0
--- snip ---
openat(AT_FDCWD, "out/partitions/EFI", O_RDWR|O_EXCL) = -1 EACCES (Permission denied)
write(2, "mkfs.fat: unable to open out/par"..., 63mkfs.fat: unable to open out/partitions/EFI: Permission denied
) = 63
exit_group(1) = ?
For clarity, my directory structure looks like this:
out
├── minimal.qcow2 [raw disk image] (shadows minimal.qcow2 [qcow2 file] with qemu-storage-daemon)
├── partitions
│ ├── EFI [Block device]
│ └── System [Block device]
└── qemu-monitor.sock [UNIX domain socket]
Of course, there are logging functions tracing every method. I do see logs when listing out the partitions, but not when formatting.
As I mentioned, I've found very little documentation on what could actually be causing this error.
Further insights
Thanks to the insights from @orenkishon, I've found some more details that just baffle me.
I found some options in fuser which were interesting:
MountOption::Dev Enable special character and block devices
MountOption::DefaultPermission Enable permission checking in the kernel
MountOption::RW Read-write filesystem (apparently not a default option)
Unfortunately, no combination of which resolved my issue.
Log functions aren't called immediately. They seem to be tied to some sort of flushing operation. I can run the mkfs.fat command, see one or two logs, switch back to my IDE and see a page worth of logs appear.
This may be due to the fact that the directory I'm generating the files is within the project's directory, so it is visible to the IDE, but it strikes me as very unusual.
The log in the access function never appears; the one in the statfs function does, but only if mkfs is called from outside the out directory and it is the first of any mkfs calls.
project > cd ./out
project/out > mkfs.fat partitions/EFI
mkfs.fat 4.2 (2021-01-31)
mkfs.fat: unable to open partitions/EFI: Permission denied
# No logs
project > mkfs.fat out/partitions/EFI
mkfs.fat 4.2 (2021-01-31)
mkfs.fat: unable to open out/partitions/EFI: Permission denied
# No logs
project > cargo run ...
project > mkfs.fat out/partitions
mkfs.fat 4.2 (2021-01-31)
mkfs.fat: unable to open out/partitions/EFI: Permission denied
# Logs appear after switching to IDE
I'm seeing this log message for the first time today:
[2024-04-21T16:58:24Z DEBUG fuser::mnt::fuse_pure] fusermount:
[2024-04-21T16:58:24Z DEBUG fuser::mnt::fuse_pure] fusermount: fusermount3: unsafe option dev ignored
There is a MountOption::Dev specified, which supposedly adds support for block and character devices. However, I can't explain why it's being rejected. I was hopeful that I could use a patched version of libfuse3, but it seems not.
Extra info which may be useful
System Specs
Copied directly from KDE's System Info
Operating System: Kubuntu 23.10
KDE Plasma Version: 5.27.8
KDE Frameworks Version: 5.110.0
Qt Version: 5.15.10
Kernel Version: 6.5.0-28-generic (64-bit)
Graphics Platform: Wayland
Processors: 32 × 13th Gen Intel® Core™ i9-13900
Memory: 31,1 GiB of RAM
Graphics Processor: AMD Radeon RX 7900 XT
Manufacturer: ASUS
Generic writing operations also fail
A suggestion was to check whether mkfs fails in case fat32 doesn't support the block device. However this doesn't seem to be the case as formatting with any other filesystem produces the same results.
I'm also using mkfs as a testing platform because I currently don't know of any other readily-available system utilities that write to block devices directly, and mkfs is something I intend to use anyway.
Bad news :(
While reading the manpages I came across this paragraph which made my heart sink:
Most of the generic mount options described in mount are supported (ro, rw, suid, nosuid, dev, nodev, exec, noexec, atime, noatime, sync, async, dirsync).
Filesystems are mounted with nodev,nosuid by default, which can only be overridden by a privileged user.
So it looks like this isn't possible. Still, any insights here - any glimmer of hope - would be highly appreciated.
|
If I understand correctly, you expose a block device via your FUSE filesystem, which doesn't work for security reasons.
In Unix-like systems, files are much more than just files. Letting an unprivileged user provide arbitrary block devices is problematic because access to a block device corresponding to the rootfs device would allow the system to be compromised.
In your case, it would probably be best to expose the partitions as regular files, since you are already working with FUSE and mkfs works just as well with these anyway.
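To make that suggestion concrete, here is a hedged sketch (paths and sizes are illustrative, not from the question's project): carve each partition's byte range out of the backing image into an ordinary file, let mkfs format that regular file, then splice the bytes back into the image.

```python
import os

def carve(image, out, offset, length, chunk=1 << 20):
    """Copy image[offset:offset+length] into a standalone regular file."""
    with open(image, "rb") as src, open(out, "wb") as dst:
        src.seek(offset)
        remaining = length
        while remaining:
            buf = src.read(min(chunk, remaining))
            if not buf:
                raise EOFError("image shorter than the partition table claims")
            dst.write(buf)
            remaining -= len(buf)

def splice_back(image, part_file, offset):
    """Write the (possibly formatted) partition file back into the image.
    The caller must ensure the file still fits in its partition slot."""
    with open(image, "r+b") as dst, open(part_file, "rb") as src:
        dst.seek(offset)
        while True:
            buf = src.read(1 << 20)
            if not buf:
                break
            dst.write(buf)
```

Between carve and splice_back, an external `mkfs.fat part.img` works with no special privileges, because the target is an ordinary file.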
PS. If there are other tools that truly require the partition to be a block device, one can emulate this using a similar approach to that of the fakeroot program (i.e. hooking libc calls and emulating their results)
| EPERM when formatting block device on a FUSE filesystem |
1,417,719,191,000 |
I would like to mount my Samsung Galaxy S7 to a folder using simple-mtpfs, and I cannot do it the way I used to (on a previous Fedora release with my older Galaxy S4).
If I simply plug the S7 into my computer, I can browse it using Nautilus, but I cannot access it in a terminal as an ordinary folder, which is exactly what I want to achieve.
Every time I plug in the S7 I double-check that it works in MTP mode, so that isn't the problem.
In the past, I simply plugged in the S4 and typed:
simple-mtpfs /home/adam/S4
Now, I can run it and my phone even asks me to confirm the MTP choice, but the S7 directory is still empty.
I also tried to mount it as root or ordinary user and by device number, but with no result.
# simple-mtpfs --list-devices
1: SamsungGalaxy models (MTP)
$ simple-mtpfs --device 1 /home/adam/S7
# simple-mtpfs --device 1 /media/s7
$ simple-mtpfs /dev/libmtp-3-1 /home/adam/s7
# simple-mtpfs /dev/libmtp-3-1 /media/s7
I even tried to do it by udev rules:
# dmesg | tail
[16821.258485] usb 3-1: Product: SAMSUNG_Android
[16821.258487] usb 3-1: Manufacturer: SAMSUNG
[16821.258489] usb 3-1: SerialNumber: 98867?????????????
[16827.556099] usb 3-1: USB disconnect, device number 29
[16830.383366] usb 3-1: new high-speed USB device number 30 using xhci_hcd
[16830.548882] usb 3-1: New USB device found, idVendor=04e8, idProduct=6860
[16830.548887] usb 3-1: New USB device strings: Mfr=2, Product=3, SerialNumber=4
[16830.548903] usb 3-1: Product: SAMSUNG_Android
[16830.548905] usb 3-1: Manufacturer: SAMSUNG
[16830.548907] usb 3-1: SerialNumber: 98867?????????????
# touch /etc/udev/rules.d/10-phone.rules
Content of /etc/udev/rules.d/10-phone.rules is set to:
SUBSYSTEM=="usb", ATTR{idVendor}=="04e8", ATTR{idProduct}=="6860", SYMLINK+="S7"
After reloading rules I have /dev/S7 and I've tried to mount it:
# udevadm control --reload-rules
# ls -l /dev/S7
lrwxrwxrwx. 1 root root 15 10-20 15:03 /dev/S7 -> bus/usb/003/075
# ls -l /dev/libmtp-3-1
lrwxrwxrwx. 1 root root 15 10-20 15:03 /dev/libmtp-3-1 -> bus/usb/003/075
# simple-mtpfs /dev/S7 /media/s7
And still without any result. Mounting doesn't give any errors, but the directory where I'm about to mount it is still empty.
The details about my setup:
# uname -r
4.7.7-200.fc24.x86_64
# rpm -qa | grep mtp
simple-mtpfs-0.2-6.fc24.x86_64
libmtp-1.1.11-1.fc24.x86_64
gvfs-mtp-1.28.3-1.fc24.x86_64
# rpm -qa | grep fuse
fuse-libs-2.9.7-1.fc24.x86_64
glusterfs-fuse-3.8.4-1.fc24.x86_64
fuse-2.9.7-1.fc24.x86_64
gvfs-fuse-1.28.3-1.fc24.x86_64
Extract from system log (Fedora's journalctl) after plugging the phone and typing simple-mtpfs /media/s7 :
# journalctl -n 53
-- Logs begin at śro 2016-10-19 21:29:20 CEST, end at sob 2016-10-22 09:26:43 CEST. --
paź 22 09:24:31 PRZEDNICZEK01 kernel: usb 3-1: USB disconnect, device number 10
paź 22 09:24:31 PRZEDNICZEK01 PackageKit[1559]: get-updates transaction /384_eccedcee from uid 1000 finished with success after 45ms
paź 22 09:24:32 PRZEDNICZEK01 kernel: usb 3-1: new high-speed USB device number 11 using xhci_hcd
paź 22 09:24:32 PRZEDNICZEK01 kernel: usb 3-1: New USB device found, idVendor=04e8, idProduct=6860
paź 22 09:24:32 PRZEDNICZEK01 kernel: usb 3-1: New USB device strings: Mfr=2, Product=3, SerialNumber=4
paź 22 09:24:32 PRZEDNICZEK01 kernel: usb 3-1: Product: SAMSUNG_Android
paź 22 09:24:32 PRZEDNICZEK01 kernel: usb 3-1: Manufacturer: SAMSUNG
paź 22 09:24:32 PRZEDNICZEK01 kernel: usb 3-1: SerialNumber: 98867?????????????
paź 22 09:24:32 PRZEDNICZEK01 gvfsd[1813]: PTP: reading event an error 0x02ff occurredDevice 0 (VID=04e8 and PID=6860) is a Samsung Galaxy models (MTP).
paź 22 09:24:32 PRZEDNICZEK01 gvfsd[1813]: LIBMTP ERROR: couldnt parse extension samsung.com/devicestatus:0
paź 22 09:24:32 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: Could not find parent node for URI:'mtp://[usb:003,011]/'
paź 22 09:24:32 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: NOTE: URI theme may be outside scheme expected, for example, expecting 'file://' when given 'http://' prefix.
paź 22 09:24:32 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: Could not find parent node for URI:'mtp://[usb:003,011]/'
paź 22 09:24:32 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: NOTE: URI theme may be outside scheme expected, for example, expecting 'file://' when given 'http://' prefix.
paź 22 09:24:32 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-CRITICAL **: Could not set mount point in database 'urn:nepomuk:datasource:5e7b19a6b9795726a5c47a99a89757bf', GDBus.Error:org.freedesktop.Tracker1.SparqlError.Internal: UNIQUE constraint
paź 22 09:24:32 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-CRITICAL **: Could not set mount point in database 'urn:nepomuk:datasource:5c7e6bb78b9a6691c3ecea3925b2971d', GDBus.Error:org.freedesktop.Tracker1.SparqlError.Internal: UNIQUE constraint
paź 22 09:24:34 PRZEDNICZEK01 org.gnome.Shell.desktop[1832]: (gnome-shell:1832): Gjs-WARNING **: JS ERROR: TypeError: is null
paź 22 09:24:34 PRZEDNICZEK01 org.gnome.Shell.desktop[1832]: ContentTypeDiscoverer<._onContentTypeGuessed/<@resource:///org/gnome/shell/ui/components/autorunManager.js:133
paź 22 09:24:34 PRZEDNICZEK01 org.gnome.Shell.desktop[1832]: _proxyInvoker/asyncCallback@resource:///org/gnome/gjs/modules/overrides/Gio.js:86
paź 22 09:24:34 PRZEDNICZEK01 gvfsd[1813]: ** (process:3243): WARNING **: send_infos_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/18 (g-dbus-error-quark, 19)
paź 22 09:24:34 PRZEDNICZEK01 gvfsd[1813]: ** (process:3243): WARNING **: send_infos_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/18 (g-dbus-error-quark, 19)
paź 22 09:24:34 PRZEDNICZEK01 gvfsd[1813]: ** (process:3243): WARNING **: send_infos_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/18 (g-dbus-error-quark, 19)
paź 22 09:24:34 PRZEDNICZEK01 gvfsd[1813]: ** (process:3243): WARNING **: send_done_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/18 (g-dbus-error-quark, 19)
paź 22 09:24:35 PRZEDNICZEK01 PackageKit[1559]: get-updates transaction /385_decdbbba from uid 1000 finished with success after 45ms
paź 22 09:26:37 PRZEDNICZEK01 kernel: usb 3-1: usbfs: process 3385 (simple-mtpfs) did not claim interface 0 before use
paź 22 09:26:37 PRZEDNICZEK01 kernel: usb 3-1: reset high-speed USB device number 11 using xhci_hcd
paź 22 09:26:38 PRZEDNICZEK01 kernel: usb 3-1: usbfs: process 3385 (simple-mtpfs) did not claim interface 0 before use
paź 22 09:26:38 PRZEDNICZEK01 kernel: usb 3-1: usbfs: process 3250 (events) did not claim interface 0 before use
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: USB disconnect, device number 11
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: new high-speed USB device number 12 using xhci_hcd
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: New USB device found, idVendor=04e8, idProduct=6860
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: New USB device strings: Mfr=2, Product=3, SerialNumber=4
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: Product: SAMSUNG_Android
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: Manufacturer: SAMSUNG
paź 22 09:26:40 PRZEDNICZEK01 kernel: usb 3-1: SerialNumber: 98867?????????????
paź 22 09:26:41 PRZEDNICZEK01 gvfsd[1813]: PTP: reading event an error 0x02ff occurredDevice 0 (VID=04e8 and PID=6860) is a Samsung Galaxy models (MTP).
paź 22 09:26:41 PRZEDNICZEK01 gvfsd[1813]: LIBMTP ERROR: couldnt parse extension samsung.com/devicestatus:0
paź 22 09:26:41 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: Could not find parent node for URI:'mtp://[usb:003,012]/'
paź 22 09:26:41 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: NOTE: URI theme may be outside scheme expected, for example, expecting 'file://' when given 'http://' prefix.
paź 22 09:26:41 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: Could not find parent node for URI:'mtp://[usb:003,012]/'
paź 22 09:26:41 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-WARNING **: NOTE: URI theme may be outside scheme expected, for example, expecting 'file://' when given 'http://' prefix.
paź 22 09:26:41 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-CRITICAL **: Could not set mount point in database 'urn:nepomuk:datasource:0e6a8582e05ac627e4014d1ca1e6ec87', GDBus.Error:org.freedesktop.Tracker1.SparqlError.Internal: UNIQUE constraint
paź 22 09:26:41 PRZEDNICZEK01 tracker-miner-fs.desktop[2001]: (tracker-miner-fs:2001): Tracker-CRITICAL **: Could not set mount point in database 'urn:nepomuk:datasource:5c7e6bb78b9a6691c3ecea3925b2971d', GDBus.Error:org.freedesktop.Tracker1.SparqlError.Internal: UNIQUE constraint
paź 22 09:26:41 PRZEDNICZEK01 dbus-daemon[1760]: [session uid=1000 pid=1760] Activating service name='org.gnome.Shell.HotplugSniffer' requested by ':1.16' (uid=1000 pid=1832 comm="/usr/bin/gnome-shell " label="unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023")
paź 22 09:26:41 PRZEDNICZEK01 dbus-daemon[1760]: [session uid=1000 pid=1760] Successfully activated service 'org.gnome.Shell.HotplugSniffer'
paź 22 09:26:42 PRZEDNICZEK01 org.gnome.Shell.desktop[1832]: (gnome-shell:1832): Gjs-WARNING **: JS ERROR: TypeError: is null
paź 22 09:26:42 PRZEDNICZEK01 org.gnome.Shell.desktop[1832]: ContentTypeDiscoverer<._onContentTypeGuessed/<@resource:///org/gnome/shell/ui/components/autorunManager.js:133
paź 22 09:26:42 PRZEDNICZEK01 org.gnome.Shell.desktop[1832]: _proxyInvoker/asyncCallback@resource:///org/gnome/gjs/modules/overrides/Gio.js:86
paź 22 09:26:43 PRZEDNICZEK01 gvfsd[1813]: ** (process:3399): WARNING **: send_infos_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/17 (g-dbus-error-quark, 19)
paź 22 09:26:43 PRZEDNICZEK01 gvfsd[1813]: ** (process:3399): WARNING **: send_infos_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/17 (g-dbus-error-quark, 19)
paź 22 09:26:43 PRZEDNICZEK01 gvfsd[1813]: ** (process:3399): WARNING **: send_infos_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/17 (g-dbus-error-quark, 19)
paź 22 09:26:43 PRZEDNICZEK01 gvfsd[1813]: ** (process:3399): WARNING **: send_done_cb: No such interface 'org.gtk.vfs.Enumerator' on object at path /org/gtk/vfs/client/enumerator/17 (g-dbus-error-quark, 19)
paź 22 09:26:43 PRZEDNICZEK01 PackageKit[1559]: get-updates transaction /386_acdeddea from uid 1000 finished with success after 48ms
|
This is an old question; anyway, now in 2023 I can manage a Samsung phone with Android version 9 via Firefox as well as from the command line (gnome-terminal) in Ubuntu Desktop 22.04.x LTS.
When the phone is connected via USB (and mounted automatically), I find the path to it using this command,
find /run/user/*/gvfs -maxdepth 1 -name 'mtp:*'
Some standard commands do not work, but then I use gio, for example
gio mount ...
gio copy ...
See man gio for details.
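For scripting, the path lookup done by the find command above can be replicated directly; this is a sketch that assumes the /run/user/*/gvfs layout shown in that command:

```python
import glob
import os

def mtp_mounts(gvfs_root=None):
    """Return gvfs MTP mount points, mirroring:
    find /run/user/*/gvfs -maxdepth 1 -name 'mtp:*'"""
    if gvfs_root is None:
        pattern = "/run/user/*/gvfs/mtp:*"
    else:
        # Override used for testing or non-standard layouts.
        pattern = os.path.join(gvfs_root, "mtp:*")
    return sorted(glob.glob(pattern))
```

A wrapper script can then fail early (with a clear message) when the list is empty, i.e. when no phone is attached or the gvfs MTP backend has not mounted it.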
I made a small shellscript that can mount, read, write and unmount the phone. It is tailored for my specific needs, but it might help you create your own shellscript, so if you wish I can copy my shellscript into this answer.
| Mount Samsung Galaxy S7 using simple-mtpfs |
1,417,719,191,000 |
Everything I've tried (always with superuser privileges) has failed:
# rm -rf /path/to/undeletable
rm: cannot remove ‘/path/to/undeletable’: Is a directory
# rmdir /path/to/undeletable
rmdir: failed to remove ‘/path/to/undeletable’: Device or resource busy
# lsof +D /path/to/undeletable
lsof: WARNING: can't stat(/path/to/undeletable): Permission denied
lsof 4.86
latest revision: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/
latest FAQ: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/FAQ
latest man page: ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/lsof_man
usage: [-?abhKlnNoOPRtUvVX] [+|-c c] [+|-d s] [+D D] [+|-f[gG]] [+|-e s]
[-F [f]] [-g [s]] [-i [i]] [+|-L [l]] [+m [m]] [+|-M] [-o [o]] [-p s]
[+|-r [t]] [-s [p:s]] [-S [t]] [-T [t]] [-u s] [+|-w] [-x [fl]] [--] [names]
Use the ``-h'' option to get more help information.
When I try any of the above without superuser privileges the results are basically the same. The only difference is that the initial WARNING message from the lsof command has Input/output error instead of Permission denied. (This minor difference is in itself already puzzling enough, but, whatever...)
How can I obliterate this directory?
|
# rm -rf /path/to/undeletable
rm: cannot remove ‘/path/to/undeletable’: Is a directory
rm calls stat(2) to check whether /path/to/undeletable is a directory (to be deleted by rmdir(2)) or a file (to be deleted by unlink(2)). Since the stat call fails (we'll see why in a minute), rm decides to use unlink, which explains the error message.
# rmdir /path/to/undeletable
rmdir: failed to remove ‘/path/to/undeletable’: Device or resource busy
“Device or resource busy”, not “Directory not empty”. So the problem is that the directory is used by something, not that it contains files. The most obvious “used by something” is that it's a mount point.
# lsof +D /path/to/undeletable
lsof: WARNING: can't stat(/path/to/undeletable): Permission denied
This confirms that stat on the directory failed. Why would root lack permission? That's a limitation of FUSE: unless mounted with the allow_other option, FUSE filesystems can only be accessed by processes with the same user ID as the process that provides the FUSE driver. Even root is hit by this.
So you have a FUSE filesystem mounted by a non-root user. What do you want to do?
Most likely you're just annoyed by that directory and want to unmount it. Root can do that.
umount /path/to/undeletable
If you want to get rid of the mount point but keep the mount, move it with mount --move. (Linux only)
mkdir /elsewhere/undeletable
chown bob /elsewhere/undeletable
mount --move /path/to/undeletable /elsewhere/undeletable
mail bob -s 'I moved your mount point'
If you want to delete the files on that filesystem, use su or any other method to switch to that user, then delete the files.
su bob -c 'rm -rf /path/to/undeletable'
If you want to delete the files that are hidden by the mount point without disrupting the mount, create another view without the mount point and delete the files from there. (Linux only)
mount --bind /path/to /mnt
rm -rf /mnt/undeletable/* /mnt/undeletable/.[!.]* /mnt/undeletable/..?*
umount /mnt
| Undeletable directory |
1,417,719,191,000 |
I want sshfs connections to be terminated after some time, an hour at the maximum, and this means launching the at command with fusermount -u in the same command or script that I use to launch the sshfs mount.
However, I notice that fusermount can fail if the files are in use, although I want the at command to work whether there are files still open in the connection or not.
In my experience the only reliable way is to run the fusermount commands as root or terminate the sshfs connection via the kill command.
Is there some other way to force a disconnection without being running as root? The problem is I may not be at the terminal when the files are still open, but for the sake of security the connection must be broken unless I cancel the at command.
|
If you only want to free up the mount point, and don't care about terminating the SSH connection, you can run fusermount -z /mount/point. This performs a lazy unmount: the mount point is no longer associated with the mount, but the mount doesn't disappear until all open files on that mount are closed.
If you only want to close the SSH connection and don't care about the mount, you can simply kill the SFTP server process. That's the beauty of FUSE: kill the process offering the service and it's gone. A mount can't lock up a system resource. The processes that try to access the filesystems will get an error (ENOTCONN ”Transport endpoint is not connected”).
If you want to kill the processes that have files open on that filesystem, you can use the fuser command (no relation with FUSE except for the F being the first letter of “file”).
fuser -k /mount/point
| Is there a user-level foolproof way to force termination of sshfs connections? |
1,417,719,191,000 |
I'd like to recreate a feature of Mac OS X called sparse bundles (disk images made out of smaller files, making them easy to back up after a small change). For that I'm looking for a way to 'virtually' create a single file made by concatenating smaller ones (big.file shouldn't use all this space, just link to the .files):
4096 0.file
4096 1.file
4096 2.file
4096 3.file
4096 4.file
20480 big.file
so that I'd be able to mount big.file using loop device, format as btrfs and upon writing to this disk, data should be written only to certain .files, allowing me to backup easily.
Any suggestions how I could accomplish that? Perhaps something FUSE-related?
|
One way to do this would be to make each file an LVM physical volume, join those physical volumes in a volume group, and create an LVM logical volume using that space. But it's cumbersome: you need to associate each file with a loop device.
dd if=/dev/zero of=0.file bs=1024k count=4
losetup /dev/loop0 0.file
pvcreate /dev/loop0
# … repeat for all parts …
vgcreate -s 1m foo /dev/loop0 /dev/loop1 …
lvcreate -l 19 -n big foo
mkfs.btrfs /dev/mapper/foo-big
Reassembling the parts is not likely to be directly supported by your boot scripts, so you'd have to code quite a few things manually.
I don't see the point: how does splitting files facilitate backups? Many changes are likely to be spread over the whole volume (for example, several parts will contain copies of the superblock). You won't gain much by only backing up the parts that have changed: you'll need to look further inside the parts anyway.
If you want to make incremental backups, make them at the filesystem level.
If you want to make full backups of the whole image but ignore empty space, make sure to create a sparse file, use backup tools that manipulate sparse files efficiently, and periodically fill the empty space in the filesystem with zeroes and sparsify it.
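To illustrate the sparse-file part of that last point (sizes here are arbitrary): a sparse file has a large apparent size but few allocated blocks, and it is the allocated blocks that a hole-aware backup tool actually has to transfer.

```python
import os

def make_sparse(path, size):
    """Create a file with apparent size `size` but, on most Linux
    filesystems, almost no allocated blocks (it is one big hole)."""
    with open(path, "wb") as f:
        f.truncate(size)

def usage(path):
    """Return (apparent_bytes, allocated_bytes) for a file."""
    st = os.stat(path)
    return st.st_size, st.st_blocks * 512  # st_blocks is in 512-byte units
```

Writing zeroes into the filesystem image and then re-punching holes (e.g. with `fallocate --dig-holes` or `cp --sparse=always`) keeps the allocated size close to the actual data, which is what makes full-image backups cheap.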
| Virtual file made out of smaller ones (for mac-like sparse bundle solution) |
1,417,719,191,000 |
Trying to use veracrypt (console) in WSL.
I make a volume, seems to work OK... but when I try to mount it:
Done: 100.000% Speed: 5.0 MiB/s Left: 0 s
The VeraCrypt volume has been successfully created.
m17awl@M17A:/media/mike$ veracrypt /mnt/e/test.vc /media/mike/rsync_vc_drive_e/
Enter password for /mnt/e/test.vc:
Enter PIM for /mnt/e/test.vc:
Enter keyfile [none]:
Protect hidden volume (if any)? (y=Yes/n=No) [No]:
Error: fuse: device not found, try 'modprobe fuse' first
NB have seen this question, but when I try these commands I get this:
m17awl@M17A:/media/mike$ modprobe fuse
modprobe: FATAL: Module fuse not found in directory /lib/modules/4.4.0-19041-Microsoft
m17awl@M17A:/media/mike$ modprobe loop
modprobe: FATAL: Module loop not found in directory /lib/modules/4.4.0-19041-Microsoft
m17awl@M17A:/media/mike$ lsmod
libkmod: ERROR ../libkmod/libkmod-module.c:1668 kmod_module_new_from_loaded: could not open /proc/modules: No such file or directory
Error: could not get list of modules: No such file or directory
... obviously these problems may be WSL-specific. I have no idea, and have never heard of these Linux "modules" (am low-level, sorry!).
As a workaround I installed the W10 version of veracrypt console (the point of wanting to use the console version being that I want to mount and dismount from scripts). This also ran into a problem, as documented here, although I've managed to find a sub-optimal way of mounting, here, which at least works...
|
fuse is not supported in WSL 1
From WSL Issue #2869, a comment by therealkenc
No Linux modules on WSL because no Linux kernel in WSL.
fuse is compiled into WSL 2
From MSPoweruser article Windows Subsystem for Linux (WSL) 2 support coming to Windows 10 version 1903 and 1909
Full Linux kernel built into WSL 2
And from WSL Issue #17, a comment by therealkenc
FUSE is statically compiled into the WSL2 kernel. In general modprobe is not applicable in WSL2 by-design
Credit @Steve Bennett.
| "modprobe fuse" on WSL? |
1,417,719,191,000 |
I've heard that FUSE-based filesystems are notoriously slow because they are implemented in a userspace program. What is it about userspace that is slower than the kernel?
|
Code executes at the same speed whether it's in the kernel or in user land, but there are things that the kernel code can do directly while user land code has to jump through hoops. In particular, kernel code can map application memory directly, so it can directly copy the file contents between the application memory and the internal buffers from or to which the hardware copies. User code has to either make an extra copy via a pipe or socket, or make a more complex memory sharing operation.
Furthermore, each file operation has to go through the kernel: the only way for a process to interact with anything is via a system call. If the file operation is performed entirely inside the kernel, there's only one user/kernel transition and one kernel/user transition to perform, which is pretty fast. If the file operation is performed by another process, there has to be a context switch between the processes, which requires a far more expensive operation in the MMU.
The performance difference is still negligible compared with most hardware access times, but it becomes observable when the hardware isn't the bottleneck, especially as many hardware operations can be performed asynchronously while the main processor is doing something else, whereas context switches and data copies between processes keep the CPU busy.
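The transition cost described above is easy to observe. As a rough illustration (the numbers are machine-dependent), compare copying a buffer within one process against pushing the same bytes through a kernel round trip via a pipe, which is loosely what every FUSE request implies:

```python
import os
import time

def time_it(fn, n):
    t0 = time.perf_counter()
    for _ in range(n):
        fn()
    return time.perf_counter() - t0

def compare(n=20000, size=4096):
    """Return (in_process_seconds, through_kernel_seconds) for n copies."""
    payload = b"x" * size
    buf = bytearray(size)
    r, w = os.pipe()
    try:
        # Pure user-space copy of the payload into a buffer.
        in_process = time_it(lambda: buf.__setitem__(slice(None), payload), n)

        def round_trip():
            # One write() and enough read()s to drain it: two syscalls
            # and two user/kernel copies per iteration.
            os.write(w, payload)
            got = 0
            while got < size:
                got += len(os.read(r, size - got))

        through_kernel = time_it(round_trip, n)
    finally:
        os.close(r)
        os.close(w)
    return in_process, through_kernel
```

On a typical machine the pipe path is several times slower per iteration, even though both variants move the same 4 KiB of data.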
| Why is userspace slower than the kernel? |
1,417,719,191,000 |
TL;DR
When a fuse filesystem is mounted via the mount command, the environment variables are not passed to the fuse script. Why?
Context
I am trying to mount hdfs (hadoop file system) via fuse.
This is easy on the command line:
# Short example version:
LD_LIBRARY_PATH=blah hadoop-fuse-dfs -onotrash dfs://ambari:8020 /mnt/hdfsfuse
# Actual version with full path for completeness
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/hadoop/yarn/local/filecache/11/mapreduce.tar.gz/hadoop/lib/native /usr/hdp/2.5.0.0-1245/hadoop/bin/hadoop-fuse-dfs -onotrash dfs://ambari:8020 /mnt/hdfsfuse
This is all and well, but if I put the definition of the FS in /etc/fstab to then use the mount command, I end up with:
fuse_dfs: error while loading shared libraries: libjvm.so: cannot open
shared object file: No such file or directory
Looking at the hadoop-fuse-dfs script and adding debug output, I see that LD_LIBRARY_PATH is empty inside this script (this is true both if I export LD_LIBRARY_PATH first and if I add it at the start of the command).
System
hdp 2.5, centos 7
Question
Short of rewriting the mount script to hardcode LD_LIBRARY_PATH, how can I have environment variables in general passed via mount?
|
You can write your own mount fuse helper, which then calls the real fuse script. In the simple case of an fstab entry like:
dfs://ambari:8020 /mnt/hdfsfuse fuse.mydfshelper flag,flag,...
then your script /usr/bin/mydfshelper is called with args
dfs://ambari:8020 /mnt/hdfsfuse -o flag,flag,...
So you just need to write a one-line mydfshelper holding something like:
#!/bin/bash
LD_LIBRARY_PATH=blah hadoop-fuse-dfs -onotrash "$@"
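The argument mapping the helper relies on can be written out explicitly; this only mirrors the fstab-to-helper translation as described in this answer, it is not derived from mount's source:

```python
def helper_argv(fs_spec, mountpoint, options):
    """Arguments a mount helper receives for an fstab line of the form
    '<fs_spec> <mountpoint> fuse.<helper> <options>'."""
    argv = [fs_spec, mountpoint]
    if options:
        argv += ["-o", options]
    return argv
```

So `"$@"` in the helper script expands to exactly this list, and any environment the helper sets (like LD_LIBRARY_PATH) is under the script's own control rather than mount's.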
| LD_LIBRARY_PATH lost when using mount command |
1,417,719,191,000 |
I'm trying to implement a toy file system and I'm struggling to understand how to correctly implement the readdir() operation in an efficient, scalable way. To understand the interface used by FUSE, I'm mainly reading the documentation of pyfuse3, but I don't think that my issues would be solved by using any other FUSE wrapper.
What I understand is that when my implementation for readdir() is called, I'm expected to call readdir_reply() with successive directory entries until that method returns False. While doing that, I'm expected to associate each entry with a unique[1] 64-Bit ID, called next_id. On the next call to readdir(), I'll be passed one of those IDs and I'm expected to return directory entries starting after the entry that I've previously associated with that ID.
If the directory changes (e.g. entries are added or removed) between calls to readdir(), I'm allowed to freely choose whether I want to include added items and/or omit removed items in successive calls, but all other items must keep their ID so that entries won't be skipped or returned twice.
Semantically, this all seems fine to me. The simplest implementation that I can think of would just read all the directory entries into an array on opendir() and then use each entry's index in the array as its ID. To avoid having to read all entries at once, the array could be built up successively in each readdir() call. But the implementation won't be able to clear the array until the file handle is released[2].
Modern file systems have no trouble handling directories with tens of millions of files. I'm assuming that these implementations wouldn't be tolerated if they allocated memory on the order of the number of directory entries for each directory file handle (e.g. 10 million files × 100 bytes per entry = 1 GB). These file systems are almost exclusively implemented in the kernel, not via FUSE.
All this leaves me to conclude that at least one of these statements is true:
I'm misunderstanding the requirements of the FUSE readdir() operation.
There is a more efficient solution to meet those requirements that I'm not seeing.
File systems inside the kernel have a better API they can implement, which does not require all this state to be kept.
File systems don't implement the equivalent of readdir() correctly, but in a way that applications generally don't care about.
File systems just do allocate gigabytes of memory when traversing a directory but no-one's bothered by it.
So which one is it?
I would like to understand how to implement the readdir() FUSE operation efficiently in a way that meets all expectations generally met by other file system implementations.
[1]: Unique within a single file handle.
[2]: Or maybe when readdir() is called with start_id set to 0.
|
For efficiency in the presence of concurrent modification and hardlinks, you need a cookie btree on the side. I'm not aware of another off_t approach that's correct under POSIX.
I would guess this comment provides most of the answer: https://github.com/facebookexperimental/eden/blob/5cc682e8ff24ef182be2dbe07e484396539e80f4/eden/fs/inodes/TreeInode.cpp#L1798-L1833
I'll duplicate it here, including its reference links:
Implementing readdir correctly in the presence of concurrent modifications
to the directory is nontrivial. This function will be called multiple
times. The off_t value given is either 0, on the first read, or the value
corresponding to the last entry's offset. (Or an arbitrary entry's offset
value, given seekdir and telldir).
POSIX compliance requires that, given a sequence of readdir calls across the
entire directory stream, all entries that are not modified are
returned exactly once. Entries that are added or removed between readdir
calls may be returned, but don't have to be.
Thus, off_t as an index into an ordered list of entries is not sufficient.
If an entry is unlinked, the next readdir will skip entries.
One option might be to populate off_t with a hash of the entry name. off_t
has 63 usable bits (minus the 0 value which is reserved for the initial
request). 63 bits of SpookyHashV2 is probably sufficient in practice, but
it would be possible to create a directory containing collisions, causing
duplicate entries or an infinite loop. Also it's unclear how to handle
the entry at off being removed before the next readdir. (How do you find
where to restart in the stream?).
Today, Eden does not support hard links. Therefore, in the short term, we
can store inode numbers in off_t and treat them as an index into an
inode-sorted list of entries. This has quadratic time complexity without an
additional index but is correct.
In the long term, especially when Eden's tree directory structure is stored
in SQLite or something similar, we should maintain a seekdir/readdir cookie
index and use said cookies to enumerate entries.
https://oss.oracle.com/pipermail/btrfs-devel/2008-January/000463.html
https://yarchive.net/comp/linux/readdir_nonatomicity.html
https://lwn.net/Articles/544520/
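To make the cookie-index idea above concrete, here is a minimal Python sketch (hypothetical names only; this is neither EdenFS nor pyfuse3 code): each entry gets a monotonically increasing cookie held in a per-handle side table, and readdir() resumes strictly after the last returned cookie. A real implementation would keep the cookie-to-name map in a btree or SQLite index as the quoted comment suggests; the linear scan here is only for clarity.

```python
import itertools

class DirHandle:
    """Toy cookie index for readdir: every entry gets a stable cookie,
    so unmodified entries are returned exactly once even when the
    directory changes between readdir() calls."""

    def __init__(self, entries):
        self._next = itertools.count(1)   # cookie 0 is reserved for "start"
        self._by_cookie = {}              # cookie -> name (the side table)
        self._by_name = {}                # name -> cookie
        for name in entries:
            self.add(name)

    def add(self, name):
        cookie = next(self._next)
        self._by_cookie[cookie] = name
        self._by_name[name] = cookie

    def remove(self, name):
        del self._by_cookie[self._by_name.pop(name)]

    def readdir(self, start_cookie, count):
        """Return up to `count` (cookie, name) pairs strictly after start_cookie."""
        result = []
        for cookie in sorted(self._by_cookie):  # linear scan; a real FS keeps a btree
            if cookie <= start_cookie:
                continue
            result.append((cookie, self._by_cookie[cookie]))
            if len(result) == count:
                break
        return result

# Entries that existed the whole time come back exactly once, even though
# the directory was modified between the two calls:
d = DirHandle(["a", "b", "c"])
first = d.readdir(0, 2)              # returns "a" and "b"
d.remove("b")
d.add("d")
rest = d.readdir(first[-1][0], 10)   # resumes after the cookie of "b": "c", then "d"
```

Note this still keeps state proportional to the number of entries per open handle; the point of persisting the index (btree, SQLite) is precisely to avoid holding it all in memory.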
| Correctly implementing seeking in FUSE readdir() operation |
1,417,719,191,000 |
Kernel space device drivers usually implement directories and file that show through /sys or /proc. Can the long running user space programs do this as well?
I have a daemon or long running program that needs to be able to be queried for some data and have some data set by external programs while it runs.
I could do a full blown sockets interface, but that's a lot of overhead for the program and the external requestors.
As the linux kernel developers found, using the "everything is a file" model was useful for tweaking kernel setting. I'd like to do the same.
Some may think the /sys directory is the sacred space of the kernel, but I don't see an important line between what is the "system" and what is some other service/server/application.
Using FUSE...
I've decided to use FUSE, the 'File system in USErspace' package libfuse3.so.
(After writing a wrapper for it...) I can define an array of structs, one per access variable/file:
struct fileObj files[] = {
{"mode", mode, getFunc, putFunc},
{"numbProcs", numbProcs, getFunc, putFunc},
{"svrHostPort", hostPort, getFunc, putFunc},
{"somethingWO", jakeBuf, NULL, putFunc}, // Write only file (why?)
{"timestamp", NULL, getTimestampFunc, NULL}, // Returns timestamp, R/O
{0}
};
The mountpoint for the FUSE filesystem is '/ssm/fuse'... The 'ls -l' shows that each entry in the 'files' array shows up as a file, some R/O, some R/W, one W/O. The 'getTimestampFunc' in the 'get' function position shows that a special function can be associated with a file to compute responses.
ribo@box:~/c$ ls -l /ssm/fuse
total 0
-rw-r--r-- 1 ribo ribo 10 Dec 28 17:17 mode
-rw-r--r-- 1 ribo ribo 1 Dec 28 17:17 numbProcs
--w------- 1 ribo ribo 3 Dec 28 17:17 somethingWO
-rw-r--r-- 1 ribo ribo 5 Dec 28 17:17 svrHostPort
-r--r--r-- 1 ribo ribo 32 Dec 28 17:17 timestamp
ribo@box:~/c$ cat /ssm/fuse/timestamp
18/12/28 17:17:27ribo@box:~/c$cat /ssm/fuse/mode
hyperSpeedribo@box:~/c$ echo slow >/ssm/fuse/mode
ribo@box:~/c$ cat /ssm/fuse/mode
slow
The 'echo >' shows passing a value into the program. So its easy for me to peek and poke various parameters of the program as it runs.
|
I don’t think there’s any way to add /sys or /proc entries outside the kernel. For /sys it wouldn’t make much sense anyway — it’s a direct representation of kobject data structures.
You can however provide similar interfaces from userspace, for example using FIFOs; see mkfifo for details. You can see an implementation of this in sysvinit with its initctl FIFO.
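As a rough sketch of that FIFO approach (hypothetical names; a real daemon would hook the FIFO into its own event loop rather than a thread), the daemon reads one command per line from the FIFO while external programs simply write to it:

```python
import os
import tempfile
import threading

def run_fifo_daemon(fifo_path, state, stop_word="quit"):
    """Toy daemon loop: read one command per line from the control FIFO
    and update a shared dict, until the stop word arrives."""
    while True:
        with open(fifo_path) as f:        # blocks until some writer opens it
            for line in f:
                cmd = line.strip()
                if cmd == stop_word:
                    return
                key, _, value = cmd.partition("=")
                state[key] = value

tmp = tempfile.mkdtemp()
fifo = os.path.join(tmp, "ctl")
os.mkfifo(fifo)                           # the "file" external programs talk to

state = {}
t = threading.Thread(target=run_fifo_daemon, args=(fifo, state))
t.start()

# An external program would do: echo "mode=slow" > /path/to/ctl
with open(fifo, "w") as f:
    f.write("mode=slow\nquit\n")
t.join()
print(state["mode"])                      # → slow
```

Unlike the FUSE variant, a FIFO gives you a single write-only command channel rather than one file per variable, but it needs no extra libraries.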
| Can user space programs provide/implement sysfs or procfs files to pass data to and from a program? |
1,417,719,191,000 |
I installed the zfs-fuse file system on Debian Wheezy and enabled gzip-9 compression on one dataset ("storage/backup"). When I check if compression is enabled on this dataset, it shows YES:
$: zfs get compression storage/backup
NAME PROPERTY VALUE SOURCE
storage/backup compression gzip-9 local
However, when I check the compression rate with du -ah or with zfs get compressratio, no compression can be seen.
All files, including well compressible ones (e.g. text files), take up exactly the same disk size as uncompressed ones:
$: zfs get compressratio storage/backup
NAME PROPERTY VALUE SOURCE
stor/backup compressratio 1.00x -
Why does this situation occur?
Here some info from zfs get all about the dataset:
compressratio 1.00x -
mounted yes -
quota none default
reservation none default
recordsize 128K default
mountpoint /storage/backup default
sharenfs off default
checksum on default
compression gzip-9 local
atime on default
devices on default
|
It looks like zfs-fuse will update the compressratio data every 30 seconds when there is limited IO occurring, but there is another trigger for the update: background IO or really large files cause the data update to occur sooner.
I've put some test functions up on a gist. They require a clean (no files) file system that will start at 1.00x.
If the scripts pause forever on the first test then your compression counters are never updating and you have an issue with your install.
Running the scripts on a Debian wheezy box:
$ uname -a
Linux zfs-fuse 3.2.0-4-686-pae #1 SMP Debian 3.2.54-2 i686 GNU/Linux
Results in the following:
$ test_compression compress
Testing [compress]
Testing size [4096]
Waited 0 seconds for [compressratio_is_one]
4096 bytes made up of 1*4096 blocks
Waited 20 seconds for [compresstario_is_not_one]
1.12x
Testing size [16384]
Waited 30 seconds for [compressratio_is_one]
16384 bytes made up of 1*16384 blocks
Waited 30 seconds for [compresstario_is_not_one]
1.53x
Testing size [1048576]
Waited 30 seconds for [compressratio_is_one]
1048576 bytes made up of 1*131072 blocks
Waited 30 seconds for [compresstario_is_not_one]
31.44x
Testing size [33161216]
Waited 30 seconds for [compressratio_is_one]
33161216 bytes made up of 255*131072 blocks
Waited 0 seconds for [compresstario_is_not_one]
202.31x
You can reduce this, normally by about half, by doing something intensive in the background, which probably triggers the counter update.
In the background
$ while true; do touch somefile; rm somefile; done
Then testing again:
$ test_compression compress
Testing [compress]
Testing size [4096]
Waited 0 seconds for [compressratio_is_one]
4096 bytes made up of 1*4096 blocks
Waited 5 seconds for [compresstario_is_not_one]
1.11x
Testing size [16384]
Waited 17 seconds for [compressratio_is_one]
16384 bytes made up of 1*16384 blocks
Waited 17 seconds for [compresstario_is_not_one]
1.50x
Testing size [1048576]
Waited 16 seconds for [compressratio_is_one]
1048576 bytes made up of 1*131072 blocks
Waited 10 seconds for [compresstario_is_not_one]
29.73x
Testing size [33161216]
Waited 0 seconds for [compressratio_is_one]
33161216 bytes made up of 244*131072 blocks
Waited 0 seconds for [compresstario_is_not_one]
201.35x
Of note, on FreeBSD the update happens ~ every 5 seconds:
$ test_compression giggidy/compress
Testing [giggidy/compress]
Testing size [4096]
Waited 0 seconds for [compressratio_is_one]
4096 bytes made up of 1*4096 blocks
Waited 4 seconds for [compresstario_is_not_one]
1.21x
Testing size [16384]
Waited 5 seconds for [compressratio_is_one]
16384 bytes made up of 1*16384 blocks
Waited 5 seconds for [compresstario_is_not_one]
1.91x
Testing size [1048576]
Waited 5 seconds for [compressratio_is_one]
1048576 bytes made up of 1*131072 blocks
Waited 5 seconds for [compresstario_is_not_one]
39.33x
Testing size [33161216]
Waited 5 seconds for [compressratio_is_one]
33161216 bytes made up of 1*131072 blocks
Waited 4 seconds for [compresstario_is_not_one]
114.25x
I will add a Solaris based example when I can get on a box.
| zfs-fuse: enabling compression has no effect |
1,433,760,573,000 |
I do not hold a deep understanding of computer science concepts but would like to learn more about how the utility encfs works. I have a few question regarding the concept of filesystem in regards to encfs. It is said that encfs is a cryptographic filesystem wiki link.
1) To encrypt the files, encfs is moving around blocks of the files to be encrypted, so am I correct to see this 'scrambled' version of the files as a new perspective which justifies the term of a new filesystem?
2) In the man pages of encfs, in the section CAVEATS (link to man of encfs online), it says that encfs is not a true file system. How should I understand this? Is that because some necessary feature common to all file systems is missing in encfs? Or is it because of some other more substantial reason?
3) The man pages say that it creates a virtual encrypted file system. There are two questions here: what is it that makes it virtual? Is it that it is a file system within a file system? And for encrypted: is it that there is no straightforward way to map the file blocks into a format to be read by other programs?
4) How does the command fusermount relate to encfs?
|
I think that behind your description, there is a misconception. The unencrypted data is not stored on the disk at any point. When you write to a file in the encfs filesystem, the write instruction goes to the encfs process; the encfs process encrypts the data (in memory) and writes the ciphertext to a file. The file names, as well as the file contents, are encrypted. Reading a file undergoes the opposite process: encfs reads the encrypted data from the disk file, decrypts it in memory and passes the plaintext to the requesting application.
When you run the encfs command, it does not decrypt any data. It only uses the password that you supply to unlock the filesystem's secret key. (This is actually a decryption operation, cryptographically speaking, but a different type from what happens with the file data. I will not go into more details here.)
1) Encfs is not exactly “moving blocks around”; it is decoding blocks when it reads them. Encfs is a filesystem because it behaves like one: you can store files on it, when it's mounted.
2) Encfs is not a “true” filesystem because it doesn't work independently. Encfs only provides an encryption layer; it uses an underlying filesystem to actually store data and metadata (metadata is auxiliary information about files such as permissions and modification times).
3) Virtual filesystem is another way to say that encfs itself doesn't store any data, it needs an underlying filesystem (see (2) above) for that. Encrypted means just that: encfs stores the data that you put in it in an encrypted form, which cannot be decrypted without the key. Another program could read the data stored by encfs if and only if that other program had access to the key (which requires the password that the key is protected with).
4) The fusermount command sets up a FUSE mount point. You would not normally call it directly, because a FUSE filesystem is implemented by a user-mode process which you have to start anyway, and that process (e.g. encfs) will take care of setting up the mount point. Unmounting a FUSE filesystem, on the other hand, is a generic operation, you can always do it by calling fusermount -u.
| How to understand the filesystem concepts used by encfs? |
1,433,760,573,000 |
I have the feeling that I cannot modify the date of symlinks on bindfs.
See the following transcript of what I tried.
On EXT4:
nailor@needle:~$ mkdir /tmp/ex
nailor@needle:~$ cd /tmp/ex
nailor@needle:/tmp/ex$ touch realfile
nailor@needle:/tmp/ex$ ln -s realfile linkfile
nailor@needle:/tmp/ex$ stat realfile linkfile
File: `realfile'
Size: 0 Blocks: 0 IO Block: 4096 regular empty file
Device: 801h/2049d Inode: 22678377 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ nailor) Gid: ( 1000/ nailor)
Access: 2013-09-09 00:46:15.356004837 +0200
Modify: 2013-09-09 00:46:15.356004837 +0200
Change: 2013-09-09 00:46:15.356004837 +0200
Birth: -
File: `linkfile' -> `realfile'
Size: 8 Blocks: 0 IO Block: 4096 symbolic link
Device: 801h/2049d Inode: 22678380 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 1000/ nailor) Gid: ( 1000/ nailor)
Access: 2013-09-09 00:46:34.299766676 +0200
Modify: 2013-09-09 00:46:27.227855586 +0200
Change: 2013-09-09 00:46:27.227855586 +0200
Birth: -
nailor@needle:/tmp/ex$ touch -h realfile linkfile
nailor@needle:/tmp/ex$ stat realfile linkfile
File: `realfile'
Size: 0 Blocks: 0 IO Block: 4096 regular empty file
Device: 801h/2049d Inode: 22678377 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ nailor) Gid: ( 1000/ nailor)
Access: 2013-09-09 00:46:46.931607877 +0200
Modify: 2013-09-09 00:46:46.931607877 +0200
Change: 2013-09-09 00:46:46.931607877 +0200
Birth: -
File: `linkfile' -> `realfile'
Size: 8 Blocks: 0 IO Block: 4096 symbolic link
Device: 801h/2049d Inode: 22678380 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 1000/ nailor) Gid: ( 1000/ nailor)
Access: 2013-09-09 00:46:49.899570563 +0200
Modify: 2013-09-09 00:46:46.931607877 +0200
Change: 2013-09-09 00:46:46.931607877 +0200
Birth: -
On bindfs:
nailor@needle:/tmp/ex$ mkdir sub
nailor@needle:/tmp/ex$ bindfs -n . sub
nailor@needle:/tmp/ex$ cd sub
nailor@needle:/tmp/ex/sub$ touch -h realfile linkfile
nailor@needle:/tmp/ex/sub$ stat realfile linkfile
File: `realfile'
Size: 0 Blocks: 0 IO Block: 4096 regular empty file
Device: 17h/23d Inode: 2 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ nailor) Gid: ( 1000/ nailor)
Access: 2013-09-09 00:47:34.000000000 +0200
Modify: 2013-09-09 00:47:34.000000000 +0200
Change: 2013-09-09 00:47:34.755006803 +0200
Birth: -
File: `linkfile' -> `realfile'
Size: 8 Blocks: 0 IO Block: 4096 symbolic link
Device: 17h/23d Inode: 3 Links: 1
Access: (0777/lrwxrwxrwx) Uid: ( 1000/ nailor) Gid: ( 1000/ nailor)
Access: 2013-09-09 00:46:49.899570563 +0200
Modify: 2013-09-09 00:46:46.931607877 +0200
Change: 2013-09-09 00:46:46.931607877 +0200
Birth: -
As you can see, the times for the symlink did not change on bindfs.
This is a problem with e.g. rsync, because this way I get:
rsync: failed to set times on "link1": No such file or directory (2)
rsync: failed to set times on "link2": No such file or directory (2)
...
I found this to be a known problem with sshfs ( http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=640038 ) but found nothing mentioning bindfs. Now I am wondering if there is any such mention, and/or an explanation for the missing functionality, and/or an answer as to whether this affects FUSE in general...
|
Filesystems where you can't change the date of a symlink are common. This in itself is not a bug of bindfs or sshfs.
Rsync is designed to cope with that. It ignores failures to change the time and other metadata of symbolic links if the underlying filesystem doesn't support it.
Under Linux, rsync calls utimensat with the AT_SYMLINK_NOFOLLOW flag to change the times of the symbolic link. As far as I can tell, the problem is that the FUSE API has no corresponding flag for utimens (or utime), so the filesystem implementation only sees a request to change the time and no indication of whether to follow the symbolic link or not. Lacking any specific indication, both bindfs and sshfs act in a backward-compatible way: they modify the target of the symbolic link. For a broken symbolic link, this results in an ENOENT error.
At first glance, I thought this was a bug in FUSE: since FUSE is unable to pass the AT_SYMLINK_NOFOLLOW flag, it should return an error (EINVAL, or ENOTSUP). However, from a cursory reading of the Linux VFS code, it looks like the filesystem-specific code is invoked either on the symbolic link or on its target, and should therefore never follow any symbolic link. This makes perfect sense: the target of the symbolic link may be on a different filesystem.
So I think this is a bug in bindfs and sshfs (and probably in many other FUSE filesystems): if instructed to change the metadata of a symbolic link, they should only affect that symbolic link, or return an error if the requested change is not possible.
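To see the distinction concretely, here is a small Python demonstration (Python's os.utime(..., follow_symlinks=False) wraps utimensat(2) with AT_SYMLINK_NOFOLLOW on Linux); on a dangling link, the following variant fails with ENOENT just like bindfs and sshfs do:

```python
import errno
import os
import tempfile

d = tempfile.mkdtemp()
link = os.path.join(d, "linkfile")
os.symlink("missing-target", link)    # deliberately dangling

ts = 1000000000
# follow_symlinks=False is utimensat(2) with AT_SYMLINK_NOFOLLOW:
# it stamps the symlink itself, so it works even on a dangling link.
os.utime(link, (ts, ts), follow_symlinks=False)
assert int(os.lstat(link).st_mtime) == ts

# Following the link instead tries to stamp the (missing) target,
# which fails with ENOENT -- the error these FUSE filesystems surface.
raised = False
try:
    os.utime(link, (ts, ts), follow_symlinks=True)
except FileNotFoundError as e:
    raised = (e.errno == errno.ENOENT)
assert raised
```

Run the first call on an ext4 mount and it succeeds; run it inside a bindfs mount and you get the ENOENT behaviour of the second call instead, which is exactly what trips up rsync.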
| modify date of symlink on bindfs |
1,433,760,573,000 |
When I run podman with --storage-opt ignore_chown_errors=true I am getting
Error: kernel does not support overlay fs: 'overlay' is not supported over extfs at /home/user/.local/share/containers/storage/overlay: backing file system is unsupported for this graph driver
|
This is because on Debian you do not have a kernel driver for overlayfs, so you'll need to use a userspace filesystem driver for overlayfs. First make sure it's installed,
sudo apt install fuse-overlayfs
Then add this argument to podman (NOT podman run),
--storage-opt mount_program=/usr/bin/fuse-overlayfs
In your case it should look like this
podman --storage-opt mount_program=/usr/bin/fuse-overlayfs --storage-opt ignore_chown_errors=true run [...]
This option can also be set in ~/.config/containers/storage.conf under mount_program
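For reference, the equivalent stanza in ~/.config/containers/storage.conf looks roughly like this (a sketch only; the exact section an option lives in varies between containers-storage versions, so check containers-storage.conf(5) on your system):

```toml
[storage]
driver = "overlay"

[storage.options]
mount_program = "/usr/bin/fuse-overlayfs"

[storage.options.overlay]
ignore_chown_errors = "true"
```

With this in place, plain podman run works without repeating the --storage-opt flags on every invocation.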
| Error: kernel does not support overlay fs: 'overlay' is not supported over extfs |
1,433,760,573,000 |
I found out about fuseiso a while ago but I need to mount UDF images, and it seems like fuseiso doesn't support it after failed attempts on my part to mount a UDF image with it. I need to be able to do this as a regular user for arbitrary images and I have to be able to unmount them as well, preferably with the mount-points scoped within a particular user directory (assuming that's not a problem, e.g. /home/user/mounted/*), so directly using mount doesn't work. Is there a way to accomplish this?
I'm on Ubuntu and while investigating this I found out about pmount but it seems like it doesn't fit my needs because 1) I'm trying to mount an .iso file and not a /dev block device 2) I wouldn't be able to mount it at a user location (so I could then unmount it as a user, such as by using fusermount -u if it were a fuse fs).
POLICY
The mount will succeed if all of the following conditions are met:
· device is a block device in /dev/
· device is not in /etc/fstab (if it is, pmount executes mount device as the calling user to handle this
transparently). See below for more details.
· device is not already mounted according to /etc/mtab and /proc/mounts
· if the mount point already exists, there is no device already mounted at it and the directory is empty
· device is removable (USB, FireWire, or MMC device, or /sys/block/drive/removable is 1) or whitelisted in
/etc/pmount.allow.
· device is not locked
What options do I have? In the worst and most discouraged case, I imagine that as a final resort I could write a custom setuid script to accomplish this. I'm hoping I don't have to risk that, though.
|
I use udisksctl loop-setup -f /full/path/to/iso for that from the udisks2 package.
udisksctl loop-setup -f /media/myname/dvd/avatar/buch-1/AVATAR_BK1_VOL1_EUR.iso
Mapped file /media/myname/dvd/avatar/buch-1/AVATAR_BK1_VOL1_EUR.iso as /dev/loop1.
It mounts the iso in /media/$USER/.
If not, you also need to type udisksctl mount -b /dev/loop1
$ mount | grep udf
/media/myname/dvd/avatar/buch-1/AVATAR_BK1_VOL1_EUR.iso on /media/myname/AVATAR_BK1_VOL1_EUR type udf (ro,nosuid,nodev,relatime,uid=1000,gid=1000,iocharset=utf8,uhelper=udisks2)
Umount with udisksctl unmount -b /dev/loop1 if the iso was mapped on /dev/loop1.
Should work without gui too.
| mounting and unmounting UDF .iso images as a regular user |
1,433,760,573,000 |
I've mounted a networked filesystem in GNOME by clicking on the icon in the left of Nautilus. However, when I use the terminal, I can't figure out how to access that filesystem. Is it possible?
|
Nautilus uses GVFS to mount networked filesystems. Unlike its predecessor GnomeVFS, GVFS includes a FUSE bridge so that non GVFS-aware applications can still access GVFS data.
That means that there are two ways to do this: using the FUSE bridge, or using the native GVFS tools.
Using the FUSE bridge
According to man gvfsd-fuse, the GVFS daemon will mount bridges either at $XDG_RUNTIME_DIR/gvfs or $HOME/.gvfs. You should first check in $HOME/.gvfs.
$ ls ~/.gvfs
If it's there, great. All your networked, Nautilus-mounted filesystems should be shown as subdirectories.
However, on my system (Arch GNU/Linux, GNOME 3.10), that directory doesn't exist. Therefore, you need to look in $XDG_RUNTIME_DIR/gvfs. On my system, this ends up being /run/user/$UID/gvfs, where $UID is your user id. As above, your mounts will be a subdirectory of this directory. You can use ordinary tools, like ls, cat, $EDITOR, etc. to work with the contents of these subdirectories.
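If you want to locate the bridge programmatically, the lookup order described above can be expressed in a few lines of Python (a sketch, assuming the path layout from gvfsd-fuse's man page):

```python
import os

def gvfs_mount_root():
    """Return the GVFS FUSE bridge directory, or None if none exists.
    Checks $XDG_RUNTIME_DIR/gvfs first, then the legacy ~/.gvfs."""
    runtime = os.environ.get("XDG_RUNTIME_DIR", "/run/user/%d" % os.getuid())
    for candidate in (os.path.join(runtime, "gvfs"),
                      os.path.expanduser("~/.gvfs")):
        if os.path.isdir(candidate):
            return candidate
    return None

print(gvfs_mount_root())   # e.g. /run/user/1000/gvfs, or None without GVFS
```
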
Using the native GVFS tools
GVFS provides the gvfs-* family of tools to natively interact with GVFS. For example, gvfs-cat is just like regular cat, but it is GVFS-aware.
All network mounts are referenced in the special GVFS computer:/// location. We need to get what they reference.
$ gvfs-tree computer:///
computer:///
|-- APPLE SD Card Reader.drive
|-- HL-DT-STDVDRW GA32N.drive -> burn:///
|-- ST31000528AS.drive -> file:///run/media/alex/Macintosh%20HD
|-- root.link -> file:///
`-- [email protected] -> davs://[email protected]/remote.php/webdav
In this listing, you can see my SD card reader, my optical drive, a different partition on my internal drive (mounted), a representation of the filesystem root, and finally, the networked filesystem that we're interested in (an OwnCloud account). Notice that this command indicates links.
Now that we have the address for the networked filesystem, we can use GVFS tools to look at it. For example, let's list the contents of my OwnCloud.
$ gvfs-ls davs://[email protected]/remote.php/webdav
Introduction to Arch Linux.odp
Looks like I don't have too much there. Let's create a new file. Now, GVFS doesn't have a tool like touch, but it does have a tool to save files. We can just save an empty file.
$ gvfs-save davs://[email protected]/remote.php/webdav/foobar.txt
gvfs-save will wait for you to type something. Since we don't actually want anything to be in this file, hit Ctrl-D to save.
Now we can open this file with the default handler for it.
$ gvfs-open davs://[email protected]/remote.php/webdav/foobar.txt
It's worth noting that if you don't give it a file extension, the file won't open. This is because gvfs-open will throw an error about not knowing which application should be used to handle the file. (If you made this mistake, fix it with gvfs-move.)
You can list all the GVFS commandline tools with a simple ls.
$ ls /usr/bin/gvfs-*
| How can I access networked filesystems that I've mounted in Nautilus? |
1,433,760,573,000 |
I'm trying to run the borg mount in Borg Backup, but it's saying that fusermount3 is not installed.
fuse: failed to exec fusermount3: No such file or directory
Googling this problem isn't helping. I have fuse installed:
fuse is already the newest version (2.9.9-3).
as is libfuse:
libfuse2 is already the newest version (2.9.9-3).
I'm running Linux Mint 20 with Kernel 5.4.0-113-generic
|
Figured it out: I had to install fuse3
sudo apt-get install fuse3
| BorgBackup: fuse: failed to exec fusermount3: No such file or directory |
1,433,760,573,000 |
After unmounting a remote file system with fusermount -u ~/sshfs_mount/ and then calling systemctl suspend my Arch Linux 4.20.2 froze for about 20 seconds.
After those 20 seconds, the system became responsive again (it didn't suspend). Then I tried to suspend once more which succeeded this time.
Checking out journalctl, I found a lot of these messages:
Jan 21 10:10:45 me systemd-logind[510]: Power key pressed.
Jan 21 10:10:45 me kernel: PM: suspend exit
Jan 21 10:10:45 me kernel: PM: suspend entry (s2idle)
Jan 21 10:11:05 me kernel: PM: Syncing filesystems ... done.
Jan 21 10:11:05 me kernel: Freezing user space processes ...
Jan 21 10:11:05 me kernel: Freezing of tasks failed after 20.002 seconds (15 tasks refusing to freeze, wq_busy=0):
Jan 21 10:11:05 me kernel: pool D 0 10812 5584 0x00000084
Jan 21 10:11:05 me kernel: Call Trace:
Jan 21 10:11:05 me kernel: ? __schedule+0x29b/0x8b0
Jan 21 10:11:05 me kernel: ? __wake_up_common+0x77/0x140
Jan 21 10:11:05 me kernel: ? preempt_count_add+0x79/0xb0
Jan 21 10:11:05 me kernel: schedule+0x32/0x90
Jan 21 10:11:05 me kernel: request_wait_answer+0xaa/0x1f0 [fuse]
Jan 21 10:11:05 me kernel: ? wait_woken+0x80/0x80
Jan 21 10:11:05 me kernel: __fuse_request_send+0x61/0x80 [fuse]
Jan 21 10:11:05 me kernel: fuse_simple_request+0xcd/0x190 [fuse]
Jan 21 10:11:05 me kernel: fuse_statfs+0xde/0x140 [fuse]
Jan 21 10:11:05 me kernel: statfs_by_dentry+0x67/0x90
Jan 21 10:11:05 me kernel: vfs_statfs+0x16/0xc0
Jan 21 10:11:05 me kernel: user_statfs+0x54/0xa0
Jan 21 10:11:05 me kernel: __se_sys_statfs+0x25/0x60
Jan 21 10:11:05 me kernel: do_syscall_64+0x5b/0x170
Jan 21 10:11:05 me kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
Jan 21 10:11:05 me kernel: RIP: 0033:0x7fe2aa8571ab
Jan 21 10:11:05 me kernel: Code: Bad RIP value.
Jan 21 10:11:05 me kernel: RSP: 002b:00007fe221efecf8 EFLAGS: 00000246 ORIG_RAX: 0000000000000089
Jan 21 10:11:05 me kernel: RAX: ffffffffffffffda RBX: 00007fe27258e3a0 RCX: 00007fe2aa8571ab
Jan 21 10:11:05 me kernel: RDX: 00007fe2725869b0 RSI: 00007fe221efed20 RDI: 00007fe2689573a0
Jan 21 10:11:05 me kernel: RBP: 00007fe221efee80 R08: 00007fe29713ee58 R09: 00007fe29713ee60
Jan 21 10:11:05 me kernel: R10: 00007fe29714e078 R11: 0000000000000246 R12: 00007fe268957040
Jan 21 10:11:05 me kernel: R13: 00007ffc0f96f75f R14: 00007fe221eff700 R15: 000000000000001e
Jan 21 10:11:05 me kernel: pool D 0 10813 5584 0x00000084
There's also this:
Jan 21 10:11:05 me kernel: OOM killer enabled.
Jan 21 10:11:05 me kernel: Restarting tasks ... done.
Jan 21 10:11:05 me systemd-sleep[23193]: Failed to suspend system. System resumed again: Device or resource busy
Jan 21 10:11:05 me kernel: PM: suspend exit
Jan 21 10:11:05 me systemd[1]: systemd-suspend.service: Main process exited, code=exited, status=1/FAILURE
Jan 21 10:11:05 me systemd[1]: systemd-suspend.service: Failed with result 'exit-code'.
Jan 21 10:11:05 me systemd[1]: Failed to start Suspend.
Jan 21 10:11:05 me systemd[1]: Dependency failed for Suspend.
Jan 21 10:11:05 me systemd[1]: suspend.target: Job suspend.target/start failed with result 'dependency'.
Jan 21 10:11:05 me audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-suspend comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 21 10:11:05 me systemd[1]: Stopped target Sleep.
Jan 21 10:11:05 me kernel: audit: type=1130 audit(1548061865.860:643): pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=systemd-suspend comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 21 10:11:05 me systemd-logind[510]: Operation 'sleep' finished.
According to pacman -Qi systemd, I got version 240.34-3.
I don't know if there's a causal relationship between fusermount and the symptoms but I reckon there is, due to all the mentions of fuse in journalctl.
This issue is mentioned here, with the latest non-automated reply (in 2012) suggesting to unmount the remote filesystem before suspending; but that's what I did before the machine froze.
Here is another report of the issue, not containing a workaround or solution.
The answer to this question, while being accepted and upvoted, does not contain actionable advice for me on how to avoid the issue in the future.
|
My gut feeling on this is that there is some caching in sshfs which is still being flushed (many) seconds after you unmounted.
It would be legitimate for a kernel thread to refuse to sleep while attempting to flush a cache, especially where that requires a network connection.
I can't find documentation on whether or not sync will flush caches for fusermount file systems, but do try this first. Ie:
fusermount -u ~/sshfs_mount
sync
systemctl suspend
You could also try mounting the sshfs with -o cache=no as mentioned here:
https://superuser.com/questions/542444/ubuntu-sshfs-doesnt-sync
This might hurt performance with sshfs though.
| After fuse unmount: Freezing of tasks failed |
1,433,760,573,000 |
I downloaded WSL (Windows Subsystem for Linux) and tried to run an AppImage, but received an error message that said
AppImage needs FUSE to run
When I tried the --appimage-extract and --appimage-extract-and-run options, neither of them worked. It seems that FUSE is not supported in WSL.
How can I run an AppImage on WSL if it requires FUSE and FUSE is not supported in WSL?
|
You don't mention which Ubuntu version you are using, but I'm guessing Ubuntu 22.04 since that release doesn't include FUSE by default. See this answer on Ask Ubuntu. I tested with the KeePassXC AppImage on WSL on both Ubuntu 20.04 and 22.04. It works fine on 20.04, but I get the same error as you on 22.04. To quote the entire error for searchability:
dlopen(): error loading libfuse.so.2
AppImages require FUSE to run.
You might still be able to extract the contents of this AppImage
if you run it with the --appimage-extract option.
See https://github.com/AppImage/AppImageKit/wiki/FUSE
for more information
Again, this isn't a WSL issue -- You'd see the same thing on any installation of Ubuntu 22.04.
The solution is straightforward:
sudo apt install libfuse2
However, on WSL you may find that you need additional dependencies for graphical apps, since the WSL Ubuntu distribution is based on Ubuntu Server and doesn't include graphical libraries by default.
For instance, for KeePassXC, there are a number of graphical dependencies in the AppImage that just aren't available with Ubuntu Server.
I'm honestly not even sure what all of the dependencies are, since I tried to install them piecemeal without success. However, if you:
sudo apt install xterm
... then it will also come with all of the needed graphical libraries for (at least) KeePassXC (and probably others).
However, there are almost certainly AppImages that have other dependencies, such as a desktop environment (e.g. Gnome or KDE).
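Before running an AppImage you can check up front whether libfuse.so.2 is present by searching the linker cache. A small sketch — the string check is factored out so it can be exercised on canned output:

```shell
#!/bin/sh
# Return success if libfuse.so.2 appears in the given linker-cache
# listing (normally the output of `ldconfig -p`).
has_libfuse2() {
    printf '%s\n' "$1" | grep -q 'libfuse\.so\.2'
}

# Usage on a real system (not run here):
#   if ! has_libfuse2 "$(ldconfig -p)"; then
#       sudo apt install libfuse2
#   fi
```

Note that libfuse3.so.3 being present is not enough; current AppImage runtimes dlopen() the version 2 library specifically, which is exactly why the error above appears on Ubuntu 22.04.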
| Running AppImage on WSL: How to resolve error requiring FUSE? |
1,433,760,573,000 |
So I goofed when using sshfs and the folder I was using as a mountpoint for the server has been borked. The server wasn't unmounted correctly (I think due to a network drop out).
consequently, when I ls my /Volumes/ where I had originally made the mountpoint folder I now get an I/O error:
joehealey@Joes-MacBook-Pro:/Volumes$ ls -al
ls: mountpoint: Input/output error
total 24
drwxrwxrwt@ 7 root admin 238 21 Oct 13:08 ./
drwxr-xr-x 37 root wheel 1326 3 Oct 12:38 ../
-rw-r--r--@ 1 joehealey admin 6148 22 Sep 2014 .DS_Store
drwxr-xr-x 1 joehealey staff 8192 28 Jul 20:04 BOOTCAMP/
lrwxr-xr-x 1 root admin 1 15 Oct 08:52 Macintosh HD@ -> /
drwxrwxrwx 0 root wheel 0 21 Oct 13:08 MobileBackups/
joehealey@Joes-MacBook-Pro:/Volumes$ mkdir mountpoint
mkdir: mountpoint: File exists
joehealey@Joes-MacBook-Pro:/Volumes$
I've seen similar problems in thread such as this where the suggestions are to nuke the whole disk etc. Now, I'm not so concerned by this that I'm prepared to go that far, so I'm just wondering if there is any way to force-remove and resolve this specific instance?
|
Simply using:
umount /Volumes/mountpoint
Has solved it. No idea why fusermount -u wasn't an option on my install. Perhaps someone else will know(?).
For full reference:
Before
joehealey@Joes-MacBook-Pro:/Volumes$ mount
/dev/disk0s2 on / (hfs, local, journaled)
devfs on /dev (devfs, local, nobrowse)
map -hosts on /net (autofs, nosuid, automounted, nobrowse)
map auto_home on /home (autofs, automounted, nobrowse)
/dev/disk0s4 on /Volumes/BOOTCAMP (ntfs, local, read-only, noowners)
localhost:/nWFBTycSJIUVhjjjh8YMP4 on /Volumes/MobileBackups (mtmfs, nosuid, read-only, nobrowse)
wms_joe@DMI:/home/wms_joe/ on /Volumes/mountpoint (osxfusefs, nodev, nosuid, synchronous, mounted by joehealey)
The wms_joe@DMI: server on mountpoint is the offending article.
Unmounting
joehealey@Joes-MacBook-Pro:/Volumes$ umount /Volumes/mountpoint
After
joehealey@Joes-MacBook-Pro:/Volumes$ mount
/dev/disk0s2 on / (hfs, local, journaled)
devfs on /dev (devfs, local, nobrowse)
map -hosts on /net (autofs, nosuid, automounted, nobrowse)
map auto_home on /home (autofs, automounted, nobrowse)
/dev/disk0s4 on /Volumes/BOOTCAMP (ntfs, local, read-only, noowners)
localhost:/nWFBTycSJIUVhjjjh8YMP4 on /Volumes/MobileBackups (mtmfs, nosuid, read-only, nobrowse)
Now able to remake the previously denied folder
joehealey@Joes-MacBook-Pro:/Volumes$ mkdir mountpoint
joehealey@Joes-MacBook-Pro:/Volumes$ ls
BOOTCAMP Macintosh HD MobileBackups mountpoint
| Input/Output Error when rm/mkdir |
1,433,760,573,000 |
I'm currently trying to write a couple of systemd/udev configuration files that will allow me to automount/unmount MTP Android devices on my Arch Linux laptop. It took me some time, but so far it works pretty well.
Now, I would like for any user with fuse permissions to be able to unmount the device. So far, it's only possible for the same user as the one go-mtpfs was started as.
I'm well aware that MTP is designed so that you can just unplug the device without consequences, but having an error message pop up when clicking "Eject" in Nautilus is kind of unexpected and not really nice.
I tried the following, but failed :
Add myself to the fuse group, start go-mtpfs as root and try to unmount as myself
Start go-mtpfs as the fuse user and group, and try to unmount as myself, also in the fuse group
Any idea? Also, if you have an elegant way to achieve the same thing without having to rely on the fuse group, I'd love to hear about it!
systemd service (/etc/systemd/system/android-mtp.service) :
[Service]
Type=forking
ExecStartPre=/bin/mkdir -p /media/Android
ExecStart=/usr/sbin/daemonize -l /var/lock/go-mtpfs.lock /usr/bin/go-mtpfs -allow-other=true /media/Android
ExecStop=/bin/umount /media/Android
ExecStopPost=/bin/rmdir /media/Android
udev rule (/etc/udev/rules.d/99-android-mtp.rules) :
# Google Nexus 7 16 Gb Bootloader & recovery mode
SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", ATTR{idProduct}=="4e40", MODE="0666" # Bootloader
SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", ATTR{idProduct}=="d001", MODE="0666" # Recovery
# Google Nexus 7 16 Gb PTP mode (camera)
SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", ATTR{idProduct}=="4e43", MODE="0666" # PTP media
SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", ATTR{idProduct}=="4e44", MODE="0666" # PTP media with USB debug on
# Google Nexus 7 16 Gb MTP mode (multimedia device)
SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", ATTR{idProduct}=="4e41", MODE="0666" # MTP media
SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", ATTR{idProduct}=="4e42", MODE="0666" # MTP media with USB debug on
# Google Nexus 7 MTP mode : automatic unmount when unplugged (all android versions)
ENV{ID_MODEL}=="Nexus", ENV{ID_MODEL_ID}=="4e41", ACTION=="remove", RUN+="/usr/bin/systemctl stop android-mtp.service"
ENV{ID_MODEL}=="Nexus", ENV{ID_MODEL_ID}=="4e42", ACTION=="remove", RUN+="/usr/bin/systemctl stop android-mtp.service"
ENV{ID_MODEL}=="Nexus_7", ENV{ID_MODEL_ID}=="4e41", ACTION=="remove", RUN+="/usr/bin/systemctl stop android-mtp.service"
ENV{ID_MODEL}=="Nexus_7", ENV{ID_MODEL_ID}=="4e42", ACTION=="remove", RUN+="/usr/bin/systemctl stop android-mtp.service"
# Google Nexus 7 MTP mode : automatic mount when plugged (all android versions)
ENV{ID_MODEL}=="Nexus", ENV{ID_MODEL_ID}=="4e41", ACTION=="add", TAG+="systemd", ENV{SYSTEMD_WANTS}="android-mtp.service"
ENV{ID_MODEL}=="Nexus", ENV{ID_MODEL_ID}=="4e42", ACTION=="add", TAG+="systemd", ENV{SYSTEMD_WANTS}="android-mtp.service"
ENV{ID_MODEL}=="Nexus_7", ENV{ID_MODEL_ID}=="4e41", ACTION=="add", TAG+="systemd", ENV{SYSTEMD_WANTS}="android-mtp.service"
ENV{ID_MODEL}=="Nexus_7", ENV{ID_MODEL_ID}=="4e42", ACTION=="add", TAG+="systemd", ENV{SYSTEMD_WANTS}="android-mtp.service"
|
The fuse group is intended to indicate who can mount FUSE filesystems. The intent is not that anyone in that group can unmount filesystems mounted by others. Only the user doing the mounting, or root, can unmount the filesystem.
You can use sudo to authorize users in the fuse group to run an unmount command as the same user who did the mounting. Run visudo to add a line like:
%fuse ALL = (fuse) fusermount -u /media/Android
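Building on that sudoers line, the eject action can be wrapped in a helper that refuses early when the caller is not in the fuse group. A sketch — only the group-membership check is exercised here; the group name and mount path are the ones used above:

```shell
#!/bin/sh
# Return success if group $1 appears in the space-separated group
# list $2 (normally the output of `id -nG`).
in_group() {
    case " $2 " in
        *" $1 "*) return 0 ;;
        *)        return 1 ;;
    esac
}

# Usage on a real system (not run here), matching the sudoers rule:
#   if in_group fuse "$(id -nG)"; then
#       sudo -u fuse fusermount -u /media/Android
#   fi
```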
Why aren't you doing the mounting as yourself? That's the usual way to use FUSE.
| Allow any user in the fuse group to unmount |
1,433,760,573,000 |
I wonder if there is something like "user specific /etc/fstab" for fusermount? ~/.fstab, ~/.config/fstab, something the like, which would work in cooperation with FUSE.
I used
sshfs foo.bar: foo.bar/
from the home dir to connect to the remote dir (there is foo.bar directory, and I have .ssh/config set accordingly). But I didn't like the repeating of foo.bar, wanted to use simple command [cmd] foo.bar/ to mount the remote directory. After some googling I found that simple "mount foo.bar/" can be made to work with the following line in /etc/fstab (also needed to enable "user_allow_other" in /etc/fuse.conf)
[email protected]: /home/user/foo.bar fuse.sshfs user,IdentityFile=/home/user/.ssh/id_rsa,port=12345,allow_other 0 0
Now "mount foo.bar" works as intended (and "umount" works as well). But it seems kind of odd to edit system-wide file for user-specific purpose; also the settings already in .ssh/config are repeated there (port), the identity file has to be specified. Maintaining this for more sites (users) seems inconvenient and evidently not what /etc/fstab is for. Another oddity - FUSE is run by root (afaictl) when using this solution.
I would much prefer something like "fusermount foo.bar/", with user specific fstab.
Is there such a thing?
|
There's no per-user equivalent of /etc/fstab. You can write a shell script that reads a file of your choice and calls the appropriate mounting command. Note that from the argument foo.bar, you have to deduce multiple pieces of information: the server location foo.bar, the directory on the server (here your home directory), and first and foremost the fact that it's an SSHFS mount.
#!/bin/bash
####
if [ -e ~/.fstab ]; then
args=("$@")
((i=${#args[@]}-1))
target=${args[$i]}
while read filesystem mount_point command options comments; do
if [[ $filesystem = \#* ]]; then continue; fi
if [[ $mount_point = "$target" || $filesystem = "$target" ]]; then
if [[ -n $options ]]; then
args[$((i++))]=-o
args[$((i++))]=$options
fi
args[$((i++))]=$filesystem
args[$((i++))]=$mount_point
exec "$3" "${args[@]}"
fi
done
fi
## Fall back to mount, which looks in /etc/fstab
mount "$@"
(Warning: untested code.)
This snippet parses a file ~/.fstab with a syntax reminiscent of /etc/fstab: “device”, mount point, filesystem type, options. Note that here the filesystem type is a command to execute and the “device” is filesystem-dependent. Not all FUSE filesystem commands use this syntax with a “device” followed by a mount point, though it's a common convention.
SSH options like the identity file, the remote username, etc. can stay in ~/.ssh/config. The only reason to put them in /etc/fstab is to allow these options to be used by all users.
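For reference, a ~/.fstab that the script above could parse might look like this (hypothetical entries; the columns are “device”, mount point, mounting command, then options — sshfs's reconnect option is used here as an example):

```
# "device"           mount point          command  options
foo.bar:             /home/user/foo.bar   sshfs    reconnect
backup.example.com:  /home/user/backup    sshfs
```

Running the wrapper with either the mount point or the “device” as its last argument would then invoke, e.g., sshfs -o reconnect foo.bar: /home/user/foo.bar.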
| User specific fstab for fusermount |
1,433,760,573,000 |
I want to be able to mount, say /home/$USER/workspace to /usr/local/workspace.
Right now I'm using the python package pyfilesystem which uses fuse to do that. My problem is that inside that mount I am not able to create symlinks. I don't even need symlinks pointing outside the mount; even a symlink that would normally be created for a shared library during compilation will not be created inside the mount.
So I'm either looking for a totally different approach, or a tool (preferably written in Python) that does exactly what pyfilesystem is doing and supports the creation of symlinks.
Further constraints:
Using a simple symlink instead of a mount does not work for me as the mounted directory will actually be inside a chroot.
Mounting must not require root privileges.
Changing fstab is not an option.
Thus using mount bind is not an option.
|
After searching some more I stumbled upon proot which combines chroot with the ability to mount any directory into the new root. It supports any file operation inside its chroot, yes even symlinks, that will happily work even after proot unmounted the directory.
It doesn't need root privileges and made my complicated setup of schroot + pyfilesystem unnecessary.
| Creating a local workspace for development/testing |
1,433,760,573,000 |
If I have a script that relies on one of the following being present: overlayfs, aufs, unionfs - what is the best way to determine which is available from a bash script? I would like to use overlayfs, but fall back to aufs or unionfs if it is not available. I can look at the kernel version as a guess - but just because it's a >= 3.18 kernel doesn't mean that overlayfs was built in - is there a reasonably foolproof way to check?
|
Under Linux, you can see which filesystem types are available in the running kernel in the file /proc/filesystems. The content of this file is built in real time by the kernel when it's read, so it reflects the current status. The format of this file is a bit annoying: each line contains either 8 spaces followed by a filesystem type, or nodev followed by 3 spaces followed by a filesystem type. Strip off the first 8 characters of each line to get just the available filesystem types, or use grep -w (as long as you aren't looking for a filesystem type called nodev).
if grep -qw aufs /proc/filesystems; then
echo aufs is available
fi
This isn't the complete story, because the filesystem driver could be available in the form of a module that isn't currently loaded. Assuming you can load modules, if the filesystem isn't available, try loading it. Filesystem modules have an alias of the form fs-FSTYPE (the actual module name is often the name of the filesystem type, but not always).
if grep -qw aufs /proc/filesystems; then
echo aufs is available
elif modprobe fs-aufs; then
echo now aufs is available
fi
This is for kernel filesystems. For FUSE filesystems, check whether the fuse filesystem is available, and look for the executable that implements the filesystem. While you can look for fuse in /proc/filesystems, that only tells you whether it's available, not whether your program has enough privileges to use it. A more reliable test in practice is to check whether you can write to /dev/fuse.
if [ -w /dev/fuse ]; then
echo FUSE is available
if type unionfs-fuse >/dev/null 2>/dev/null; then
echo unionfs-fuse is available
fi
fi
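Putting the pieces together, the fallback order from the question (overlayfs, then aufs, then unionfs) can be expressed as one lookup function. A sketch — the file path argument exists only so the lookup can be tested on canned data; note that the in-kernel type for overlayfs is registered as overlay on kernels ≥ 3.18:

```shell
#!/bin/sh
# Print the first filesystem type from the argument list that is
# present in a /proc/filesystems-style file.
first_available_fs() {
    file=$1; shift
    for fs in "$@"; do
        if grep -qw "$fs" "$file"; then
            printf '%s\n' "$fs"
            return 0
        fi
    done
    return 1
}

# Usage on a real system (not run here); add the modprobe fallback
# from above for types built as modules:
#   union=$(first_available_fs /proc/filesystems overlay aufs unionfs)
```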
| How do you determine filesystem availability from a bash script? |
1,433,760,573,000 |
I'd previously used aufs2 in aufs-tools with some luck, but apparently this package has been "superseded" (this is strange term to use for a package which seems to have been removed only because it no longer compiles, but never mind).
Okay, so I thought I would try to use unionfs-fuse. I can't for the life of me figure out how to make it work for users though.
I'm using this command to make my unified mount:
unionfs-fuse /mnt/disk1-pool=RW:/mnt/disk3-pool=RW /mnt/union-pool
When I run this as root, I cannot access this share as joe user:
$ ls -al /mnt
ls: cannot access /mnt/union-pool: Permission denied
...
d?????????? ? ? ? ? ? union-pool
When I run it as joe user, I cannot access this share as root. I basically get the exact same output as above. This is a little weird to me, root being root.
Both root (obviously) and joe user can access the /mnt/disk1-pool and /mnt/disk3-pool mounts.
If anybody has any info about aufs-tools for natty I'd also be interested. I am quite fond of this package because it worked.
|
I suppose (but have not tried it) that the fuse option -o allow_other, also shown in the example in unionfs-fuse's man page, could help.
Edit
Try this
sudo mount -t aufs -o br:/mnt/disk1-pool=RW:/mnt/disk3-pool=RW \
none /mnt/union-pool
which seems to work even without the aufs-tools package.
| How can I create a unionfs-fuse mount that is readable by all? |
1,433,760,573,000 |
So I have a permission problem with my sshfs mount:
root@server01:/mnt# sshfs -o uid=$(id -u www-data) -o gid=$(id -g www-data) user@host:/path mountpoint
root@server01:/mnt# ls -Zlah
total 12K
drwxr-xr-x 3 root root ? 4.0K Nov 29 20:00 .
drwxr-xr-x 23 root 1001 ? 4.0K Nov 29 13:03 ..
drwxrwxrwx 1 www-data www-data ? 4.0K Nov 29 18:53 mountpoint
root@server01:/mnt# getfacl mountpoint/
# file: mountpoint/
# owner: www-data
# group: www-data
user::rwx
group::rwx
other::rwx
root@server01:/mnt# sudo -u www-data ls -lah
ls: cannot access mountpoint: Permission denied
total 8.0K
drwxr-xr-x 3 root root 4.0K Nov 29 20:00 .
drwxr-xr-x 23 root 1001 4.0K Nov 29 13:03 ..
d????????? ? ? ? ? ? mountpoint
Maybe the problem lies here:
root@server01:/mnt# mount
# unrelated stuff skipped
user@host:/path/ on /mnt/mountpoint type fuse.sshfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
Here it says the uid and gid of the mount are both 0, which is root. But in my mount command and when using ls as root, it tells me everything belongs to gid/uid 33, which is www-data.
|
sshfs = FUSE: you are mounting as root, then trying to access the mount as another user.
As a quick test, you can sshfs as a regular user, then switch to root and cd in — oh, permission denied. How can root be denied? It's root...
Run sshfs as the user that needs to access the mount.
update with example:
**test**@mike-laptop4:/mnt$ sshfs [email protected]:/home/mike moo
test@mike-laptop4:/mnt$ ls moo/
src
mike@mike-laptop4:/mnt$ ls moo
ls: cannot access 'moo': Permission denied
mike@mike-laptop4:/mnt$ sudo su
root@mike-laptop4:/mnt# ls moo
ls: cannot access 'moo': Permission denied
and vice versa:
**mike**@mike-laptop4:/mnt$ sshfs [email protected]:/home/mike moo
mike@mike-laptop4:/mnt$ ls moo
src
test@mike-laptop4:/mnt$ ls moo
ls: cannot access 'moo': Permission denied
mike@mike-laptop4:/mnt$ sudo su
root@mike-laptop4:/mnt# ls moo
ls: cannot access 'moo': Permission denied
UPDATE, Expand on solutions:
Solution 1: mount as the user required to access the data (security preference).
$ sshfs [email protected]:/home/mike moo
Using this option will allow only the mounting user to access the data.
The following two solutions additionally require (unless mounting as root, and root shouldn't be used for sshfs) this setting:
/etc/fuse.conf
user_allow_other
Solution 2: allow any user on the box access
$ sshfs -o allow_other [email protected]:/home/mike moo
Literally any user on the local host can create, edit, and delete files; this is a terrible idea in most circumstances, and I can't imagine it would ever be allowed in a PCI environment.
Not only do you risk all the data on the remote, but you risk a local user manipulating data that can be later used by another local user.
Solution 3: allow any user on the box, but honor local filesystem perms.
$ sshfs -o allow_other,default_permissions [email protected]:/home/mike moo
This option is much more acceptable than the last, because only users authorized by the local filesystem permissions will be allowed to access or edit files in the mount.
It would also be possible to set up group-based permissions.
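As a sketch of that group-based variant — assuming a hypothetical local group webdata, and that your sshfs build accepts the gid and umask options (the uid/gid mapping options are the same family already used in the question):

```shell
#!/bin/sh
# Extract the numeric GID from a group(5)-style line,
# e.g. the output of `getent group webdata`.
group_gid() {
    printf '%s\n' "$1" | cut -d: -f3
}

# Usage on a real system (not run here): give the mount to group
# "webdata" and let default_permissions enforce the mode bits:
#   gid=$(group_gid "$(getent group webdata)")
#   sshfs -o allow_other,default_permissions,gid="$gid",umask=027 \
#       user@host:/srv /mnt/srv
```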
| Owner of sshfs-mounted directory with 777 permission can't open it (no ACL, no SELinux) |
1,433,760,573,000 |
I am going to implement a filesystem in FUSE, and later in the kernel. I am not sure what to make of Direct IO. Different sources emphasize different things that this flag supposedly implies.
Is it safe for a filesystem to just ignore O_DIRECT?
Read and write operations would proceed like normal. Open would ignore it and not fail.
Data checksums would still be verified. Therefore a read operation may fail due to a checksum mismatch even if the hard disk returned OK.
Written data would be subject to copy on write and delayed allocation.
Write operations would return OK immediately. Writeback would happen after a delay, or never in case of a power outage. Durability is not guaranteed, but that is O_SYNC semantics anyway.
Few issues come to mind.
Caching file content in the buffer/page cache is, to my understanding, a responsibility of the VFS and not the filesystem. Does the VFS also interpret this flag?
According to one answer, the flag may fail on future kernels. A comment under the answer explains that Direct IO is contrary to data journaling mode.
|
Per the open(2) man page:
O_DIRECT (since Linux 2.4.10)
Try to minimize cache effects of the I/O to and from this
file. In general this will degrade performance, but it is
useful in special situations, such as when applications do
their own caching. File I/O is done directly to/from user-
space buffers. The O_DIRECT flag on its own makes an effort
to transfer data synchronously, but does not give the
guarantees of the O_SYNC flag that data and necessary metadata
are transferred. To guarantee synchronous I/O, O_SYNC must be
used in addition to O_DIRECT. See NOTES below for further
discussion.
From the NOTES section:
O_DIRECT support was added under Linux in kernel version 2.4.10.
Older Linux kernels simply ignore this flag. Some filesystems may
not implement the flag and open() will fail with EINVAL if it is
used.
So O_DIRECT used to be simply ignored. And from the LKML, just a couple of months ago:
Who cares how a filesystem implements O_DIRECT as long as it does
not corrupt data? ext3 fell back to buffered IO in many situations,
yet the only complaints about that were performance. IOWs, it's long been
true that if the user cares about O_DIRECT performance then they
have to be careful about their choice of filesystem.
But if it's only 5 lines of code per filesystem to support O_DIRECT
correctly via buffered IO, then exactly why should userspace have
to jump through hoops to explicitly handle open(O_DIRECT) failure?
Especially when you consider that all they can do is fall back to
buffered IO themselves....
I had written counterpoints for all of this, but I thought better of
it. Old versions of the kernel simply ignore O_DIRECT, so clearly
there's precedent.
Given that, it seems that you're safe to simply ignore it. The key phrase seems to be to "not corrupt data".
For now.
Note also that your linked question has answers that say O_DIRECT isn't useful for performance reasons. That is simply incorrect. Passing data through the page cache is slower than not passing it through the page cache. That can be significant on hardware capable of transferring gigabytes per second. And if you only handle each bit of data one time, the caching is literally useless yet it will needlessly impact the entire system.
It's been a few years since I wrote a Linux filesystem module. Unfortunately I don't recall how the VFS systems handle caching.
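From user space you can probe the fallback behaviour the LKML thread describes using GNU dd, which exposes O_DIRECT as oflag=direct: try direct I/O first and retry buffered if the filesystem rejects the flag. A sketch (which branch is taken depends on the filesystem under $TMPDIR — e.g. tmpfs rejects O_DIRECT — so only the copied data is deterministic):

```shell
#!/bin/sh
# Copy $1 to $2, preferring O_DIRECT; print which mode was used.
copy_direct_or_buffered() {
    if dd if="$1" of="$2" bs=4096 oflag=direct conv=fsync 2>/dev/null; then
        echo direct
    else
        # open(O_DIRECT) failed (e.g. EINVAL): fall back to buffered I/O
        dd if="$1" of="$2" bs=4096 conv=fsync 2>/dev/null && echo buffered
    fi
}
```

This is exactly the hoop-jumping the LKML post argues applications shouldn't need — the data arrives either way, only the caching behaviour differs.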
| Is it safe to ignore O_DIRECT? |
1,433,760,573,000 |
In some distributions /dev/fuse is owned by root:fuse while in other distributions /dev/fuse is owned by root:root. I'm using CentOS which belongs to the former set of distributions. And I'm wondering if it is secure for me to change the ownership on /dev/fuse to root:fuse.
|
Since FUSE is "Filesystem in Userspace", this could result in undesired mounts, or virtual filesystem structures on your system that you didn't foresee or want to be there at all.
Changing ownership of devices implies that other users may use directly these devices without needing administrative rights (root/sudo). Changing the group ownership to root:fuse will be a security issue if you cannot control who is a member of the group fuse.
But if the group fuse is a limited, controlled group of users whom you trust (and/or who are the only ones who actually need to use the device), then the security issue reduces to the security of those user accounts (how easily their identities can be stolen).
Generally speaking, the more you share something, the less secure it becomes... but also, the tighter the security, the less the usability.
So, in the end, it falls down to bring it to a desired balance.
| Changing ownership on /dev/fuse - security issues? |
1,433,760,573,000 |
While creating incremental backups is relatively simple (and can be automated, e.g. via rdiff-backup), accessing a specific state of a file requires first manually restoring the backup, which is neither simple nor quick if you need to browse through multiple states. So is there a FUSE filesystem which allows transparently accessing previous states, e.g. via some filename@2013-01-23 (the backup made at that date, if it exists) or filename@{-2} (two backups ago) syntax, while the current and backup files reside on arbitrary filesystems (including remote ones, e.g. nfs backups while the current state is on a local ext3)?
|
$ apt-cache search rdiff fuse
rdiff-backup-fs - Fuse filesystem for accessing rdiff-backup archives
(untested). http://code.google.com/p/rdiff-backup-fs/
| Is there a FUSE which permits transparently accessing incremental backups? |
1,433,760,573,000 |
I have been trying now for several days to get my new server up and running. I am running CentOS with MergerFS to pool my drives and samba to host to my windows machines. All of this running in Proxmox as well.
Over the weekend I got a couple of hard drives to start my server out with and am unable to get the shares to work correctly with samba. I have narrowed down the issue and it is being caused by labels. SELinux requires my mergerfs pool to have a label of samba_share_t but for some reason, mergerfs is not letting me change it from fusefs_t. All of my drives are ext4, I am seeing a lot of posts online that say this can be caused by using ntfs but that can't be my issue.
Things I have tried:
I have attempted to modify the fstab to include an option to set the context to samba_share_t, but when I do that I get an error saying that fuse (used by mergerfs) init does not support the "context" option.
I have tried manually changing the label of the pool with chcon, and I get an error that the operation is not supported.
I have tried adding the pool folder with semanage and then manually running restorecon, and it still doesn't make a change to that specific folder.
Windows being able to see the folder but not able to access it is such a tease, so close yet so far away. If possible, I would like to not have to disable SELinux.
|
I was able to resolve the issue with just a simple setting change
setsebool -P samba_share_fusefs=1
and then restarting the smb service.
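To verify the boolean stuck after setting it, getsebool prints a line like "samba_share_fusefs --> on"; a tiny sketch of checking that output (the parsing helper is the only part exercised here):

```shell
#!/bin/sh
# Return success if a `getsebool` output line reports the boolean on.
bool_is_on() {
    case "$1" in
        *"--> on") return 0 ;;
        *)         return 1 ;;
    esac
}

# Usage on a real SELinux system (not run here):
#   bool_is_on "$(getsebool samba_share_fusefs)" && systemctl restart smb
```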
| SELinux + MergerFS (fuse) not working well together |
1,433,760,573,000 |
I have some bridge host, which allows access to protected network. I connect to it using this command:
ssh sergius@bridge_host -D 3128
Thus, I can turn on a SOCKS proxy in the browser and it works. I can log in to hosts on that network with this command:
ssh -o 'ProxyCommand /bin/nc.openbsd -x localhost:3128 %h %p' sergius@any_internal_host
It works properly, but I can't mount any of these hosts via sshfs. Probably I just can't figure out how to pass these ssh options to the sshfs command. I even tried silly tricks like these:
sshfs -o "ssh_command=\"ssh -o 'ProxyCommand /bin/nc.openbsd -x localhost:3128 %h %p'\"" sergius@$host /home/sergius/work/SSHFS/$host/
sshfs -o 'SSHOPT=ProxyCommand /bin/nc.openbsd -x localhost:3128 %h %p' sergius@$host: /home/sergius/work/SSHFS/$host/
sshfs -o 'port=3128' sergius@$host: /home/sergius/work/SSHFS/$host/
One command returns "Connection reset by peer", another - unknown option `SSHOPT=ProxyCommand /bin/nc.openbsd -x localhost:3128 %h %p'
I didn't manage to find any info on the web. Please, help.
===
I feel so stupid, but I still can't understand why I got this error:
sshfs -d -o sshfs_debug -o LogLevel=DEBUG3 -o ProxyCommand="/bin/nc.openbsd --proxy localhost:3128 --proxy-type socks5 %h %p" sergius@$host:~ /home/sergius/work/SSHFS/$host/
SSHFS version 2.4
FUSE library version: 2.9.0
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
executing <ssh> <-x> <-a> <-oClearAllForwardings=yes> <-oLogLevel=DEBUG3> <-oProxyCommand=/bin/nc.openbsd --proxy localhost:3128 --proxy-type socks5 %h %p> <-2> <sergius@dev-host003> <-s> <sftp>
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 19: Applying options for *
debug2: ssh_connect: needpriv 0
debug1: Executing proxy command: exec /bin/nc.openbsd --proxy localhost:3128 --proxy-type socks5 dev-host003 22
debug1: permanently_drop_suid: 1000
debug1: identity file /home/sergius/.ssh/id_rsa type -1
debug1: identity file /home/sergius/.ssh/id_rsa-cert type -1
debug1: identity file /home/sergius/.ssh/id_dsa type -1
debug1: identity file /home/sergius/.ssh/id_dsa-cert type -1
debug1: identity file /home/sergius/.ssh/id_ecdsa type -1
debug1: identity file /home/sergius/.ssh/id_ecdsa-cert type -1
/bin/nc.openbsd: invalid option -- '-'
usage: nc [-46bCDdhjklnrStUuvZz] [-I length] [-i interval] [-O length]
[-P proxy_username] [-p source_port] [-q seconds] [-s source]
[-T toskeyword] [-V rtable] [-w timeout] [-X proxy_protocol]
[-x proxy_address[:port]] [destination] [port]
ssh_exchange_identification: Connection closed by remote host
read: Connection reset by peer
=================
Wow! I managed to make it work!!! Many thanks for the clarification about the options. I read through all the allowed options and made it work via "-x proxy_address[:port]":
sshfs -o ProxyCommand="/bin/nc.openbsd -x localhost:3128 %h %p" sergius@$host:/home/sergius /home/sergius/work/SSHFS/$host/
|
I was answering a similar question not long ago. I haven't tried it, but this one should work for you:
sshfs -o ProxyCommand="/bin/nc.openbsd -x localhost:3128 -X 5 %h %p" \
    sergius@$host: /home/sergius/work/SSHFS/$host/
The SSHOPT=VAL is just the format of the option; you need to replace it with the specific key-value pair.
You also need to tell nc what type of proxy it is.
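An alternative that avoids repeating the proxy settings on every sshfs invocation is to put the ProxyCommand into ~/.ssh/config, which both ssh and sshfs (since it just runs ssh) will pick up. A hypothetical example using the host names from the question — DynamicForward is the config equivalent of -D:

```
# ~/.ssh/config
Host bridge_host
    DynamicForward 3128

Host any_internal_host
    User sergius
    ProxyCommand /bin/nc.openbsd -x localhost:3128 %h %p
```

With that in place, sshfs any_internal_host:/home/sergius ~/work/SSHFS/any_internal_host/ needs no extra options at all.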
| HOWTO: sshfs via socks proxy |
1,433,760,573,000 |
The Fuse packages that are available by default on CentOS 7.3 are a bit dated. The compilation process for Fuse 3 and s3fs should be pretty straight forward. Fuse compiles and installs fine:
mkdir ~/src && cd src
# Most recent version: https://github.com/libfuse/libfuse/releases
wget https://github.com/libfuse/libfuse/releases/download/fuse-3.0.0/fuse-3.0.0.tar.gz
tar xvf fuse-3.0.0.tar.gz && cd fuse-3.0.0
./configure --prefix=/usr
make
make install
export PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/lib64
ldconfig
modprobe fuse
pkg-config --modversion fuse
No problems there... Things show up where they should, it seems:
$ ls /usr/lib:
libfuse3.a
libfuse3.la
libfuse3.so
libfuse3.so.3
libfuse3.so.3.0.0
pkgconfig
udev
$ ls /usr/local/lib/pkgconfig/:
fuse3.pc
$ which fusermount3:
/usr/bin/fusermount3
So I proceed to install s3fs:
cd ~/src
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure --prefix=/usr
And then every time, I hit this:
...
configure: error: Package requirements (fuse >= 2.8.4 libcurl >= 7.0 libxml-2.0 >= 2.6) were not met:
No package 'fuse' found
Consider adjusting the PKG_CONFIG_PATH environment variable if you
installed software in a non-standard prefix.
Alternatively, you may set the environment variables common_lib_checking_CFLAGS
and common_lib_checking_LIBS to avoid the need to call pkg-config.
See the pkg-config man page for more details.
Any idea why s3fs is not finding Fuse properly?
|
Version 1.8 of s3fs doesn't support fuse3; I learnt that the hard way.
I edited the s3fs configure script to replace fuse with fuse3 in the version check, and configure then completed. However, the s3fs compilation itself fails with errors about incompatible fuse function calls (I didn't save the exact compilation error).
I ended up installing fuse 2.9.x, and the s3fs installation then went fine.
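To catch this before a long build, you can check the fuse version pkg-config reports against what s3fs 1.8x actually needs (at least 2.8.4, and not fuse 3). A sketch — the version comparison is factored out so it can be tested without pkg-config installed (it relies on GNU sort -V):

```shell
#!/bin/sh
# Return success if $1 is a fuse version s3fs 1.8x can build against:
# at least 2.8.4, and not a fuse 3 (or newer major) release.
fuse_ok_for_s3fs() {
    case "$1" in
        3.*|[4-9].*) return 1 ;;
    esac
    [ "$(printf '%s\n' 2.8.4 "$1" | sort -V | head -n 1)" = 2.8.4 ]
}

# Usage on a real system (not run here):
#   fuse_ok_for_s3fs "$(pkg-config --modversion fuse)" \
#       || echo "install fuse 2.8.4+ (not fuse 3) before building s3fs"
```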
| s3fs refuses to compile on CentOS 7, why's it not finding Fuse? |
1,433,760,573,000 |
I'm running GlusterFS using 2 servers (ST0 & ST1) and 1 client (STC), and the volname is rep-volume.
I surfed the net and read all the articles explaining how to fix mounting issues, but unfortunately nothing helped me.
The first time I used the following command, it worked perfectly and I had write access:
$ mount.glusterfs ST0:/rep-volume /mnt/replica/
But after rebooting the client, I cannot mount it again, here is the result:
$ mount.glusterfs ST0:/rep-volume /mnt/replica/
Mount failed. Please check the log file for more details.
The log file is shown below:
$ cat /var/log/glusterfs/mnt-replica.log
[2016-09-25 04:54:12.438020] I [MSGID: 100030] [glusterfsd.c:2408:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.4 (args: /usr/sbin/glusterfs --volfile-server=ST0 --volfile-id=/rep-volume /mnt/replica)
[2016-09-25 04:54:12.444256] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2016-09-25 04:54:12.449300] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2
[2016-09-25 04:54:12.449704] I [MSGID: 114020] [client.c:2356:notify] 0-rep-volume-client-0: parent translators are ready, attempting connect on transport
[2016-09-25 04:54:12.451504] I [MSGID: 114020] [client.c:2356:notify] 0-rep-volume-client-1: parent translators are ready, attempting connect on transport
[2016-09-25 04:54:12.451861] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-rep-volume-client-0: changing port to 49152 (from 0)
Final graph:
+------------------------------------------------------------------------------+
1: volume rep-volume-client-0
2: type protocol/client
3: option ping-timeout 42
4: option remote-host ST0
5: option remote-subvolume /replica1
6: option transport-type socket
7: option transport.address-family inet
8: option send-gids true
9: end-volume
10:
11: volume rep-volume-client-1
12: type protocol/client
13: option ping-timeout 42
14: option remote-host ST1
15: option remote-subvolume /replica2
16: option transport-type socket
17: option transport.address-family inet
18: option send-gids true
19: end-volume
20:
21: volume rep-volume-replicate-0
22: type cluster/replicate
23: subvolumes rep-volume-client-0 rep-volume-client-1
24: end-volume
25:
26: volume rep-volume-dht
27: type cluster/distribute
28: option lock-migration off
29: subvolumes rep-volume-replicate-0
30: end-volume
31:
32: volume rep-volume-write-behind
33: type performance/write-behind
34: subvolumes rep-volume-dht
35: end-volume
36:
37: volume rep-volume-read-ahead
38: type performance/read-ahead
39: subvolumes rep-volume-write-behind
40: end-volume
41:
42: volume rep-volume-readdir-ahead
43: type performance/readdir-ahead
44: subvolumes rep-volume-read-ahead
45: end-volume
46:
47: volume rep-volume-io-cache
48: type performance/io-cache
49: subvolumes rep-volume-readdir-ahead
50: end-volume
51:
52: volume rep-volume-quick-read
53: type performance/quick-read
54: subvolumes rep-volume-io-cache
55: end-volume
56:
57: volume rep-volume-open-behind
58: type performance/open-behind
59: subvolumes rep-volume-quick-read
60: end-volume
61:
62: volume rep-volume-md-cache
63: type performance/md-cache
64: subvolumes rep-volume-open-behind
65: end-volume
66:
67: volume rep-volume
68: type debug/io-stats
69: option log-level INFO
70: option latency-measurement off
71: option count-fop-hits off
72: subvolumes rep-volume-md-cache
73: end-volume
74:
75: volume meta-autoload
76: type meta
77: subvolumes rep-volume
78: end-volume
79:
+------------------------------------------------------------------------------+
[2016-09-25 04:54:12.453806] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-rep-volume-client-1: changing port to 49152 (from 0)
[2016-09-25 04:54:12.455009] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-rep-volume-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-09-25 04:54:12.455225] W [MSGID: 114043] [client-handshake.c:1111:client_setvolume_cbk] 0-rep-volume-client-0: failed to set the volume [Permission denied]
[2016-09-25 04:54:12.455239] W [MSGID: 114007] [client-handshake.c:1140:client_setvolume_cbk] 0-rep-volume-client-0: failed to get 'process-uuid' from reply dict [Invalid argument]
[2016-09-25 04:54:12.455243] E [MSGID: 114044] [client-handshake.c:1146:client_setvolume_cbk] 0-rep-volume-client-0: SETVOLUME on remote-host failed [Permission denied]
[2016-09-25 04:54:12.455256] I [MSGID: 114049] [client-handshake.c:1249:client_setvolume_cbk] 0-rep-volume-client-0: sending AUTH_FAILED event
[2016-09-25 04:54:12.455270] E [fuse-bridge.c:5318:notify] 0-fuse: Server authenication failed. Shutting down.
[2016-09-25 04:54:12.455278] I [fuse-bridge.c:5793:fini] 0-fuse: Unmounting '/mnt/replica'.
[2016-09-25 04:54:12.456149] W [glusterfsd.c:1286:cleanup_and_exit] (-->/lib64/libpthread.so.0(+0x7dc5) [0x7f039192adc5] -->/usr/sbin/glusterfs(glusterfs_sigwaiter+0xe5) [0x7f0392fbec45] -->/usr/sbin/glusterfs(cleanup_and_exit+0x6b) [0x7f0392fbeabb] ) 0-: received signum (15), shutting down
Here is gluster volume info on Server1 (ST0) :
ST0: ~ root # gluster volume info
Volume Name: rep-volume
Type: Replicate
Volume ID: 566324fc-668b-48cb-a3ee-0f9830cb03e0
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: ST0:/replica1
Brick2: ST1:/replica2
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
auth.allow: STC
I'll be highly grateful if anyone can help me. Thanks.
UPDATE:
The answer @FarazX provided was really helpful and solved my problem, but I'm still interested in why mounting worked on the servers while the same command failed on the client. I read many things on bugzilla.redhat.com but the reason is still a bit vague to me.
|
I had the same problem, but mounting the clients on the servers themselves worked perfectly.
In your case it would be done by running the following command on ST0 and ST1 respectively:
ST0: ~ root # mkdir /mnt/replica
ST0: ~ root # mount.glusterfs ST0:/rep-volume /mnt/replica/
ST0: ~ root # echo 'ST0:/rep-volume /mnt/replica glusterfs _netdev,fetch-attempts=10 0 0' >> /etc/fstab
&
ST1: ~ root # mkdir /mnt/replica
ST1: ~ root # mount.glusterfs ST1:/rep-volume /mnt/replica/
ST1: ~ root # echo 'ST1:/rep-volume /mnt/replica glusterfs _netdev,fetch-attempts=10 0 0' >> /etc/fstab
N.B. Check your firewall configuration and rules.
I hope this can solve your problem.
| GlusterFS replicated volume - mounting issue |
1,553,706,533,000 |
I need help for a bash script that counts files and folders in a specified directory on a Linux system (Debian), but I want to exclude a specified folder.
I have a main directory named workdir with different script files and folders. Inside workdir, I have a directory named mysshfs. I use fuse/sshfs to mount an external folder in the mysshfs folder.
Now I run some commands to get information about file/directory counts and file/directory sizes, but I want to exclude the directory mysshfs.
My bash commands that work:
get the full size of workdir | no fuse/sshfs in use
$ du -hs workdir
get the full size of workdir, excluding mysshfs | fuse/sshfs in use
$ du -hs --exclude=mysshfs workdir
count files in workdir | no fuse/sshfs in use
$ find workdir -type f | wc -l
count folders in workdir | no fuse/sshfs in use
$ find workdir -type d | wc -l
count files in workdir, excluding mysshfs | no fuse/sshfs in use
$ find workdir -type f -not -path "*mysshfs*" | wc -l
count folders in workdir, excluding mysshfs | no fuse/sshfs in use
$ find workdir -type d -not -path "*mysshfs*" | wc -l
When I use commands 5 & 6 and the remote directory is mounted under the mysshfs directory, the commands hang.
The commands eventually work and show the correct output, but it looks like they are still descending into the excluded directory even though they shouldn't be, so it takes a long time to display the result.
Where is my error or did I forget something in my commands 5 & 6? Or can I use other commands for my results?
I need to count files and directories using 2 separate commands
and exclude a specified folder that is mounted over fuse/sshfs to get a fast result.
|
You can use -prune to avoid descending into subdirectories. Match the mysshfs directory itself (not its contents), so that find never enters it at all:
find workdir -path "*/mysshfs" -prune -o \( -type f -print \) | wc -l
find workdir -path "*/mysshfs" -prune -o \( -type d -print \) | wc -l
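A quick way to convince yourself that -prune keeps find out of a subtree entirely (which is what makes it fast over a stalled sshfs mount) is a scratch-tree demo; every path below is a throwaway example:

```shell
# build a disposable tree containing a subtree we want to skip
tmp=$(mktemp -d)
mkdir -p "$tmp/workdir/mysshfs/deep" "$tmp/workdir/sub"
touch "$tmp/workdir/a.txt" "$tmp/workdir/sub/b.txt" "$tmp/workdir/mysshfs/c.txt"

# matching the directory itself and pruning it means find never descends
# into it, so c.txt and deep/ are never even stat'ed
files=$(find "$tmp/workdir" -path "*/mysshfs" -prune -o -type f -print | wc -l)
dirs=$(find "$tmp/workdir" -path "*/mysshfs" -prune -o -type d -print | wc -l)
echo "files=$files dirs=$dirs"   # files=2 dirs=2
rm -rf "$tmp"
```

The same pattern drops straight into the commands above once workdir is your real tree.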
| bash count files and directory, summary size and EXCLUDE folders that are fuse|sshfs |
1,553,706,533,000 |
Tried to install curlftpfs in debian 12 says that the package is missing.
While I understand that the package is no longer actively developed, I often use curlftpfs inside virtual machines to transfer files between various filesystems, e.g. Windows/Linux VMs that pass files through a FileZilla server, and so on.
So it would still be useful to me, even used in a non-secure, non-FTPS fashion of the protocol.
Is there a way to install that package, or a replacement like it, without breaking my Debian 12 installation?
Thanks
|
curlftpfs ships a single, simple binary package; you can safely install the Debian 11 version in Debian 12. Assuming you’re using amd64:
wget http://deb.debian.org/debian/pool/main/c/curlftpfs/curlftpfs_0.9.2-9+b1_amd64.deb
sudo apt install ./curlftpfs_0.9.2-9+b1_amd64.deb
| Is curlftpfs missing in debian 12? |
1,553,706,533,000 |
I can normaly mount/umount FTP as file system using following commands:
└──> curlftpfs -o codepage=windows-1250 anonymous:[email protected] /home/marek/ftpfs
└──> ls /home/marek/ftpfs/
1 2 3
└──> fusermount -u /home/marek/ftpfs
└──> ls /home/marek/ftpfs/
└──>
But when I issue curlftpfs with strace then nothing is mounted and the process exits with status 1:
└──> strace -f curlftpfs -o codepage=windows-1250 anonymous:[email protected] /home/marek/ftpfs
└──> echo $?
1
└──> ls /home/marek/ftpfs/
└──>
Last lines from strace (full output is here):
[pid 9619] mprotect(0x7f08780b2000, 4096, PROT_READ) = 0
[pid 9619] mprotect(0x7f08782bd000, 4096, PROT_READ) = 0
[pid 9619] munmap(0x7f0878e8d000, 135950) = 0
[pid 9619] open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 6
[pid 9619] lseek(6, 0, SEEK_CUR) = 0
[pid 9619] fstat(6, {st_mode=S_IFREG|0644, st_size=2290, ...}) = 0
[pid 9619] mmap(NULL, 2290, PROT_READ, MAP_SHARED, 6, 0) = 0x7f0878eae000
[pid 9619] lseek(6, 2290, SEEK_SET) = 2290
[pid 9619] munmap(0x7f0878eae000, 2290) = 0
[pid 9619] close(6) = 0
[pid 9619] getgid() = 1000
[pid 9619] getuid() = 1000
[pid 9619] openat(AT_FDCWD, ".", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 6
[pid 9619] getdents(6, /* 2 entries */, 32768) = 48
[pid 9619] getdents(6, /* 0 entries */, 32768) = 0
[pid 9619] close(6) = 0
[pid 9619] mount("curlftpfs#ftp://anonymous:[email protected]/", ".", "fuse", MS_NOSUID|MS_NODEV, "fd=3,rootmode=40000,user_id=1000"...) = -1 EPERM (Operation not permitted)
[pid 9619] write(2, "fusermount: mount failed: Operat"..., 50fusermount: mount failed: Operation not permitted
) = 50
[pid 9619] close(3) = 0
[pid 9619] exit_group(1) = ?
[pid 9618] <... recvmsg resumed> {msg_name(0)=NULL, msg_iov(1)=[{"", 1}], msg_controllen=0, msg_flags=0}, 0) = 0
[pid 9618] close(6) = 0
[pid 9618] wait4(9619, <unfinished ...>
[pid 9619] +++ exited with 1 +++
<... wait4 resumed> NULL, 0, NULL) = 9619
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=9619, si_uid=1000, si_status=1, si_utime=0, si_stime=0} ---
sendto(4, "QUIT\r\n", 6, MSG_NOSIGNAL, NULL, 0) = 6
poll([{fd=4, events=POLLIN|POLLPRI|POLLRDNORM|POLLRDBAND}], 1, 1000) = 1 ([{fd=4, revents=POLLIN|POLLRDNORM}])
recvfrom(4, "221 Bye\r\n", 16384, 0, NULL, NULL) = 9
close(4) = 0
close(3) = 0
exit_group(1) = ?
+++ exited with 1 +++
|
I am not familiar with this executable, but my guess is that it relies on privilege (probably a setuid-root helper such as fusermount, which performs the actual mount(2) call). A process run under strace does not gain setuid privileges, hence the EPERM. strace -f cannot run such a process with privilege unless strace itself is run as root, and then you may need the -u option.
| exit status of command is different when it is run via strace |
1,553,706,533,000 |
The following line:
/path1 /path2 posixovl none 0 0
fails with the error:
/sbin/mount.posixovl: invalid option -- 'o'
Usage: /sbin/mount.posixovl [-F] [-S source] mountpoint [-- fuseoptions]
This is because mount.posixovl uses a non-standard mount helper syntax, while mount (as invoked via fstab) will call it assuming the standard syntax, e.g.
mount.posixovl /path1 /path2 -o [whatsoever_/etc/fstab_options]
EDIT #1:
Same problem, solved with an uglier hack in this linuxquestions.org Q&A titled: [SOLVED] How to get a fuse-posixovl partition mounted at bootup?
|
I wrote a wrapper for mount.posixovl that enables it to be used with fstab
First, rename /sbin/mount.posixovl to something else, like /sbin/mount.posixovl.orig
Then, create a new file /sbin/mount.posixovl with the following contents:
#!/bin/bash
# wrapper for mount.posixovl to conform with common mount syntax
# with this wrapper posixovl can be used in fstab
# location of the original mount.posixovl
origposixovl="/sbin/mount.posixovl.orig"
# default option values (also avoids inheriting stray values from the environment)
optsF=""
optsfuse=""
# gather inputs
while [ $# -gt 0 ]; do
if [[ "$1" == -* ]]; then
# var is an input switch
# we can only use the -o or -F switches
# (do not reset the variables in an else branch here, or a later
# switch such as -o would wipe out an earlier -F)
if [[ "$1" == *F* ]]; then
optsF="-F"
fi
if [[ "$1" == *o* ]]; then
shift
optsfuse="-- -o $1"
fi
shift
else
# var is a main argument
sourcedir="$1"
shift
if [[ "$1" != -* ]]; then
targetdir="$1"
shift
else
targetdir="$sourcedir"
fi
fi
done
# verify inputs
if [ "$sourcedir" == "" ]; then
echo "no source specified"
exit 1
fi
if [ "$targetdir" == "" ]; then
echo "no target specified"
exit 1
fi
# build mount.posixovl command
"$origposixovl" $optsF -S "$sourcedir" "$targetdir" $optsfuse
Naturally, set the newly created /sbin/mount.posixovl to be executable (chmod +x /sbin/mount.posixovl)
This makes it possible to mount posixovl through fstab.
| Mount posixovl using fstab |
1,553,706,533,000 |
Is it possible to do an overlay mount when one of the paths has a colon in it? All of the FUSE overlay mounting solutions I've looked at use a colon to separate the paths in the overlay, and I can't find a way to escape it.
|
Directory Structure
Let's say we're trying to overlay foo:bar, and bar:baz. The mount point will be union
foo
└── a
bar
└── b
foo:bar
└── c
bar:baz
└── d
union
mergerfs
No matter what escaping you try to do, you can see from the source that it won't work. Annoyingly if you try to guess a way to escape it:
$ mergerfs 'foo\:bar':'bar\:baz' union
it won't throw an error, but will silently ignore directories that don't exist:
$ ls union
b
unionfs-fuse
Same problem as mergerfs, no way to escape a colon. At least it'll fail with an error though if a directory doesn't exist:
$ unionfs-fuse 'foo\:bar':'bar\:baz' union
Failed to open /foo\/: No such file or directory. Aborting!
overlayfs
overlayfs does allow escaping colons in paths, but it's not a FUSE filesystem.
$ mount -t overlay overlay -o lowerdir='foo\:bar':'bar\:baz' union
$ ls union
c d
Workaround
A simple workaround that works with both mergerfs and unionfs-fuse is to use a symlink:
$ ln -s foo:bar foo_bar
$ ln -s bar:baz bar_baz
$ unionfs-fuse foo_bar:bar_baz union
$ ls union
c d
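The symlink indirection is easy to verify even without any union tool installed; this throwaway demo (scratch names of my own) shows the colon-free alias resolving into the colon directory:

```shell
tmp=$(mktemp -d) && cd "$tmp"
mkdir 'foo:bar'
touch 'foo:bar/c'
ln -s 'foo:bar' foo_bar   # colon-free name that a FUSE tool can accept
ls foo_bar                # resolves through the symlink and lists: c
```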
| FUSE overlay mount with colon in path |
1,553,706,533,000 |
I'm experimenting with different union/overlay filesystem types. I've found the unionfs-fuse package in Ubuntu, which allowed me to use the unionfs mount command as a non-root user. But it seems aufs, which was created to provide similar functionality to unionfs, cannot be used as a non-root user. I need to give the sudo password for an aufs mount.
Can I use aufs without giving root password?
|
In researching this the answer appears to be: no.
In looking at the man page for aufs I don't see any options that would allow it to mount as anything but the root user.
In looking at the filesystems that libfuse supports I don't see aufs listed there either.
Lastly, it's not listed among the userspace filesystems either: see Filesystem in Userspace on Wikipedia.
| Can aufs be used as fuse filesystem like unionfs-fuse? |
1,553,706,533,000 |
When an encrypted directory is mounted using EncFS as a regular user, you cannot execute a script in it with sudo (as root):
$ sudo /run/media/yeti/usbdrive/encfs/test.sh
sudo: /run/media/yeti/usbdrive/encfs/test.sh: command not found
This is a security feature, but how can I still grant root permissions to this mounted directory (without mounting as root)?
More details
I am using Arch Linux, and I have an encrypted directory using EncFS:
sudo pacman -S encfs
usbpath="/run/media/yeti/usbdrive"
encfs "$usbpath/.encfs" "$usbpath/encfs"
echo 'echo hello world' > "$usbpath/encfs/test.sh"
sudo chmod +x "$usbpath/encfs/test.sh"
Then this command works just like expected:
$ /run/media/yeti/usbdrive/encfs/test.sh
hello world
But when I use sudo, I get an error:
$ sudo /run/media/yeti/usbdrive/encfs/test.sh
sudo: /run/media/yeti/usbdrive/encfs/test.sh: command not found
Then I realized that this is a security feature of EncFS, which is actually quite good. When I do a directory listing as root (after su), I find the following:
$ ls /run/media/yeti/usbdrive/encfs/
ls: cannot access '/run/media/yeti/usbdrive/encfs': Permission denied
[...]
d?????????? ? ? ? ? ? encfs
drwxrwxrwx 1 yeti yeti 0 Sep 30 00:31 .encfs
[...]
But in my case, I am on a system where I am in fact root, and where sudo could be passwordless. Therefore, this security feature is only getting in the way. However, I do not want to mount the encrypted directory as root either (because then I'd need to run my filemanager and other applications as root too).
What I did as a workaround to this problem is to copy the file outside of the encrypted directory (cp "$usbpath/encfs/test.sh" /tmp/test.sh), and then execute it as root (sudo /tmp/test.sh).
Next to documenting this question for other people who may experience the same issue, the question I still have left is: Is there a better way to do this?
|
Encfs uses fuse under the hood. It includes an -o option to pass options to fuse. Adding -o allow_root will allow root to access the filesystem in addition to the mounting user (also note the similar but mutually exclusive allow_other flag). To use this option, you will need to enable it in the fuse config: in /etc/fuse.conf, add the user_allow_other directive.
See mount.fuse(8) and encfs(1)
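For reference, this is roughly what the two pieces look like together; the mount command mirrors the one from the question, and editing /etc/fuse.conf requires root:

```
# /etc/fuse.conf -- uncomment or add this line (as root)
user_allow_other
```

Then remount with encfs -o allow_root "$usbpath/.encfs" "$usbpath/encfs", and the sudo invocation from the question should work.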
| Sudo says "command not found" for script in EncFS (no root access to EncFS mount?) |
1,553,706,533,000 |
I'm trying to load the fuse kernel module but for some reason it seems like it's not getting loaded. But I also don't get any error message. Can someone explain to me what's going on?
root@my-host:~# modprobe fuse
root@my-host:~# echo $?
0
root@my-host:~# lsmod | grep fuse
root@my-host:~# modinfo fuse
modinfo: ERROR: Module fuse not found.
root@my-host:~# ls /lib/modules/$(uname -r)/kernel/fs/fuse/
cuse.ko
root@my-host:~#
I'm on a cloud VM:
root@my-host:~# uname -r
4.15.0-213-generic
root@my-host:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic
I also rebooted the host before I ran these commands to make sure the correct kernel is running.
EDIT: In response to the comments I ran these commands:
root@my-host:~# grep fuse /lib/modules/$(uname -r)/modules.builtin
kernel/fs/fuse/fuse.ko
root@my-host:~# systemd-detect-virt
kvm
|
Generally, if loading a module succeeds but that module doesn’t appear in lsmod’s output, it’s because the module is built-in — i.e. it’s part of the main kernel image and is always available¹.
To check whether that’s the case, look in /lib/modules/$(uname -r)/modules.builtin:
grep fuse /lib/modules/$(uname -r)/modules.builtin
If this shows a kernel module path matching the module you expect, it means the corresponding “module” is built-in.
¹ Many built-in modules can still be disabled if necessary, see disable kernel module which is compiled in kernel (not loaded).
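Another cross-check that works whether the driver is a module or built-in: ask the kernel which filesystem types it currently knows (a built-in fuse shows up here even though lsmod stays empty). This sketch prints a status either way:

```shell
# /proc/filesystems lists every filesystem type the running kernel supports
if grep -qw fuse /proc/filesystems; then
    status="registered"
else
    status="not registered"
fi
echo "fuse filesystem type: $status"
```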
| modprobe fuse doesn't seem to load module |
1,553,706,533,000 |
I want to mount files that are hosted on an HTTP server, such as a video file or an ISO file, like NFS shares but over HTTP. For example, there is a Linux ISO on the server and I want to see it among my files and copy it to other disks, etc. How can I do this? Is it possible to use a FUSE filesystem for this?
|
You are searching for something like https://github.com/fangfufu/httpdirfs
While thinking about it, I realize that WebDAV is just an extension of http. As long as you stay on basic operations you might end up just using http functionality. With this prerequisite, mounting your server via WebDAV might be an option, too. Check out https://savannah.nongnu.org/projects/davfs2
Warning: I quickly used Google and stumbled over it. I do not use such a solution and have no experience with it.
| How to mount http files? |
1,553,706,533,000 |
I often use sshfs to mount a remote directory tree (say myhost:~/workspace/) to a local one (say ~/workspace-mount/), and open remote files in a local editor.
It's not that uncommon that I get disconnected and that the remote directory tree is unmounted without my realizing it. If I unwittingly save the open files in my local editor, my editor will silently save the files to my local disk, recreating the remote directory structure as needed. This then becomes a recipe for confusion since I unwittingly now have forked copies of files.
If I lose the mount, I'd much prefer that saves fail with, say, a permission error.
I've tried removing write permission to ~/workspace-mount/, but fusermount refuses to mount over it without write access.
The best alternative I can think of is to locally recreate the immediate child directories of myhost:~/workspace/, and then remove write permission to those, but that's hard to maintain, and it wouldn't prevent accidentally forking files that reside directly in myhost:~/workspace/.
Is there any way that I can prevent accidentally writing to my local mount point when it's unmounted?
|
Use the reconnect flag. That will keep the filesystem mounted. If you are disconnected, processes with pending operations on the filesystem will hang and eventually fail with a generic I/O error, unless the connection is re-established.
Depending on how you set it up, after you are disconnected you might actually be reconnected (if you use ssh keys) or you might have a broken mount point (if you use passwords). If you get I/O errors right away (meaning the reconnection is not successful) you might have to issue a fusermount -u ~/workspace-mount/ before trying to mount again.
In any case, you and your programs will know about the disconnection.
sshfs -o reconnect myhost:~/workspace ~/workspace-mount/
In order to test it, you can crudely simulate the disconnection by killing the sftp-server at the server side.
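If you prefer fstab over an ad-hoc command, the same behaviour can be encoded there. The line below is a sketch, not from the original answer — the paths are examples, and the ServerAlive* options are my addition to make sshfs notice a dead link within seconds instead of waiting on TCP timeouts:

```
myhost:/home/me/workspace  /home/me/workspace-mount  fuse.sshfs  noauto,user,reconnect,ServerAliveInterval=15,ServerAliveCountMax=3  0  0
```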
| Is there any way to prevent writing to an unmounted mount point? |
1,553,706,533,000 |
Dir created inside a loop fs denies access, but has correct permissions.
init.sh - creates an fs image and mounts it (user and group ids are 1000):
#!/bin/bash
mkdir -p out-dir
dd if=/dev/zero of=out-dir.img bs=1024 count=125
/sbin/mkfs.ext4 out-dir.img
guestmount -o uid=$(id -u) -o gid=$(id -g) -a out-dir.img -m/dev/sda out-dir
create.sh - creates a dir and does cd:
#!/bin/bash
mkdir -m 700 out-dir/test
cd out-dir/test
The cd gives:
./create.sh: line 4: cd: out-dir/test: Permission denied
Then, ls -lan out-dir:
drwxr-xr-x 4 1000 1000 1024 Mar 21 15:27 .
drwxrwxr-x 3 1000 1000 4096 Mar 21 15:27 ..
drwx------ 2 1000 1000 12288 Mar 21 15:27 lost+found
drwx------ 2 1000 1000 1024 Mar 21 15:27 test
How to establish the correct mapping?
|
This is the option: -o default_permissions.
guestmount --fuse-help:
...
-o default_permissions enable permission checking by kernel
| guestunmount: can't cd into a dir, but the permissions are ok |
1,553,706,533,000 |
Hello, I have installed libfuse and sshfs on my Ubuntu machine, and the kernel version is 4.4.0-38.
Now I run sshfs user@localhost:/dir /mnt, but it always shows the error message:
read: Connection reset by peer
Why does this always happen? Is there any way to mount a disk with FUSE?
|
Running plain ssh in debug mode (ssh -vvv user@localhost) will show you what is wrong. In this case, you need to install the openssh-server package so that there is something to connect to.
| Can sshfs mount the local disks? |
1,553,706,533,000 |
I'm trying to rsync between two dirs using:
rsync -atO --ignore-existing /src 1.1.1.1:/target/
The target dir is mounted via cloudfuse and the source dir is a regular one.
I get an error:
rsync: failed to set times on "/target/somefile": Function not implemented (38)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1070) [sender=3.0.9]
|
Apparently cloudfuse doesn't support setting modification times on existing files (the "not implemented" error).
Hence you need to tell rsync not to try it:
rsync -a --no-times --ignore-existing /src 1.1.1.1:/target/
The -t you supplied was implied by -a and needs to be turned off, hence --no-times. Also, -O (omit directory times) becomes redundant once times aren't being transferred at all.
| rsync with cloudfuse |
1,553,706,533,000 |
After updating the server with apt-get update && apt-get upgrade, this command returns an error:
command
echo "the-password" | sshfs [email protected]:/var/www /remote_mount -o password_stdin
OS
Debian 3.2.60-1+deb7u3 x86_64 (wheezy)
error
fuse: device not found, try 'modprobe fuse' first
modprobe fuse
root@dyntest-amd-3700-2gb ~ # modprobe fuse
modprobe: ERROR: could not insert 'fuse': Unknown symbol in module, or unknown parameter (see dmesg)
root@dyntest-amd-3700-2gb ~ # dmesg | grep fuse
[ 20.126156] fuse: Unknown symbol nosteal_pipe_buf_ops (err 0)
[1607702.343086] fuse: Unknown symbol nosteal_pipe_buf_ops (err 0)
[1607745.824310] fuse: Unknown symbol nosteal_pipe_buf_ops (err 0)
[1607908.188559] fuse: Unknown symbol nosteal_pipe_buf_ops (err 0)
[1608724.690945] fuse: Unknown symbol nosteal_pipe_buf_ops (err 0)
[1608741.684927] fuse: Unknown symbol nosteal_pipe_buf_ops (err 0)
[2565283.964259] fuse: Unknown symbol nosteal_pipe_buf_ops (err 0)
Kernel version
root@dyntest-amd-3700-2gb ~ # cat /proc/version
Linux version 3.2.0-4-amd64 ([email protected]) (gcc version 4.6.3 (Debian 4.6.3-14) ) #1 SMP Debian 3.2.54-2
root@dyntest-amd-3700-2gb ~ # locate -i -r /fuse
/bin/fuser
/bin/fusermount
/etc/fuse.conf
/lib/modules/3.2.0-4-amd64/kernel/fs/fuse
/lib/modules/3.2.0-4-amd64/kernel/fs/fuse/cuse.ko
/lib/modules/3.2.0-4-amd64/kernel/fs/fuse/fuse.ko
/lib/modules-load.d/fuse.conf
/usr/include/boost/fusion/functional/adapter/fused.hpp
/usr/include/boost/fusion/functional/adapter/fused_function_object.hpp
/usr/include/boost/fusion/functional/adapter/fused_procedure.hpp
/usr/include/boost/fusion/include/fused.hpp
/usr/include/boost/fusion/include/fused_function_object.hpp
/usr/include/boost/fusion/include/fused_procedure.hpp
/usr/include/linux/fuse.h
/usr/share/bash-completion/completions/fusermount
/usr/share/doc/fuse
/usr/share/doc/fuse/changelog.Debian.gz
/usr/share/doc/fuse/changelog.gz
/usr/share/doc/fuse/copyright
/usr/share/initramfs-tools/hooks/fuse
/usr/share/lintian/overrides/fuse
/usr/share/man/man1/fuser.1.gz
/usr/share/man/man1/fusermount.1.gz
/var/cache/apt/archives/fuse_2.9.0-2+deb7u1_amd64.deb
/var/cache/apt/archives/fuse_2.9.3-14_amd64.deb
/var/cache/apt/archives/fuse_2.9.3-15_amd64.deb
/var/cache/apt/archives/fuse_2.9.3-9_amd64.deb
/var/lib/dpkg/info/fuse.conffiles
/var/lib/dpkg/info/fuse.list
/var/lib/dpkg/info/fuse.md5sums
/var/lib/dpkg/info/fuse.postinst
/var/lib/dpkg/info/fuse.postrm
/var/lib/dpkg/info/fuse.preinst
update
root@dyntest-amd-3700-2gb ~ # apt-get install --reinstall linux-image-generic linux-image
Reading package lists... Done
Building dependency tree
Reading state information... Done
Package linux-image is a virtual package provided by:
linux-image-3.2.0-4-rt-amd64 3.2.60-1+deb7u3
linux-image-3.2.0-4-amd64 3.2.60-1+deb7u3
You should explicitly select one to install.
E: Unable to locate package linux-image-generic
E: Package 'linux-image' has no installation candidate
|
This post solves the problem; you just need to update the kernel:
http://forums.debian.net/viewtopic.php?t=113906
| sshfs - device not found |
1,568,533,746,000 |
There do seem to be several questions already about moving /var to another directory or another partition or device. What I would like to do is move it to a fuse-pooled fs.
My goal: To install a Linux server onto a USB stick, and have a fuse fs to manage the mounted JBODs. But I would like to move /var to the storage pool because a lot of people warn against too many writes to the USB stick, thus shortening its life. If I move /var to the attached /storage pool then the stick's life is greatly lengthened.
The problem is that when I added a bind mount to my /etc/fstab pointing /storage/var to /var, the OS hung on reboot. I had to go to recovery mode to reverse my changes.
Here was my /etc/fstab before I recovered it.
# SnapRAID Dsks
/dev/disk/by-id/ata-abc-part1 /mnt/data/disk1 ext4 defaults 0 2
/dev/disk/by-id/ata-def-part1 /mnt/data/disk2 ext4 defaults 0 2
/dev/disk/by-id/ata-ghi-part1 /mnt/data/disk3 ext4 defaults 0 2
/dev/disk/by-id/ata-jkl-part1 /mnt/data/disk4 ext4 defaults 0 2
# Parity Disks
/dev/disk/by-id/ata-lmn-part1 /mnt/data/disk5 ext4 defaults 0 2
# MergerFS
/mnt/data/* /storage fuse.mergerfs category.create=eplfs,defaults,allow_other,minfreespace=20G,fsname=mergerfsPool 0 00
# bind mount
/storage/var /var ext4 defaults 0 0
Is this impossible, or should I directly bind mount it to one disk directly instead.
|
Instead of moving /var to the pool, a better solution is to move the write-heavy parts of /var to tmpfs. After reading this from Chris Newland, I am going to go with moving those /var subdirectories to tmpfs, and adding noatime to the root install drive.
# /etc/fstab: static file system information.
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
/dev/sda1 / ext2 noatime,errors=remount-ro 0 1
tmpfs /tmp tmpfs defaults,noatime 0 0
tmpfs /var/log tmpfs defaults,noatime 0 0
tmpfs /var/tmp tmpfs defaults,noatime 0 0
tmpfs /var/run tmpfs defaults,noatime 0 0
tmpfs /var/spool tmpfs defaults,noatime 0 0
tmpfs /var/lock tmpfs defaults,noatime 0 0
tmpfs /var/cache tmpfs defaults,noatime 0 0
I also followed up with additional configurations due to some programs complaining about not having a temp place to write to (also from Chris Newland's page)...
This ensures that apache2, postgresql, and debconf all operate correctly when /var/log and /var/cache are mounted on a tmpfs filesystem:
# Put these commands into /etc/init.d/make-tmpfs-dirs
#!/bin/sh
mkdir /var/cache/debconf
mkdir /var/log/apache2
chown root:adm /var/log/apache2
chmod 750 /var/log/apache2
mkdir /var/log/postgresql
chown root:postgres /var/log/postgresql
chmod 774 /var/log/postgresql
exit 0
Now make that executable:
chmod u+x /etc/init.d/make-tmpfs-dirs
...link to the make-tmpfs-dirs script from the correct rc.d runlevel directory
cd /etc/rc2.d
ln -s ../init.d/make-tmpfs-dirs S02make-tmpfs-dirs
Finally, instead of getting rid of swap, I will reconfigure it to not be used until it's absolutely necessary (when I am running out of memory):
To change the system swappiness value in ubuntu, open /etc/sysctl.conf as root. Then, change or add this line to the file:
vm.swappiness = 10
| Moving /var to fuse pooled fs |
1,568,533,746,000 |
Here the type is fuseblk:
$ mount
/dev/sdb1 on /media/me/MY-DEVICE type fuseblk (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other,blksize=4096,uhelper=udisks2)
You can see below that the partition type is HPFS/NTFS and the contents are exFAT.
Why the difference?
|
The mount on /media is a FUSE mount - a userspace filesystem mount. The underlying filesystem being mounted by the FUSE driver could be anything, including filesystems that may not be supported by the kernel. The Gnome desktop makes use of it for mounting USB keys and other removable media.
FUSE allows safe mounting of filesystems without granting root access to users. You could alternatively mount a FAT fs (but not exFAT, which is not supported by a kernel module in the way the various older FAT filesystems are) with e.g.
sudo mount -t vfat /dev/sdb1 /mnt/myfs
| Difference in file system type on mount and Disk Utility |
1,568,533,746,000 |
I am working on an application that transfers some files to OVH storage, and I am experiencing some problems mounting a FUSE file system.
This is the structure of my mount point in fstab
mystorage /mnt/openstack svfs username=my-user-name,password=my-password,tenant=my-tenant,region=BHS1,container=my-container-name,noauto,users,user,suid,rw 1
According to my understanding, this line would allow executing
mount /mnt/openstack
to any user. In effect, I successfully mount the fuse file system. Unfortunately, and this is my specific problem, after mounting I can neither copy nor read what is inside the /mnt/openstack directory because I do not have permissions. However, I am sure the file system is mounted because through sudo I can manipulate the directory without any problem. For instance, the commands:
sudo ls -l /mnt/openstack
which outputs:
total 0
-rwx------ 1 root root 432249 May 2 09:02 26-14818.jpg
-rwx------ 1 root root 447 Apr 29 11:14 401error.html
-rwx------ 1 root root 438 Apr 29 11:14 404error.html
-rwx------ 1 root root 468 Apr 29 11:14 503error.html
drwx------ 1 root root 4096 Aug 30 1754 images
-rwx------ 1 root root 313 Apr 29 11:14 index.html
-rwx------ 1 root root 1876 Apr 29 11:14 listing.css
drwx------ 1 root root 4096 Aug 30 1754 styles
and
sudo cp a-file /mnt/openstack
work perfectly.
EDITED:
The output of sudo ls -ld /mnt/openstack
drwx------ 1 root root 665505 May 3 08:53 /mnt/openstack
A remark: the permissions of /mnt/openstack before mounting are 777.
So my questions are: what am I doing wrong? What could I do in order to manage mounting and manipulate the mounted directory as the user?
|
I think mount does not support this use of user with the default fuse security setting (or allow_root). I think the resulting permissions are the same as if you used sudo mount. To allow access by multiple non-root users, you could set allow_other, allowing access by any user.
If this raised concerns, it would be possible to set default_permissions to enable permissions checking, set a group with the gid= mount option, add selected users to that group, and set mode=770 to allow full read-write access by the group.
Or if you only need access by your user, you could simply use allow_other,default_permissions,uid=user. Note that in this case there's no particular need to use fstab. You could save your mount command as a script instead, call it something like mount-openstack and put it in ~/.local/bin.
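As an illustration of the group-based variant, the fstab entry could look something like the sketch below. The group id 1001 is made up, and option support (gid=, mode=) should be checked against the svfs documentation:

```
mystorage  /mnt/openstack  svfs  username=my-user-name,password=my-password,tenant=my-tenant,region=BHS1,container=my-container-name,allow_other,default_permissions,gid=1001,mode=770,noauto,users,rw  0  0
```

Any user in the group with gid 1001 would then get read-write access after mounting.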
| How mount a svfs file system with user permissions? (without sudo) |
1,568,533,746,000 |
While attempting to create a program that reads some configuration before launching programs as a normal user and then as the root user, I noticed this odd behavior. I can't seem to find mention of it anywhere else. Normal filesystems use the effective UID/GID for access checks, but FUSE seems to check all three of the effective, real, and saved(!!) UID/GID for access. I had initially just dropped the effective uid so that I could recover it later, but this kept getting me permissions errors until I realized what was going on.
Why is this the case? Why does FUSE care about the saved uid/gid?
(I'm aware I can set allow_root on FUSE and avoid this, that isn't what this question is about)
Example C code to demonstrate:
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#define measure() getresuid(&ruid, &euid, &suid); getresgid(&rgid, &egid, &sgid); printf("UID: %4d, %4d, %4d. GID: %4d, %4d, %4d \t\t", ruid, euid, suid, rgid, egid, sgid); fflush(stdout)
#define set(r,e,s) if (setresuid(0,0,0 ) != 0) return 1; if (setresgid(r,e,s ) != 0) return 1; if (setresuid(r, e, s) != 0) return 1;
#define attempt(r,e,s) set(r,e,s); measure(); test(argv[1])
void test(char* arg)
{
struct stat sb;
if (stat(arg, &sb) == -1)
perror("fail");
else
printf("Success\n");
}
int main(int argc, char *argv[])
{
uid_t ruid, euid, suid; gid_t rgid, egid, sgid;
measure();
printf("\n\n");
attempt(1000,0,0); // Expect: Fail. Actual: Fail
attempt(0, 1000,0); // Expect: ok. Actual: Fail
attempt(0, 0, 1000); // Expect: Fail. Actual: Fail
attempt(1000,1000,0); // Expect: ok. Actual: Fail
attempt(1000,0,1000); // Expect: Fail. Actual: Fail
attempt(0,1000,1000); // Expect: ok. Actual: Fail
attempt(1000,1000,1000); // Expect: ok. Actual: ok
return 0;
}
Output:
$ sshfs some-other-machine:/ /tmp/testit # I think any FUSE filesystem should "work"
$ gcc test.c -o test
$ sudo ./test /tmp/testit
UID: 0, 0, 0. GID: 0, 0, 0
UID: 1000, 0, 0. GID: 1000, 0, 0 fail: Permission denied
UID: 0, 1000, 0. GID: 0, 1000, 0 fail: Permission denied
UID: 0, 0, 1000. GID: 0, 0, 1000 fail: Permission denied
UID: 1000, 1000, 0. GID: 1000, 1000, 0 fail: Permission denied
UID: 1000, 0, 1000. GID: 1000, 0, 1000 fail: Permission denied
UID: 0, 1000, 1000. GID: 0, 1000, 1000 fail: Permission denied
UID: 1000, 1000, 1000. GID: 1000, 1000, 1000 Success
$
|
As you have noticed, without the allow_root/allow_other options, other processes are not allowed to access the filesystem. This is not meant to protect your filesystem, but to protect the other processes. For this reason, if the accessing process has a shred of another identity, the access can't be allowed.
That's the relevant code in the kernel for this behavior (fs/fuse/dir.c):
/*
* Calling into a user-controlled filesystem gives the filesystem
* daemon ptrace-like capabilities over the current process. This
* means, that the filesystem daemon is able to record the exact
* filesystem operations performed, and can also control the behavior
* of the requester process in otherwise impossible ways. For example
* it can delay the operation for arbitrary length of time allowing
* DoS against the requester.
*
* For this reason only those processes can call into the filesystem,
* for which the owner of the mount has ptrace privilege. This
* excludes processes started by other users, suid or sgid processes.
*/
int fuse_allow_current_process(struct fuse_conn *fc)
{
const struct cred *cred;
if (fc->allow_other)
return current_in_userns(fc->user_ns);
cred = current_cred();
if (uid_eq(cred->euid, fc->user_id) &&
uid_eq(cred->suid, fc->user_id) &&
uid_eq(cred->uid, fc->user_id) &&
gid_eq(cred->egid, fc->group_id) &&
gid_eq(cred->sgid, fc->group_id) &&
gid_eq(cred->gid, fc->group_id))
return 1;
return 0;
}
| FUSE filesystems look at saved UID/GID? |
1,568,533,746,000 |
Does Linux FUSE (Filesystem in Userspace) support O_DIRECT?
I use the fio benchmark to test FUSE, but it always shows errors when I use direct I/O.
My machine is Ubuntu 4.4.0-38 x86_64
fio_version = 2.14
Below is my config file
[global]
ioengine=libaio
direct=1
time_based
runtime=60
ramp_time=30
size=64g
group_reporting
[S_100RW_1M_R]
rw=read
numjobs=1
iodepth=32
bs=1m
stonewall
[S_100RW_1M_W]
rw=write
numjobs=1
iodepth=32
bs=1m
stonewall
When I execute sudo fio fio.cfg and it finishes,
it shows the result of the sequential read but not the sequential write.
it shows below:
fio: io_u error on file xxxxx : Invalid argument: write offset=0, buflen=1048576
I tried several times and the results are the same even when I changed the tested device.
Why does this happen?
thanks a lot
|
Yes, since version 2.4:
What is new in 2.4
...
Allow 'direct_io' and 'keep_cache' options to be set on a case-by-case basis on open.
I'd venture one of several things is likely happening:
Your version of fuse isn't new enough.
The actual underlying file system doesn't support direct IO, and fuse is simply returning a pass-through error. (This does assume fuse passes the direct IO request through to the underlying file system that actually holds the data on disk somewhere.)
A bug somewhere in fuse code. Direct IO on Linux can be very particular/quirky.
| Does FUSE support O_DIRECT/directI/O |
1,568,533,746,000 |
I have read that support for the exfat filesystem has been incorporated in the Linux kernel since kernel ver 5.4 was released in late 2019 - early 2020. I'm confused about what this means wrt the exfat-fuse package. AFAIK, the exfat-fuse package existed prior to kernel ver 5.4, and was the ad-hoc method for mounting exfat partitions.
Does incorporation of support for exfat filesystems mean that the exfat-fuse package is no longer required? Conversely, if exfat-fuse is still required, what was meant/accomplished by incorporating exfat support in the kernel?
A related question is wrt the documentation for this - specifically man mount, and its FILESYSTEM-SPECIFIC MOUNT OPTIONS section. There is no mention of a filesystem-specific manual for exfat, nor is there a "Mount options for exfat" sub-section. Which leads me to ask, "Where are these mount options for exfat covered?" Should users rely upon the "Mount options for fat" sub-section in man mount, or should they rely upon man mount.exfat-fuse, or on something else?
|
FUSE was added on 2005-09-09; that's probably around Linux 2.6.18, far earlier than Linux 5.4.
Does incorporation of support for exfat filesystems mean that the exfat-fuse package is no longer required?
Both can be used but exfat-fuse has essentially been deprecated and superseded.
There is no mention of a filesystem-specific manual for exfat, nor is there a "Mount options for exfat" sub-section.
The man pages are not always kept in sync with what the kernel contains. There's a separate team maintaining them.
Should users rely upon the "Mount options for fat" sub-section in man mount, or should they rely upon man mount.exfat-fuse, or on something else?
Mount options for fuse-exfat and the kernel native exfat driver are not related. They can be similar/the same but that's just happenstance.
You think of these projects as similar/related while they are only similar in name and functionality. Code bases are different and written by different people.
| Kernel-mounted vs FUSE-mounted exfat filesystem |
1,568,533,746,000 |
I have two Linux systems: NFSServer1 (RHEL) and NFSClient1 (Ubuntu).
On NFSServer1, ntfs-3g driver and ldmtool is installed. The NTFS device partitions are mounted by executing the command:
mount -t ntfs-3g -o ro,noatime $devPath $mountPath
Note: The two partitions /dev/mapper/ldm_vol_VishalWDD-Dg0_Volume1 and /dev/mapper/ldm_vol_VishalWDD-Dg0_Volume2 are Windows Dynamic disk partitions derived using ldmtool
[root@ROADQAScaleNFS2 ~]# df -Th
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdc4 fuseblk 127G 11G 117G 9% /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc4
/dev/sdc2 fuseblk 450M 13M 438M 3% /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc2
/dev/mapper/ldm_vol_VishalWDD-Dg0_Volume1 fuseblk 10G 5.8G 4.3G 58% /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume1
/dev/mapper/ldm_vol_VishalWDD-Dg0_Volume2 fuseblk 10G 6.0G 4.1G 60% /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume2
[root@ROADQAScaleNFS2 ~]# mount
/dev/sdc4 on /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc4 type fuseblk (ro,noatime,user_id=0,group_id=0,allow_other,blksize=4096)
/dev/sdc2 on /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc2 type fuseblk (ro,noatime,user_id=0,group_id=0,allow_other,blksize=4096)
/dev/mapper/ldm_vol_VishalWDD-Dg0_Volume1 on /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume1 type fuseblk (ro,noatime,user_id=0,group_id=0,allow_other,blksize=4096)
/dev/mapper/ldm_vol_VishalWDD-Dg0_Volume2 on /monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume2 type fuseblk (ro,noatime,user_id=0,group_id=0,allow_other,blksize=4096)
Totally all these partitions have about 10 million files.
These mounted partitions are accessed from NFSClient1 as NFS shares:
[root@NFSClient ~]# mount
10.4.0.5:/monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc4 on /monitor1/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc4 type nfs4 (ro,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.148.66.49,local_lock=none,addr=10.4.0.5)
10.4.0.5:/monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc2 on /monitor1/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/sdc2 type nfs4 (ro,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.148.66.49,local_lock=none,addr=10.4.0.5)
10.4.0.5:/monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume1 on /monitor1/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume1 type nfs4 (ro,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.148.66.49,local_lock=none,addr=10.4.0.5)
10.4.0.5:/monitor/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume2 on /monitor1/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume2 type nfs4 (ro,relatime,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.148.66.49,local_lock=none,addr=10.4.0.5)
The number of NFS daemon threads on NFS server is set to 64.
Next, on the NFS client, when we stat a fuseblk partition using a find command:
find -H /monitor1/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume2 -printf '%p|' | xargs -d '|' stat --printf="%F, %i:\t%n\t%.19x\t%.19y\t%.19z\t%.19w\t%s\t%u\t%g\n" \ |$SED -e "s|/monitor1/6f5bd42-e548-4e60-8c5d-4c52360b8dc4/mapper/ldm_vol_VishalWDD-Dg0_Volume2/||g" -e "s|directory,|d/d|g" -e "s|symbolic link,|l/l|g" -e "s|regular file,|r/r|g" -e "s|socket,|h/h|g" \ -e "s|regular empty file,|r/r|g" -e "s|fifo,|p/p|g"
Its execution is extremely slow. It takes a break of 5-6 minutes and then resumes for a few seconds. The same is true for all other mount points. The execution does not finish even after 12 hours.
This sluggish behavior is not observed for ext4 and xfs devices types.
As a test, I tried executing the same find command on the NFSServer1, it was quite fast. The whole execution finished in ~40 minutes. I don't have access to the NFS server though. I have asked the NFS server team to try different mount options as mentioned in the ntfs-3g man page, but it didn't help.
If there is any way I could improve the read performance of fuseblk partitions over NFS, I would be grateful to you guys.
Many thanks!
|
FUSE is a filesystem in userland – due to context switching overhead, it's not ever going to be as fast as an in-kernel file system, and my guess is this hurts even more when you have to do very file-system intense things like a find on it.
So.
Either use the NTFS3 in-kernel driver (as available in Linux 5.15 and on, if I remember correctly), or
move all the data to a different file system once (and synchronize it to NTFS if you ever need that again), or
run a paravirtualized Windows Server VM to serve that file system via NFS
I'd personally strongly tend towards the second option. What sense does it make to permanently access something from a definitely-not-made-for-that file system? We're talking about not even 40GB of actual data – that's really nothing.
I mean, you have an NFS Team. There's people employed to make your data accessible via NFS. Why they even support directly exporting NTFS is a bit beyond me.
| Slow reading of millions of files in Fuseblk partitions shared over NFS |
1,568,533,746,000 |
Today during backup, rsync gave me an error about a directory ($HOME/.cache/doc/by-app).
I have checked it and I see this
First I go to the dir
cd $HOME/.cache/doc
cd by-app/
I do ls and..
ls
/bin/ls: error while loading shared libraries: libcap.so.2: cannot read file data: Error 21
I do cd..
cd ..
I check the directory tree and run the file command to see what it contains:
find by-app/
by-app/
by-app/libcap.so.2
find by-app/ |parallel file
by-app/: directory
by-app/libcap.so.2: directory
I want to remove it!
rm -vfr by-app/
rm: impossible to remove 'by-app/libcap.so.2': Operation not permitted
I did this as root!
sudo rm -frv .cache/doc/by-app
Password:
rm: impossible to remove '.cache/doc/by-app': Permission denied
What is this?
System is Slackware64 15.0
|
This folder/mountpoint is created by xdg-desktop-portal, which is what flatpak uses to access resources outside of the sandboxes it runs applications on: https://docs.flatpak.org/en/latest/desktop-integration.html#portals
Without it you may break whatever you installed via flatpak.
| very strange dir/file, what is? |
1,568,533,746,000 |
I am FUSE mounting a remote FreeBSD machine with
sudo sshfs -C user@remote-ip:/home/user/ /mnt/localmnt/ -o allow_other -o SmartcardDevice=/dev/hidraw7
to authenticate via an OpenPGP smartcard device. I've tried this as both root and non-root users. This ties up standard input instead of returning to shell like a regular mount command, but the output of mount shows it is mounted:
user@remote-ip:/home/user/ on /mnt/freebsd type fuse.sshfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
I've removed default_permissions per Mount with sshfs and write file permissions.
Unfortunately sudo ls and sudo cp hang on the local mount-point, without any error output in dmesg. Why would this happen?
|
This question started with trying to authenticate sshfs with an OpenPGP smartcard, then was edited to its current form when I thought authentication was solved.
My current problem was using -o SmartcardDevice=/dev/hidraw7, which started because otherwise I was getting a password prompt from the remote host. (Users have passwords disabled and are set up for key authentication only.)
My initial problem was caused by running sudo sshfs, but I'm not sure why this is. Running sshfs as a non-root user and also omitting SmartcardDevice correctly prompts for smart-card PIN authentication, then mounts as expected with all local utilities working.
| sshfs appears to mount, but ls & cp on local mount-point hang? |
1,568,533,746,000 |
after removing the default exfat-fuse package version 1.2.5 from my Debian Stretch system and replacing it with version 1.3.0, compiled from source, running mount with type exfat results in an "unknown filesystem" error. Checking /proc/filesystems reveals that exfat is not listed.
Manually mounting exfat drives with mount.exfat works fine, the executables reside in /usr/local/sbin.
How can I configure mount to use mount.exfat when appropriate?
|
Install or symlink it as /sbin/mount.exfat.
(I checked strace -f mount -t nosuchfs nowhere nowhere. It tries /sbin/mount.nosuchfs, /sbin/fs.d/mount.nosuchfs, and /sbin/fs/mount.nosuchfs only).
What's the worst that could happen :). If you forget and try to apt install exfat-fuse again, it's either going to give you a nice error message to remind you, or overwrite it.
| Configure mount to recognize self compiled fuse exfat |
1,568,533,746,000 |
How can one remove FUSE support from OpenBSD?
Would it require recompiling the kernel?
Or just config modifications or removing some binaries? How?
|
Kernel support for FUSE
# grep FUSE /sys/conf/GENERIC
option FUSE # FUSE
would need to be removed; assuming sys.tar.gz has been foisted onto the system and all the latest and greatest patches applied
# cd /sys/conf
# cp GENERIC NOFUSE
# (echo /FUSE; echo d; echo w; echo q) | ed NOFUSE
4048
option FUSE # FUSE
4027
# grep FUSE NOFUSE
# cd /sys/arch/`uname -m`/conf
# cp GENERIC NOFUSE
# grep GENERIC NOFUSE
# $OpenBSD: GENERIC,v 1.445 2017/08/28 19:32:53 jasper Exp $
include "../../../conf/GENERIC"
# ed NOFUSE
20842
/\/GENERIC
include "../../../conf/GENERIC"
s/GENERIC/NOFUSE
include "../../../conf/NOFUSE"
w
20841
q
# config NOFUSE
...
# cd ../compile/NOFUSE
# make
...
# make install
...
# reboot
Man pages such as config(8) and release(8) and boot(8) might be worth a peek, and that the above builds a MP or SP kernel as is appropriate for the system...
| How to "fully" remove fuse support from OpenBSD? |
1,568,533,746,000 |
I'm on debian 8 jessie with a 4.2.3 kernel. I can't seem to get fuse installed and working. When I install fuse with sudo apt-get install fuse I get MAKEDEV not installed, skipping device node creation. Also when I do sudo modprobe fuse I end up with modprobe: FATAL: Module fuse not found. I tried installing makedev but that didn't work because I already have udev. That just got me /run/udev or .udevdb or .udev presence implies active udev. Aborting MAKEDEV invocation.
|
Resolved by enabling the FUSE_FS module (CONFIG_FUSE_FS) in the kernel compile .config
| Install fuse debian 8 jessie |
1,391,274,358,000 |
The Windows dir directory listing command has a line at the end showing the total amount of space taken up by the files listed. For example, dir *.exe shows all the .exe files in the current directory, their sizes, and the sum total of their sizes. I'd love to have similar functionality with my dir alias in bash, but I'm not sure exactly how to go about it.
Currently, I have alias dir='ls -FaGl' in my .bash_profile, showing
drwxr-x---+ 24 mattdmo 4096 Mar 14 16:35 ./
drwxr-x--x. 256 root 12288 Apr 8 21:29 ../
-rw------- 1 mattdmo 13795 Apr 4 17:52 .bash_history
-rw-r--r-- 1 mattdmo 18 May 10 2012 .bash_logout
-rw-r--r-- 1 mattdmo 395 Dec 9 17:33 .bash_profile
-rw-r--r-- 1 mattdmo 176 May 10 2012 .bash_profile~
-rw-r--r-- 1 mattdmo 411 Dec 9 17:33 .bashrc
-rw-r--r-- 1 mattdmo 124 May 10 2012 .bashrc~
drwx------ 2 mattdmo 4096 Mar 24 20:03 bin/
drwxrwxr-x 2 mattdmo 4096 Mar 11 16:29 download/
for example. Taking the answers from this question:
dir | awk '{ total += $4 }; END { print total }'
which gives me the total, but doesn't print the directory listing itself. Is there a way to alter this into a one-liner or shell script so I can pass any ls arguments I want to dir and get a full listing plus sum total? For example, I'd like to run dir -R *.jpg *.tif to get the listing and total size of those file types in all subdirectories. Ideally, it would be great if I could get the size of each subdirectory, but this isn't essential.
|
The following function does most of what you're asking for:
dir () { ls -FaGl "${@}" | awk '{ total += $4; print }; END { print total }'; }
... but it won't give you what you're asking for from dir -R *.jpg *.tif, because that's not how ls -R works. You might want to play around with the find utility for that.
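For the per-subdirectory sizes mentioned at the end of the question, du is the usual tool (a sketch, not part of the original answer; -h prints human-readable sizes and -d 1 limits the depth on GNU and BSD du):

```shell
# total size of each immediate subdirectory, human readable
du -h -d 1 .
```

Add a pattern-matching find in front if you only want certain file types counted.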
| Show sum of file sizes in directory listing |
1,391,274,358,000 |
I am trying to sum certain numbers in a column using awk. I would like to sum just column 3 of the "smiths" to get a total of 212. I can sum the whole column using awk but not just the "smiths". I have:
awk 'BEGIN {FS = "|"} ; {sum+=$3} END {print sum}' filename.txt
Also I am using putty. Thank you for any help.
smiths|Login|2
olivert|Login|10
denniss|Payroll|100
smiths|Time|200
smiths|Logout|10
|
awk -F '|' '$1 ~ /smiths/ {sum += $3} END {print sum}' inputfilename
The -F flag sets the field separator; I put it in single quotes because it is a special shell character.
Then $1 ~ /smiths/ applies the following {code block} only to lines where the first field matches the regex /smiths/.
The rest is the same as your code.
Note that since you're not really using a regex here, just a specific value, you could just as easily use:
awk -F '|' '$1 == "smiths" {sum += $3} END {print sum}' inputfilename
Which checks string equality. This is equivalent to using the regex /^smiths$/, as mentioned in another answer, which includes the ^ anchor to only match the start of the string (the start of field 1) and the $ anchor to only match the end of the string. Not sure how familiar you are with regexes. They are very powerful, but for this case you could use a string equality check just as easily.
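For example, with the sample data from the question saved as filename.txt, the string-equality variant prints the expected total (2 + 200 + 10):

```shell
# write the sample records to a file, then sum column 3 for the "smiths" rows
printf 'smiths|Login|2\nolivert|Login|10\ndenniss|Payroll|100\nsmiths|Time|200\nsmiths|Logout|10\n' > filename.txt
awk -F '|' '$1 == "smiths" {sum += $3} END {print sum}' filename.txt
# prints 212
```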
| Using awk to sum the values of a column, based on the values of another column |
1,391,274,358,000 |
I have a list of directories and subdirectories that contain large csv files. There are about 500 million lines in these files, each is a record. I would like to know
How many lines are in each file.
How many lines are in directory.
How many lines in total
Most importantly, I need this in 'human readable format' eg. 12,345,678 rather than 12345678
It would be nice to learn how to do this in 3 ways. Plain vanilla bash tools, awk etc., and perl (or python).
|
How many lines are in each file.
Use wc, originally for word count, I believe, but it can do lines, words, characters, bytes, and the longest line length. The -l option tells it to count lines.
wc -l <filename>
This will output the number of lines in <filename>:
$ wc -l /dir/file.txt
32724 /dir/file.txt
You can also pipe data to wc as well:
$ cat /dir/file.txt | wc -l
32724
$ curl google.com --silent | wc -l
63
How many lines are in directory.
Try:
find . -name '*.pl' | xargs wc -l
another one-liner:
( find ./ -name '*.pl' -print0 | xargs -0 cat ) | wc -l
BTW, the wc command counts newline characters, not lines. When the last line in the file does not end with a newline, it will not be counted.
You may use grep -c ^ , full example:
#this example prints line count for all found files
total=0
while read -r FILE; do
    #use grep -c ^ instead of wc -l so a final line without a trailing newline is still counted
    count=$(grep -c ^ < "$FILE")
    echo "$FILE has $count lines"
    let total=total+count #in bash, you can convert this for another shell
done < <(find /path -type f -name "*.php")
echo TOTAL LINES COUNTED: $total
(Reading from a process substitution instead of piping find into the loop keeps the while in the current shell, so $total survives past done; with a pipe the loop runs in a subshell and the total would be lost.)
How many lines in total
Not sure that I understood your request correctly. E.g. this will output results in the following format, showing the number of lines for each file:
# wc -l `find /path/to/directory/ -type f`
103 /dir/a.php
378 /dir/b/c.xml
132 /dir/d/e.xml
613 total
Alternatively, to output just the total number of new line characters without the file by file counts to following command can prove useful:
# find /path/to/directory/ -type f -exec wc -l {} \; | awk '{total += $1} END{print total}'
613
Most importantly, I need this in 'human readable format' eg.
12,345,678 rather than 12345678
Bash has a printf builtin; with the ' flag (and a locale that defines digit grouping) it inserts thousands separators:
printf "%'d\n" $T
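If printf's grouping is unavailable (it depends on locale data being installed), here is a locale-independent sketch in plain awk that inserts the commas by hand; the loop is my own helper, not a standard idiom:

```shell
# insert thousands separators into a number without relying on locale support
echo 12345678 | awk '{
    n = $1; out = ""
    while (length(n) > 3) {
        out = "," substr(n, length(n) - 2) out   # peel off the last 3 digits
        n = substr(n, 1, length(n) - 3)
    }
    print n out                                  # prints 12,345,678
}'
```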
As always, there are many different methods that could be used to achieve the same results mentioned here.
| How do you list number of lines of every file in a directory in human readable format. |
1,391,274,358,000 |
Having the following in one of my shell functions:
function _process () {
awk -v l="$line" '
BEGIN {p=0}
/'"$1"'/ {p=1}
END{ if(p) print l >> "outfile.txt" }
'
}
So when called as _process $arg, $arg gets passed as $1 and used as a search pattern. It works this way because the shell expands $1 in place of the awk pattern! Also l can be used inside the awk program, being declared with -v l="$line". All fine.
Is it possible, in the same manner, to give the pattern to search for as a variable?
The following will not work,
awk -v l="$line" -v search="$pattern" '
BEGIN {p=0}
/search/ {p=1}
END{ if(p) print l >> "outfile.txt" }
'
as awk will not interpret /search/ as a variable, but literally.
|
Use awk's ~ operator, and you don't need to provide a literal regex on the right-hand side:
function _process () {
awk -v l="$line" -v pattern="$1" '
$0 ~ pattern {p=1; exit}
END {if(p) print l >> "outfile.txt"}
'
}
Here we exit upon the first match, as we don't need to read the rest. You don't even need awk; grep would be enough, likely more efficient, and it avoids the problem of awk's -v var='value' doing backslash processing:
function _process () {
grep -qe "$1" && printf '%s\n' "$line"
}
Depending on the pattern, you may want grep -Eqe "$1"
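As a standalone illustration of the ~ operator with the pattern supplied via -v (sample data made up here):

```shell
# keep only lines matching the regex held in the awk variable "pattern"
printf 'foo\nbar\nbaz\n' | awk -v pattern='^ba' '$0 ~ pattern'
# prints:
# bar
# baz
```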
| Pass shell variable as a /pattern/ to awk |
1,391,274,358,000 |
This one-liner removes duplicate lines from text input without pre-sorting.
For example:
$ cat >f
q
w
e
w
r
$ awk '!a[$0]++' <f
q
w
e
r
$
The original code I have found on the internets read:
awk '!_[$0]++'
This was even more perplexing to me as I took _ to have a special meaning in awk, like in Perl, but it turned out to be just a name of an array.
Now, I understand the logic behind the one-liner:
each input line is used as a key in a hash array, thus, upon completion, the hash contains unique lines in the order of arrival.
What I would like to learn is how exactly this notation is interpreted by awk. E.g. what the bang sign (!) means and the other elements of this code snippet.
How does it work?
|
Here is an "intuitive" answer; for a more in-depth explanation of awk's mechanism, see @Cuonglm's answer.
In this case, !a[$0]++, the post-increment ++ can be set aside for a moment: it does not change the value of the expression. So, look at only !a[$0]. Here:
a[$0]
uses the current line $0 as key to the array a, taking the value stored there. If this particular key was never referenced before, a[$0] evaluates to the empty string.
!a[$0]
The ! negates the value from before. If it was empty or zero (false), we now have a true result. If it was non-zero (true), we have a false result. If the whole expression evaluated to true, meaning that a[$0] was not set to begin with, the whole line is printed as the default action.
Also, regardless of the old value, the post-increment operator adds one to a[$0], so the next time the same value in the array is accessed, it will be positive and the whole condition will fail.
| How does awk '!a[$0]++' work? |
1,391,274,358,000 |
Trying to understand the differences between the two commands gawk vs. awk. When would one use gawk vs. awk? Or are they the same in terms of usage?
Also, could one provide an example?
|
AWK is a programming language. There are several implementations of AWK (mostly in the form of interpreters). AWK has been codified in POSIX.
The main implementations in use today are:
nawk (“new awk”, an evolution of oawk, the original UNIX implementation), used on *BSD and widely available on Linux;
mawk, a fast implementation that mostly sticks to standard features;
gawk, the GNU implementation, with many extensions;
the BusyBox implementation (small, intended for embedded systems, not many features).
If you only care about standard features, call awk, which may be Gawk or nawk or mawk or some other implementation. If you want the features in GNU awk, use gawk or Perl or Python.
| Difference between gawk vs. awk |
1,391,274,358,000 |
I'm looking for the simplest method to print the longest line in a file. I did some googling and surprisingly couldn't seem to find an answer. I frequently print the length of the longest line in a file, but I don't know how to actually print the longest line. Can anyone provide a solution to print the longest line in a file? Thanks in advance.
|
cat ./text | awk ' { if ( length > x ) { x = length; y = $0 } }END{ print y }'
UPD: summarizing all the advice in the comments
awk 'length > max_length { max_length = length; longest_line = $0 } END { print longest_line }' ./text
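As an alternative sketch (not from the original answer): prefix each line with its length and let sort pick the winner. Note that ties resolve arbitrarily, and the cut assumes lines don't start with the delimiter:

```shell
# print the longest line by sorting on a prepended length column
printf 'ab\nabcde\nabc\n' > text
awk '{print length, $0}' text | sort -rn | head -n 1 | cut -d ' ' -f 2-
# prints abcde
```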
| How to print the longest line in a file? |
1,391,274,358,000 |
In python
re.sub(r"(?<=.)(?=(?:...)+$)", ",", stroke )
To split a number by triplets, e.g.:
echo 123456789 | python -c 'import sys;import re; print re.sub(r"(?<=.)(?=(?:...)+$)", ",", sys.stdin.read());'
123,456,789
How to do the same with bash/awk?
|
With sed:
$ echo "123456789" | sed 's/\([[:digit:]]\{3\}\)\([[:digit:]]\{3\}\)\([[:digit:]]\{3\}\)/\1,\2,\3/g'
123,456,789
(Note that this only works for exactly 9 digits!)
or this with sed:
$ echo "123456789" | sed ':a;s/\B[0-9]\{3\}\>/,&/;ta'
123,456,789
With printf:
$ LC_NUMERIC=en_US printf "%'.f\n" 123456789
123,456,789
| Add thousands separator in a number |
1,391,274,358,000 |
In my understanding, an awk array is something like a Python dict.
So I wrote the code below to explore it:
awk '{my_dict[$1] = $2} END { print my_dict}' zen
And I got: awk: can't read value of my_dict; it's an array name.
As the first column isn't a number, how could I read the total content of the array or traverse it?
|
You can loop over the array's keys and extract the corresponding values:
awk '{my_dict[$1] = $2} END { for (key in my_dict) { print my_dict[key] } }' zen
To get output similar to that you'd get with a Python dictionary, you can print the key as well:
awk '{my_dict[$1] = $2} END { for (key in my_dict) { print key ": " my_dict[key] } }' zen
This works regardless of the key type.
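A self-contained run (with made-up two-column input, and piped through sort because awk's for-in iteration order is unspecified):

```shell
# build the array from two-column input and dump it key by key
printf 'alpha 1\nbeta 2\n' > zen
awk '{my_dict[$1] = $2} END { for (key in my_dict) { print key ": " my_dict[key] } }' zen | sort
# alpha: 1
# beta: 2
```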
| How to view all the content in an awk array? |
1,391,274,358,000 |
I want to grep the output of my ls -l command:
-rw-r--r-- 1 root root 1866 Feb 14 07:47 rahmu.file
-rw-r--r-- 1 rahmu user 95653 Feb 14 07:47 foo.file
-rw-r--r-- 1 rahmu user 1073822 Feb 14 21:01 bar.file
I want to run grep rahmu on column $3 only, so the output of my grep command should look like this:
-rw-r--r-- 1 rahmu user 95653 Feb 14 07:47 foo.file
-rw-r--r-- 1 rahmu user 1073822 Feb 14 21:01 bar.file
What's the simplest way to do it? The answer must be portable across many Unices, preferably focusing on Linux and Solaris.
NB: I'm not looking for a way to find all the files belonging to a given user. This example was only given to make my question clearer.
|
One more time awk saves the day!
Here's a straightforward way to do it, with a relatively simple syntax:
ls -l | awk '{if ($3 == "rahmu") print $0;}'
or even simpler: (Thanks to Peter.O in the comments)
ls -l | awk '$3 == "rahmu"'
| How to run grep on a single column? |
1,391,274,358,000 |
I need to remove the last character from a string in this command:
sudo docker stats --no-stream 39858jf8 | awk '{if (NR!=1) {print $2}}'
The result is 5.20%; I need to remove the % at the end, giving 5.20. Is it possible to do this in the same command?
|
Yes, with substr() you can do string slicing:
... | awk '{if (NR!=1) {print substr($2, 1, length($2)-1)}}'
length($2) will get us the length of the second field, deducting 1 from that to strip off the last character.
Example:
$ echo spamegg foobar | awk '{print substr($2, 1, length($2)-1)}'
fooba
| Remove last character from string captured with awk [duplicate] |
1,391,274,358,000 |
Given: there are 40 columns in a record. I want to replace the 35th column so that the 35th column will be replaced with the content of the 35th column and a "$" symbol. What came to mind is something like:
awk '{print $1" "$2" "...$35"$ "$36...$40}'
It works, but it becomes infeasible when the number of columns is as large as 10k. I need a better way to do this.
|
You can do like this:
awk '$35=$35"$"'
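A quick demonstration of the same idea on a smaller column number (sample input invented here). Note that the assignment makes awk rebuild the record, so runs of whitespace in the input collapse to single spaces (the default OFS):

```shell
# Append "$" to the 2nd field; the record is rebuilt with OFS between fields.
printf 'one two three\n' | awk '$2 = $2 "$"'
# one two$ three
```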
| How to replace the content of a specific column with awk? |
1,391,274,358,000 |
I have the following code that I run on my Terminal.
LC_ALL=C && grep -F -f genename2.txt hg38.hgnc.bed > hg38.hgnc.goi.bed
This doesn't give me the common lines between the two files. What am I missing there?
|
Use comm -12 file1 file2 to get common lines in both files.
You may also need your files to be sorted for comm to work as expected.
comm -12 <(sort file1) <(sort file2)
From man comm:
-1 suppress column 1 (lines unique to FILE1)
-2 suppress column 2 (lines unique to FILE2)
Or, using the grep command, you need to add the -x option to match the whole line as the matching pattern. The -F option tells grep to treat the pattern as a fixed string, not a regex.
grep -Fxf file1 file2
Or using awk.
awk 'NR==FNR{seen[$0]=1; next} seen[$0]' file1 file2
This reads the whole of file1 into an array called seen, where each key is a whole line (in awk, $0 represents the current line).
We used NR==FNR as a condition to run the first block only for the first input, file1, and not for file2 (NR is the record number across all inputs, while FNR is the record number within the current input file; so FNR resets for each input file, whereas NR keeps counting across all of them).
The next statement tells awk to skip the rest of the program and move on to the next line; this happens as long as NR equals FNR, i.e. until all lines of file1 have been read.
The second condition, seen[$0], then applies only to file2: it prints each line of file2 that was marked as present in the array while reading file1.
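A self-contained run of the awk approach, with throwaway sample files:

```shell
# Demonstrate the NR==FNR idiom on two temporary files.
f1=$(mktemp); f2=$(mktemp)
printf '34\n67\n89\n102\n' > "$f1"
printf '23\n67\n102\n200\n' > "$f2"
awk 'NR==FNR{seen[$0]=1; next} seen[$0]' "$f1" "$f2"
# 67
# 102
rm -f "$f1" "$f2"
```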
Another simple option is using sort and uniq:
sort file1 file2|uniq -d
This prints both files sorted, then uniq -d prints only the duplicated lines. BUT this is only correct when NEITHER file contains duplicate lines itself; otherwise the variant below, which deduplicates each file first, always works:
uniq -d <(sort <(sort -u file1) <(sort -u file2))
| Common lines between two files [duplicate] |
1,391,274,358,000 |
I have a file named Element_query containing the result of a query :
SQL> select count (*) from element;
[Output of the query which I want to keep in my file]
SQL> spool off;
I want to delete the first line and the last line using a shell command.
|
Using GNU sed:
sed -i '1d;$d' Element_query
How it works :
The -i option edits the file in place. You could also remove that option and redirect the output to a new file or another command if you want.
1d deletes the first line (1 to only act on the first line, d to delete it)
$d deletes the last line ($ to only act on the last line, d to delete it)
Going further :
You can also delete a range. For example, 1,5d would delete the first 5 lines.
You can also delete every line that begins with SQL> using the statement /^SQL> /d
You could delete every blank line with /^$/d
Finally, you can combine any of the statements by separating them with a semi-colon (statement1;statement2;statement3;...) or by specifying them separately on the command line (-e 'statement1' -e 'statement2' ...)
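For example, combining several of the statements above (sample input made up here, fed on stdin rather than edited in place):

```shell
# Delete the first line, any line starting with "SQL> ", and blank lines.
printf 'SQL> select 1;\n42\n\nSQL> spool off;\n' | sed '1d;/^SQL> /d;/^$/d'
# 42
```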
| How do I delete the first n lines and last line of a file using shell commands? |
1,391,274,358,000 |
I have the following file:
id name age
1 ed 50
2 joe 70
I want to print just the id and age columns. Right now I just use awk:
cat file.tsv | awk '{ print $1, $3 }'
However, this requires knowing the column numbers. Is there a way to do it where I can use the name of the column (specified on the first row), instead of the column number?
|
Maybe something like this:
$ cat t.awk
NR==1 {
for (i=1; i<=NF; i++) {
ix[$i] = i
}
}
NR>1 {
print $ix[c1], $ix[c2]
}
$ awk -f t.awk c1=id c2=name input
1 ed
2 joe
$ awk -f t.awk c1=age c2=name input
50 ed
70 joe
If you want to specify the columns to print on the command line, you could do something like this:
$ cat t.awk
BEGIN {
split(cols,out,",")
}
NR==1 {
for (i=1; i<=NF; i++)
ix[$i] = i
}
NR>1 {
for(i=1; i <= length(out); i++)
printf "%s%s", $ix[out[i]], OFS
print ""
}
$ awk -f t.awk -v cols=name,age,id,name,id input
ed 1 ed 50 1
joe 2 joe 70 2
(Note the -v switch to get the variable defined in the BEGIN block.)
| How to print certain columns by name? |
1,391,274,358,000 |
I have a big file and need to split it into two files. Suppose the first 1000 lines should be selected and put into another file, and then those lines deleted from the first file.
I tried using split but it is creating multiple chunks.
|
The easiest way is probably to use head and tail:
$ head -n 1000 input-file > output1
$ tail -n +1001 input-file > output2
That will put the first 1000 lines from input-file into output1, and all lines from 1001 to the end into output2.
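If you also want input-file itself reduced to just the first 1000 lines, as the question asks, one way (filenames as above) is to go through a temporary file:

```shell
# Keep only the first 1000 lines in input-file itself; output2 already
# holds the lines being removed.
head -n 1000 input-file > input-file.tmp && mv input-file.tmp input-file
```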
| Split a file into two |
1,391,274,358,000 |
I have file1 likes:
0 AFFX-SNP-000541 NA
0 AFFX-SNP-002255 NA
1 rs12103 0.6401
1 rs12103_1247494 0.696
1 rs12142199 0.7672
And a file2:
0 AFFX-SNP-000541 1
0 AFFX-SNP-002255 1
1 rs12103 0.5596
1 rs12103_1247494 0.5581
1 rs12142199 0.4931
And would like a file3 such that:
0 AFFX-SNP-000541 NA 1
0 AFFX-SNP-002255 NA 1
1 rs12103 0.6401 0.5596
1 rs12103_1247494 0.696 0.5581
1 rs12142199 0.7672 0.4931
Which means: append the 3rd column of file2 to file1 as a new 4th column, matching rows on the 2nd column (the SNP name).
|
This should do it:
join -j 2 -o 1.1,1.2,1.3,2.3 file1 file2
Important: this assumes your files are sorted (as in your example) according to the SNP name. If they are not, sort them first:
join -j 2 -o 1.1,1.2,1.3,2.3 <(sort -k2 file1) <(sort -k2 file2)
Output:
0 AFFX-SNP-000541 NA 1
0 AFFX-SNP-002255 NA 1
1 rs12103 0.6401 0.5596
1 rs12103_1247494 0.696 0.5581
1 rs12142199 0.7672 0.4931
Explanation (from info join):
`join' writes to standard output a line for each pair of input lines
that have identical join fields.
`-1 FIELD'
Join on field FIELD (a positive integer) of file 1.
`-2 FIELD'
Join on field FIELD (a positive integer) of file 2.
`-j FIELD'
Equivalent to `-1 FIELD -2 FIELD'.
`-o FIELD-LIST'
Otherwise, construct each output line according to the format in
FIELD-LIST. Each element in FIELD-LIST is either the single
character `0' or has the form M.N where the file number, M, is `1'
or `2' and N is a positive field number.
So, the command above joins the files on the second field and prints the 1st, 2nd and 3rd fields of file1, followed by the 3rd field of file2.
| How to merge two files based on the matching of two columns? |
1,391,274,358,000 |
If I have two files (with single columns), one like so (file1)
34
67
89
92
102
180
blue2
3454
And the second file (file2)
23
56
67
69
102
200
How do I find elements that are common in both files (intersection)? The expected output in this example is
67
102
Note that number of items (lines) in each file differs. Numbers and strings may be mixed. They may not be necessarily sorted. Each item only appears once.
UPDATE:
Time check based on some of the answers below.
# generate some data
>shuf -n2000000 -i1-2352452 > file1
>shuf -n2000000 -i1-2352452 > file2
#@ilkkachu
>time (join <(sort "file1") <(sort "file2") > out1)
real 0m15.391s
user 0m14.896s
sys 0m0.205s
>head out1
1
10
100
1000
1000001
#@Hauke
>time (grep -Fxf "file1" "file2" > out2)
real 0m7.652s
user 0m7.131s
sys 0m0.316s
>head out2
1047867
872652
1370463
189072
1807745
#@Roman
>time (comm -12 <(sort "file1") <(sort "file2") > out3)
real 0m13.533s
user 0m13.140s
sys 0m0.195s
>head out3
1
10
100
1000
1000001
#@ilkkachu
>time (awk 'NR==FNR { lines[$0]=1; next } $0 in lines' "file1" "file2" > out4)
real 0m4.587s
user 0m4.262s
sys 0m0.195s
>head out4
1047867
872652
1370463
189072
1807745
#@Cyrus
>time (sort file1 file2 | uniq -d > out8)
real 0m16.106s
user 0m15.629s
sys 0m0.225s
>head out8
1
10
100
1000
1000001
#@Sundeep
>time (awk 'BEGIN{while( (getline k < "file1")>0 ){a[k]}} $0 in a' file2 > out5)
real 0m4.213s
user 0m3.936s
sys 0m0.179s
>head out5
1047867
872652
1370463
189072
1807745
#@Sundeep
>time (perl -ne 'BEGIN{ $h{$_}=1 while <STDIN> } print if $h{$_}' <file1 file2 > out6)
real 0m3.467s
user 0m3.180s
sys 0m0.175s
>head out6
1047867
872652
1370463
189072
1807745
The perl version was the fastest followed by awk. All output files had the same number of rows.
For the sake of comparison, I have sorted the output numerically so that the output is identical.
#@ilkkachu
>time (join <(sort "file1") <(sort "file2") | sort -k1n > out1)
real 0m17.953s
user 0m5.306s
sys 0m0.138s
#@Hauke
>time (grep -Fxf "file1" "file2" | sort -k1n > out2)
real 0m12.477s
user 0m11.725s
sys 0m0.419s
#@Roman
>time (comm -12 <(sort "file1") <(sort "file2") | sort -k1n > out3)
real 0m16.273s
user 0m3.572s
sys 0m0.102s
#@ilkkachu
>time (awk 'NR==FNR { lines[$0]=1; next } $0 in lines' "file1" "file2" | sort -k1n > out4)
real 0m8.732s
user 0m8.320s
sys 0m0.261s
#@Cyrus
>time (sort file1 file2 | uniq -d > out8)
real 0m19.382s
user 0m18.726s
sys 0m0.295s
#@Sundeep
>time (awk 'BEGIN{while( (getline k < "file1")>0 ){a[k]}} $0 in a' file2 | sort -k1n > out5)
real 0m8.758s
user 0m8.315s
sys 0m0.255s
#@Sundeep
>time (perl -ne 'BEGIN{ $h{$_}=1 while <STDIN> } print if $h{$_}' <file1 file2 | sort -k1n > out6)
real 0m7.732s
user 0m7.300s
sys 0m0.310s
>head out1
1
2
3
4
5
All outputs are now identical.
|
In awk, this loads the first file fully in memory:
$ awk 'NR==FNR { lines[$0]=1; next } $0 in lines' file1 file2
67
102
Or, if you want to keep track of how many times a given line appears:
$ awk 'NR==FNR { lines[$0] += 1; next } lines[$0] {print; lines[$0] -= 1}' file1 file2
join could do that, though it does require the input files to be sorted, so you need to do that first, and doing it loses the original ordering:
$ join <(sort file1) <(sort file2)
102
67
| Find intersection of lines in two files [duplicate] |
1,391,274,358,000 |
We know that we can get the second column of the line we want from a file using these two techniques:
awk '/WORD/ { print $2 }' filename
or
grep WORD filename| cut -f 2 -d ' '
My questions are:
What are the differences between the two commands above?
Which one has the best performance?
What are the advantages of using awk over using cut, and vice versa?
What options does awk give us over cut and vice versa?
|
The most prominent difference between your two lines would be depending on the input. cut takes a single character in -d as the field delimiter (the default being TAB), and every single occurrence of that character starts a new field. awk, however, is more flexible. The separator is in the FS variable and can be an empty string (every input character makes a separate field), a single character, or a regular expression. The special case of a single space character (the default) means to split on any sequence of whitespace. Also, awk suppresses leading whitespace by default.
Please compare:
$ echo "abc def" | cut -f 2 -d ' '
def
$ echo "abc def" | cut -f 2 -d ' '
$ echo " abc def" | cut -f 2 -d ' '
abc
$ echo "abc def" | awk '{ print $2 }'
def
$ echo "abc def" | awk '{ print $2 }'
def
$ echo " abc def" | awk '{ print $2 }'
def
Here, awk splits on the sequence of spaces between abc and def whereas cut takes every space as a separator.
What you take would depend on what you want to achieve. Otherwise, I would expect cut to be faster since it is a smaller, single purpose tool whereas awk has its own programming language.
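As a small sketch of the flexibility mentioned above — cut's -d accepts a single character only, while awk's separator can be a regular expression:

```shell
# Split on either a comma or a semicolon.
echo 'a,b;c' | awk -F'[,;]' '{ print $2 }'
# b
```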
| What are the exact differences between awk and cut with grep? [closed] |
1,391,274,358,000 |
echo -e 'one two three\nfour five six\nseven eight nine'
one two three
four five six
seven eight nine
how can I do some "MAGIC" do get this output?:
three
six
nine
UPDATE:
I don't need it in this specific way; I need a general solution so that, no matter how many columns are in a row, awk always displays the last column.
|
It can even be done only with 'bash', without 'sed', 'awk' or 'perl':
echo -e 'one two three\nfour five six\nseven eight nine' |
while IFS=" " read -r -a line; do
nb=${#line[@]}
echo ${line[$((nb - 1))]}
done
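For the general awk solution the update asks for: awk stores the number of fields of the current record in NF, so $NF is always the last column, however many columns a row has:

```shell
printf 'one two three\nfour five six\nseven eight nine\n' | awk '{ print $NF }'
# three
# six
# nine
```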
| How to print only last column? |
1,391,274,358,000 |
I have an http link :
http://www.test.com/abc/def/efg/file.jar
and I want to save the last part file.jar to variable, so the output string is "file.jar".
Condition: the link can have a different length, e.g.:
http://www.test.com/abc/def/file.jar.
I tried it that way:
awk -F'/' '{print $7}'
, but the problem is the varying length of the URL, so I need a command which works for any URL length.
|
Using awk for this would work, but it's kind of deer hunting with a howitzer. If you already have your URL bare, it's pretty simple to do what you want if you put it into a shell variable and use bash's built-in parameter substitution:
$ myurl='http://www.example.com/long/path/to/example/file.ext'
$ echo ${myurl##*/}
file.ext
The way this works is by removing a prefix that greedily matches '*/', which is what the ## operator does:
${haystack##needle} # removes any matching 'needle' from the
# beginning of the variable 'haystack'
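To see the greedy/non-greedy difference, compare # (shortest match) with ## (longest match) on a made-up URL:

```shell
myurl='http://www.example.com/long/path/to/example/file.ext'
echo "${myurl#*/}"    # shortest '*/' match strips only "http:/"
echo "${myurl##*/}"   # longest '*/' match strips everything up to the last /
# /www.example.com/long/path/to/example/file.ext
# file.ext
```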
| How to get last part of http link in Bash? |
1,391,274,358,000 |
I am trying to grep the ongoing tail of file log and get the nth word from a line. Example file:
$ cat > test.txt <<EOL
Beam goes blah
John goes hey
Beam goes what?
John goes forget it
Beam goes okay
Beam goes bye
EOL
^C
Now if I do a tail:
$ tail -f test.txt
Beam goes blah
John goes hey
Beam goes what?
John goes forget it
Beam goes okay
Beam goes bye
^C
If I grep that tail:
$ tail -f test.txt | grep Beam
Beam goes blah
Beam goes what?
Beam goes okay
Beam goes bye
^C
But if I awk that grep:
$ tail -f test.txt | grep Beam | awk '{print $3}'
Nothing is printed, no matter how long I wait. I suspect it's something to do with the way the stream works.
Anyone have any clue?
|
It's probably output buffering from grep. You can disable that with grep --line-buffered.
But you don't need to pipe output from grep into awk. awk can do regexp pattern matching all by itself.
tail -f test.txt | awk '/Beam/ {print $3}'
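If you do want a longer pipeline, GNU coreutils' stdbuf can force line-buffered output on tools that lack their own flag (shown here on finite input so it terminates; with tail -f it would stream the same way):

```shell
# Run grep with line-buffered stdout so the next stage sees lines immediately.
printf 'Beam goes blah\nJohn goes hey\nBeam goes bye\n' |
  stdbuf -oL grep Beam | awk '{ print $3 }'
# blah
# bye
```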
| Piping from grep to awk not working |
1,391,274,358,000 |
I have a file which includes comments:
foo
bar
stuff
#Do not show this...
morestuff
evenmorestuff#Or this
I want to print the file without including any of the comments:
foo
bar
stuff
morestuff
evenmorestuff
There are a lot of applications where this would be helpful. What is a good way to do it?
|
One way to remove all comments is to use grep with -o option:
grep -o '^[^#]*' file
where
-o: prints only matched part of the line
first ^: beginning of the line
[^#]*: any character except # repeated zero or more times
Note that empty lines will be removed too, but lines with only spaces will stay.
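A sed sketch of the same idea — unlike grep -o, the substitution alone would keep lines that become empty, so a second expression drops them:

```shell
# Strip everything from '#' to end of line, then delete emptied lines.
printf 'foo\n#Do not show this...\nevenmorestuff#Or this\n' |
  sed -e 's/#.*//' -e '/^$/d'
# foo
# evenmorestuff
```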
| How can I remove all comments from a file? |
1,391,274,358,000 |
How can I print all the lines between two lines starting with one pattern for the first line and ending with another pattern for the last line?
Update
I guess it was a mistake to mention that this document is HTML. I seem to have touched a nerve, so forget that. I'm not trying to parse HTML or do anything with it other than print a section of a text document.
Consider this example:
aaa
bbb
pattern1
aaa pattern2
bbb
ccc
pattern2
ddd
eee
pattern1
fff
ggg
Now, I want to print everything between the first instance of pattern1 starting at the beginning of a line and pattern2 starting at the beginning of another line. I want to include the pattern1 and pattern2 lines in my output, but I don't want anything after the pattern2 line.
pattern2 is found in one of the lines of the section. I don't want to stop there, but that's easily remedied by indicating the start of the line with ^.
pattern1 appears on another line after pattern2, but I don't want to look at that at all. I'm just looking for everything between the first instance of pattern1 and the first instance of pattern2, inclusive.
I found something that almost gets me there using sed:
sed -n '/^pattern1/,/^pattern2/p' inputfile.txt
... but that starts printing again at the next instance of pattern1
I can think of a method using grep -n ... | cut -f1 -d: twice to get the two line numbers then tail and head to get the section I want, but I'm hoping for a cleaner way. Maybe awk is a better tool for this task?
When I get this working, I hope to tie this into a git hook. I don't know how to do that yet, either, but I'm still reading and searching :)
Thank you.
|
You can make sed quit at a pattern with sed '/pattern/q', so you just need your matches and then quit at the second pattern match:
sed -n '/^pattern1/,/^pattern2/{p;/^pattern2/q}'
That way only the first block will be shown. The use of a subcommand ensures that ^pattern2 can cause sed to quit only after a match for ^pattern1. The two ^pattern2 matches can be combined:
sed -n '/^pattern1/,${p;/^pattern2/q}'
| Print lines of a file between two matching patterns [duplicate] |