| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,428,267,066,000 |
I need to compile a C++ application to run on Red Hat 5.9, but I don't have access to a development server that runs Red Hat 5.9.
My current executable compiled on Ubuntu 10.04 produces the error message
/lib/libc.so.6: version `GLIBC_2.11' not found
which probably means that Red Hat is using an older libc. Which free Linux distribution should I use to compile for Red Hat 5.9?
I read that Red Hat is based on Fedora and RHEL 5.x is based on Fedora Core 6. Do I really have to use such an old system to compile for a rather recent RHEL 5.9?
|
CentOS 5.9 is binary compatible with RHEL 5.9. You can even fire it up in a VM.
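Before setting up the VM, it can help to see which glibc symbol versions your current binary actually requires. A quick sketch, assuming a glibc-linked ELF (objdump -T or readelf -V give the authoritative view; this grep approximation works without binutils installed):

```shell
# List the GLIBC_x.y version tags referenced by a dynamically linked binary.
# Replace /bin/sh with your own executable.
grep -aoE 'GLIBC_2\.[0-9.]+' /bin/sh | sort -Vu
```

If the highest version listed is newer than what the target's libc provides (glibc 2.5 on RHEL 5), you get exactly the `version GLIBC_2.11 not found` error from the question.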
| Compiling for Red Hat 5.9 |
1,428,267,066,000 |
I have a script (say Task.sh) (with executable permission):
Pseudocode:
if [[ Not called from inside a script ]]; then
echo 'Not called from inside a script'
else
echo 'Called from a script'
fi
I have a script say Main.sh (with executable permission):
Value=$(Task.sh)
echo "${Value}"
Expected Outcome
$ Task.sh
Not called from inside a script
$ Main.sh
Called from a script
Help Request
Please suggest what conditional to put in the pseudocode for Task.sh
|
One option is to change the requirements slightly from "am I running in a script?" to "am I connected to a terminal or a pipe/file?". This would allow for the case of Task.sh >/tmp/file: it's not called from a script, but it seems it should write to the file rather than the clipboard.
If that's acceptable then you can use a simple test for stdout connected to a terminal:
if [ -t 1 ]
then
echo "stdout is a terminal (tty)"
else
echo "not a terminal (tty)"
fi
Tools like ls and tty behave differently depending on how they are invoked, using a very similar approach. For example, in a directory with several entries, contrast ls with ls | cat.
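Putting the pieces together, a minimal Task.sh based on the terminal test might look like this (a sketch; the file name comes from the question):

```shell
#!/bin/sh
# Task.sh -- behave differently depending on whether stdout is a terminal.
if [ -t 1 ]; then
    echo 'stdout is a terminal (tty)'
else
    echo 'not a terminal (tty)'
fi
```

Running it from Main.sh as Value=$(Task.sh) replaces stdout with a pipe, so it reports "not a terminal", matching the "called from a script" case; note it reports the same when you redirect its output to a file by hand.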
| Change behavior of bash script if executed from another script |
1,428,267,066,000 |
Can a process execute a new program without the kernel knowing? Usually, when a program is executed, the kernel gives it its own process after receiving a syscall (such as exec() or fork()). In that case everything goes through the kernel, which finally starts the program with, for example, the ELF handler. In practice, of course, when running a new program you want a separate process for it, but what if that's not necessary? Could a program/process xhathat transfer (if it doesn't already have it) an executable binary from the file system (yes, with syscalls) into its own virtual memory area and start executing it inside its own process? In that case the kernel would still know that the program/process xhathat is doing something, but it wouldn't separately know about the program executed by xhathat?
As to "without the kernel knowing"....what does that mean??
What I mean is that the kernel doesn't actually "start/execute the program" (normally the kernel _always_ does that, whether it's a binary or an interpreted script with a shebang), only indirectly. Yes, it loads the new program from mass storage into the memory area of this xhathat process, but it does not start/execute it and is not aware of its starting. It doesn't perform exec(), fork(), etc. system calls for the various starts/executions. When the kernel doesn't consciously launch a program, the program also doesn't show up among the processes, for example (because it's just an "execution" inside the xhathat process). Do you follow? As far as I can see, this is possible with compiled binaries (and of course also with interpreted programs). This only came to mind when I realized that while bash or systemd or whatever appears to "start/execute" programs/processes, normally the kernel actually does the starting (even for shebang scripts, as I stated earlier). After learning this, I had to wonder whether it always has to be this way. I take your answer to mean it need not be so, although it is usually better that the kernel starts all programs/processes, and that is what is done.
By the way, what (simply put) is the kernel needed for in what I'm describing? The only thing I came up with was loading this new program from mass storage, but what if the program were already in main memory and didn't need to be loaded from mass storage separately? Could the process just directly start executing a new program/"process" without any interaction with the kernel?
|
Yes, this is possible. The already-running process needs to load (or map) the new program at the appropriate locations in the process’ virtual address space, load the dynamic loader if necessary, set up the required data structures on the stack, and jump to the new program’s entry point in memory. (Many of these operations involve the kernel, but nothing specific to loading a new program.)
Processes can’t create entirely new address spaces without fork-style help from the kernel, but that’s typically not much of a problem because the initiating program shouldn’t expect to regain control after the new program runs, and therefore it doesn’t matter that the two programs share their address space.
See the grugq’s Design and Implementation of Userland Exec for a more detailed explanation.
| Can process execute new program without the kernel knowing? |
1,428,267,066,000 |
Are there any Unix/Linux filesystems that do the following?
if the file is executable, return a virtual file containing the stdout generated by executing it (it would have to be non-writable, I suppose);
otherwise behave the same as ext4 and friends and provide the file itself.
I recently had a situation where I couldn't pipe information into a process but had to pass a (multiline) file as an option. Being able to generate that file on the fly (locally or over the network) seemed to me to be an elegant alternative.
Creating a file on /tmp was not an option because the option in question was in an attribute of an LDAP entry. My LDAP entry contains an attribute value somewhat like the following, which is statically defined:
-fstype=cifs...,rw,credentials=FILENAME,... ://remote
This is passed as data to a process that evaluates it and expects a FILENAME as a parameter to the credentials= option. There is no Bash or any other script. And I have no way that I am aware of to create a /tmp file on the fly in LDAP.
Please don't ask why I want to do it: I solved my problem with a work-around, but I'm still interested in the fundamental question.
Of course, there is always the bonus question that goes with this sort of thing: can it be done safely?
Steve
|
Ok, let's see if I got this right: you have some data in LDAP that refers to some filename (among other things). Presumably that file is on some network share readable by all the hosts using that LDAP directory, and you'd like the contents of the file to be created dynamically. So, e.g., the directory contains credentials=/ldap/foo.cred, and when some system opens /ldap/foo.cred, it gets the dynamic data.
There appears to be a program called ScriptFS, a FUSE (Filesystem in Userspace) implementation of pretty much exactly what you ask, but I'm not familiar with the tool, and don't know how well it works. (@KamilMaciorowski mentioned this in a comment and in their answer on superuser.)
ScriptFS is a new file system which merely replicates a local file system, but replaces all files detected as "scripts" by the result of their execution.
FUSE, like the name implies, allows a userspace program to implement a filesystem, like any other, and the kernel arranges for file access requests to go to the process responsible for dealing with the filesystem. This would allow arbitrary dynamic content to be generated. Presumably, it would also work over network shares, since the files appear as regular files, but I have no personal experience of using it.
Out of the more "traditional" features, the ones that get close are named pipes and named sockets.
Named pipes created with mkfifo are like the pipes you use when you do somecmd | grep foo, except that they have a name in the filesystem and can be opened from there. So you could write mkfifo p; somecmd > p & grep foo < p. Or similarly the reader could open it first. Both readers and writers block until there's someone at the other end. At first, this seems like it could do what you want, you could arrange for a program to write at the pipe, and give the current output when someone opens it for reading. However, pipes only exist once, so concurrent users would attach to the same stream. Also, I have absolutely no idea how they work over network shares. It's possible that the pipe will exist only on the client side, so both ends will need to be on the same system.
Unix domain sockets can also exist with a name in the filesystem (if you're running systemd, you can probably find a few under /run). When connected to, the opening process is connected to the process listening on the socket (as with a TCP connection). The connections here are independent, but the catch is that (AFAIK) Unix domain sockets can't be opened with open(), but need to be connected to with connect(). Hence, you can't do cat < socket, it won't work.
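A quick named-pipe round trip, as a sketch of the mkfifo behaviour described above (using a throwaway directory):

```shell
# Create a FIFO, write to it in the background, and read it back.
# Both ends block until the other side opens the pipe.
dir=$(mktemp -d)
mkfifo "$dir/pipe"
echo 'hello via fifo' > "$dir/pipe" &   # writer blocks until a reader appears
cat "$dir/pipe"                         # prints: hello via fifo
rm -r "$dir"
```

The data flows through the kernel, not the filesystem: nothing is stored on disk, which is exactly why concurrent readers attach to the same single stream.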
| Do 'executable' file systems exist? |
1,428,267,066,000 |
The context
I've downloaded scilab-6.1.0.bin.linux-x86_64.tar.gz from the official website of "Scilab" because I want to be able to use the provided tools.
Within the bin directory of the downloaded files, I have the following files:
$ ls -l | cut -d ' ' -f 5-
1713591 Feb 25 05:27 modelicac
2057719 Feb 25 05:27 modelicat
44563 Feb 25 05:27 scilab
6 Feb 25 05:27 scilab-adv-cli -> scilab
24741 Feb 25 05:27 scilab-bin
6 Feb 25 05:27 scilab-cli -> scilab
20725 Feb 25 05:27 scilab-cli-bin
44563 Feb 25 05:27 scinotes
44563 Feb 25 05:27 xcos
675942 Feb 25 05:27 XML2Modelica
$ test -L scilab-adv-cli && test -L scilab-cli && echo $?
0
As we can see, both scilab-cli and scilab-adv-cli are symbolic links to scilab. Executing scilab-cli, scilab-adv-cli and scilab yields different results (see gif below).
The question
Isn't a symbolic link (A), which points to an executable (B), supposed to execute (B)?
In the scenario presented above, scilab-cli and scilab-adv-cli would be (A) and scilab would be (B).
|
Running a symlink which points to an executable does indeed run the executable, but there is one important difference: the first argument given to the new process, which (in this case) stores the command given, gives the name of the symlink, not the name of the target executable. This allows programs to implement different behaviours depending on how they’re called.
One common instance which is likely to be installed on your system is apropos: it’s typically (on Linux systems at least) a symlink to whatis, but the two commands behave differently.
In your case, when scilab is run as scilab-cli, it presents its text-mode interface; when it’s run as scilab (as happens with your realpath approach), it starts its GUI.
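You can reproduce this behaviour with a small script that inspects the name it was invoked under (a sketch; multicall.sh and multicall-cli are made-up names):

```shell
#!/bin/sh
# multicall.sh -- one file, different behaviour per invocation name,
# just like the scilab / scilab-cli pair.
case "$(basename "$0")" in
    multicall-cli) echo 'text-mode behaviour' ;;
    *)             echo 'default (GUI) behaviour' ;;
esac
```

After ln -s multicall.sh multicall-cli, running ./multicall.sh and ./multicall-cli prints different lines even though both names resolve to the same file.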
| Executing a symbolic link yields different results than executing the file to which it points |
1,428,267,066,000 |
This is on Arch Linux. Take a look at this:
[saint-llama@hubs bin]$ lsattr
--------------e----- ./install_fnp.sh
--------------e----- ./toolkitinstall.sh
--------------e----- ./FNPLicensingService
[saint-llama@hubs bin]$ file FNPLicensingService
FNPLicensingService: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-lsb-x86-64.so.3, for GNU/Linux 2.6.18, stripped
[saint-llama@hubs bin]$ ldd FNPLicensingService
linux-vdso.so.1 (0x00007ffcbafd8000)
libdl.so.2 => /usr/lib/libdl.so.2 (0x00007f870ce06000)
librt.so.1 => /usr/lib/librt.so.1 (0x00007f870cdfb000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007f870cdd9000)
libm.so.6 => /usr/lib/libm.so.6 (0x00007f870cc93000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007f870cc79000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007f870cab2000)
/lib64/ld-lsb-x86-64.so.3 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007f870ce60000)
[saint-llama@hubs bin]$ sudo ./FNPLicensingService
sudo: unable to execute ./FNPLicensingService: No such file or directory
So it exists for sure. ldd shows all the libs are linked. file shows that it's a 64-bit ELF (and I'm on a 64-bit install).
What gives? Why am I getting "No such file or directory"?
|
This command fixed it for me on Arch Linux allowing me to run the elf binary:
sudo pacman -Syy ld-lsb lsb-release
For other flavors of linux,
You should either install the ld-lsb package (or lsb-compat or any similar package which contains ld-lsb-x86-64.so.3) or create a wrapper / executable script that starts your program via the existing dynamic linker:
#! /bin/sh
/usr/lib64/ld-linux-x86-64.so.2 ./FNPLicensingService "$@"
What gives? Why am I getting "No such file or directory"?
That's a well known wart. Despite displaying the path of the binary, the error message is about the dynamic linker / ELF interpreter required by the binary not existing, not about the binary itself.
The output of ldd does NOT tell you whether the dynamic linker really exists; ldd nowadays uses a dynamic linker from a list of "safe paths" instead of the one burned into the binary, in order to prevent users who run ldd on random binaries from harming themselves. Its output is also confusing and misleading for binaries whose interpreter doesn't exist. Simple example:
$ cp /bin/sh /tmp/sh
$ patchelf --set-interpreter /no/such/file /tmp/sh
$ /tmp/sh
bash: /tmp/sh: No such file or directory
$ ls /tmp/sh
/tmp/sh
$ file /tmp/sh
/tmp/sh: ELF 64-bit LSB ..., interpreter /no/such/file, ...
$ ldd /tmp/sh
...
/no/such/file => /lib64/ld-linux-x86-64.so.2 (0x00007fc60d225000)
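The same confusing symptom can be reproduced without patchelf, using a script whose shebang interpreter doesn't exist; here the missing file is the interpreter, not the script itself (a sketch in a throwaway directory):

```shell
dir=$(mktemp -d); cd "$dir"
printf '#!/no/such/interpreter\necho hi\n' > demo
chmod +x demo
ls demo      # the file itself clearly exists
./demo       # fails; depending on the shell the message is "bad interpreter"
             # or just "No such file or directory"
```

In both the ELF case and the shebang case, the "No such file or directory" refers to the program named on the first line of the file, not to the file you typed.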
| File definitely exists. Get "No such file or directory" when trying to run it |
1,428,267,066,000 |
I know that a root user can read a file even if the access permissions are all set to 0, but I don't understand the write and execute permissions specifically. Can a superuser write and execute a file with permissions 000?
|
Root can write to it the same way it can read it: being root trumps those permission bits. But with execution it's a different story. If a file is not marked as executable, then it's not considered executable, even for root. However, once it's marked executable, it doesn't have to be readable for root to execute it (even if it is a script), unlike for regular users.
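The execute half is easy to check, and it holds for any user, root included, since a file with no execute bits set at all is refused by execve() regardless of who calls it. A sketch:

```shell
dir=$(mktemp -d); cd "$dir"
printf '#!/bin/sh\necho ran\n' > f
chmod 000 f
./f          # denied: no execute bit is set, so even root gets EACCES here
chmod 700 f
./f          # prints: ran
```

So the superuser's read/write override does not extend to executing a mode-000 file: root would first have to chmod it.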
| Can superuser write a file having 000 access permissions? |
1,428,267,066,000 |
I can't understand what's going on. Yesterday everything was working normally. Literally all I did since then was clear the bash history with history -cw. Today, all I get when trying to execute my g++-compiled programs is a complete void:
The same code works perfectly fine when compiled on another machine. I can't even think of how to search for the reason! This is very bizarre, I don't understand how this could happen.
|
Wrong direction on your slash. Use / instead:
[rinkaru@localhost ~]$ ./6.out
| What could have caused g++ (and clang++) executables to stop working? [closed] |
1,428,267,066,000 |
I thought my compilation was okay because no errors were printed out, yet when I try to run the executable, it tells me that it cannot be found...
coppan12@b048-08:~$ gcc -Wall prog.c -o prog
coppan12@b048-08:~$ prog
La commande « prog » est introuvable (French: "The command 'prog' could not be found")
any hint?
|
Try
./prog
to run prog in the current working directory, as . is typically not (nor should be) in PATH.
Also, a Makefile is perhaps much more sensible, as then you can simply type make test and have the program built (if necessary) and tested:
prog: prog.c
test: prog
echo blah de blah | ./prog
A Makefile can also integrate with emacs- or vim-based testing, among other advantages... (disadvantage: Makefiles use tabs, so ensure any rules are indented with tabs, not spaces, sigh.)
| Is my compilation false? |
1,428,267,066,000 |
I am using the command -v "program" >/dev/null 2>&1 construct if I need to POSIX-ly find out if such program is installed in an if-statement.
Its help page does not make clear whether it checks only for presence in the path or also for the executable bit:
$ command --help
command: command [-pVv] command [arg ...]
...
-v print a description of COMMAND similar to the `type' builtin
...
Thank you.
|
command -v uses the PATH to find the executable.
It also checks the permission. If you try command -v a_non_executable_file, nothing will be printed.
You can try strace bash -c 'command -v grep': you will see that the access(2) system call is executed (this system call checks the user's permissions for a file). command -v searches PATH for the first file you can execute.
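This is easy to verify yourself: put an identically named file on PATH with and without the execute bit and compare (a sketch using a throwaway directory and a made-up tool name):

```shell
dir=$(mktemp -d)
printf '#!/bin/sh\necho found\n' > "$dir/mytool_qx7"   # no execute bit yet
PATH="$dir:$PATH"
command -v mytool_qx7 || echo 'not reported'   # not executable -> nothing found
chmod +x "$dir/mytool_qx7"
command -v mytool_qx7                          # now prints the full path
```

So command -v program is a reasonable POSIX test for "installed and runnable", not just "a file with that name exists somewhere in PATH".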
| Is `command -v program` always executable? |
1,428,267,066,000 |
Background
Using xfce4-terminal as an example, I want to run the program by just typing term.
When I type which xfce4-terminal it returns /usr/bin/xfce4-terminal.
So, I created a symbolic link: sudo ln -s /usr/bin/xfce4-terminal /usr/bin/term.
Now I can execute the program by typing term.
Question
I've been wondering if this is a reasonable way to achieve the purpose. Asking for the best method (without even researching) may be a subjective question, but here I'm asking for a standard way, if any, or any pros and cons my method may have.
I have thought of the following alternatives:
Rename the program: mv xfce4-terminal term
Use environmental variables (but there are many ways to do this...)
Avoid being lazy in the first place
I already know that:
I can utilize Tab to auto-complete
(In GUI) I can utilize shortcuts like Ctrl+Alt+T
Creating symbolic link has risk of conflict if another program was named "term"
Renaming has risk of breaking dependency
Environment:
uname --all returns:
Linux debian 4.19.0-18-amd64 #1 SMP
Debian 4.19.208-1 (2021-09-29)
x86_64 GNU/Linux
|
The safest way is to use an alias, as has already been mentioned in the comments.
Symlinking will work as well, but be aware that some programs change their behaviour depending on the name by which they were called.
E.g. bash scripts that check the content of $0 and call different functions depending on it.
Renaming the program itself is asking for trouble:
Anything that depends on xfce4-terminal will fall flat on its face after the renaming.
And you'll have two versions of the binary on your system with the next update :-/
| Reasonable way to customize ( change ) an executable program's command ( name ) , in terminal |
1,428,267,066,000 |
I'm using Linux debian 4.9.0-kali4-amd64 #1 SMP Debian 4.9.30-1kali1 (2017-06-06) x86_64 GNU/Linux and want to make /home/pantheon/Desktop/pycrust-20170611-2151.sh run in a terminal when I click on it. The file is written in Python:
#!/bin/env python
import os
os.system("cd /home/pantheon/Desktop/fluxion")
os.system("sudo ./fluxion")
I have tried with
chmod +x /home/pantheon/Desktop/pycrust-20170611-2151.sh and chmod u+x <"">
To execute in terminal, ./home/pantheon/Desktop/pycrust-20170611-2151.sh, which gives me the error
bash: ./home/pantheon/Desktop/pycrust-20170611-2151.sh: No such file or directory.
/home/pantheon/Desktop/pycrust-20170611-2151.sh gives me the output
bash: /home/pantheon/Desktop/pycrust-20170611-2151.sh: /bin/env: bad interpreter: No such file or directory. (Overlined text as it is not what I want to do. I do not know about the errors, though).
I also have tried to tweak Nautilus, but that didn't help me either, as executing the file in terminal that way results in a There was an error creating the child process for this terminal. Failed to execute child process "/home/pantheon/Desktop/pycrust-20170611-2151.py" (No such file or directory)
I have done this.
sudo ls -l /home/pantheon/Desktop/pycrust-20170611-2151.sh gives me the output -rwxr-xr-x 1 pantheon pantheon 103 Jun 11 23:02 /home/pantheon/Desktop/pycrust-20170611-2151.sh
I have looked on many other forums, but have not found an answer to my problem. I think the most easy thing to do is to just ask you for help. For example, I did either not understand or get any help from the following questions: How to automatically “Run in Terminal” for script in CentOS linux, https://stackoverflow.com/questions/19509911/how-to-make-python-script-executable-when-click-on-the-file, https://askubuntu.com/questions/138908/how-to-execute-a-script-just-by-double-clicking-like-exe-files-in-windows and so on.
I know I can open a terminal and just run it as a .py file, but that is not what I want to do. I want it to run automatically in the terminal when clicking on the .sh (or .py) file.
|
You need to use
#!/usr/bin/env python
as your shebang (note the /usr).
| Run a file automatically in terminal when you click on it |
1,428,267,066,000 |
I have a folder on my HDD /media/kalenpw/HDD/Documents/ShellScripts that is full of various scripts I would like to have accessible from any directory. My previous strategy was copying all of the files into /usr/local/bin this worked, but was tedious when updating scripts having to change in two places.
Luckily, I recently learned about symlinks and they are the perfect fit.
I made a test script in my home folder
test.sh
print "Hello"
then I did ln ~/test.sh /usr/local/bin and, as expected, I could execute test.sh from anywhere.
The issue I'm having is I would prefer to keep all my documents on my HDD(at the directory given earlier). However, you can't link between drives so as expected I got an error
Invalid cross-device link
so I tried doing a symbolic link like so: sudo ln -s ./test.sh /usr/local/bin/ which created a link as expected. However, I cannot execute test.sh from any directory (or even at all) like I would like. To ensure the file didn't lose permissions in the linking, from /usr/local/bin I ran sudo chmod +x ./test.sh and got an error:
chmod: cannot access './test.sh': Too many levels of symbolic links
I can't imagine there isn't a way to do this as it seems like a common usage, but I couldn't figure out how.
Summary: how can I create a link from one file to another on a different physical drive and still retain the ability to execute the linked file.
|
1) The proper way to access lots of scripts is to just add the directory the scripts are in to your $PATH. For example, I have my personal scripts in ~/bin, so in my .profile, I have a line
export PATH=$HOME/bin:$PATH
That puts my ~/bin in front of the existing paths, so I'm able to "overwrite" other programs by having scripts with the same name. If you don't want that, put new directories after $PATH.
So just add the directory you keep your scripts in to your path, and your problem is solved - completely without symlinks.
2) Background: on a particular filesystem, files are identified by their inode number. A directory just maps file names to inode numbers. If you use ln without -s (hardlinks), you are making a new directory entry with the inode of an existing file. So, obviously, this can only work for files on the same filesystem.
OTOH, if you use ln -s, you are making a symbolic link (symlink): A special file that has as contents the path you specify, and this path is used instead of the file when you try to access it. You don't need to be root to make symlinks.
3) When you do ln ~/test.sh /usr/local/bin, the ln command detects that /usr/local/bin is a directory, so it assumes you really want to execute ln ~/test.sh /usr/local/bin/test.sh. The same happens with -s. It's important to keep this in mind, because you can also make symlinks to directories. But only root can make hardlinks to directories, because you could create a circular directory structure this way (and root should know enough not to do that).
4) While a hardlink does have file mode bits, a symlink doesn't: Any attempt to chmod a symlink will just change the file mode bits on the file that it points to.
5) I don't know what happened when you couldn't execute test.sh, the Too many levels of symbolic links error message indicates you have other symbolic links somewhere, so something got messed up. I'd need to see your directory structure to find out what happened.
6) If you really want to symlink every single script in your script directory to /usr/local/bin/ instead of just setting the PATH (I don't recommend that), consider using stow instead: This program sets many symlinks at once. man stow for details.
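The whole flow from option 1, as a sketch (using a throwaway directory in place of /media/kalenpw/HDD/Documents/ShellScripts):

```shell
scriptdir=$(mktemp -d)                 # stand-in for your scripts folder
printf '#!/bin/sh\necho Hello\n' > "$scriptdir/test.sh"
chmod +x "$scriptdir/test.sh"
export PATH="$scriptdir:$PATH"         # put this line in ~/.profile to make it permanent
cd / && test.sh                        # runs from any directory; prints: Hello
```

Because PATH lookup doesn't care which filesystem the directory lives on, the cross-device problem disappears entirely.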
| Execute symlink on different physical drives |
1,428,267,066,000 |
They should both find the files that are executable, but I get different numbers:
[user@j6727961 ~]$ sudo find /usr -perm /a=x | nl
1 /usr
2 /usr/bin
3 /usr/bin/nroff
4 /usr/bin/gzexe
5 /usr/bin/catchsegv
6 /usr/bin/diff
7 /usr/bin/gzip
8 /usr/bin/gencat
9 /usr/bin/diff3
10 /usr/bin/zcat
11 /usr/bin/getent
12 /usr/bin/sdiff
13 /usr/bin/zcmp
14 /usr/bin/iconv
15 /usr/bin/db_recover
16 /usr/bin/ldd
17 /usr/bin/unxz
18 /usr/bin/zdiff
19 /usr/bin/locale
20 /usr/bin/xz
21 /usr/bin/zgrep
22 /usr/bin/localedef
23 /usr/bin/xzcat
-
-
-
-
17112 /usr/local/share/man/man8x
17113 /usr/local/share/man/man9
17114 /usr/local/share/man/man9x
17115 /usr/local/share/man/mann
17116 /usr/local/src
17117 /usr/src
17118 /usr/src/debug
17119 /usr/src/kernels
17120 /usr/tmp
and with the -executable flag:
[user@j6727961 ~]$ sudo find /usr -executable | nl
[sudo] password for user:
1 /usr
2 /usr/bin
3 /usr/bin/nroff
4 /usr/bin/gzexe
5 /usr/bin/catchsegv
6 /usr/bin/diff
7 /usr/bin/gzip
8 /usr/bin/gencat
9 /usr/bin/diff3
10 /usr/bin/zcat
11 /usr/bin/getent
12 /usr/bin/sdiff
13 /usr/bin/zcmp
14 /usr/bin/iconv
15 /usr/bin/db_recover
16 /usr/bin/ldd
17 /usr/bin/unxz
18 /usr/bin/zdiff
-
-
-
-
12218 /usr/local/share/man/man4x
12219 /usr/local/share/man/man5
12220 /usr/local/share/man/man5x
12221 /usr/local/share/man/man6
12222 /usr/local/share/man/man6x
12223 /usr/local/share/man/man7
12224 /usr/local/share/man/man7x
12225 /usr/local/share/man/man8
12226 /usr/local/share/man/man8x
12227 /usr/local/share/man/man9
12228 /usr/local/share/man/man9x
12229 /usr/local/share/man/mann
12230 /usr/local/src
12231 /usr/src
12232 /usr/src/debug
12233 /usr/src/kernels
12234 /usr/tmp
|
According to man find:
-perm /mode
Any of the permission bits mode are set for the file.
So -perm /a+x will match a file with any executable bit set.
-executable
Matches files which are executable and directories which are
searchable (in a file name resolution sense). This takes into
account access control lists and other permissions artefacts
which the -perm test ignores. This test makes use of the
access(2) system call, and so can be fooled by NFS servers which
do UID mapping (or root-squashing), since many systems implement
access(2) in the client's kernel and so cannot make use of the
UID mapping information held on the server. Because this test
is based only on the result of the access(2) system call, there
is no guarantee that a file for which this test succeeds can
actually be executed.
So -executable will match a file that the current user can access according to the access() system call.
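You can see the mechanical difference with two throwaway files: -perm /a=x looks only at the mode bits, while -executable asks access(2) whether *you* could run the file. A sketch:

```shell
dir=$(mktemp -d)
touch "$dir/plain"; chmod 644 "$dir/plain"   # no x bits anywhere
touch "$dir/tool";  chmod 755 "$dir/tool"    # x bits set
find "$dir" -type f -perm /a=x    # matches only "tool": some x bit is set
find "$dir" -type f -executable   # also consults access(2); for the calling
                                  # user the answer can differ from the raw
                                  # mode bits, e.g. with ACLs or UID-mapped NFS
```

In a plain /usr tree the two mostly agree on files; the counts diverge where the raw mode bits and what the current user may actually do disagree.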
| What's the difference between sudo find /usr -perm /a=x and sudo find /usr -executable |
1,428,267,066,000 |
I am issuing the following rsync command (obfuscating the exact path & host), which should copy two binaries:
rsync -e "ssh -p 443" -a --info=progress2 -z user@remote:/srv/cgi-bin/ .
user@remotes's password:
5,970,149 100% 6.78MB/s 0:00:00 (xfr#2, to-chk=0/3)
From every standpoint I can tell, the command completed successfully.
root@rapunzel:/s/h/p/cgi-bin # >>> ll
total 5.7M
-rwxrwxr-x 1 marcus marcus 3.9M Sep 10 2014 german-numbers
-rwxrwxr-x 1 marcus marcus 1.9M Sep 10 2014 german-numbers-cgi
But when I attempt to run any of the binaries, I get the following error
root@rapunzel:/s/h/p/cgi-bin # >>> ./german-numbers
zsh: no such file or directory: ./german-numbers
So it seems that the binary is not "quite there", but on the other hand I can clearly open and read it:
root@rapunzel:/s/h/p/cgi-bin # >>> head -n1 ./german-numbers
ELF04��'4('$444444�=%�=%@%����t��I%������HHHDDP�tdT;%T�T�llQ�td/lib/ld-linux.so.2GNUGNU,B�]2�h$���ɢҒ�S'��� @�P
����ݣkĉ����|(�CE���K��8��]���?�������g���FcH▒3
�_��,�%}�??��>fM7�sn�������F����A{S3a��������,b��)�P▒h�wza�S~�y��*-��y�L
�����m���<�lp�6����W$%xZ�G��X{��V���� �!� �!�t����@��ԅ����I��ԅ��� ��Ș
��librt.so.1__gmon_start___Jv_RegisterClasseslibutil.so.1libdl.so.2libgmp.so.3__gmpz_tdiv_qr__gmpz_and__gmpz_tdiv_q__gmpz_tdiv_r__gmpz_fdiv_qr__gmpn_gcd_1__gmpz_fdiv_q__gmpz_fdiv_r__gmpz_ior__gmpz_mul_2exp__gmp_set_memory_functions_fini__gmpz_sub__gmpz_xor__gmpz_com__gmpz_gcd__gmpz_fdiv_q_2exp__gmpz_init__gmpz_mul__gmpz_divexact__gmpn_cmp__gmpz_addclock_gettimetimer_deletetimer_settimetimer_createdlopendlerrordlsymlibm.so.6modfldexplibc.so.6_IO_stdin_usedepoll_createfflushstrcpysprintfsetlocalefopenstrncmpftruncatestrrchrregexecpipeftruncate64mmap64siginterruptepoll_waitftellstrncpyforksigprocmaskregfreeunlinkpthread_mutex_lockselectmkdirreallocabortgetpidkillstrdupmkstempstrtodstrtolisattysetmntentmmapctime_rfeoffgetscallocstrlensigemptysetmemset__errno_locationtcsetattrfseekgetpagesizeeventfddup2pause__fxstat64sigaddsetpthread_mutex_unlockstdoutfputcgetrusagefputsregerrormemcpyfclosemprotectmallocraisegetgid__lxstat64nl_langinfohasmntopt__xstat64getenv__ctype_b_locregcompstderrsigdelsetmunmapgetuidgetegid__sysv_signalpthread_mutex_initfwritefreadgettimeofdayiconv_closesigactionepoll_ctlstatfsgeteuidlseek64strchrendmntentutimegetlineiconviconv_opentcgetattrbsearchfcntlgetmntent_rmemmovefopen64access_IO_getcstrcmpstrerror__libc_start_mainvfprintfsysconf__environ__cxa_atexit_edata__bss_start_endGLIBC_2.1GLIBC_2.0GLIBC_2.2GLIBC_2.3GLIBC_2.7GLIBC_2.3.4GLIBC_2.3.2GLIBC_2.1.3
I am out of ideas why zsh (and for the record: bash aswell) wont find that file, does anybody have an idea?
|
You are trying to run a 32-bit executable on a 64-bit system. To do so you have to install the 32-bit libraries; in short, you have to make your system multilib.
Alternatively, recompile on the target by sending the sources through rsync as well.
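To confirm the mismatch before chasing libraries, compare the binary's ELF class with the machine's architecture. A sketch using only coreutils (/bin/sh stands in for ./german-numbers; where installed, file ./german-numbers reports the same thing directly):

```shell
uname -m                         # machine architecture, e.g. x86_64
# The 5th byte of an ELF file is its class: 01 = 32-bit, 02 = 64-bit.
head -c5 /bin/sh | od -An -tx1   # e.g. " 7f 45 4c 46 02" on a 64-bit system
```

A class byte of 01 on an x86_64 machine without multilib produces exactly the silent "no such file or directory" from the question, because the 32-bit dynamic linker named in the binary is missing.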
| Executable file isn't "really there" after rsync [duplicate] |
1,428,267,066,000 |
I am in my home directory. And there is an executable a.out in there. I want to execute it like-
/bin/csh ~/a.out
^F^E@@@@@▒^A▒^A^H^C^D^B^B@^B@^\^\^A^A^E@@: Event not found.
Its not that I cannot simply run
./a.out
that works perfectly fine. But I want to know why it is not working the other way round?
Also,
/bin/csh tmp/script
works fine where script is a normal text file containing some shell commands like echo
|
/bin/csh filename tells the shell to read shell commands from filename. If you want the shell to execute the file (which is not the same thing), you should use /bin/csh -c ./a.out.
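The distinction in a sketch, shown with sh since csh behaves the same way with -c (a copy of /bin/echo stands in for the compiled a.out):

```shell
dir=$(mktemp -d); cd "$dir"
cp /bin/echo ./a.out            # stand-in for a compiled binary
/bin/sh -c './a.out it works'   # -c: run the file as a command; prints: it works
# /bin/sh ./a.out               # without -c the shell tries to *read* the
                                # binary as a script, producing garbage errors
                                # like the "Event not found" above
```

That is why /bin/csh tmp/script works (the file really is shell commands) while /bin/csh ~/a.out chokes on binary bytes.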
| Error while executing a.out from C shell |
1,428,267,066,000 |
I don't know if I'm permitted to ask about zPanel here, but I'll go ahead and try and hope for the best. Stackexchange has helped me a lot in different areas...
I was following the instructions to install zPanel on my freshly installed CentOS 6.4 x64 VPS, but I'm facing this error which won't let me finish step 5 from this guide: http://www.zvps.co.uk/zpanelcp/centos-6
So, this is how I'm doing it:
[root@img ~]# ./installer-10-1-0-centos-64.sh.x
-bash: ./installer-10-1-0-centos-64.sh.x: /lib64/ld-linux-x86-64.so.2: bad ELF interpreter: No such file or directory
I've read through all online solutions and tried running these commands:
yum install glibc.i686 and yum install glibc.i386
For the first package, it said it was installed. The second one wasn't found in my system.
I've also ran this as the guide instructed:
yum install ld-linux.so.2 curl
But nothing happened.
What do I do in order to proceed?
|
You're trying to run x86-64 software on an i686 platform. This will not work. Get the i686 version instead.
| ELF interpreter error - Can't install on my CentOS |
1,428,267,066,000 |
I have found a few old posts claiming that tmpfs can execute in place. Is this true? If so, how? If not, is there a ram drive alternative?
Can this be done with a ram drive that is encrypted? If so, how?
|
You're going to need some very specialized hardware to do what you're trying to do. Here are the constraints:
The program must be in RAM, because that's where the CPU can find it. It doesn't matter how it got there.
The program must not be in RAM unencrypted.
I don't know where you want to store the encryption key. Let's assume it's stored in a TPM module, because I'm sure you don't want to store it in RAM.
Therefore, to execute your program, the CPU must ask the TPM module to decrypt every single instruction it reads. This is not something you can do purely in software, unless maybe you have explicit control over your CPU's cache... which on most CPUs, you don't. For all practical purposes, you're going to need an unencrypted copy in RAM, even if an encrypted copy is in RAM alongside it.
| Execute in Place an encrypted ram drive |
1,428,267,066,000 |
I am trying to write a Python script that is supposed to run across all Linux-based OSes. The Python script invokes executables of a few existing tools (for example pathload and iperf) that I have compiled and included with the script.
For this purpose, I need to compile the executables to be run across all linux OSes and architectures. I have currently compiled it for 32bit and 64bit Ubuntu but it does not run on redhat system(dependency issues). Can someone give me a clue on what all popular OS types (debian based, redhat based, archlinux based and any others?) are present, so that I can compile the executables for them?
Also, is there any better way of achieving this task?
Let me know if I need to reword the question.
|
Also, is there any better way of achieving this task?
Yes, at least two ways come to my mind:
You can distribute source code. "Unix" has a long tradition of "portability" through source code. Note that both iperf and pathload are distributed as sources and both use autoconf to "grant" portability (and it is not accidental). You should "only" automate the compilation inside your application-installer procedure.
If your application runs only on linux, you could use a tool like openSUSE Build Service that can be used for development of the openSUSE distribution and to offer packages from same source for Fedora, Debian, Ubuntu, SUSE Linux Enterprise and other distributions. You need "only" to write one configuration file and the OBS will compile and package your application for many unix distros.
Obviously, both solutions require some time and work to be implemented.
| How many types of architectures and OSes |
1,428,267,066,000 |
I installed a tool called herd (http://diy.inria.fr/herd/). The original version of this I think is in the global path so I can call it from anywhere by writing herd7. Now I also have a second installation which I compiled myself. It is in a different directory. I don't know how to call that second one specifically. I think it's something to do with writing out the whole path but I'm not even sure how to find out what the path is and when I guessed it and tried
./herd --help
it just told me Is a directory. So, yeah, how do I call the actual tool?
I'm new to using linux and command-line so I know I'm probably using the wrong jargon.
Edit:
Well I found that there was a directory called _build with default inside it which itself had a folder herd inside it which had inside it herd.exe. Then running ./herd.exe --help from within that folder worked (and ./herd --help did not). But still, do I need to specify the .exe? Shouldn't it have a command-line usage herd7 like the original installation?
|
It's a matter of convention. I'll use the gcc compiler as an example to explain things.
Typing which gcc in Redhat 8 results in /usr/bin/gcc. Doing gcc --version results in version 4.8.5 which is the default installed version.
If I want to download and install a later version of gcc, such as version 11.4, I tell it in its config that its install directory will explicitly be /usr/local/gcc-11.4. This way it does not overwrite the existing gcc-4.8.5 under /usr/bin. Then, when I want to run gcc-11.4, I can either explicitly do /usr/local/gcc-11.4/bin/gcc or modify the PATH (and LD_LIBRARY_PATH) environment variables to put /usr/local/gcc-11.4/bin and /usr/local/gcc-11.4/lib into them {respectively} before /usr/bin within PATH, so that when I type just gcc the 11.4 version is used, not the 4.8.5 version. Doing an echo $PATH will show the contents, and whichever folder an executable is first found in is what is used.
You can install your newer herd version under /usr/local or under /opt or anywhere else. Manually tacking on the version number to the folder name goes a long way in managing things when you forget what's what however many days (or hours) later.
You can update PATH and LD_LIBRARY_PATH multiple ways, either per user via your /home/<account>/.bashrc or globally for all users under /etc/profile.d/<some_name>.sh or by using modules. Overall, it's not complicated it's just a matter of understanding the digital organization and housekeeping of it.
How to disambiguate a second version on the command-line ?
That all happens with the ordering of the folders listed in the PATH environment variable, and also the listing in LD_LIBRARY_PATH. Whichever folder your executable, or supporting library file, is found in first is what is used. So you reorder those two environment variables, either manually (if it's just for you), or, in a work environment with many users and many software versions that can conflict with each other, by using modules... The Modules package is a tool that simplifies shell initialization and lets users easily modify their environment during a session using modulefiles. https://modules.readthedocs.io/en/latest/
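For example (the /usr/local/herd-new prefix below is a hypothetical path, standing in for wherever your own build was installed), you could prepend to these variables in your ~/.bashrc:

```shell
# Hypothetical prefix -- substitute the directory your second herd landed in.
export PATH="/usr/local/herd-new/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/herd-new/lib:${LD_LIBRARY_PATH:-}"

# Lookup walks PATH left to right, so the prepended directory wins:
echo "$PATH"
```

After this, command -v herd7 (or which herd7) tells you which copy a bare herd7 now resolves to.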
| How to disambiguate a second version of an installation from command-line? |
1,428,267,066,000 |
I am observing the following behaviour, and want to know if it is expected.
I am trying to run disk usage (du) as user2 on a directory test_directory which is owned by user1.
If I change the permissions to allow everyone full access (using chmod 777 test_directory), then both user1 and user2 are able to see the disk usage properly, as expected.
However, if I restrict execute access for other users (using chmod 776 test_directory), then user2 is not able to run du and a permission error occurs.
In addition, the directory shows as having 4096 bytes size in the case of the error.
Why are executable permissions needed for a user to be able to request the disk usage with du? I would have naively expected that only read permissions are needed (i.e. chmod 774). Actually it seems that both read and execute permissions are needed to run du (i.e. chmod 775).
Why does the directory size default to 4096 bytes in this case?
Thanks!
|
Without the execute/access (x) permission on dir, you can't call stat() on dir/foo.bin, and can't see how large it is. Having just the read (r) permission lets you only list the filenames, but the names are all you get in general.
In some systems, readdir() might give more information than just the names, but I'm not sure if any system can give the file size there. Linux filesystems can give the file type, which never changes during the lifetime of the inode and directory entry. But something constantly varying, like the size is harder to get from just directory in a filesystem where the inodes are separate from the directory listing.
Even if you did get the size with readdir(), just reading dir/ won't give you the size of dir/sub/bar.bin, you'd need to read dir/sub for that. And without access permission on dir, you can't.
The size of a directory is just the size of the list of files, it doesn't include the sizes of the files themselves. So it doesn't tell you much. That 4 kB is common for e.g. ext4, it's just one filesystem block, the minimum the filesystem needs to allocate. Same as with a file, you can get the size with just x permissions on the directory holding it. (e.g. you need x on dir/ to get the size of dir/file.txt or dir/subdir/.)
So yes, you need both read and access permissions to effectively scan the whole directory tree.
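A quick way to see this (using a throwaway temp directory; note that running as root bypasses these permission checks):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/dir"
echo hi > "$tmp/dir/foo.bin"

chmod 600 "$tmp/dir"                   # read + write on the dir, but no x
ls "$tmp/dir"                          # listing the names still works
stat "$tmp/dir/foo.bin" 2>/dev/null \
  || echo "stat denied without x"      # denied for non-root users

chmod 700 "$tmp/dir"                   # restore search (x) permission
stat -c %s "$tmp/dir/foo.bin"          # prints 3: the size is visible again
rm -rf "$tmp"
```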
See also: Execute vs Read bit. How do directory permissions in Linux work?
| Does a user need executable permissions to be able to run du (disk usage)? [duplicate] |
1,428,267,066,000 |
I have an executable file named file that I run by typing ./file into the terminal. When the program runs, I have to type in the text "code". However, the time taken to type it, or paste it into the terminal, results in the program saying "sorry, you took too long!".
I am trying to send the text "code" to the executable file when I run it, so that it is entered into the program immediately once it's run.
I've tried ./file; "code" and ./file && "code" but have had no luck - the program still wants an input.
|
You may want to use the pipeline feature https://en.wikipedia.org/wiki/Pipeline_(Unix) :
echo "code" | ./file
| Enter text into executable file immediately after it's run |
1,428,267,066,000 |
I have 136 .vcf files in a folder; I want to extract some information from each of them and write the output in a .txt file like below
[fi1d18@cyan01 snp]$ bcftools query -f '%CHROM\t%POS\t%REF\t%ALT[\t%ID]\n' file.vcf > file.txt
I am doing that one by one manually but it takes ages; can somebody please help me with a script to do that for all files in Linux?
Thank you
|
Go to the folder with your files, and loop over these files. Use "$f" for input and "${f%.*}.txt" as output file name
${f%.*} will strip the extension from the filename.
for f in *.vcf; do
bcftools query -f '%CHROM\t%POS\t%REF\t%ALT[\t%ID]\n' "$f" > "${f%.*}.txt"
done
| How I can these .vcf files to txt at the same time |
1,428,267,066,000 |
I have an executable on my usb drive. I cd to the directory, ./app yields permission denied. So I did chmod u+x app. Then, ./app. But still, permission denied.!
Then I read something here:
That command [chmod] only changes the permissions associated with the file; it does not change the security controls associated with the entire volume. If it is security controls on the volume that are interfering with execution (for example, a noexec option may be specified for a volume in the Unix fstab file, which says not to allow execute permission for files on the volume), then you can remount the volume with options to allow execution. However, copying the file to a local volume may be a quicker and easier solution.
How would I make a program run off of a USB drive, with or without the above mentioned solution?
|
The command to remount the drive with execute allowed goes like
sudo mount $THING -o remount,exec
but with an apropriate value for $THING. You can use the device name, or the mount point.
| Running an executable off of a usb drive |
1,428,267,066,000 |
I noticed that the bsdtar command from the libarchive package (under Arch Linux, at least) throws away the executable bits of files in .zip archives when reading from stdin, but not when working directly on the file.
On .tar-archives it preserves the executable bit also when reading from stdin.
Test case:
Create the archives:
Create the files:
touch a.txt
chmod 644 a.txt
touch a.out
chmod 755 a.out
The file permissions:
ls -ln a.out a.txt
shows
-rwxr-xr-x 1 1001 1001 0 Dec 12 11:01 a.out
-rw-r--r-- 1 1001 1001 0 Dec 12 11:01 a.txt
Pack the files into archives:
bsdtar --format=zip -cf a.zip a.out a.txt
bsdtar -cf a.tar a.out a.txt
(Creating the archives with zip and tar instead of bsdtar produces the same result.)
Extracting/ showing the archive content directly:
bsdtar -tvf a.zip
or
bsdtar -tvf - < a.zip
shows
-rwxr-xr-x 0 1001 1001 0 Dec 12 11:01 a.out
-rw-r--r-- 0 1001 1001 0 Dec 12 11:01 a.txt
The executable bit of a.out is present here.
The permissions of a.out are 755 and of a.txt 644.
Reading from stdin:
cat a.zip | bsdtar -tvf -
shows
-rw-rw-r-- 0 1001 1001 0 Dec 12 11:01 a.out
-rw-rw-r-- 0 1001 1001 0 Dec 12 11:01 a.txt
The executable bit for a.out is thrown away here.
Furthermore, both files are group-writeable, they were not packed that way.
The permissions of a.out and a.txt are both 664.
.tar-archive:
As a comparison, for a .tar-archive, the permissions in the archive are also honoured when reading from a pipe from stdin:
bsdtar --numeric-owner -tvf a.tar
and
cat a.tar | bsdtar --numeric-owner -tvf -
both show
-rwxr-xr-x 0 1001 1001 0 Dec 12 11:01 a.out
-rw-r--r-- 0 1001 1001 0 Dec 12 11:01 a.txt
(note that, when showing the contents of a ZIP archive, bsdtar shows the numeric owner by default; for a TAR archive it shows the name of the owner.)
The question is:
What is special with stdin with regard to bsdtar? And why only when reading from a pipe, and not in the fashion bsdtar -tvf - < a.zip? And why special to a .zip-archive, but not to a .tar-archive?
|
Here on the bugtracker of libarchive is the answer:
Zip archives contains two different ways to describe the content:
A per-entry header
A central directory at the end of the zip file.
libarchive (and bsdtar by extension) will use the central directory if seeking is possible on the input, otherwise it will fall back to the streaming-only logic. The entries are not necessarily consistent as you found out in your test case. There isn't really much we can or want to do about this. Note that you can replace wget with a plain cat and it will still show the same behavior.
The short version is that this is an inherent issue with streaming of zip files and something that won't be fixed.
And this comment tells how to create a consistent ZIP-file with bsdtar:
To make bsdtar create consistent information, --options zip:experimental needs to be added to bsdtar's zip file creation command:
bsdtar --format=zip --options zip:experimental -cf a.zip a.out a.txt
and then
cat a.zip | bsdtar -tvf -
shows correct permissions:
-rwxr-xr-x 0 1001 1001 0 Feb 17 21:18 a.out
-rw-r--r-- 0 1001 1001 0 Feb 17 21:18 a.txt
| Why does libarchive's bsdtar's unzip throw away the permission bits when reading a ZIP-archive from stdin, but not directly? |
1,428,267,066,000 |
I have a console program:
#include <iostream>
#include <stdio.h>
using namespace std;
int main()
{
printf("please num1:");
int a;
cin>>a;
printf("please num2:");
int b;
cin>>b;
cout<<"see the result"<<endl;
return a+b;
}
With the executable named test. When I put the line /path/to/test test & inside home/user/.config/openbox/autostart, I cannot see anything at startup; there is only a blank screen.
How can I see the terminal that runs this app at startup?
I should say I have tested the above method with the executables of other apps that show an image on the LCD (using gtk+), or say something through the speaker (using espeak). They do these things at startup automatically. But for a console app this method doesn't work. I mean I can't see a terminal shell at startup!
How should I solve this problem?
|
since your program is a console program and not a graphical one, as you stated and as your code shows
you need to launch it in a console, in a terminal. e.g.
gnome-terminal -- test.sh
in this case, I used gnome-terminal and the executable was test.sh.
this is the command to launch at startup
| How to start a console program at startup(inside ../openbox/autostart) |
1,428,267,066,000 |
When I compile a C program with gcc I get the file a.exe; however, to run this I have to type in the command ./a.exe. I believe it is possible to edit the .bashrc or .bash_profile so that I only need to write the command a.exe?
|
Files ending in .exe are common on windows systems. On linux systems binaries usually do not have any extension.
When running gcc without using -o to specify the name of the output file, it will (for historical reasons) usually create a file named a.out.
When trying to run a command without specifying its location, the shell will search the locations in the PATH environment variable for the given command.
This PATH variable will usually intentionally not contain the current directory. You could add . (the current directory) to the list, but this would lead to unexpected effects; for example, if you type ls you would expect to see the content of the current directory and not run some file named "ls" which might happen to live there.
Therefore if you want to run a command from a place not listed in PATH you need to explicitly specify its path. To run a file named a.out in the current directory (.) you have to type ./a.out.
| Running a program that's been compiled with gcc |
1,428,267,066,000 |
Today I copied some files from the shared folder (host is Win7) to my VM (guest is CentOS 7). I did this with root permissions. Then I copied the files to my Apache location
/var/www/html/test
I am using putty, I only see the files in green and with the following permissions.
-rwxr-x--- 1 root root 175417 Mar 15 17:50
I need to change the files from executable files (green) to normal files. Should I use chown or chmod, and with which options?
The files will be placed on a website and should be downloadable.
To be clear, I copied only a .zip file from Win7 to CentOS 7.
|
Being executable is part of a file's mode. chmod changes modes. chown changes owners (i.e. the user that owns the file), which is not what you want here.
As such all you need to do is chmod a-x <path> to remove the executable bit for all ("a") users. If you wanted to remove it only for the owning user, you would use u-x instead.
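For example, on a throwaway file:

```shell
f=$(mktemp)
chmod 755 "$f"
stat -c %a "$f"    # 755: the execute bits are set
chmod a-x "$f"     # clear execute for user, group and others
stat -c %a "$f"    # 644: a plain, non-executable file
rm -f "$f"
```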
| executable files to normal files via terminal CentOS7 |
1,415,129,156,000 |
I executed the minimal building directions of xcape:
$ sudo apt-get install git gcc make pkg-config libx11-dev libxtst-dev libxi-dev
$ mkdir xcape
$ cd xcape
$ git clone https://github.com/alols/xcape.git .
$ make
But when I run xcape it says xcape: command not found. It errors even when I'm in the xcape folder, which has a program that seems to be called xcape inside it. Why is this?
|
Have you tried ./xcape ? You have to execute it this way, because the location is probably not defined in the $PATH variable.
| Installing Xcape (question involving the "make" command) |
1,415,129,156,000 |
I'm not sure what to try next to resolve this issue.
~ ❯ cat /etc/fstab
# /dev/sdg1 /mnt/d/ ntfs noatime,nodiratime,users,noauto,x-systemd.automount,autodefrag 0 0
/dev/md127 /mnt/d btrfs nofail,compress=zstd:1,noatime,nodiratime,users 0 0
The disk mount location /mnt/d was previously used by an ntfs drive but not sure how relevant that is since I've rebooted already
~ ❯ chmod +x ./bin/DownZemAll_v2.5.5_x86_64/install.sh
~ ❯ ./bin/DownZemAll_v2.5.5_x86_64/install.sh
fish: The file “./bin/DownZemAll_v2.5.5_x86_64/install.sh” is not executable by this user
~ ❯ ls -lah ./bin/DownZemAll_v2.5.5_x86_64/install.sh
Permissions Size User Date Modified Name
.rwxr-xr-x@ 977 xk 10 Jan 14:00 ./bin/DownZemAll_v2.5.5_x86_64/install.sh*
~ ❯ lsattr ./bin/DownZemAll_v2.5.5_x86_64/install.sh
---------------------- ./bin/DownZemAll_v2.5.5_x86_64/install.sh
~ ❯ file ./bin/DownZemAll_v2.5.5_x86_64/install.sh
./bin/DownZemAll_v2.5.5_x86_64/install.sh: Bourne-Again shell script, ASCII text executable
~ ❯ ldd ./bin/DownZemAll_v2.5.5_x86_64/install.sh
ldd: warning: you do not have execution permission for `./bin/DownZemAll_v2.5.5_x86_64/install.sh'
not a dynamic executable
~ [1] ❯ ls /mnt
Permissions Size User Date Modified Name
drwxr-xr-x - xk 5 Mar 00:18 d/
~ ❯ ls | grep 'd ->'
lrwxrwxrwx 7 xk 28 Feb 22:30 d -> /mnt/d/
~ ❯ ls | grep bin
lrwxrwxrwx@ 23 xk 5 Mar 00:13 bin -> d/35_Linux_Software/bin/
~ ❯ uid
1000
~ ❯ ls -n ./bin/DownZemAll_v2.5.5_x86_64/install.sh
Permissions Size User Date Modified Name
.rwxr-xr-x@ 977 1000 10 Jan 14:00 ./bin/DownZemAll_v2.5.5_x86_64/install.sh*
I can read files just fine
~ ❯ head -1 ./bin/SoulseekQt-2018-1-30-64bit.AppImage
ELFAI> !@@s@@@@@@@@@@dd ddada` @e@ea@e@@DDPtd00A0A
Qtd/lib64/ld-linux-x86-64.so.2GNUGNU...
When making the loader explicit things run fine:
~ ❯ cd ./bin/DownZemAll_v2.5.5_x86_64/
~/b/DownZemAll_v2.5.5_x86_64 ❯ bash install.sh
Copying Manifest.JSON to Mozilla directory...
Not sure why it says "ELF file ABI version invalid" but I would prefer if I could exec like a normal unix person (okay that is an AppImage related thing and not relevant to my problem)
~ ❯ file bin/SoulseekQt-2018-1-30-64bit.AppImage
bin/SoulseekQt-2018-1-30-64bit.AppImage: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.18, BuildID[sha1]=d4b0eeecada37bbc753023885a3f0f7e3bdac6cc, stripped
~ ❯ /lib64/ld-linux-x86-64.so.2 ./bin/SoulseekQt-2018-1-30-64bit.AppImage
./bin/SoulseekQt-2018-1-30-64bit.AppImage: error while loading shared libraries: ./bin/SoulseekQt-2018-1-30-64bit.AppImage: ELF file ABI version invalid
|
It's because of this:
Partition mounted noexec even though not specified in /etc/fstab
I should've looked at the output of the command mount instead of fstab
| Unable to exec programs, parent directory symbolic link, btrfs mdadm drive |
1,415,129,156,000 |
I have a ksh script I am developing for work. (I am on the newer side to shell-scripting)
I have root access but the future-users of this script will not. Say the other user is named User1.
Within the script are commands to other executables in the system. These executables throw "permission denied" for User1 when User1 runs the script. How can I temporarily allow User1 to access the executables in the script? Not just access, but actually run the commands.
I have tried using many variations of umask
This has helped with creating and reading files within the script, but doesn't have any affect on executing the necessary commands. I have also tried using chmod but that command in itself is not accessible by User1 either, and therefore throws an error when the script is run. Is there any way you can think of to go about this?
Some Background:
AIX(Putty)
$ oslevel; 7.2.0.0
ksh93
The Script:
#!/bin/ksh
# My Script Introduction
........
# commands from /bin that deny permission to User1
.....
exit
KSH syntax only please. Thank you so much in advance for any help or insight you can provide me.
|
I figured out a workaround.
The commands that User1 cannot access are scripts themselves, so I am manually implementing the logic from those scripts in my own script. This lets User1 get the desired behavior of the commands without actually calling them.
Thank you all for your time.
| Allow user to run nonpermissive commands in shell script (KSH) |
1,415,129,156,000 |
I am trying to make an executable script on Mac that changes the current directory (cd) to the directory it is housed in and then runs some more commands. I started with a find command; however, in the end that caused issues because of similarly named files.
Thanks in advance!
I don't know what code to run however it will include the cd command.
What I expect to happen is that by running a few commands I can then create files in the folder the executable is housed in.
|
You can extract the path to the directory containing the script from the name of the script. The name of the script is stored in $0 variable. You can use dirname to extract path from it. So, to change the current working directory to the location of the script when it is run, you could start it like this:
#!/bin/zsh
cd "$(dirname "$0")"
I've quoted $0 before passing to dirname as well as the substitution $(…) before passing to cd in case the path contains spaces.
| Unix Executable to the directory it is housed in Mac |
1,415,129,156,000 |
Taking a random word as input from the user, can anyone suggest shell-scripting logic to check whether any permutation of that word is an executable command?
|
Try piping your candidates through a check against compgen; compgen -c lists every command name available in the current shell.
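One possible sketch of that approach (bash-specific; the function names here are my own, and command -v tests each candidate, while compgen -c could equally provide the full command list to match against):

```shell
#!/usr/bin/env bash

# Print every permutation of the characters of $2, accumulating $1 as a prefix.
perms() {
  local prefix=$1 rest=$2 i
  if [ -z "$rest" ]; then
    printf '%s\n' "$prefix"
    return
  fi
  for ((i = 0; i < ${#rest}; i++)); do
    perms "$prefix${rest:i:1}" "${rest:0:i}${rest:i+1}"
  done
}

# Succeed (printing the match) if any permutation resolves to a command.
any_perm_is_command() {
  local candidate
  for candidate in $(perms '' "$1" | sort -u); do
    if command -v "$candidate" >/dev/null 2>&1; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
  return 1
}

any_perm_is_command sl   # prints "ls" on a typical system
```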
| Check if a user given random word is an executable linux command or not? |
1,415,129,156,000 |
After compiling software from source code, I can usually launch the compiled binary by double-clicking on it.
Recently however, most of my compiled binaries are not responding to double-click, even if they can be launched using ./MyBinary. This doesn't seem to be a permission problem because I have already done sudo chmod +x.
It appears that my Linux system identifies compiled binaries as shared library files, and so doesn't execute them.
Does anyone know why this happens? Is it possible to change the file type to an executable to avoid this problem? Thanks in advance.
System Info
Manjaro Linux x86_64
Kernel release: 5.6.19-2
|
ELF portable executables and libraries may have the same signatures and be identified identically. I wouldn't fret about that. If Dolphin works for you, use it.
E.g.
$ file `which file` /usr/lib64/libc-2.31.so
/bin/file: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=e7df66a91efb28e483449a77221cb4242620541c, for GNU/Linux 3.2.0, stripped
/usr/lib64/libc-2.31.so: ELF 64-bit LSB shared object, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=d278249792061c6b74d1693ca59513be1def13f2, for GNU/Linux 3.2.0, not stripped
Both the binary and the glibc library are "ELF 64-bit LSB shared object".
| Executable binaries labelled as shared library files |
1,415,129,156,000 |
My goal is to manage the startup of a number of applications with an application executed by a user with elevated permissions.
The plan is to have the startup manager (a node.js script using require('child_process').exec) cd to the home dir of the app user & su <app user> and then execute the startup as <app user>.
My foremost concern is security. For instance, could the <app user> exit back to startup manager user?
Are there any other concerns worth considering or caution to take with this approach?
|
Once started as <app user>, your running application will only have the permissions that user has.
As for being able to exit, once your application has terminated, there isn't going to be anything to issue the exit.
I don't know what your script does, but generally speaking, the startup of applications is better managed by something like systemd. You can create a service file which does all the stuff you explained your script would do.
You can specify the user the service runs as, the application to start, the directory to start it in, etc.
Here's some good examples to get started with systemd service files: https://www.shellhacks.com/systemd-service-file-example/
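A minimal sketch of such a service file (every name and path here, such as myapp, appuser, /opt/myapp and the node command, is a placeholder assumption, not taken from your setup):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=My web application
After=network.target

[Service]
User=appuser
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/node /opt/myapp/server.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After systemctl daemon-reload, enable and start it with systemctl enable --now myapp.service; the process then runs with only appuser's permissions.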
| Are there drawbacks (security or otherwise) to using 'su <user>' into a lesser-priveleged user to start a web application? |
1,415,129,156,000 |
I am writing a small program to use in LuaLaTeX. The purpose of it is to produce a QR code with a given UUID; the QR code is printed on the page and the UUID is stored in the PDF's metadata.
Nevertheless I thought it would be nice to have a single executable in the texmf folder to be called by the class file from my document. To generate the QR codes I used (Linux) qrencode & convert with this Lua script uuidqrcode.lua:
#!/usr/bin/env lua
function gen_qr_uuid ()
local uuid = require 'uuid'
-- uuid.seed(math.randomseed(os.time()))
local encode = uuid()
local name = encode
local format = 'pdf'
local qrencode = string.format(
[[qrencode \
--type=EPS \
--output=%s.eps \
--size=10 \
--level=H \
--margin=0 \
--casesensitive \
%s \
]],
name,
encode)
local convert = string.format(
[[convert \
%s.eps \
%s.%s \
]],
name,
name,
format)
local rmeps = string.format("rm %s.eps", name)
os.execute(qrencode)
os.execute(convert)
os.execute(rmeps)
end
for i=1, (arg[1] or 1) do
gen_qr_uuid ()
end
To convert this script to a standalone executable I used luastatic with this script makeluaexec:
#!/bin/sh
luastatic $1 `pkg-config --libs --cflags lua`
With this I have a single executable file, but it still depends on qrencode & convert, so when I move to another Linux machine these tools have to be installed. Is there a way to pack these tools into my self-generated executable?
|
Yes. The qrencode program is just a wrapper around libqrencode, and the convert command is just a wrapper around ImageMagick. Instead of calling those commands, call the library functions from your code directly. Bindings such as https://github.com/isage/lua-imagick and https://github.com/vincascm/qrencode will be useful for this. Then, when you call luastatic, just pass in the relevant static libraries.
| Lua standalone with external binary program |
1,415,129,156,000 |
I have a crontab with jobs scheduled at different intervals: every minute, every 10 minutes, hourly, daily...
My question: some of these jobs coincide; for example, when the 10-minute job runs, the 1-minute job runs at the same moment, and they execute in parallel. But I want them to run in sequence: first all the 1-minute jobs, then all the 10-minute jobs, and so on. How can I do this?
|
With cron itself, I don't think you really can. I would probably script my way out of that: make a single script executed from cron every minute, and then run the tasks with separate intervals from that script. Something like this:
Crontab entry:
* * * * * /path/to/main_script.sh
And main_script.sh:
#!/bin/sh
mins=$(( $(date +%s) / 60 ))           # current time, rounded to minutes
run_1min_task.sh
if [ $(( mins % 10 )) -eq 0 ] ; then   # mins divisible by 10 ?
    run_10min_task.sh                  # run the every 10 min task
fi
if [ $(( mins % 60 )) -eq 0 ] ; then   # same for 1 hour
    run_1hour_task.sh
fi
if [ $(( mins % 1440 )) -eq 0 ] ; then # 1440 = 24*60
    run_daily_task.sh
fi
You need to take the time zone into account if you care what hour the once-a-day task runs; the above should run it at 00:00 UTC. Compare against some value other than zero to change it.
Also, note that if your tasks can take more than 1 minute, you need to make sure they can cope with running simultaneously, or prevent them from doing that.
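One common way to prevent overlapping runs (my addition here, using flock from util-linux, not something from the original setup) is to wrap the crontab entry:

```crontab
* * * * * flock -n /tmp/main_script.lock /path/to/main_script.sh
```

With -n, a minute whose previous run is still holding the lock is skipped rather than queued.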
| Crontab order run with differents schedule |
1,415,129,156,000 |
I believe I've corrupted the /usr/bin/time executable as when I try to run it, this is the message that shows up:
bash: /usr/bin/time: cannot execute binary file
It was working till I inadvertently overwrote it.
How can I revert the changes I've made to this executable or get a fresh copy of it?
Thanks in advance.
|
To reinstall the time binary /usr/bin/time, reinstall the package that contains it.
First, find that package.
dpkg -S /usr/bin/time
time: /usr/bin/time
Then install it.
sudo apt-get install --reinstall time
| /usr/bin/time cannot execute binary file |
1,357,727,051,000 |
I was just reading up on the Birth section of stat and it appears ext4 should support it, but even a file I just created leaves it empty.
~ % touch test slave-iv
~ % stat test.pl slave-iv
File: ‘test.pl’
Size: 173 Blocks: 8 IO Block: 4096 regular file
Device: 903h/2307d Inode: 41943086 Links: 1
Access: (0600/-rw-------) Uid: ( 1000/xenoterracide) Gid: ( 100/ users)
Access: 2012-09-22 18:22:16.924634497 -0500
Modify: 2012-09-22 18:22:16.924634497 -0500
Change: 2012-09-22 18:22:16.947967935 -0500
Birth: -
~ % sudo tune2fs -l /dev/md3 | psp4 slave-iv
tune2fs 1.42.5 (29-Jul-2012)
Filesystem volume name: home
Last mounted on: /home
Filesystem UUID: ab2e39fb-acdd-416a-9e10-b501498056de
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: journal_data
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 59736064
Block count: 238920960
Reserved block count: 11946048
Free blocks: 34486248
Free inodes: 59610013
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 967
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
RAID stride: 128
RAID stripe width: 256
Flex block group size: 16
Filesystem created: Mon May 31 20:36:30 2010
Last mount time: Sat Oct 6 11:01:01 2012
Last write time: Sat Oct 6 11:01:01 2012
Mount count: 14
Maximum mount count: 34
Last checked: Tue Jul 10 08:26:37 2012
Check interval: 15552000 (6 months)
Next check after: Sun Jan 6 07:26:37 2013
Lifetime writes: 7255 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
First orphan inode: 55313243
Default directory hash: half_md4
Directory Hash Seed: 442c66e8-8b67-4a8c-92a6-2e2d0c220044
Journal backup: inode blocks
Why doesn't my ext4 partition populate this field?
|
The field does get populated (see below); it's only that coreutils stat does not display it. Apparently they're waiting [1] for the xstat() interface.
coreutils patches - aug. 2012 - TODO
stat(1) and ls(1) support for birth time. Dependent on xstat() being
provided by the kernel
You can get the creation time via debugfs:
debugfs -R 'stat <inode_number>' DEVICE
e.g. for my /etc/profile which is on /dev/sda2 (see How to find out what device a file is on):
stat -c %i /etc/profile
398264
debugfs -R 'stat <398264>' /dev/sda2
debugfs 1.42.5 (29-Jul-2012)
Inode: 398264 Type: regular Mode: 0644 Flags: 0x80000
Generation: 2058737571 Version: 0x00000000:00000001
User: 0 Group: 0 Size: 562
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 8
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x506b860b:19fa3c34 -- Wed Oct 3 02:25:47 2012
atime: 0x50476677:dcd84978 -- Wed Sep 5 16:49:27 2012
mtime: 0x506b860b:19fa3c34 -- Wed Oct 3 02:25:47 2012
crtime: 0x50476677:dcd84978 -- Wed Sep 5 16:49:27 2012
Size of extra inode fields: 28
EXTENTS:
(0):3308774
Time fields meaning:
ctime: file change time.
atime: file access time.
mtime: file modification time.
crtime: file creation time.
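The two steps above can be wrapped in a small helper script. This is only a sketch: the script name is hypothetical, debugfs needs root privileges and e2fsprogs installed, and the default path "/" is just an example.

```shell
#!/bin/sh
# crtime.sh (hypothetical name): print the ext4 creation time of a file
# Usage: crtime.sh /etc/profile
f=${1:-/}
inode=$(stat -c %i "$f")                      # inode number of the file
dev=$(df --output=source "$f" | tail -n 1)    # backing block device
echo "inode=$inode dev=$dev"
debugfs -R "stat <$inode>" "$dev" 2>/dev/null | grep crtime \
  || echo "crtime: debugfs needs root access to $dev"
```

Run as root, the last line should be the same crtime line shown in the debugfs output above.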
[1] Linus' reply on LKML thread
| Birth is empty on ext4 |
1,357,727,051,000 |
I am not talking about recovering deleted files, but overwritten files. Namely by the following methods:
# move
mv new_file old_file
# copy
cp new_file old_file
# edit
vi existing_file
> D
> i new_content
> :x
Is it possible to retrieve anything if any of the above three actions is performed assuming no special programs are installed on the linux machine?
|
The answer is "Probably yes, but it depends on the filesystem type, and timing."
None of those three examples will overwrite the physical data blocks of old_file or existing_file, except by chance.
mv new_file old_file. This will unlink old_file. If there are additional hard links to old_file, the blocks will remain unchanged in those remaining links. Otherwise, the blocks will generally (it depends on the filesystem type) be placed on a free list. Then, if the mv requires copying (as opposed to just moving directory entries), new blocks will be allocated as mv writes.
These newly-allocated blocks may or may not be the same ones that were just freed. On filesystems like UFS, blocks are allocated, if possible, from the same cylinder group as the directory the file was created in. So there's a chance that unlinking a file from a directory and creating a file in that same directory will re-use (and overwrite) some of the same blocks that were just freed. This is why the standard advice to people who accidentally remove a file is to not write any new data to files in their directory tree (and preferably not to the entire filesystem) until someone can attempt file recovery.
cp new_file old_file will do the following (you can use strace to see the system calls):
open("old_file", O_WRONLY|O_TRUNC) = 4
The O_TRUNC flag will cause all the data blocks to be freed, just like mv did above. And as above, they will generally be added to a free list, and may or may not get reused by the subsequent writes done by the cp command.
vi existing_file. If vi is actually vim, the :x command does the following:
unlink("existing_file~") = -1 ENOENT (No such file or directory)
rename("existing_file", "existing_file~") = 0
open("existing_file", O_WRONLY|O_CREAT|O_TRUNC, 0664) = 3
So it doesn't even remove the old data; the data is preserved in a backup file.
On FreeBSD, vi does open("existing_file",O_WRONLY|O_CREAT|O_TRUNC, 0664), which will have the same semantics as cp, above.
You can recover some or all of the data without special programs; all you need is grep and dd, and access to the raw device.
For small text files, the single grep command in the answer from @Steven D in the question you linked to is the easiest way:
grep -i -a -B100 -A100 'text in the deleted file' /dev/sda1
But for larger files that may be in multiple non-contiguous blocks, I do this:
grep -a -b "text in the deleted file" /dev/sda1
13813610612:this is some text in the deleted file
which will give you the offset in bytes of the matching line. Follow this with a series of dd commands, starting with
dd if=/dev/sda1 count=1 skip=$(expr 13813610612 / 512)
You'd also want to read some blocks before and after that block. On UFS, file blocks are usually 8KB and are usually allocated fairly contiguously, a single file's blocks being interleaved alternately with 8KB blocks from other files or free space. The tail of a file on UFS is up to 7 1KB fragments, which may or may not be contiguous.
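The mechanics can be tried safely on an ordinary file first. In real recovery the input would be the raw device (e.g. /dev/sda1) and the search string would be text you remember from the lost file; here everything is a self-contained sketch:

```shell
# Build a scratch "image" with some text buried at a non-zero offset
img=$(mktemp)
printf '%10000s' ' ' > "$img"                  # 10000 bytes of padding
printf 'text in the deleted file' >> "$img"    # the "lost" data
# Locate the text, then dd out the 512-byte blocks around it
offset=$(grep -a -b -o 'text in the' "$img" | cut -d: -f1)
dd if="$img" bs=512 skip=$((offset / 512)) count=2 2>/dev/null \
  | grep -a -o 'deleted file'
rm -f "$img"
```

The grep at the end confirms the dd window really contains the buried text.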
Of course, on file systems that compress or encrypt data, recovery might not be this straightforward.
There are actually very few utilities in Unix that will overwrite an existing file's data blocks. One that comes to mind is dd conv=notrunc. Another is shred.
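The difference conv=notrunc makes is easy to demonstrate on a scratch file (a sketch):

```shell
# Without conv=notrunc, dd (like cp) truncates the output file first;
# with it, the new bytes overwrite the old ones in place.
f=$(mktemp)
printf 'hello world' > "$f"
printf 'HELLO' | dd of="$f" conv=notrunc 2>/dev/null
cat "$f"        # prints "HELLO world"
rm -f "$f"
```

This in-place behavior is exactly why conv=notrunc (and shred) really does destroy the old data blocks, while cp and mv normally do not.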
| Can overwritten files be recovered? |
1,357,727,051,000 |
Is there a simple option on extundelete how I can try to undelete a file called /var/tmp/test.iso that I just deleted?
(it is not so important that I would start to remount the drive read-only or such things. I can also just re-download that file again)
I am looking for a simple command with which I can quickly try to recover it.
I know, it is possible with remounting the drive in read-only: (see How do I simply recover the only file on an empty disk just deleted?)
But is this also possible somehow on the still mounted disk?
For info:
if the deleted file is on an NTFS partition it is easy with ntfsundelete e.g. if you know the size was about 250MB use
sudo ntfsundelete -S 240m-260m -p 100 /dev/hda2
and then undelete the file by inode e.g. with
sudo ntfsundelete /dev/hda2 --undelete --inodes 8270
|
Looking at the usage guide on extundelete it seems as though you're limited to undeleting files in a few ways.
Restoring all
extundelete is designed to undelete files from an unmounted partition to a separate (mounted) partition. extundelete will restore any files it finds to a subdirectory of the current directory named “RECOVERED_FILES”. To run the program, type “extundelete --help” to see various options available to you.
Typical usage to restore all deleted files from a partition looks like this:
$ extundelete /dev/sda4 --restore-all
Restoring a single file
In addition to this method highlighted in the command line usage:
--restore-file path/to/deleted/file
Attempts to restore the file which was deleted at the given filename,
called as "--restore-file dirname/filename".
So you should be able to accomplish what you want doing this:
$ extundelete --restore-file /var/tmp/test.iso /dev/sda4
NOTE: In both cases you need to know the device, /dev/sda4 to perform this command. You'll have to remount the filesystem as readonly. This is one of the conditions of using extundelete and there isn't any way around this.
| undelete a just deleted file on ext4 with extundelete |
1,357,727,051,000 |
Recently, my external hard drive enclosure failed (the hard drive itself powers up in another enclosure). However, as a result, it appears its EXT4 file system is corrupt.
The drive has a single partition and uses a GPT partition table (with the label ears).
fdisk -l /dev/sdb shows:
Device Boot Start End Blocks Id System
/dev/sdb1 1 1953525167 976762583+ ee GPT
testdisk shows the partition is intact:
1 P MS Data 2049 1953524952 1953522904 [ears]
... but the partition fails to mount:
$ sudo mount /dev/sdb1 a
mount: you must specify the filesystem type
$ sudo mount -t ext4 /dev/sdb1 a
mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
fsck reports an invalid superblock:
$ sudo fsck.ext4 /dev/sdb1
e2fsck 1.42 (29-Nov-2011)
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/sdb1
and e2fsck reports a similar error:
$ sudo e2fsck /dev/sdb1
Password:
e2fsck 1.42 (29-Nov-2011)
e2fsck: Superblock invalid, trying backup blocks...
e2fsck: Bad magic number in super-block while trying to open /dev/sdb1
dumpe2fs also:
$ sudo dumpe2fs /dev/sdb1
dumpe2fs 1.42 (29-Nov-2011)
dumpe2fs: Bad magic number in super-block while trying to open /dev/sdb1
mke2fs -n (note, -n) returns the superblocks:
$ sudo mke2fs -n /dev/sdb1
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
61054976 inodes, 244190363 blocks
12209518 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7453 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
... but trying "e2fsck -b [block]" for each block fails:
$ sudo e2fsck -b 71663616 /dev/sdb1
e2fsck 1.42 (29-Nov-2011)
e2fsck: Invalid argument while trying to open /dev/sdb1
However as I understand, these are where the superblocks were when the filesystem was created, which does not necessarily mean they are still intact.
I've also run a testdisk deep search, if anyone can decipher the log. It mentions many entries like:
recover_EXT2: s_block_group_nr=1/7452, s_mnt_count=6/20,
s_blocks_per_group=32768, s_inodes_per_group=8192
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 244190363
recover_EXT2: part_size 1953522904
recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
Running e2fsck with those values gives:
e2fsck: Bad magic number in super-block while trying to open /dev/sdb1
I tried that with all superblocks in the testdisk.log
for i in $(grep e2fsck testdisk.log | uniq | cut -d " " -f 4); do
sudo e2fsck -b $i -B 4096 /dev/sdb1
done
... all with the same e2fsck error message.
In my last attempt, I tried different filesystem offsets. For each offset i, where i is one of 31744, 32768, 1048064, 1049088:
$ sudo losetup -v -o $i /dev/loop0 /dev/sdb
... and running testdisk /dev/loop0, I didn't find anything interesting.
I've been fairly exhaustive, but is there any way to recover the file system without resorting to low-level file recovery tools (foremost/photorec)?
|
Unfortunately, I was unable to recover the file system and had to resort to lower-level data recovery techniques (nicely summarised in Ubuntu's Data Recovery wiki entry), of which Sleuth Kit proved most useful.
Marking as answered for cleanliness' sake.
| Recovering ext4 superblocks |
1,357,727,051,000 |
I have an external hard drive which is encrypted via LUKS. It contains an ext4 fs.
I just got an error from rsync for a file which is located on this drive:
rsync: readlink_stat("/home/some/dir/items.json") failed: Structure needs cleaning (117)
If I try to delete the file I get the same error:
rm /home/some/dir/items.json
rm: cannot remove ‘//home/some/dir/items.json’: Structure needs cleaning
Does anyone know what I can do to remove the file and fix related issues with the drive/fs (if there are any)?
|
That is strongly indicative of file-system corruption. You should unmount, make a sector-level backup of your disk, and then run e2fsck to see what is up. If there is major corruption, you may later be happy that you did a sector-level backup before letting e2fsck tamper with the data.
| Cannot remove file: "Structure needs cleaning" |
1,357,727,051,000 |
What happens if the limit of 4 billion files was exceeded in an ext4 partition, with a transfer of 5 billion files for example?
|
Presumably, you'll be seeing some flavor of "No space left on device" error:
# truncate -s 100M foobar.img
# mkfs.ext4 foobar.img
Creating filesystem with 102400 1k blocks and 25688 inodes
---> number of inodes determined at mkfs time ^^^^^
# mount -o loop foobar.img loop/
# touch loop/{1..25688}
touch: cannot touch 'loop/25678': No space left on device
touch: cannot touch 'loop/25679': No space left on device
touch: cannot touch 'loop/25680': No space left on device
And in practice you hit this limit a lot sooner than "4 billion files". Check your filesystems with both df -h and df -i to find out how much space there is left.
# df -h loop/
Filesystem Size Used Avail Use% Mounted on
/dev/loop0 93M 2.1M 84M 3% /dev/shm/loop
# df -i loop/
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/loop0 25688 25688 0 100% /dev/shm/loop
In this example, if your files are not 4K size on the average, you run out of inode-space much sooner than storage-space. It's possible to specify another ratio (mke2fs -N number-of-inodes or -i bytes-per-inode or -T usage-type as defined in /etc/mke2fs.conf).
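For example, a filesystem intended for many tiny files could be created with a denser inode ratio. This is a sketch: the image path and sizes are arbitrary, and mkfs.ext4/dumpe2fs from e2fsprogs must be available.

```shell
# One inode per 1 KiB of space instead of the default ratio
img=$(mktemp)
truncate -s 8M "$img"
mkfs.ext4 -F -q -i 1024 "$img"
dumpe2fs -h "$img" 2>/dev/null | grep 'Inode count'
rm -f "$img"
```

The reported inode count should be far higher than the same-sized filesystem would get with default settings.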
| What happens if the limit of 4 billion files was exceeded in an ext4 partition? |
1,357,727,051,000 |
The default journal mode for Ext4 is data=ordered, which, per the documentation, means that
"All data are forced directly out to the main file system prior to its
metadata being committed to the journal."
However, there is also the data=journal option, which means that
"All data are committed into the journal prior to being written into
the main file system. Enabling this mode will disable delayed
allocation and O_DIRECT support."
My understanding of this is that the data=journal mode will journal all data as well as metadata, which, on the face of it, appears to mean that this is the safest option in terms of data integrity and reliability, though maybe not so much for performance.
Should I go with this option if reliability is of the utmost concern, but performance much less so? Are there any caveats to using this option?
For background, the system in question is on a UPS and write caching is disabled on the drives.
|
Yes, data=journal is the safest way of writing data to disk. Since all data and metadata are written to the journal before being written to disk, you can always replay interrupted I/O jobs in the case of a crash. It also disables the delayed allocation feature, which may lead to data loss.
The 3 modes are presented in order of safeness in the manual:
data=journal
data=ordered
data=writeback
There's also another option which may interest you:
commit=nrsec (*) Ext4 can be told to sync all its data and metadata
every 'nrsec' seconds. The default value is 5 seconds.
The only known caveat is that it can become terribly slow. You can reduce the performance impact by disabling the access time update with the noatime option.
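Put together, an /etc/fstab entry for such a reliability-first data volume might look like the following. The device and mount point are placeholders, and note that data=journal generally cannot be enabled on an already-mounted filesystem via a simple remount:

```
/dev/sdb1  /data  ext4  defaults,data=journal,commit=5,noatime  0  2
```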
| Is data=journal safer for Ext4 as opposed to data=ordered? |
1,357,727,051,000 |
I'm running Arch Linux, and use ext4 filesystems.
When I run ls in a directory that is actually small now, but used to be huge - it hangs for a while. But the next time I run it, it's almost instantaneous.
I tried doing:
strace ls
but I honestly don't know how to debug the output. I can post it if necessary, though it's more than a 100 lines long.
And, no, I'm not using any aliases.
$ type ls
ls is hashed (/usr/bin/ls)
$ df .
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda9 209460908 60427980 138323220 31% /home
|
A directory that used to be huge may still have a lot of blocks allocated for directory entries (= names and inode numbers of files and sub-directories in that directory), although almost all of them are now marked as deleted.
When a new directory is created, only a minimal amount of space is allocated for directory entries. As more and more files are added, new blocks are allocated to hold directory entries as needed. But when files are deleted, the ext4 filesystem does not consolidate the directory entries and release the now-unnecessary directory metadata blocks, as the assumption is that they might be needed again soon enough.
You might have to unmount the filesystem and run a e2fsck -C0 -f -D /dev/sda9 on it to optimize the directories, to get the extra directory metadata blocks deallocated and the existing directory entries consolidated to a smaller space.
Since it's your /home filesystem, you might be able to do it by making sure all regular user accounts are logged out, then logging in locally as root (typically on the text console). If umount /home in that situation reports that the filesystem is busy, you can use fuser -m /dev/sda9 to identify the processes blocking you from unmounting /home. If they are remnants of old user sessions, you can probably just kill them; but if they belong to services, you might want to stop those services in a controlled manner.
The other classic way to do this sort of major maintenance to /home would be to boot the system into single-user/emergency mode. On distributions using systemd, the boot option systemd.unit=emergency.target should do it.
And as others have mentioned, there is an even simpler solution, if preserving the timestamps of the directory is not important, and the problem directory is not the root directory of the filesystem it's in: create a new directory alongside the "bloated" one, move all files to the new directory, remove the old directory, and rename the new directory to have the same name as the old one did. For example, if /directory/A is the one with the problem:
mkdir /directory/B
mv /directory/A/* /directory/B/ # regular files and sub-directories
mv /directory/A/.??* /directory/B/ # hidden files/dirs too
rmdir /directory/A
mv /directory/B /directory/A
Of course, if the directory is being used by any services, it would be a good idea to stop those services first.
| Why does "ls" take extremely long in a small directory that used to be big? How to fix this? |
1,357,727,051,000 |
I have a very high density virtualized environment with containers, so I'm trying to make each container really small. "Really small" means 87 MB on base Ubuntu 14.04 (Trusty Tahr) without breaking up the package manager compatibility.
So I use LVM as a backing storage for my containers and recently I found very strange numbers. Here they are.
Let's create a 100 MiB (yeah, power of 2) logical volume.
sudo lvcreate -L100M -n test1 /dev/purgatory
I'd like to check the size, so I issue sudo lvs --units k
test1 purgatory -wi-a---- 102400.00k
Sweet, this is really 100 MiB.
Now let's make an ext4 filesystem. And of course, we remember -m 0 parameter, which prevents space waste.
sudo mkfs.ext4 -m 0 /dev/purgatory/test1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
25688 inodes, 102400 blocks
0 blocks (0.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67371008
13 block groups
8192 blocks per group, 8192 fragments per group
1976 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
Sweet and clean. Mind the block size - our logical volume is small, so mkfs.ext4 decided to make a 1 KiB sized block, not the usual 4 KiB.
Now we will mount it.
sudo mount /dev/purgatory/test1 /mnt/test1
And let's call df without parameters (we would like to see 1 KiB-blocks)
/dev/mapper/purgatory-test1 95054 1550 91456 2% /mnt/test1
Wait, oh shi~
We have 95054 blocks total. But the device itself has 102400 blocks of 1 KiB. We have only 92.8% of our storage. Where are my blocks, man?
Let's look at it on a real block device. I have a 16 GiB virtual disk, 16777216 blocks of 1K, but only 15396784 blocks are in df output. 91.7%, what is it?
Now follows the investigation (spoiler: no results)
Filesystem could begin not at the beginning of the device. This is strange, but possible. Luckily, ext4 has magic bytes, let's check their presence.
sudo hexdump -C /dev/purgatory/test1 | grep "53 ef"
This shows superblock:
00000430 a9 10 e7 54 01 00 ff ff 53 ef 01 00 01 00 00 00 |...T....S.......|
Hex 430 = Dec 1072, so somewhere after the first kilobyte. Looks reasonable, ext4 skips the first 1024 bytes for oddities like VBR, etc.
This is the journal!
No, it is not. The journal takes its space from Available in the df output.
Oh, we have dumpe2fs and could check the sizes there!
... a lot of greps ...
sudo dumpe2fs /dev/purgatory/test1 | grep "Free blocks"
Ouch.
Free blocks: 93504
Free blocks: 3510-8192
Free blocks: 8451-16384
Free blocks: 16385-24576
Free blocks: 24835-32768
Free blocks: 32769-40960
Free blocks: 41219-49152
Free blocks: 53249-57344
Free blocks: 57603-65536
Free blocks: 65537-73728
Free blocks: 73987-81920
Free blocks: 81921-90112
Free blocks: 90113-98304
Free blocks: 98305-102399
And we have another number. 93504 free blocks.
The question is: what is going on?
Block device: 102400k (lvs says)
Filesystem size: 95054k (df says)
Free blocks: 93504k (dumpe2fs says)
Available size: 91456k (df says)
|
Try this: mkfs.ext4 -N 104 -m0 -O ^has_journal,^resize_inode /dev/purgatory/test1
I think this will let you understand "what is going on".
-N 104 (set the number of inodes your filesystem should have)
every inode "costs" usable space (128 bytes)
-m 0 (no reserved blocks)
-O ^has_journal,^resize_inode (deactivate the features has_journal and resize_inode)
resize_inode "costs" free space (most of the 1550 1K-Blocks/2% you see in your df - 12K are used for the "lost+found" folder)
has_journal "costs" usable space (4096 1K-Blocks in your case)
We get 102348 out of 102400, another 52 blocks unusable (if we have deleted the "lost+found" folder). Therefore we dive into dumpe2fs:
Group 0: (Blocks 1-8192) [ITABLE_ZEROED]
Checksum 0x5ee2, unused inodes 65533
Primary superblock at 1, Group descriptors at 2-2
Block bitmap at 3 (+2), Inode bitmap at 19 (+18)
Inode table at 35-35 (+34)
8150 free blocks, 0 free inodes, 1 directories, 65533 unused inodes
Free blocks: 17-18, 32-34, 48-8192
Free inodes:
Group 1: (Blocks 8193-16384) [BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x56cf, unused inodes 5
Backup superblock at 8193, Group descriptors at 8194-8194
Block bitmap at 4 (+4294959107), Inode bitmap at 20 (+4294959123)
Inode table at 36-36 (+4294959139)
8190 free blocks, 6 free inodes, 0 directories, 5 unused inodes
Free blocks: 8193-16384
Free inodes: 11-16
Group 2: (Blocks 16385-24576) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x51eb, unused inodes 8
Block bitmap at 5 (+4294950916), Inode bitmap at 21 (+4294950932)
Inode table at 37-37 (+4294950948)
8192 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 16385-24576
Free inodes: 17-24
Group 3: (Blocks 24577-32768) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x3de1, unused inodes 8
Backup superblock at 24577, Group descriptors at 24578-24578
Block bitmap at 6 (+4294942725), Inode bitmap at 22 (+4294942741)
Inode table at 38-38 (+4294942757)
8190 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 24577-32768
Free inodes: 25-32
Group 4: (Blocks 32769-40960) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x79b9, unused inodes 8
Block bitmap at 7 (+4294934534), Inode bitmap at 23 (+4294934550)
Inode table at 39-39 (+4294934566)
8192 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 32769-40960
Free inodes: 33-40
Group 5: (Blocks 40961-49152) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x0059, unused inodes 8
Backup superblock at 40961, Group descriptors at 40962-40962
Block bitmap at 8 (+4294926343), Inode bitmap at 24 (+4294926359)
Inode table at 40-40 (+4294926375)
8190 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 40961-49152
Free inodes: 41-48
Group 6: (Blocks 49153-57344) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x3000, unused inodes 8
Block bitmap at 9 (+4294918152), Inode bitmap at 25 (+4294918168)
Inode table at 41-41 (+4294918184)
8192 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 49153-57344
Free inodes: 49-56
Group 7: (Blocks 57345-65536) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x5c0a, unused inodes 8
Backup superblock at 57345, Group descriptors at 57346-57346
Block bitmap at 10 (+4294909961), Inode bitmap at 26 (+4294909977)
Inode table at 42-42 (+4294909993)
8190 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 57345-65536
Free inodes: 57-64
Group 8: (Blocks 65537-73728) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0xf050, unused inodes 8
Block bitmap at 11 (+4294901770), Inode bitmap at 27 (+4294901786)
Inode table at 43-43 (+4294901802)
8192 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 65537-73728
Free inodes: 65-72
Group 9: (Blocks 73729-81920) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x50fd, unused inodes 8
Backup superblock at 73729, Group descriptors at 73730-73730
Block bitmap at 12 (+4294893579), Inode bitmap at 28 (+4294893595)
Inode table at 44-44 (+4294893611)
8190 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 73729-81920
Free inodes: 73-80
Group 10: (Blocks 81921-90112) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x60a4, unused inodes 8
Block bitmap at 13 (+4294885388), Inode bitmap at 29 (+4294885404)
Inode table at 45-45 (+4294885420)
8192 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 81921-90112
Free inodes: 81-88
Group 11: (Blocks 90113-98304) [INODE_UNINIT, BLOCK_UNINIT, ITABLE_ZEROED]
Checksum 0x28de, unused inodes 8
Block bitmap at 14 (+4294877197), Inode bitmap at 30 (+4294877213)
Inode table at 46-46 (+4294877229)
8192 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 90113-98304
Free inodes: 89-96
Group 12: (Blocks 98305-102399) [INODE_UNINIT, ITABLE_ZEROED]
Checksum 0x9223, unused inodes 8
Block bitmap at 15 (+4294869006), Inode bitmap at 31 (+4294869022)
Inode table at 47-47 (+4294869038)
4095 free blocks, 8 free inodes, 0 directories, 8 unused inodes
Free blocks: 98305-102399
Free inodes: 97-104
and count the used blocks (for Backup superblock, Group descriptors, Block bitmap, Inode bitmap and Inode table) or we grep and count:
LANG=C dumpe2fs /dev/mapper/vg_vms-test1 | grep ' at ' | grep -v ',' | wc -l
which gives us the count of lines which have a single block (in our example) and
LANG=C dumpe2fs /dev/mapper/vg_vms-test1 | grep ' at ' | grep ',' | wc -l
which gives us the count of lines which have two blocks (in our example).
So we have (in our example) 13 lines with one block each and 19 lines with two blocks each.
13+19*2
which gives us 51 blocks which are in use by ext4 itself. Finally there is only one block left: block 0, the skipped 1024 bytes at the beginning, reserved for things like the boot sector.
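The accounting above can be restated as a trivial shell check (counts taken from the example output):

```shell
# 13 dumpe2fs lines describing a single metadata block each,
# plus 19 lines describing two metadata blocks each
single=13; double=19
echo $((single + double * 2))   # prints 51
```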
| Why doesn't my exactly 100 MiB partition at 1 KiB block size have the corresponding available blocks/space? |
1,357,727,051,000 |
I have a Debian system here. fsck runs from time to time while booting (on an ext4 file system).
I get messages like this:
inode extent tree (at level 1) could be shorter IGNORED
What do they mean?
|
They mean that e2fsck determined that an extent tree (a data structure used to point to data in the file system) could be restructured to have less depth (presumably because it tracked extents in the past which are no longer in use, so the tree could be rebalanced). That’s not much of a problem in practice, unless the extent depth is greater than the maximum; so it can be ignored, as you’re seeing. If an extent tree is too big, e2fsck will force a rebuild and you won’t see the IGNORED message.
If you run e2fsck interactively, it will ask you whether it should fix these trees, instead of just ignoring them.
| "inode extent tree (at level 1) could be shorter IGNORED" |
1,357,727,051,000 |
I can successfully mount an ext4 partition, the problem is that all the files on the partition are owned by the user with userid 1000. On one machine, my userid is 1000, but on another it's 1010. My username is the same on both machines, but I realise that the filesystem stores userids, not usernames.
I could correct the file ownership with something like the following:
find /mnt/example -exec chown -h 1010 {} \;
But then I would have to correct the file ownerships again back to 1000 when I mount this external drive on another machine.
What I would like is to give mount an option saying map userid 1000 to 1010, so that I don't have to actually modify any files. Is there a way to do this?
|
Take a look at the bindfs package. bindfs is a FUSE filesystem that allows for various manipulations of file permissions, file ownership etc. on top of existing file systems.
You are looking specifically for the --map option of bindfs:
--map=user1/user2:@group1/@group2:..., -o map=...
Given a mapping user1/user2, all files owned by user1 are shown as owned by user2. When user2 creates files, they are chowned to user1 in the underlying directory. When files are chowned to user2, they are chowned to user1 in the underlying directory. Works similarly for groups.
A single user or group may appear no more than once on the left and once on the right of a slash in the list of mappings. Currently, the options --force-user, --force-group, --mirror, --create-for-*, --chown-* and --chgrp-* override the corresponding behavior of this option.
Requires mounting as root.
So to map your files with user id 1001 in /mnt/wrong to /mnt/correct with user id 1234, run this command:
sudo bindfs --map=1001/1234 /mnt/wrong /mnt/correct
| How can I mount a filesystem, mapping userids? |
1,357,727,051,000 |
I want to format my USB stick to ext4 and just use as I would any other typical non-linux format drive (FAT32, exFAT, NTFS).
That is to say, I want to be able to plug the usb stick into any of my linux machines, and read/write to it without needing to adjust permissions, like doing chmod or chown stuff.
I would prefer to use GUI partition software like GParted, rather than command-line commands, though I welcome any solution!
I'm sure a post like this is duplicate flag heaven for some, but after browsing 6~10 SO and forum posts from google, I didn't find a simple solution to my question. Seemed like everything was about adjusting permissions on a per-user basis. Maybe you just cannot use ext4 drives with the same brainless convenience as NTFS.
|
Like any unix-style filesystem, ext4 includes standard Unix file ownership and permission conventions. That is, the user is identified by an UID number, and each user will belong to one or more groups, each group identified by its GID number. Each file has an owner UID and one group owner GID. The three classic Unix file permission sets are:
one set of permissions for the owner, identified by the owner's UID number
one set of permissions for the group owner, identified by the group's GID number
one set of permissions for everyone else
In order to be able to access the stick without needing to adjust permissions, you must make sure any files and directories created on the stick will have non-restrictive permissions automatically. The problem is, permissions on any new files created are controlled by the umask value... and you don't really want to keep changing it to 000 for creating files on the USB stick and back to the default value (usually 002 or 022) for normal use. A single mistake could lead you creating an important configuration file with wide-open permissions, that might compromise the security of your user account or cause other more minor problems.
If you can make sure that your normal user's UID number is the same across all your Linux systems, and you only care about access for that one user (plus root of course), you can get away with just formatting the USB stick to ext4, mounting it for the first time, and assigning the ownership of its root directory to your regular user account before you begin using the filesystem.
Assuming that /dev/sdX1 is the USB stick partition you wish to create the filesystem in, and <username> is your username, you can do this when setting up the USB stick for use:
sudo mkfs.ext4 /dev/sdX1
sudo mount /dev/sdX1 /mnt
sudo chown <username>: /mnt
sudo umount /mnt
But if you cannot guarantee matching UID/GID numbers, and/or there are multiple users who might want to use the USB stick, you'll need to do something a bit more complicated, but still an one-time operation after creating the ext4 filesystem on the stick.
We need to set a default ACL on the root directory of the USB stick filesystem that assigns full access to everyone on any new file or directory. And to ensure that the stick will be mounted with ACL support enabled, we need to use tune2fs to adjust the default mount options stored in the filesystem metadata.
sudo mkfs.ext4 /dev/sdX1
sudo tune2fs -o acl /dev/sdX1
sudo mount /dev/sdX1 /mnt
sudo chown <username>: /mnt
chmod 777 /mnt
setfacl -m d:u::rwx,d:g::rwx,d:o::rwx /mnt
sudo umount /mnt
Assuming that all your systems support ACLs on ext4 filesystems, and that any removable media mounting tool you might use won't choose to ignore the acl mount option, you now should have a USB stick on which all files created on it will have permissions -rw-rw-rw- and all created sub-directories will be drwxrwxrwx+. The plus sign indicates that the sub-directory will have an ACL: the custom default permission set configured for the stick's root directory will be inherited by the sub-directories too, and they will behave the same.
The owner UID/GID will still match the UID and primary GID of the user that created the file on the filesystem, but because of relaxed file and directory permissions, that should not be much of an issue.
The only problem I might expect is that copying files to the USB stick will by default attempt to duplicate the file permissions of the original, which you don't want in this case.
For example, if you create a file on System A with permissions -rw-r--r-- and copy it to the stick, then move the stick to System B with non-matching UID numbers. You can still read the file on System B, but you cannot overwrite it on the stick without first explicitly deleting or renaming the original file. But you can do that, as long as you have write access to the directory the file's in.
This can actually be a useful feature: if you modify the same file on multiple systems, this will push you towards saving a new version of the file each time instead of overwriting the One True File... and if the file is important, that might actually be a good thing.
| How to make an ext4 formatted usb drive with full RW permissions for any linux machine? |
1,357,727,051,000 |
In the ext4 wiki article I've seen that ext4 can be used up to 1 EiB, but is only recommended up to 16 TiB. Why is that the case? Why is XFS recommended for larger file systems?
(ELICS: Explain me like I'm a CS student, but without much knowledge in file systems)
|
The exact quote from the ext4 Wikipedia entry is
However, Red Hat recommends using XFS instead of ext4 for volumes larger than 100 TB.
The ext4 howto mentions that
The code to create file systems bigger than 16 TiB is, at the time of writing this article, not in any stable release of e2fsprogs. It will be in future releases.
which would be one reason to avoid file systems larger than 16 TiB, but that note is outdated: e2fsprogs since version 1.42 (November 2011) is quite capable of creating and processing file systems larger than 16 TiB. mke2fs uses the big and huge types for such systems (actually, big between 4 and 16 TiB, huge beyond); these increase the inode ratio so that fewer inodes are provisioned.
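The 16 TiB threshold itself falls out of simple arithmetic: classic ext4 block numbers are 32-bit, and with the usual 4 KiB block size that addresses at most:

```python
BLOCK_SIZE = 4096          # the common ext4 block size, in bytes
MAX_BLOCKS_32BIT = 2**32   # block numbers without the 64bit feature

limit_bytes = BLOCK_SIZE * MAX_BLOCKS_32BIT
print(limit_bytes // 2**40, "TiB")   # -> 16 TiB
```

Going past this requires 64-bit block numbers (the 64bit feature), which is part of what the newer e2fsprogs releases handle.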
Returning to the Red Hat recommendation, as of RHEL 7.3, XFS is the default file system, supported up to 500 TiB, and ext4 is only supported up to 50 TiB. I think this is contractual rather than technical, although the Storage Administration Guide phrases the limits in a technical manner (without going into much detail). I imagine there are technical or performance reasons for the 50 TiB limit...
The e2fsprogs release notes do give one reason to avoid file systems larger than 16 TiB: apparently, the resize_inode feature has to be disabled on file systems larger than this.
| Why is ext4 only recommended up to 16 TB? |
1,357,727,051,000 |
I want to shrink an ext4 filesystem to make room for a new partition and came across the resize2fs program. The command looks like this:
resize2fs -p /dev/mapper/ExistingExt4 $size
How should I determine $size if I want to substract exactly 15 GiB from the current ext4 filesystem? Can I use the output of df somehow?
|
You should not use df because it shows the size as reported by the filesystem (in this case, ext4).
Use the dumpe2fs -h /dev/mapper/ExistingExt4 command to find out the real size of the partition. The -h option makes dumpe2fs show super block info without a lot other unnecessary details. From the output, you need the block count and block size.
...
Block count: 19506168
Reserved block count: 975308
Free blocks: 13750966
Free inodes: 4263842
First block: 0
Block size: 4096
...
Multiplying these values gives the filesystem size in bytes.
The above numbers happen to be a perfect multiple of 1024, so we can calculate the result in KiB:
$ python -c 'print 19506168.0 * 4096 / 1024' # python2
$ python -c 'print(19506168.0 * 4096 / 1024)' # python3
78024672.0
Since you want to shrink the partition by 15 GiB (which is 15 MiB times 1 KiB):
$ python -c 'print 19506168.0 * 4096 / 1024 - 15 * 1024 * 1024' #python2
$ python -c 'print(19506168.0 * 4096 / 1024 - 15 * 1024 * 1024)' #python3
62296032.0
As resize2fs accepts several kinds of suffixes, one of them being K for "1024 bytes", the command for shrinking the partition to 62296032 KiB becomes:
resize2fs -p /dev/mapper/ExistingExt4 62296032K
Without a unit, the number will be interpreted as a multiple of the filesystem's block size (4096 in this case). See the resize2fs(8) man page.
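The calculation above can be wrapped in a small helper; the numbers are the ones from the dumpe2fs output earlier in this answer.

```python
def shrink_target_kib(block_count, block_size, shrink_gib):
    """New filesystem size in KiB after removing shrink_gib GiB.
    block_count and block_size come from `dumpe2fs -h`."""
    current_kib = block_count * block_size // 1024
    return current_kib - shrink_gib * 1024 * 1024

# Block count 19506168, block size 4096, shrinking by 15 GiB:
print(shrink_target_kib(19506168, 4096, 15))  # -> 62296032
```

The result is the value to pass to resize2fs with a K suffix.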
| How do I determine the new size for resize2fs? |
1,357,727,051,000 |
We have seen the OS doing a copy-on-write optimisation when forking a process. The reason is that fork is most of the time followed by exec, so we don't want to incur the cost of page allocations and copying the data from the parent's address space unnecessarily.
So does this also happen when doing cp on Linux with ext4 or XFS (journaling) file systems? If it does not happen, then why not?
|
The keyword to search is reflink. It was recently implemented in XFS.
EDIT: the XFS implementation was initially marked EXPERIMENTAL. This warning was removed in the kernel release 4.16, a number of months after I wrote the above :-).
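From userspace, cp --reflink=auto requests such a copy-on-write clone; the same request can be issued directly with the Linux FICLONE ioctl. A hedged sketch: it attempts the clone and falls back to an ordinary byte copy when the filesystem (ext4, for instance) does not support reflinks.

```python
import fcntl
import os
import shutil
import tempfile

FICLONE = 0x40049409  # from linux/fs.h: _IOW(0x94, 9, int)

def clone_or_copy(src, dst):
    """Try a copy-on-write clone (reflink); fall back to a byte copy."""
    with open(src, "rb") as s, open(dst, "wb") as d:
        try:
            fcntl.ioctl(d.fileno(), FICLONE, s.fileno())
            return "reflinked"
        except OSError:   # e.g. EOPNOTSUPP on non-reflink filesystems
            shutil.copyfileobj(s, d)
            return "copied"

tmp = tempfile.mkdtemp()
src, dst = os.path.join(tmp, "a"), os.path.join(tmp, "b")
with open(src, "w") as f:
    f.write("hello\n")
print(clone_or_copy(src, dst))  # "reflinked" on XFS/Btrfs, "copied" elsewhere
```

Either way the destination ends up with identical contents; only the on-disk sharing of blocks differs.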
| Does any file system implement Copy on Write mechanism for CP? |
1,357,727,051,000 |
I'm migrating a server from an Ubuntu Server 18.04 instance ("saturn") to a newly-built Debian Buster 10 system ("enceladus"). I have copied a complete filesystem across the network using
sudo rsync --progress -au --delete --rsync-path="sudo rsync" /u/ henry@enceladus:/u
I check the number of directories and the number of files on the sending and receiving side: the counts are identical. I have an RYO Perl program which traverses the file tree and compares each file in one tree with its counterpart in the other: it finds no differences in 52,190 files. Both filesystems are ext4; both report 512-byte logical and 4096-byte physical blocks.
Yet the receiving filesystem is 103,226,592,508 bytes and the sending one only 62,681,486,428. If the received filesystem were a little smaller I could understand it, because of unreclaimed blocks; but it's the other way round, and the difference is two thirds the original!
How can this be? Should I worry about it, as being evidence of some malfunction?
|
I can think of two things offhand:
you didn't use -H, so hardlinks are lost.
you didn't use -S, so sparse files may have been expanded
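The sparse-file case is easy to reproduce: a file's apparent size (st_size) and its allocated size (st_blocks × 512) can differ enormously, and a copy that does not preserve holes allocates the full apparent size on the receiver.

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
path = os.path.join(tmp, "sparse")

# A 100 MiB file with a single real byte at the end; the rest is a hole.
with open(path, "wb") as f:
    f.seek(100 * 1024 * 1024 - 1)
    f.write(b"\0")

st = os.stat(path)
print("apparent size: ", st.st_size)           # 104857600 bytes
print("allocated size:", st.st_blocks * 512)   # typically just a few KiB
```

rsync -S recreates the holes on the receiving side instead of writing out the zeroes.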
| Filesystem copied to new server is 60% bigger - why |
1,357,727,051,000 |
Playing with e2fsprogs debugfs, by change/accident, a file named filen/ame was created. Obviously the forward slash character / serves as the special separator character in pathnames.
Still using debugfs I wanted to remove the file named filen/ame, but I had little success, since the / character is not interpreted as part of the filename.
Does debugfs provide a way to remove this file containing the slash? If so how?
I used:
cd /tmp
echo "content" > contentfile
dd if=/dev/zero of=/tmp/ext4fs bs=1M count=50
mkfs.ext4 /tmp/ext4fs
debugfs -w -R "write /tmp/contentfile filen/ame" /tmp/ext4fs
debugfs -w -R "ls" /tmp/ext4fs
which outputs:
debugfs 1.43.4 (31-Jan-2017)
2 (12) . 2 (12) .. 11 (20) lost+found 12 (980) filen/ame
I tried the following to remove the filen/ame file:
debugfs -w -R "rm filen/ame" /tmp/ext4fs
but this did not work and only produced:
debugfs 1.43.4 (31-Jan-2017)
rm: File not found by ext2_lookup while trying to resolve filename
Apart from changing the content of the directory node manually, is there a way to remove the file using debugfs ?
|
If you want a fix and are not just trying out debugfs, you can have fsck do the work for you. Mark the filesystem as dirty and run fsck -y to get the filename changed:
$ debugfs -w -R "dirty" /tmp/ext4fs
$ fsck -y /tmp/ext4fs
...
/tmp/ext4fs was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Entry 'filen/ame' in / (2) has illegal characters in its name.
Fix? yes
...
$ debugfs -w -R "ls" /tmp/ext4fs
2 (12) . 2 (12) .. 11 (20) lost+found 12 (980) filen.ame
| How to delete a file named "filen/ame" (with slash) on an ext4 filesystem in debugfs? |
1,357,727,051,000 |
I have a micro SD card which has a FAT32 partition and an EXT4 partition. The EXT4 partition will no longer mount. dmesg shows the following error:
EXT4-fs (sdb2): bad geometry: block count 2199023779840 exceeds size of device (524288 blocks)
I've Googled, but still don't fully understand where the problem is (in the partition table? the filesystem?) nor how to fix it. I have attempted a number of solutions:
Using testdisk to write the partition table
Using fsck to restore the superblock from the backups (I've tried all of them). e.g. fsck.ext4 -b 163840 -B 4096 /dev/sdb2
Using fsck -cc to check for bad blocks
Using resize2fs to set the size of the partition. Output: The combination of flex_bg and !resize_inode features is not supported by resize2fs.
When I run fsck, it comes up with a bunch of errors (full output below), which it claims to fix. If I run it again, however, it shows the same errors all over again, every time.
How can I fix the bad geometry issue and make my filesystem mountable again? How did this happen?
fsck output:
e2fsck 1.42 (29-Nov-2011)
One or more block group descriptor checksums are invalid. Fix<y>? yes
Group descriptor 0 checksum is invalid. FIXED.
Group descriptor 1 checksum is invalid. FIXED.
Group descriptor 2 checksum is invalid. FIXED.
Group descriptor 3 checksum is invalid. FIXED.
Group descriptor 4 checksum is invalid. FIXED.
Group descriptor 5 checksum is invalid. FIXED.
Group descriptor 6 checksum is invalid. FIXED.
Group descriptor 7 checksum is invalid. FIXED.
Group descriptor 8 checksum is invalid. FIXED.
Group descriptor 9 checksum is invalid. FIXED.
Group descriptor 10 checksum is invalid. FIXED.
Group descriptor 11 checksum is invalid. FIXED.
Group descriptor 12 checksum is invalid. FIXED.
Group descriptor 13 checksum is invalid. FIXED.
Group descriptor 14 checksum is invalid. FIXED.
Group descriptor 15 checksum is invalid. FIXED.
/dev/sdb2 contains a file system with errors, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong for group #0 (24465, counted=24466).
Fix<y>? yes
Free blocks count wrong for group #2 (4788, counted=5812).
Fix<y>? yes
Free blocks count wrong for group #3 (8710, counted=8881).
Fix<y>? yes
Free blocks count wrong for group #8 (5682, counted=22066).
Fix<y>? yes
Free blocks count wrong (299742, counted=317322).
Fix<y>? yes
Inode bitmap differences: -(8193--8194) -8197 -8208 -(8225--8226) -8229 -(8240--8241) -(8257--8258) -8261 -8272 -8274 -(8289--8290) -8293 -(8304--8306) -(8321--8322) -8325 -8336 -8339 -16387 -16389 -16400 -16419 -16421 -(16432--16433) -16451 -16453 -16464 -16466 -16483 -16485 -(16496--16498) -16515 -16517 -16528 -16531 -24577 -24579 -24581 -24592 -24609 -24611 -24613 -(24624--24625) -24641 -24643 -24645 -24656 -24658 -24673 -24675 -24677 -(24688--24690) -24705 -24707 -24709 -24720 -24723 -(32770--32771) -32773 -32784 -(32802--32803) -32805 -(32816--32817) -(32834--32835) -32837 -32848 -32850 -(32866--32867) -32869 -(32880--32882) -(32898--32899) -32901 -32912 -32915 -(40961--40963) -40965 -40976 -(40993--40995) -40997 -(41008--41009) -(41025--41027) -41029 -41040 -41042 -(41057--41059) -41061 -(41072--41074) -(41089--41091) -41093 -41104 -41107 -(49156--49157) -49168 -(49188--49189) -(49200--49201) -(49220--49221) -49232 -49234 -(49252--49253) -(49264--49266) -(49284--49285) -49296 -49299 -57345 -(57348--57349) -57360 -57377 -(57380--57381) -(57392--57393) -57409 -(57412--57413) -57424 -57426 -57441 -(57444--57445) -(57456--57458) -57473 -(57476--57477) -57488 -57491 -65538 -(65540--65541) -65552 -65570 -(65572--65573) -(65584--65585) -65602 -(65604--65605) -65616 -65618 -65634 -(65636--65637) -(65648--65650) -65666 -(65668--65669) -65680 -65683 -(73729--73730) -(73732--73733) -73744 -(73761--73762) -(73764--73765) -(73776--73777) -(73793--73794) -(73796--73797) -73808 -73810 -(73825--73826) -(73828--73829) -(73840--73842) -(73857--73858) -(73860--73861) -73872 -73875 -(81923--81925) -81936 -(81955--81957) -(81968--81969) -(81987--81989) -82000 -82002 -(82019--82021) -(82032--82034) -(82051--82053) -82064 -82067 -90113 -(90115--90117) -90128 -90145 -(90147--90149) -(90160--90161) -90177 -(90179--90181) -90192 -90194 -90209 -(90211--90213) -(90224--90226) -90241 -(90243--90245) -90256 -90259 -(98306--98309) -98320 -(98338--98341) -(98352--98353) -(98370--98373) 
-98384 -98386 -(98402--98405) -(98416--98418) -(98434--98437) -98448 -98451 -(106497--106501) -106512 -(106529--106533) -(106544--106545) -(106561--106565) -106576 -106578 -(106593--106597) -(106608--106610) -(106625--106629) -106640 -106643 -114694 -114704 -114726 -(114736--114737) -114758 -114768 -114770 -114790 -(114800--114802) -114822 -114832 -114835 -122881 -122886 -122896 -122913 -122918 -(122928--122929) -122945 -122950 -122960 -122962 -122977 -122982 -(122992--122994) -123009 -123014 -123024 -123027
Fix<y>? yes
Free inodes count wrong for group #0 (7803, counted=7804).
Fix<y>? yes
Free inodes count wrong (130683, counted=130684).
Fix<y>? yes
/dev/sdb2: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sdb2: 388/131072 files (22.7% non-contiguous), 206966/524288 blocks
fdisk -l output:
Disk /dev/sdb: 16.0 GB, 16012804096 bytes
64 heads, 32 sectors/track, 15271 cylinders, total 31275008 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0005ce93
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 27080703 13539328 c W95 FAT32 (LBA)
/dev/sdb2 27080704 31275007 2097152 83 Linux
|
Since I wasn't able to find any other solution, I reformatted the EXT4 partition. This eliminated the bad geometry error. Wish I knew why.
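For what it's worth, one hedged hypothesis for the "why": the bogus block count in the error message differs from the true device size by exactly one bit, which is consistent with a single corrupted bit (bit 41) in the superblock's 64-bit block count rather than any real geometry problem:

```python
real_blocks = 524288         # device size in blocks, from the error message
bad_blocks = 2199023779840   # block count the kernel read from the superblock

diff = bad_blocks ^ real_blocks
print(hex(diff))             # -> 0x20000000000, a single set bit
print(diff == 1 << 41)       # -> True
```

A flipped bit like this could come from flaky flash or a bad card reader, which would also explain why fsck kept "fixing" the same errors.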
| Fix EXT4-fs bad geometry (block count exceeds size of device) |
1,357,727,051,000 |
I'm wondering what the output of lsattr means. It prints oddly, as follows, when I tried lsattr /usr:
$ lsattr /usr
-----------------e- /usr/local
-----------------e- /usr/src
-----------------e- /usr/games
--------------I--e- /usr/include
--------------I--e- /usr/share
--------------I--e- /usr/lib
-----------------e- /usr/lib32
--------------I--e- /usr/bin
--------------I--e- /usr/sbin
I've read the man page of chattr and lsattr but still have no idea.
|
The man page for chattr contains all the info you need to understand the lsattr output.
excerpt
The letters 'aAcCdDeFijmPsStTux' select the new attributes for the files: append only (a), no atime updates (A), compressed (c), no copy on write (C), no dump (d), synchronous directory updates (D), extent format (e), case-insensitive directory lookups (F), immutable (i), data journaling (j), don't compress (m), project hierarchy (P), secure deletion (s), synchronous updates (S), no tail-merging (t), top of directory hierarchy (T), undeletable (u), and direct access for files (x).
The following attributes are read-only, and may be listed by lsattr(1) but not modified by chattr: encrypted (E), indexed directory (I), inline data (N), and verity (V).
If you take a look at the descriptions' of the tags further down in that same man page:
The e attribute indicates that the file is using extents for mapping the blocks on disk. It may not be removed using chattr(1).
The I attribute is used by the htree code to indicate that a directory is being indexed using hashed trees. It may not be set or cleared using chattr(1), although it can be displayed by lsattr(1).
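For quick reference, the flag letters can be decoded mechanically; this sketch covers only a subset of the attributes listed in the man page excerpt above (the full table is in chattr(1)).

```python
# Subset of the chattr/lsattr attribute letters quoted above
ATTRS = {
    "a": "append only",
    "c": "compressed",
    "e": "extent format",
    "i": "immutable",
    "j": "data journaling",
    "I": "indexed directory (htree)",
}

def decode(flags):
    """Translate one lsattr flag field into human-readable attributes."""
    return [ATTRS.get(ch, "unknown: " + ch) for ch in flags if ch != "-"]

print(decode("--------------I--e-"))  # -> ['indexed directory (htree)', 'extent format']
print(decode("-----------------e-"))  # -> ['extent format']
```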
| What's the meaning of output of lsattr |
1,357,727,051,000 |
I am in the process of resizing a LUKS-encrypted partition that contains a single ext4 filesystem (no LVM or the like). The cryptsetup FAQ recommends removing the old partition and recreating it, but that sounds like wasting a lot of time. Therefore I want to proceed by manually, carefully resizing the partition.
So far, I think that I need to do:
Create an (encrypted) backup of the filesystem. Important! You won't be the first to lose your data while performing the following tasks.
Unmount the existing ext4 filesystem (e.g. by booting into a Live CD). If booting from a Live CD, mount the encrypted partition using cryptsetup luksOpen /dev/sdXY ExistingExt4
Resize the existing ext4 filesystem.
cryptsetup resize /dev/mapper/ExistingExt4 -b $SECTORS
Close/ "unmount" the LUKS partition using cryptsetup luksClose ExistingExt4
Shrink the partition size.
Are the above steps correct?
In step 4, what should I choose for $SECTORS? Is this step even necessary? The cryptsetup manual page is not really descriptive on the resize option:
resize <name>
resizes an active mapping <name>.
If --size (in sectors) is not specified, the size of the underlying
block device is used.
Finally, if I shrink the ext4 partition by 15 GiB, can I safely assume that 15 GiB can be removed from the existing partition using parted? If yes, how to do so? My disk is GPT partitioned, if that matters.
|
After backing up (step 1) and unmounting (between 2 and 3), run fsck to ensure that the filesystem is healthy:
e2fsck -f /dev/mapper/ExistingExt4
Other than that, the steps are OK.
Purpose of the cryptsetup resize command
what should I choose for $SECTORS? Is this step even necessary?
This step is not strictly necessary: it only affects the size reported for the currently active mapping. Without it, the partition still shows up at the old size (confirmed with Nautilus: even after resizing with resize2fs, the LUKS partition showed the old size, and after running cryptsetup resize the correct number was shown). Closing and re-opening the LUKS partition restores the correct size anyway, so closing the LUKS partition as shown later makes this step obsolete.
$SECTORS can be determined by looking at the output of cryptsetup status ExistingExt4:
/dev/mapper/ExistingExt4 is active.
type: LUKS1
cipher: aes-cbc-essiv:sha256
keysize: 256 bits
device: /dev/sda2
sector size: 512
offset: 2056 sectors
size: 156049348 sectors
mode: read/write
(As of cryptsetup 2.0.0 (December 2017), the sector size may be larger than 512 bytes: see the cryptsetup(8) manpage and the --sector-size option.)
Thus, to subtract 15 GiB, use a sector count of 156049348 - 15 * 1024 * 1024 * 2 = 124592068:
cryptsetup resize ExistingExt4 -b 124592068
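Both sector calculations in this answer follow the same GiB-to-512-byte-sectors pattern; a small helper (numbers taken from the cryptsetup status output above):

```python
def shrink_sectors(current_sectors, shrink_gib, sector_size=512):
    """New sector count after removing shrink_gib GiB (sector_size bytes each)."""
    return current_sectors - shrink_gib * 1024**3 // sector_size

# 156049348 sectors reported by `cryptsetup status`, minus 15 GiB:
print(shrink_sectors(156049348, 15))  # -> 124592068
```

The same function gives the new partition end used in the parted section below it, starting from 156301438 sectors.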
Resizing the partition with parted
As for resizing the partition, parted works fine with GPT partitions. The resize command does not work however, as a workaround (or solution), remove the partition information and create a new partition as inspired by http://ubuntuforums.org/showthread.php?p=8721017#post8721017:
# cryptsetup luksClose ExistingExt4
# parted /dev/sda2
GNU Parted 2.3
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) unit s
(parted) p
Model: ATA INTEL SSDSA2CW08 (scsi)
Disk /dev/sda: 156301488s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 34s 2082s 2049s Boot bios_grub
3 2083s 250034s 247952s ext2 RootBoot
2 250035s 156301438s 156051404s Everything
As 15 GiB has to be shaved off, the new end becomes 156301438 - 15 * 1024 * 1024 * 2 = 124844158. Since I want to change partition 2, I first have to remove it and then recreate it with the label "Everything" (this could be changed if you like). Note: this disk has a GPT layout. For MBR, you would replace Everything by primary or extended (untested; resizing an MBR partition this way has not been tested and is therefore not recommended).
WARNING: the following commands has destroyed data. Do not copy it without understanding what is happening. The sector dimensions must be changed, otherwise you WILL destroy your partition(s). I am in no way responsible for your stupidness, BACKUP BACKUP BACKUP your data to a second storage medium before risking your data.
(parted) rm 2
(parted) mkpart Everything 250035s 124844158s
Warning: The resulting partition is not properly aligned for best performance.
Ignore/Cancel? ignore
(parted) p
Model: ATA INTEL SSDSA2CW08 (scsi)
Disk /dev/sda: 156301488s
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Number Start End Size File system Name Flags
1 34s 2082s 2049s Boot bios_grub
3 2083s 250034s 247952s ext2 RootBoot
2 250035s 124844158s 124594124s Everything
(parted) quit
In the above parted example, my sectors are not aligned which is a mistake from an earlier installation, do not pay too much attention to it.
That is it! You can use cryptsetup status and file -Ls /dev/... to verify that everything is OK and then reboot.
| How can I shrink a LUKS partition, what does `cryptsetup resize` do? |
1,357,727,051,000 |
What are the consequences for a ext4 filesystem when I terminate a copying cp command by typing Ctrl + C while it is running?
Does the filesystem get corrupted? Is the partition's space occupied by the incomplete copied file still usable after deleting it?
And, most importantly, is terminating a cp process a safe thing to do?
|
This is safe to do, but naturally you may not have finished the copy.
When the cp command is run, it makes syscalls that instruct the kernel to make copies of the file. A syscall, or system call, is a function that an application can use to request a service from the kernel, such as reading or writing data to the disk. The userspace process simply waits for the syscall to finish. If you were to trace the calls from cp ~/hello.txt /mnt, it would look like:
open("/home/user/hello.txt", O_RDONLY) = 3
open("/mnt/hello.txt", O_CREAT|O_WRONLY, 0644) = 4
read(3, "Hello, world!\n", 131072) = 14
write(4, "Hello, world!\n", 14) = 14
close(3) = 0
close(4) = 0
This repeats for each file that is to be copied. No corruption will occur because of the way these syscalls work. When syscalls like these are entered, a fatal signal will only take effect after the syscall has finished, not while it is running (in fact, signals are only delivered on the kernelspace-to-userspace context switch). Note that some syscalls, like read(), can be interrupted and return early.
Because of this, forcibly killing the process will only cause it to terminate after the currently running syscall has returned. This means that the kernel, where the filesystem driver lives, is free to finish the operations that it needs to complete to put the filesystem into a sane state. Any I/O of this kind will never be terminated in the middle of operation, so there is no risk of filesystem corruption.
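The trace above maps directly onto the raw syscall wrappers; paths here are illustrative. Each call completes as a unit from the process's point of view, so an interrupt can only land between syscalls, leaving at worst a truncated but internally consistent destination file.

```python
import os
import tempfile

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "hello.txt")
dst = os.path.join(tmp, "copy.txt")
with open(src, "w") as f:
    f.write("Hello, world!\n")

# The same open/read/write/close sequence cp issues:
in_fd = os.open(src, os.O_RDONLY)
out_fd = os.open(dst, os.O_CREAT | os.O_WRONLY, 0o644)
while True:
    chunk = os.read(in_fd, 131072)   # read() returns b"" at end of file
    if not chunk:
        break
    os.write(out_fd, chunk)          # signals land between, not inside, these calls
os.close(in_fd)
os.close(out_fd)
print(open(dst).read())              # -> Hello, world!
```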
| What happens when I kill 'cp'? Is it safe and does it have any consequences? |
1,357,727,051,000 |
For a presentation, I need to show ext4 File System is better than NTFS. I searched and got nice article on both ext4 and NTFS
http://en.wikipedia.org/wiki/Ext4
http://en.wikipedia.org/wiki/NTFS
But I need a comparison guideline with better example.
|
"Better" is subjective and not very meaningful. Nevertheless, you can get a good comparison of filesystems (including NTFS and ext4) on Wikipedia. There's also an article on PC World that covers it more briefly.
Ultimately you should remember that performance metrics in this case are not really a good measure of filesystem performance, there are too many variables involved, especially in that the performance of a filesystem is very related to the performance of the driver being used to access it.
| Why ext4 File System is better than NTFS? [closed] |
1,357,727,051,000 |
Ubuntu 14.04 on a desktop
Source Drive: /dev/sda1: 5TB ext4 single
drive volume
Target Volume: /dev/mapper/archive-lvarchive: raid6 (mdadm) 18TB volume with lvm
partition and ext4
There are roughly 15 million files to move, and some may be duplicates (I do not want to overwrite duplicates).
Command used (from source directory) was:
ls -U |xargs -i -t mv -n {} /mnt/archive/targetDir/{}
This has been going on for a few days as expected, but I am getting the error with increasing frequency. When it started the target drive was about 70% full; now it's about 90%. It used to be that about 1/200 of the moves would state an error; now it's about 1/5. None of the files are over 100 MB, most are around 100 KB
Some info:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sdb3 155G 5.5G 142G 4% /
none 4.0K 0 4.0K 0% /sys/fs/cgroup
udev 3.9G 4.0K 3.9G 1% /dev
tmpfs 797M 2.9M 794M 1% /run
none 5.0M 4.0K 5.0M 1% /run/lock
none 3.9G 0 3.9G 0% /run/shm
none 100M 0 100M 0% /run/user
/dev/sdb1 19G 78M 18G 1% /boot
/dev/mapper/archive-lvarchive 18T 15T 1.8T 90% /mnt/archive
/dev/sda1 4.6T 1.1T 3.3T 25% /mnt/tmp
$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sdb3 10297344 222248 10075096 3% /
none 1019711 4 1019707 1% /sys/fs/cgroup
udev 1016768 500 1016268 1% /dev
tmpfs 1019711 1022 1018689 1% /run
none 1019711 5 1019706 1% /run/lock
none 1019711 1 1019710 1% /run/shm
none 1019711 2 1019709 1% /run/user
/dev/sdb1 4940000 582 4939418 1% /boot
/dev/mapper/archive-lvarchive 289966080 44899541 245066539 16% /mnt/archive
/dev/sda1 152621056 5391544 147229512 4% /mnt/tmp
Here's my output:
mv -n 747265521.pdf /mnt/archive/targetDir/747265521.pdf
mv -n 61078318.pdf /mnt/archive/targetDir/61078318.pdf
mv -n 709099107.pdf /mnt/archive/targetDir/709099107.pdf
mv -n 75286077.pdf /mnt/archive/targetDir/75286077.pdf
mv: cannot create regular file ‘/mnt/archive/targetDir/75286077.pdf’: No space left on device
mv -n 796522548.pdf /mnt/archive/targetDir/796522548.pdf
mv: cannot create regular file ‘/mnt/archive/targetDir/796522548.pdf’: No space left on device
mv -n 685163563.pdf /mnt/archive/targetDir/685163563.pdf
mv -n 701433025.pdf /mnt/archive/targetDir/701433025.pd
I've found LOTS of postings on this error, but the prognosis doesn't fit. Such issues as "your drive is actually full" or "you've run out of inodes" or even "your /boot volume is full". Mostly, though, they deal with 3rd party software causing an issue because of how it handles the files, and they are all constant, meaning EVERY move fails.
Thanks.
EDIT:
here is a sample failed and succeeded file:
FAILED (still on source drive)
ls -lhs 702637545.pdf
16K -rw-rw-r-- 1 myUser myUser 16K Jul 24 20:52 702637545.pdf
SUCCEEDED (On target volume)
ls -lhs /mnt/archive/targetDir/704886680.pdf
104K -rw-rw-r-- 1 myUser myUser 103K Jul 25 01:22 /mnt/archive/targetDir/704886680.pdf
Also, while not all files fail, a file which fails will ALWAYS fail. If I retry it over and over it is consistent.
EDIT: Some additional commands per request by @mjturner
$ ls -ld /mnt/archive/targetDir
drwxrwxr-x 2 myUser myUser 1064583168 Aug 10 05:07 /mnt/archive/targetDir
$ tune2fs -l /dev/mapper/archive-lvarchive
tune2fs 1.42.10 (18-May-2014)
Filesystem volume name: <none>
Last mounted on: /mnt/archive
Filesystem UUID: af7e7b38-f12a-498b-b127-0ccd29459376
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr dir_index filetype needs_recovery extent 64bit flex_bg sparse_super huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 289966080
Block count: 4639456256
Reserved block count: 231972812
Free blocks: 1274786115
Free inodes: 256343444
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 2048
Inode blocks per group: 128
RAID stride: 128
RAID stripe width: 512
Flex block group size: 16
Filesystem created: Thu Jun 25 12:05:12 2015
Last mount time: Mon Aug 3 18:49:29 2015
Last write time: Mon Aug 3 18:49:29 2015
Mount count: 8
Maximum mount count: -1
Last checked: Thu Jun 25 12:05:12 2015
Check interval: 0 (<none>)
Lifetime writes: 24 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 3ea3edc4-7638-45cd-8db8-36ab3669e868
Journal backup: inode blocks
$ tune2fs -l /dev/sda1
tune2fs 1.42.10 (18-May-2014)
Filesystem volume name: <none>
Last mounted on: /mnt/tmp
Filesystem UUID: 10df1bea-64fc-468e-8ea0-10f3a4cb9a79
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 152621056
Block count: 1220942336
Reserved block count: 61047116
Free blocks: 367343926
Free inodes: 135953194
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 732
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 4096
Inode blocks per group: 256
Flex block group size: 16
Filesystem created: Thu Jul 23 13:54:13 2015
Last mount time: Tue Aug 4 04:35:06 2015
Last write time: Tue Aug 4 04:35:06 2015
Mount count: 3
Maximum mount count: -1
Last checked: Thu Jul 23 13:54:13 2015
Check interval: 0 (<none>)
Lifetime writes: 150 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: a266fec5-bc86-402b-9fa0-61e2ad9b5b50
Journal backup: inode blocks
|
This is caused by a bug in the implementation of the ext4 dir_index feature, which is in use on your destination filesystem.
Solution: recreate the filesystem without dir_index, or disable the feature using tune2fs (some caution required; see the related link Novell SuSE 10/11: Disable H-Tree Indexing on an ext3 Filesystem, which, although it relates to ext3, suggests similar caution may be needed).
(get a really good backup made of the filesystem)
(unmount the filesystem)
tune2fs -O ^dir_index /dev/foo
e2fsck -fDvy /dev/foo
(mount the filesystem)
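Before touching the real device, the same sequence can be rehearsed unprivileged on a scratch image file (the image path here is hypothetical; substitute your actual device for the real run):

```shell
# Rehearsal on a throwaway ext4 image; nothing touches a real disk.
truncate -s 64M /tmp/scratch.img
mkfs.ext4 -q -F /tmp/scratch.img
tune2fs -O ^dir_index /tmp/scratch.img
e2fsck -fDvy /tmp/scratch.img || true   # exit code 1/2 just means fixes were applied
# dir_index should no longer appear in the feature list:
tune2fs -l /tmp/scratch.img | grep 'Filesystem features'
```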
ext4: Mysterious “No space left on device”-errors
ext4 has a feature called dir_index enabled by default, which is quite
susceptible to hash-collisions.
......
ext4 has the possibility to hash the filenames of its contents. This enhances performance, but has a “small” problem: ext4 does not grow its hashtable, when it starts to fill up. Instead it returns -ENOSPC or “no space left on device”.
| How to fix intermittant "No space left on device" errors during mv when device has plenty of space? |
1,357,727,051,000 |
btrfs (often pronounced "better fs") has quite a few features that ext4 lacks.
However, comparing the functionality of btrfs vs ext4, what is lacking in btrfs?1
In other words, what can I do with ext4 that I can't with btrfs?
1 Ignoring the lesser battle-ground testing of btrfs given ext4 is so widely used
|
Disadvantages of btrfs compared to ext4:
btrfs doesn't support badblocks
This means that if you've run out of the spare non-addressable sectors that the HDD firmware keeps in reserve to cover a limited number of failures, there is no way to mark blocks bad and avoid them at the filesystem level.
Swap files are only supported via a loopback device, which complicates things because it seems impossible to resume from suspend using this method
It's quite tricky to calculate free space, so much so that...
You can get "No space left on device" errors even though btrfs' own tools say there is space
| What ext4 functionality does btrfs not support? |
1,357,727,051,000 |
We would like to store millions of text files in a Linux filesystem, with the purpose of being able to zip up and serve an arbitrary collection as a service. We've tried other solutions, like a key/value database, but our requirements for concurrency and parallelism make using the native filesystem the best choice.
The most straightforward way is to store all files in a folder:
$ ls text_files/
1.txt
2.txt
3.txt
which should be possible on an EXT4 file system, which has no limit to number of files in a folder.
The two FS processes will be:
Write text file from web scrape (shouldn't be affected by number of files in folder).
Zip selected files, given by list of filenames.
My question is, will storing up to ten million files in a folder affect the performance of the above operations, or general system performance, any differently than making a tree of subfolders for the files to live in?
|
The ls command, or even TAB-completion or wildcard expansion by the shell, will normally present their results in alphanumeric order. This requires reading the entire directory listing and sorting it. With ten million files in a single directory, this sorting operation will take a non-negligible amount of time.
If you can resist the urge of TAB-completion and e.g. write the names of files to be zipped in full, there should be no problems.
Another problem with wildcards might be wildcard expansion possibly producing more filenames than will fit on a maximum-length command line. The typical maximum command line length will be more than adequate for most situations, but when we're talking about millions of files in a single directory, this is no longer a safe assumption. When a maximum command line length is exceeded in wildcard expansion, most shells will simply fail the entire command line without executing it.
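The limit in question can be inspected directly with the POSIX getconf utility:

```shell
# ARG_MAX is the per-exec limit, in bytes, on the combined size of
# the argument list and environment passed to a new program.
getconf ARG_MAX
```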
This can be solved by doing your wildcard operations using the find command:
find <directory> -name '<wildcard expression>' -exec <command> {} \+
or a similar syntax whenever possible. The find ... -exec ... \+ will automatically take into account the maximum command line length, and will execute the command as many times as required while fitting the maximal amount of filenames to each command line.
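For the question's second task, archiving files given a list of names, reading the list from a file sidesteps the command line entirely. A sketch using GNU tar's -T option (file and directory names are hypothetical stand-ins):

```shell
# Create a couple of sample files and a selection list.
mkdir -p text_files
printf 'first\n'  > text_files/1.txt
printf 'second\n' > text_files/2.txt
printf 'text_files/1.txt\ntext_files/2.txt\n' > selection.txt
# -T reads member names from the list file, one per line, so the number of
# files archived is limited only by disk space, never by ARG_MAX.
tar -czf selected.tgz -T selection.txt
tar -tzf selected.tgz    # lists text_files/1.txt and text_files/2.txt
```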
| Millions of (small) text files in a folder |
1,357,727,051,000 |
Yesterday, when we turned on one of our computers, it dropped to a GRUB shell (or honestly, I am unsure what shell it was).
It showed that it couldn't mount the root filesystem, or something to that effect, because of inconsistencies.
I ran, I believe:
fsck -fy /dev/sda2
Rebooted and the problem was gone.
Here comes the question part:
I already have in her root's crontab:
@reboot /home/ruzena/Development/bash/fs-check.sh
while the script contains:
#!/bin/bash
touch /forcefsck
Thinking about it, I don't know why I created a script file for such a short command, but anyway...
Further, in the file:
/etc/default/rcS
I have defined:
FSCKFIX=yes
So I don't get it. How could the situation even arise?
What should I do to force the root filesystem check (and optionally a fix) at boot?
Or are these two things the maximum, that I can do?
OS: Linux Mint 18.x Cinnamon 64-bit.
fstab:
cat /etc/fstab | grep ext4
shows:
UUID=a121371e-eb12-43a0-a5ae-11af58ad09f4 / ext4 errors=remount-ro 0 1
grub:
fsck.mode=force
was already added to the grub configuration.
|
ext4 filesystem check during boot
Tested on OS: Linux Mint 18.x in a Virtual Machine
Basic information
/etc/fstab has the fsck order as the last (6th) column, for instance:
<file system> <mount point> <type> <options> <dump> <fsck>
UUID=2fbcf5e7-1234-abcd-88e8-a72d15580c99 / ext4 errors=remount-ro 0 1
FSCKFIX=yes variable in /etc/default/rcS
This makes fsck repair errors automatically, but does not force a check.
From man rcS:
FSCKFIX
When the root and all other file systems are checked, fsck is
invoked with the -a option which means "autorepair". If there
are major inconsistencies then the fsck process will bail out.
The system will print a message asking the administrator to
repair the file system manually and will present a root shell
prompt (actually a sulogin prompt) on the console. Setting this
option to yes causes the fsck commands to be run with the -y
option instead of the -a option. This will tell fsck always to
repair the file systems without asking for permission.
From man tune2fs
If you are using journaling on your filesystem, your filesystem
will never be marked dirty, so it will not normally be checked.
Start with
Setting the following
FSCKFIX=yes
in the file
/etc/default/rcS
Check and note last time fs was checked:
sudo tune2fs -l /dev/sda1 | grep "Last checked"
These two options did NOT work
Passing -F (force fsck on reboot) argument to shutdown:
shutdown -rF now
Nope; see: man shutdown.
Adding the /forcefsck empty file with:
touch /forcefsck
These scripts seem to use this:
/etc/init.d/checkfs.sh
/etc/init.d/checkroot.sh
did NOT work on reboot, but the file was deleted.
Verified by:
sudo tune2fs -l /dev/sda1 | grep "Last checked"
sudo less /var/log/fsck/checkfs
sudo less /var/log/fsck/checkroot
These seem to be the logs for the init scripts.
I repeat, these two options did NOT work!
Both of these methods DID work
systemd-fsck kernel boot switches
Editing the main grub configuration file:
sudoedit /etc/default/grub
GRUB_CMDLINE_LINUX="fsck.mode=force"
sudo update-grub
sudo reboot
This did do a file system check as verified with:
sudo tune2fs -l /dev/sda1 | grep "Last checked"
Note: This DID perform a check, but to force a fix too, you need to specify fsck.repair=preen or fsck.repair=yes as well.
Using tune2fs to set the number of file system mounts before a fsck is forced (see man tune2fs)
tune2fs' info is kept in the file system superblock
The -c switch sets the maximum number of times the fs can be mounted before a check is forced.
sudo tune2fs -c 1 /dev/sda1
Verify with:
sudo tune2fs -l /dev/sda1
This DID work as verified with:
sudo tune2fs -l /dev/sda1 | grep "Last checked"
Summary
To force a fsck on every boot on Linux Mint 18.x, use either tune2fs, or the kernel command-line switch fsck.mode=force (optionally with fsck.repair=preen or fsck.repair=yes).
| What should I do to force the root filesystem check (and optionally a fix) at boot? |
1,357,727,051,000 |
Possible Duplicate:
ext4: How to account for the filesystem space?
I have a ~2TB ext4 USB external disk which is about half full:
$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdc 1922860848 927384456 897800668 51% /media/big
I'm wondering why the total size (1922860848) isn't the same as Used+Available (1825185124)? From this answer I see that 5% of the disk might be reserved for root, but that would still only take the total used to 1921328166, which is still off. Is it related to some other filesystem overhead?
In case it's relevant, lsof -n | grep deleted shows no deleted files on this disk, and there are no other filesystems mounted inside this one.
Edit: As requested, here's the output of tune2fs -l /dev/sdc
tune2fs 1.41.14 (22-Dec-2010)
Filesystem volume name: big
Last mounted on: /media/big
Filesystem UUID: 5d9b9f5d-dae7-4221-9096-cbe7dd78924d
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 122101760
Block count: 488378624
Reserved block count: 24418931
Free blocks: 480665205
Free inodes: 122101749
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 907
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Wed Nov 23 14:13:57 2011
Last mount time: Wed Nov 23 14:14:24 2011
Last write time: Wed Nov 23 14:14:24 2011
Mount count: 2
Maximum mount count: 20
Last checked: Wed Nov 23 14:13:57 2011
Check interval: 15552000 (6 months)
Next check after: Mon May 21 13:13:57 2012
Lifetime writes: 144 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 68e954e4-59b1-4f59-9434-6c636402c3db
Journal backup: inode blocks
|
There's no missing space. The 5% reserved count is simply rounded down to a whole number of blocks.
1k Blocks: 1922860848
Reserved 1k Blocks: (24418931 * 4) = 97675724
Total blocks used: 927384456 + 897800668 + 97675724 = 1922860848
Edit: Regarding your comment on the difference between df blocks and 'Block Count' blocks.
So the 4k block difference is (1953514496 - 1922860848)/4 = 7663412
The majority of the 'difference' is made up of the inode tables: the "Inode blocks per group" parameter is 512.
Since there is 32768 blocks per group that puts the number of groups at 488378624 / 32768 which is 14904 rounded down.
Multiplied by the 512 blocks it takes up gives 7630848 blocks.
That gives us 7663412 - 7630848 = 32564 unaccounted for. I assume that those blocks make up your journal size, but not too sure on that one!
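The first accounting step above can be checked with plain shell arithmetic, using the values copied from the df and tune2fs output in the question:

```shell
used=927384456       # 1K-blocks used (df)
avail=897800668      # 1K-blocks available (df)
reserved=24418931    # reserved block count (tune2fs, 4K blocks)
# Reserved blocks are 4 KiB, df counts 1 KiB blocks, hence the * 4.
echo $(( used + avail + reserved * 4 ))   # prints 1922860848, df's total
```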
| Why is (free_space + used_space) != total_size in df? [duplicate] |
1,357,727,051,000 |
I have a 900GB ext4 partition on a (magnetic) hard drive that has no defects and no bad sectors. The partition is completely empty except for an empty lost+found directory. The partition was formatted using default parameters except that I set the number of reserved filesystem blocks to 1%.
I downloaded the ~900MB file xubuntu-15.04-desktop-amd64.iso to the partition's mount point directory using wget. When the download was finished, I found that the file was split into four fragments:
filefrag -v /media/emma/red/xubuntu-15.04-desktop-amd64.iso
Filesystem type is: ef53
File size of /media/emma/red/xubuntu-15.04-desktop-amd64.iso is 1009778688 (246528 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 32767: 34816.. 67583: 32768:
1: 32768.. 63487: 67584.. 98303: 30720:
2: 63488.. 96255: 100352.. 133119: 32768: 98304:
3: 96256.. 126975: 133120.. 163839: 30720:
4: 126976.. 159743: 165888.. 198655: 32768: 163840:
5: 159744.. 190463: 198656.. 229375: 30720:
6: 190464.. 223231: 231424.. 264191: 32768: 229376:
7: 223232.. 246527: 264192.. 287487: 23296: eof
/media/emma/red/xubuntu-15.04-desktop-amd64.iso: 4 extents found
Thinking this might be releated to wget somehow, I removed the ISO file from the partition, making it empty again, then I copied the ~700MB file v1.mp4 to the partition using cp. This file was fragmented too. It was split into three fragments:
filefrag -v /media/emma/red/v1.mp4
Filesystem type is: ef53
File size of /media/emma/red/v1.mp4 is 737904458 (180153 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 32767: 34816.. 67583: 32768:
1: 32768.. 63487: 67584.. 98303: 30720:
2: 63488.. 96255: 100352.. 133119: 32768: 98304:
3: 96256.. 126975: 133120.. 163839: 30720:
4: 126976.. 159743: 165888.. 198655: 32768: 163840:
5: 159744.. 180152: 198656.. 219064: 20409: eof
/media/emma/red/v1.mp4: 3 extents found
Why is this happening? And is there a way to prevent it from happening? I thought ext4 was meant to be resistant to fragmentation. Instead I find that it immediately fragments a solitary file when all the rest of the volume is unused. This seems to be worse than both FAT32 and NTFS.
|
3 or 4 fragments in a 900 MB file is very good. Fragmentation becomes a problem when a file of that size has more like 100+ fragments. It isn't uncommon for FAT or NTFS to fragment such a file into several hundred pieces.
You generally won't see better than that, at least on older ext4 filesystems, because the maximum size of a block group is 128 MB, and so every 128 MB the contiguous space is broken by a few blocks for the allocation bitmaps and inode tables of the next block group. A more recent ext4 feature called flex_bg allows packing a number of (typically 16) block groups' worth of these tables together, leaving longer runs of allocatable blocks; but depending on your distribution and what version of e2fsprogs was used to format it, this option may not have been used.
You can use tune2fs -l to check the features enabled when your filesystem was formatted.
| Why are these files in an ext4 volume fragmented? |
1,357,727,051,000 |
I was reading a blog post about filesystem repair and the author posted a good question… fsck -p is supposed to fix minor errors automatically without human intervention. But what exactly will it fix when it's told to preen the filesystem? What errors will it fix, and what will cause it to stop and tell the user he or she must run fsck interactively? Is there a list of some kind?
I've been Googling around and all I find is the man page, which doesn't really tell what -p will fix or what triggers the need for manual intervention. I'm specifically interested in the ext4 filesystem.
|
The answer to your question lies in the e2fsck/problems.c file of the e2fsprogs source code. Looking for the PR_PREEN_OK flag should get you started.
As the complete error handling is a bit more involved, due to the multitude of different error conditions that may occur, you are advised to have a closer look at the code if you are concerned about a specific case. However, the lists below were extracted from the comments to the error conditions and should give you a rough overview about the effects of the preen-mode.
The following errors/warnings are currently handled automatically when the -p flag is specified:
Relocate hint
Journal inode is invalid
Journal superblock is corrupt
Superblock has_journal flag is clear but has a journal
Superblock needs_recovery flag is set but not journal is present
Filesystem revision is 0, but feature flags are set
Superblock hint for external superblock
group descriptor N marked uninitialized without feature set.
group N block bitmap uninitialized but inode bitmap in use.
Group descriptor N has invalid unused inodes count.
Last group block bitmap uninitialized.
The test_fs flag is set (and ext4 is available)
Last mount time is in the future (fudged)
Last write time is in the future (fudged)
Block group checksum (latch question) is invalid.
Root directory has dtime set
Reserved inode has bad mode
Deleted inode has zero dtime
Inode in use, but dtime set
Zero-length directory
Inode has incorrect i_size
Inode has incorrect i_blocks
Bad superblock in group
Bad block group descriptors in group
Block claimed for no reason
Error allocating blocks for relocating metadata
Error allocating block buffer during relocation process
Relocating metadata group information from X to Y
Relocating metadata group information to X
Block read error during relocation process
Block write error during relocation process
Immutable flag set on a device or socket inode
Non-zero size for device, fifo or socket inode
Filesystem revision is 0, but feature flags are set
Journal inode is not in use, but contains data
Journal has bad mode
INDEX_FL flag set on a non-HTREE filesystem
INDEX_FL flag set on a non-directory
Invalid root node in HTREE directory
Unsupported hash version in HTREE directory
Incompatible flag in HTREE root node
HTREE too deep
invalid inode->i_extra_isize
invalid ea entry->e_name_len
invalid ea entry->e_value_offs
invalid ea entry->e_value_block
invalid ea entry->e_value_size
invalid ea entry->e_hash
inode missing EXTENTS_FL, but is an extent inode
Inode should not have EOFBLOCKS_FL set
Directory entry has deleted or unused inode
Directory filetype not set
Directory filetype set on filesystem
Invalid HTREE root node
Invalid HTREE limit
Invalid HTREE count
HTREE interior node has out-of-order hashes in table
Inode found in group where _INODE_UNINIT is set
Inode found in group unused inodes area
i_blocks_hi should be zero
/lost+found not found
Unattached zero-length inode
Inode ref count wrong
Padding at end of inode bitmap is not set.
Padding at end of block bitmap is not set.
Block bitmap differences header
Block not used, but marked in bitmap
Block used, but not marked used in bitmap
Block bitmap differences end
Inode bitmap differences header
Inode not used, but marked in bitmap
Inode used, but not marked used in bitmap
Inode bitmap differences end
Free inodes count for group wrong
Directories count for group wrong
Free inodes count wrong
Free blocks count for group wrong
Free blocks count wrong
Block range not used, but marked in bitmap
Block range used, but not marked used in bitmap
Inode range not used, but marked in bitmap
Inode range used, but not marked used in bitmap
Group N block(s) in use but group is marked BLOCK_UNINIT
Group N inode(s) in use but group is marked INODE_UNINIT
Recreate journal if E2F_FLAG_JOURNAL_INODE flag is set
The following error conditions cause the non-interactive fsck process to abort, even if the -p flag is set:
Block bitmap not in group
Inode bitmap not in group
Inode table not in group
Filesystem size is wrong
Inode count in superblock is incorrect
The Hurd does not support the filetype feature
Journal has an unknown superblock type
Ask if we should clear the journal
Journal superblock has an unknown read-only feature flag set
Journal superblock has an unknown incompatible feature flag set
Journal has unsupported version number
Ask if we should run the journal anyway
Reserved blocks w/o resize_inode
Resize_inode not enabled, but resize inode is non-zero
Resize inode invalid
Last mount time is in the future
Last write time is in the future
group descriptor N checksum is invalid.
Root directory is not an inode
Block bitmap conflicts with some other fs block
Inode bitmap conflicts with some other fs block
Inode table conflicts with some other fs block
Block bitmap is on a bad block
Inode bitmap is on a bad block
Illegal blocknumber in inode
Block number overlaps fs metadata
Inode has illegal blocks (latch question)
Too many bad blocks in inode
Illegal block number in bad block inode
Bad block inode has illegal blocks (latch question)
Bad block used as bad block indirect block
Inconsistency can't be fixed prompt
Bad primary block prompt
Suppress messages prompt
Imagic flag set on an inode when filesystem doesn't support it
Compression flag set on an inode when filesystem doesn't support it
Deal with inodes that were part of orphan linked list
Deal with inodes that were part of corrupted orphan linked list (latch question)
Error reading extended attribute block
Invalid extended attribute block
Extended attribute reference count incorrect
Multiple EA blocks not supported
Error EA allocation collision
Bad extended attribute name
Bad extended attribute value
Inode too big (latch question)
Directory too big
Regular file too big
Symlink too big
Bad block has indirect block that conflicts with filesystem block
Resize inode failed
inode appears to be a directory
Error while reading extent tree
Failure to iterate extents
Bad starting block in extent
Extent ends beyond filesystem
EXTENTS_FL flag set on a non-extents filesystem
inode has extents, superblock missing INCOMPAT_EXTENTS feature
Fast symlink has EXTENTS_FL set
Extents are out of order
Inode has an invalid extent node
Clone duplicate/bad blocks?
Bad inode number for '.'
Directory entry has bad inode number
Directory entry is link to '.'
Directory entry points to inode now located in a bad block
Directory entry contains a link to a directory
Directory entry contains a link to the root directory
Directory entry has illegal characters in its name
Missing '.' in directory inode
Missing '..' in directory inode
First entry in directory inode doesn't contain '.'
Second entry in directory inode doesn't contain '..'
i_faddr should be zero
i_file_acl should be zero
i_dir_acl should be zero
i_frag should be zero
i_fsize should be zero
inode has bad mode
directory corrupted
filename too long
Directory inode has a missing block (hole)
'.' is not NULL terminated
'..' is not NULL terminated
Illegal character device inode
Illegal block device inode
Duplicate '.' entry
Duplicate '..' entry
Final rec_len is wrong
Error reading directory block
Error writing directory block
Directory entry for '.' is big. Split?
Illegal FIFO inode
Illegal socket inode
Directory filetype incorrect
Directory filename is null
Invalid symlink
i_file_acl (extended attribute block) is bad
Filesystem contains large files, but has no such flag in sb
Clear invalid HTREE directory
Bad block in htree interior node
Duplicate directory entry found
Non-unique filename found
i_blocks_hi should be zero
Unexpected HTREE block
Root inode not allocated
No room in lost+found
Unconnected directory inode
.. entry is incorrect
Lost+found not a directory
Unattached inode
Superblock corrupt
Fragments not supported
Error determining physical device size of filesystem
The external journal has (unsupported) multiple filesystems
Can't find external journal
External journal has bad superblock
Superblock has a bad journal UUID
Error allocating inode bitmap
Error allocating block bitmap
Error allocating icount link information
Error allocating directory block array
Error while scanning inodes
Error while iterating over blocks
Error while storing inode count information
Error while storing directory block information
Error while reading inode (for clearing)
Error allocating refcount structure
Error reading Extended Attribute block while fixing refcount
Error writing Extended Attribute block while fixing refcount
Error allocating EA region allocation structure
Error while scanning inodes
Error allocating inode bitmap
Internal error: couldn't find dir_info
Error allocating icount structure
Error iterating over directory blocks
Error deallocating inode
Error adjusting EA refcount
Error allocating inode bitmap
Error creating root directory
Root inode is not directory; aborting
Cannot proceed without a root inode.
Internal error: couldn't find dir_info
Programming error: bitmap endpoints don't match
Internal error: fudging end of bitmap
Error copying in replacement inode bitmap
Error copying in replacement block bitmap
| What does fsck -p (preen) do on ext4? |
1,357,727,051,000 |
There is already the Nimbus ExaDrive 100TB SSD, and a 200TB SSD will come soon. As you can read here, ext4 supports up to 256 TB. It's only a matter of time before hardware reaches this limit.
Will they update ext4 or will there be ext5? What will happen?
|
64-bit ext4 file systems can be up to 64ZiB in size with 4KiB blocks, and up to 1YiB in size with 64KiB blocks, no need for an ext5 to handle large volumes. 1 YiB, one yobibyte, is 1024^8 bytes.
There are practical limits around 1 PiB and 1 EiB, but that’s still (slightly) larger than current SSDs, and the limits should be addressable within ext4, without requiring an ext5.
| When is ext5 coming or when will ext4 be updated to support large (huge) SSDs? |
1,357,727,051,000 |
I've recently formatted a 1.5 TB drive with the intention of replacing ntfs with ext4.
Then I noticed that the files I saved don't fit on the new partition.
df:
ext4 (ext3 & ext2 show the same behavior)
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sdb1 1442146364 71160 1442075204 1% /media/Seagate
ntfs (similar to all other options that gparted offers):
/dev/sdb1 1465137148 110700 1465026448 1% /media/Seagate
That 1K-blocks difference means a glaring 22 GiB less usable space.
I have already executed
tune2fs -O ^has_journal /dev/sdb1
tune2fs -r 0 /dev/sdb1
tune2fs -m 0 /dev/sdb1
with, unsurprisingly, no effect as that does not affect blocks that just aren't there.
Still, fdisk reports that the ext4 partition covers the entire disk.
fdisk -l /dev/sdb:
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdb: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 1 2930277167 1465138583+ ee GPT
And thus e. g. resize2fs reports that there's "Nothing to do!"
dumpe2fs -h /dev/sdb1:
dumpe2fs 1.41.14 (22-Dec-2010)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: d6fc8971-89bd-4c03-a7cd-abdb945d2173
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: (none)
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 91578368
Block count: 366284288
Reserved block count: 0
Free blocks: 360518801
Free inodes: 91578357
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 936
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Sat May 21 17:12:04 2011
Last mount time: Sat May 21 17:15:30 2011
Last write time: Sat May 21 17:24:32 2011
Mount count: 1
Maximum mount count: 32
Last checked: Sat May 21 17:12:04 2011
Check interval: 15552000 (6 months)
Next check after: Thu Nov 17 16:12:04 2011
Lifetime writes: 1372 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Default directory hash: half_md4
Directory Hash Seed: c334e6ef-b060-45d2-b65d-4ac94167cb09
Journal backup: inode blocks
What is using that missing space?
|
Let's see. The device size is 1,465,138,583½ kB = 1,500,301,909,504 B. The filesystem consists of 366,284,288 blocks of 4096 B each, which is 1,500,300,443,648 B. I don't know what the remaining 1,465,856 B (1.4 MB) are used for (additional copies of the superblock? I know there are a few kB of space at the beginning for the bootloader.).
The filesystem contains 91,578,368 inodes of 256 bytes each, which takes up 23,444,062,208 B (about 22 GB, hint, hint). Then there is 1,442,146,364 kB = 1,476,757,876,736 B for file contents. This accounts for 23,444,062,208 B + 1,476,757,876,736 B = 1,500,201,938,944 B. The remaining size is 98,504,704 B = 24,049 blocks, which is in the right range to be the journal size.
As you can see, everything is accounted for. (Ok, almost everything, but we're talking megabytes, not gigabytes.)
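The same accounting in shell arithmetic, with the numbers from the dumpe2fs output above:

```shell
blocks=366284288; block_size=4096      # dumpe2fs: Block count / Block size
inodes=91578368;  inode_size=256       # dumpe2fs: Inode count / Inode size
content_kb=1442146364                  # df: 1K-blocks available for contents
fs=$(( blocks * block_size ))          # total filesystem bytes
meta=$(( inodes * inode_size ))        # bytes consumed by inode tables
echo $(( fs - meta - content_kb * 1024 ))   # prints 98504704, the leftover bytes
```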
| ext4: How to account for the filesystem space? |
1,357,727,051,000 |
I know that mounting the same disk with an ext4 filesystem from two different servers (it's an iSCSI volume) will likely corrupt data on the disk. My question is: will it make any difference if one of the servers mounts the disk read-only while the other mounts it read-write?
I know OCFS2 or the like could be used for this, and that I could export the disk with NFS to be accessible to the other server, but I would like to know if the setup I propose will work.
|
No. It won't give consistent results on the read-only client, because of caching. It's definitely not designed for it. You could expect to see IO errors returned to applications. There's probably still some number of oversights in the code, that could cause a kernel crash or corrupt memory used by any process.
But most importantly, ext4 replays the journal even on readonly mounts. So a readonly mount will still write to the underlying block device. It would be unsafe even if both the mounts were readonly :).
| Can the same ext4 disk be mounted from two hosts, one readonly? |
1,357,727,051,000 |
When running
e2fsck -cck /dev/mapper/xxx
I am prompted with
has 487 multiply-claimed block(s), shared with 84 file(s):
... (inode #221446306, mod time Tue Feb 20 19:48:38 2018)
... (inode #221446305, mod time Tue Feb 20 19:48:32 2018)
... (inode #221446304, mod time Tue Feb 20 19:48:38 2018)
... (inode #221446303, mod time Tue Feb 20 19:48:12 2018)
... (inode #221446302, mod time Tue Feb 20 19:59:04 2018)
... (inode #221446300, mod time Tue Feb 20 19:47:52 2018)
Clone multiply-claimed blocks<y>?
What will be the possible consequence of continuing with "yes"? Will there be complete data loss? What is the result if I continue with "no"?
|
Multiply-claimed blocks are blocks which are used by multiple files, when they shouldn’t be. One consequence of that is that changes to one of those files, in one of the affected blocks, will also appear as changes to the files which share the blocks, which isn’t what you want. (Hard links are a different scenario, which doesn’t show up here.)
If there is data loss here, it has already occurred, and it won’t easily be reversible; but it could be made worse...
If you answer “no” to the fsck question, the file system will remain in an inconsistent state. If you answer “yes”, then fsck will copy the shared blocks so that they can be re-allocated to a single file — with the 84 files involved here, each block would be copied 83 times. This will avoid future data loss, since changes to files will be limited to each individual file, as you’d expect. However cloning the blocks could involve overwriting data in other blocks, which currently appear to be unused, but might contain data you want to keep.
So the traditional data-recovery advice applies: if you think you need to recover data from the file system, do not touch it; make a copy of it on another disk and work on that to recover the data. The scenario here where this might be desirable is as follows. Files A and B used to be separate, but following some corruption somewhere, file B now shares blocks with file A. If nothing has overwritten file B’s old blocks, the data is still there, but it is no longer accessible. As long as nothing overwrites those blocks, they can be recovered (with a fair amount of effort perhaps). But once they’re overwritten, they’re gone; and here, cloning the shared blocks from file A could overwrite the old data...
In summary, if you have backups, or you know that the data can be recovered easily, answer “yes”. Otherwise, stop fsck, copy the file system somewhere else, and if you need the system back up and running, run fsck again and answer “yes” (and recover the data from the copy). If the data is important and needs to be recovered, copy the file system somewhere else, but leave the original alone — if you need the system back up and running, make another copy and run the system off of that, after running fsck on it.
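The copy-first approach can be rehearsed with dd, using a scratch file as a stand-in for the real /dev/mapper/xxx (all paths here are hypothetical):

```shell
src=/tmp/damaged.img           # stand-in for the real /dev/mapper/xxx
dst=/tmp/rescue-copy.img       # destination on *another* disk in real life
truncate -s 16M "$src"
# noerror keeps dd going past read errors; sync pads short reads with
# zeros so offsets in the copy stay aligned with the original.
dd if="$src" of="$dst" bs=4M conv=noerror,sync status=none
cmp "$src" "$dst" && echo "copy is bit-identical"
```

fsck can then be run against the copy (or against the original, once a copy is safely stored), without risking the only remaining version of the data.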
| Should I answer yes to "Clone multiply-claimed blocks<y>?" when running e2fsck? |
1,357,727,051,000 |
I've just read this question: What does size of a directory mean in output of 'ls -l' command?
...which doesn't quite answer my question. Basically, I'm moving files onto a NAS. The folders I've already moved are completely empty, with no hidden files or anything, and yet du still reports their size at 3.5MB. Admittedly, they previously contained a large number of files, with long filenames.
Is this size simply because of the quantity and name-length of files that were in that directory? Why hasn't the size decreased now that the folders are empty (ext4 filesystem)?
|
When you delete all the files from a directory, for most file systems, the directory remains the same size.
If the directory is empty,
rmdir ./directory_name; mkdir ./directory_name
The resulting new directory will be smaller. But as files are added it will grow larger. Do not worry about directory file size as much as the number of files in a single directory. Huge numbers of files in a single directory impact file lookup performance negatively. Even with ample inode caching.
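You can watch this behaviour directly with nothing but coreutils: grow a scratch directory with many entries, delete them all, and compare the sizes. This is only a sketch; on ext4 the post-delete size typically stays at the grown value, while some other filesystems (e.g. tmpfs) do shrink directories.

```shell
# Grow a directory with many entries, empty it, and compare its reported size.
dir=$(mktemp -d)/bigdir
mkdir "$dir"
empty=$(stat -c %s "$dir")
for i in $(seq 1 2000); do : > "$dir/some_fairly_long_file_name_$i"; done
grown=$(stat -c %s "$dir")
rm -f "$dir"/some_fairly_long_file_name_*
emptied=$(stat -c %s "$dir")
echo "empty=$empty grown=$grown after-delete=$emptied"
```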
| Why is this empty directory 3.5MB? |
1,357,727,051,000 |
I'm attempting to run fsck -p /dev/sda5 to repair errors on an ext4 partition, however the command outputs
fsck from util-linux-ng 2.17.2
fsck.ext4: Device or resource busy while trying to open /dev/sda5
Filesystem mounted or opened exclusively by another program?
I have confirmed using /etc/mtab and lsof that nothing is using the partition and it's not mounted. I also used fuser -k /dev/sda5 to forcibly close anything using the file, and umount to attempt to unmount it to no avail.
How can I force fsck to at least check, and hopefully to repair, the partition despite the fact that it reads as busy? Assuming I'm confident enough that it's neither mounted nor in use, and that the possibility of data corruption isn't an issue.
All commands were executed as root from an Ubuntu 10.04 32-bit liveCD. The partition is the system (non-home) portion of an Ubuntu 10.04 32-bit installation.
|
There are things (usually in the kernel, like the NFS threads, swap files, bind mounts, etc.) that can keep a filesystem busy that won't show up in fuser.
If you try to fsck a filesystem that is mounted, it will get corrupted. You should find a live CD that doesn't automatically mount your filesystems, like Knoppix or Fedora.
| How can I fsck a partition when the device reads as busy (but has been confirmed otherwise)? |
1,643,300,188,000 |
I have a filesystem with many small files that I erase regularly (the files are a cache that can easily be regenerated). It's much faster to simply create a new filesystem rather than run rm -rf or rsync to delete all the files (i.e. Efficiently delete large directory containing thousands of files).
The only issue with creating a new filesystem to wipe the filesystem is that its UUID changes, leading to changes in e.g. /etc/fstab.
Is there a way to simply "unlink" a directory from e.g. an ext4 filesystem, or completely clear its list of inodes?
|
Since you're using ext4 you could format the filesystem and the set the UUID to a known value afterwards.
man tune2fs writes,
-U UUID Set the universally unique identifier (UUID) of the filesystem to UUID. The format of the UUID is a series of hex digits separated by hyphens, like this c1b9d5a2-f162-11cf-9ece-0020afc76f16.
And similarly, man mkfs.ext4 writes,
-U UUID Set the universally unique identifier (UUID) of the filesystem to UUID. […as above…]
Personally, I prefer to reference filesystems by label. For example in the /etc/fstab for one of my systems I have entries like this
# <file system> <mount point> <type> <options> <dump> <pass>
LABEL=root / ext4 errors=remount-ro 0 1
LABEL=backup /backup ext4 defaults 0 2
Such labels can be added with the -L flag for tune2fs and mkfs.ext4. They avoid issues with inode checksums causing rediscovery or corruption on a reformatted filesystem and they are considerably easier to identify visually. (But labels are not guaranteed to be unique across multiple systems, so beware when swapping disks around.)
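To make the reformat-and-keep-UUID idea concrete, here is a sketch run against a plain file image instead of a real partition, so no root access is needed. The path /tmp/demo.ext4, the label "backup" and the UUID value (taken from the man page example) are arbitrary choices for the demo.

```shell
PATH="$PATH:/sbin:/usr/sbin"   # mkfs/tune2fs often live outside a user's PATH
img=/tmp/demo.ext4
uuid=c1b9d5a2-f162-11cf-9ece-0020afc76f16
dd if=/dev/zero of="$img" bs=1M count=16 status=none
# -F: allow a regular file; -U: set the UUID; -L: set a human-friendly label
mkfs.ext4 -q -F -U "$uuid" -L backup "$img"
dumpe2fs -h "$img" 2>/dev/null | grep -E 'Filesystem (UUID|volume name)'
```

The same -U and -L flags work with tune2fs on an existing filesystem, which is what you would use after reformatting a real partition.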
| Reset ext4 filesystem without changing the filesystem UUID |
1,643,300,188,000 |
The question is why exactly does a directory shrink after directory entries are removed? Is it due to how ext4 filesystem configured to retain directory metadata? Obviously removing the directory and recreating it isn't a solution, since it deletes original inode and creates a new one. What can be done to decrease the number manually?
|
Quoting a developer (in a linux kernel thread ext3/ext4 directories don't shrink after deleting lots of files):
On Thu, May 14, 2009 at 08:45:38PM -0400, Timo Sirainen wrote:
>
> I was rather thinking something that I could run while the system was
> fully operational. Otherwise just moving the files to a temp directory +
> rmdir() + rename() would have been fine too.
>
> I just tested that xfs, jfs and reiserfs all shrink the directories
> immediately. Is it more difficult to implement for ext* or has no one
> else found this to be a problem?
It's probably fairest to say no one has thought it worth the effort.
It would require some fancy games to swap out block locations in the
extent trees (life would be easier with non-extent-using inodes), and
in the case of htree, we would have to keep track of the index block
so we could remove it from the htree index. So it's all doable, if a
bit tricky in terms of the technical details; it's just that the
people who could do it have been busy enough with other things.
It hasn't been considered high priority because most of the time
directories don't go from holding thousands of files down to a small
handful.
- Ted
| Why directory with large amounts of entries does not shrink in size after entries are removed? |
1,643,300,188,000 |
Related to this.
I'd like to take advantage of an OS switch to upgrade to BTRFS.
BTRFS claims to offer a lot (data-loss resiliency, self-healing if RAID, checksumming of metadata and data, compression, snapshots). But it's slow when used with fsync-intensive programs such as dpkg (I know eatmydata and the crappy apt-btrfs-snapshot programs) and I won't set up a RAID :p.
EXT4 allow metadata check-summing only and doesn't compress data.
In 6 years, I had to reinstall my OS twice because of HDD corruption (after flight trips). The first made the laptop unbootable; the second batch of corruption was identified thanks to a corrupted film and then an md5sum check of the OS binaries. (SMART tells me the disk is sane.) The lappy currently behaves quite strangely. I don't know if the hardware or the software is to blame but I suspect the hardware (it all began right after a flight, once again).
Would you advise to switch to BTRFS for a laptop because of data compression and check-summing or should I stick with EXT4?
(I don't care about which is "best" relative to whatever variable but I have almost no experience with BTRFS and would like some feedback)
EDIT:
Let's be clearer:
BTRFS is still flagged as experimental, I know, but SUSE says it shouldn't anymore. So does Oracle (I know who Oracle is). And a bunch of distributions already propose BTRFS for installation and most of them are planning to switch to it in the next few months.
Two facts:
Backups of corrupted data are worthless. I don't understand why I seem to be the only one to bother. Isn't that common sense? In the meanwhile:
Stop telling me I should do backups: I already do.
Stop implying backups are just enough to keep my data safe except if you are willing to give me TBs of free space to do years worth of backups.
A corrupted file =/=> Linux complaining. So:
Don't assume your system/data are sane just because the OS is booting.
I hope you understand that I prefer (meta)data checksumming to an over-engineered and bloated piece of software that would inconveniently do half as a good job as BTRFS to check the data integrity.
Is that more clear now that I am not asking for which FS is "better"? The question is, given that I regularly do backups, is BTRFS still too experimental to be used for its data-integrity checking functions or should I stick to EXT4?
|
I agree with vonbrand, btrfs is not yet at the maturity level of ext*, XFS or JFS, to name a few. I would not use it on a laptop with precious data unless I have a reliable backup that can be done also on the go.
Btrfs can detect corruptions but it won't do anything more than reporting the detection unless you have an available uncorrupted copy of the same data, which means you either need RAID or duplication of data on the volume.
That said, I am considering using it (using RAID-1) for one machine, but I also do have Crashplan running on this machine!
For a long time, I have been using JFS on my laptop. One reason was the lower CPU usage compared to XFS or ext3 when doing file operations. I have never verified if it saved power consumption as well, but that was my assumption. I found JFS pretty stable and safe, never lost data while using it.
| Should a laptop user switch from ext4 to btrfs? |
1,643,300,188,000 |
The problem of using a filesystem like ext4 on a USB stick or memory card is that when it's mounted into another system the disk's UID/GID might not be present.
Can this be fixed with a mount option?
|
I assume you're hoping to find an equivalent of the uid=N and gid=N options supported by some of the other filesystems Linux's mount command knows about. Sorry, but no, ext4 doesn't have that option.
These other filesystems have such an option in order to give permissions to files for a filesystem that may not have useful POSIX permissions. You're looking to take permissions away — or at least, reassign them — which is a bad idea from a security standpoint, which is doubtless why these options don't exist.
When you use a filesystem like ext4 on removable media, you're saying that you care about things like POSIX permissions. That means you have to take the same sort of steps to synchronize user and group IDs as you would for, say, NFS.
If you don't actually care about permissions, there is probably a more appropriate filesystem for your situation.
I tried UDF as a candidate for such a filesystem after the commentary below, but alas, it won't work:
If you create a UDF filesystem on one Linux box, add files to it, change their permissions, and mount them on another Linux box, it will obey the permissions it finds there, even if you give uid=N,gid=N. You have to sync UIDs and GIDs here, as with NFS.
Mac OS X behaves as hoped: it believes it owns everything on a UDF filesystem created on a Linux box. But, add a file to the disk, and it will set the UID and GID of the file, which a Linux box will then obey.
If you then try to mount that filesystem on a FreeBSD box, it yells invalid argument. I assume this is because the kernel devs didn't realize UDF could appear on non-optical media, simply because I couldn't find any reports of success online. Perhaps there is a magic incantation I have missed.
It is reportedly possible to get a UDF hard drive to work on Windows, but it is very picky about the way it is created. If you need this to work, it's probably best to format from within Windows. From the command line:
format /fs:udf x:
Don't use /q: that creates a filesystem that is less likely to mount on other OSes.
Note that UDF is only read/write on Vista and newer. XP will mount a UDF hard drive created on Vista, but won't be able to write to it.
Some form of FAT is probably the best option here. If you are avoiding that because of the FAT32 4 GB file size limit, you might wish to look into exFAT. There is a free FUSE version available.
NTFS might also work, if you're using recent enough Linux distros that they include reliable read/write support for NTFS.
| Different UID/GID when using an ext4 formatted USB drive with another computer |
1,643,300,188,000 |
I'm looking for the commands that will tell me the allocation quantum on drives formatted with ext4 vs btrfs.
Background: I am using a backup system that allows users to restore individual files. This system just uses rsync and has no server-side software, backups are not compressed. The result is that I have some 3.6TB of files, most of them small.
It appears that for my data set storage is much less efficient on a btrfs volume under LVM than it is on a plain old ext4 volume, and I suspect this has to do with the minimum file size, and thus the block size, but I have been unable to figure out how to get those sizes for comparison purposes. The btrfs wiki says that it uses the "page size" but there's nothing I've found on obtaining that number.
|
You'll want to look at the data block allocation size, which is the minimum block that any file can allocate. Large files consist of multiple blocks. And there's always some "waste" at the end of large files (or all small files) where the final block isn't filled entirely, and therefore unused.
As far as I know, every popular Linux filesystem uses 4K blocks by default because that's the default pagesize of modern CPUs, which means that there's an easy mapping between memory-mapped files and disk blocks. I know for a fact that BTRFS and Ext4 default to the page size (which is 4K on most systems).
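If you want to confirm the page size on your own machine, getconf reports it directly (4096 on typical x86-64 systems; some ARM systems use 16K or 64K pages):

```shell
# The kernel page size, which ext4 and btrfs pick up as their default block size
getconf PAGESIZE
```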
On ext4, just use tune2fs to check your block size, as follows (change /dev/sda1 to your own device path):
[root@centos8 ~]# tune2fs -l /dev/sda1 |grep "^Block size:"
Block size: 4096
[root@centos8 ~]#
On btrfs, use the following command to check your block size (change /dev/mapper/cr_root to your own device path, this example simply uses a typical encrypted BTRFS-on-LUKS path):
sudo btrfs inspect-internal dump-super -f /dev/mapper/cr_root | grep "^sectorsize"
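Since the practical cost of the block size is the unused tail of each file's last block, you can estimate that "slack" for a whole tree with GNU find and awk. A sketch (the scratch directory and file names are made up for the demo; %b is always in 512-byte units, regardless of the filesystem's block size):

```shell
# Sum (allocated - apparent) bytes for every file under a scratch directory.
dir=$(mktemp -d)
printf 'abcde' > "$dir/tiny"
head -c 10000 /dev/zero > "$dir/bigger"
find "$dir" -type f -printf '%s %b %p\n' |
  awk '{ slack = $2 * 512 - $1; if (slack > 0) total += slack }
       END { printf "%d bytes of slack\n", total }'
```

Run against your backup tree, this gives a rough idea of how much of the 3.6TB is lost to per-file rounding on each filesystem.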
| How do I determine the block size for ext4 and btrfs filesystems? |
1,643,300,188,000 |
If I read the ext4 documentation correctly, starting from Linux 3.8 it should be possible to store data directly in the inode in the case of a very small file.
I was expecting such a file to have a size of 0 blocks, but it is not the case.
# creating a small file
printf "abcde" > small_file
# checking size of file in bytes
stat --printf='%s\n' small_file
5
# number of 512-byte blocks used by file
stat --printf='%b\n' small_file
8
I would expect this last number here to be 0. Am I missing something?
|
To enable inline data in ext4, you'll need to use e2fsprogs 1.43 or later. Support for inline data was added in March 2014 to the Git repository but was only released in May 2016.
Once you have that, you can run mke2fs -O inline_data on an appropriate device to create a new filesystem with inline data support; this will erase all your data. It's apparently not yet possible to activate inline data on an existing filesystem (at least, tune2fs doesn't support it).
Now create a small file, and run debugfs on the filesystem. cd to the appropriate directory, and run stat smallfile; you'll get something like
Inode: 32770 Type: regular Mode: 0644 Flags: 0x10000000
Generation: 2302340561 Version: 0x00000000:00000001
User: 1000 Group: 1000 Size: 6
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 0
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x553731e9:330badf8 -- Wed Apr 22 07:30:17 2015
atime: 0x553731e9:330badf8 -- Wed Apr 22 07:30:17 2015
mtime: 0x553731e9:330badf8 -- Wed Apr 22 07:30:17 2015
crtime: 0x553731e9:330badf8 -- Wed Apr 22 07:30:17 2015
Size of extra inode fields: 28
Extended attributes:
system.data (0)
Size of inline data: 60
As you can see the data was stored inline. This can also be seen using df; before creating the file:
% df -i /mnt/new
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg--large--mirror-inline 65536 12 65524 1% /mnt/new
% df /mnt/new
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg--large--mirror-inline 1032088 1280 978380 1% /mnt/new
After creating the file:
% echo Hello > smallfile
% ls -l
total 1
-rw-r--r-- 1 steve steve 6 Apr 22 07:35 smallfile
% df -i /mnt/new
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg--large--mirror-inline 65536 13 65523 1% /mnt/new
% df /mnt/new
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg--large--mirror-inline 1032088 1280 978380 1% /mnt/new
The file is there, it uses an inode but the storage space available hasn't changed.
| How to use the new ext4 inline data feature? (storing data directly in the inode) |
1,643,300,188,000 |
I have a failing hard drive that is unable to write or read the first sectors of the disk. It just gives I/O errors and that is all there is. There are other areas on the disk that seem (mostly) fine.
I am trying to mount a partition (ext4) and see if I can access some files I would like to recover. Since the mount command supports an offset option, I should be able to mount the filesystem even though the partition table is unreadable and unwriteable. The problem is how to find the offset. None of the ext4 tools seems to have this particular feature.
|
There isn't a standard offset per-se, as of course you can start the partition wherever you want. But let's assume for a moment that you're looking for the first partition, and it was created more or less accepting defaults. There are then two places you may find it, assuming you were using a traditional DOS partition table:
Starting at (512-byte) sector 63. This was the tradition for a very long time, and worked until someone came up with 4K disks...
Starting at (512-byte) sector 2048. This is the new tradition, to accommodate 4K disks.
A bonus option! Starting at sector 56. This is what happens if someone moves the 63-start partition to make it align with a 4K sector.
Now, to proceed, you'll want to pick up your favorite hex-dump tool, and learn a little about the ext4 Disk Layout. In particular, it starts with 1024 bytes of padding, which ext4 ignores. Next comes the superblock. You can recognize the superblock by checking for the magic number 0xEF53 at offset 0x38 (from the superblock start, or 0x438 from the partition start, or 1080 in decimal.) The magic number is little-endian. So it's actually stored on disk as 0x53EF.
Here is what that looks like with xxd -a:
0000000: 0000 0000 0000 0000 0000 0000 0000 0000 ................
*
0000400: 0040 5d00 0084 7401 33a0 1200 33db a600 .@]...t.3...3...
0000410: 4963 5300 0000 0000 0200 0000 0200 0000 IcS.............
0000420: 0080 0000 0080 0000 0020 0000 6637 0952 ......... ..f7.R
0000430: 6637 0952 0200 1600 53ef 0100 0100 0000 f7.R....S.......
0000440: 9938 f851 004e ed00 0000 0000 0100 0000 .8.Q.N..........
Note, that when you give the offset to mount (or losetup), you must give the offset to where the padding starts—not the superblock.
Now, if it's not the first partition, or otherwise isn't in one of the two (three) expected spots, you basically get to search for the magic number 0xEF53. This is what testdisk (recommended in a comment) does for you.
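The search itself can be sketched with nothing but coreutils. Here a toy image stands in for the disk: the magic bytes 0x53 0xEF (little-endian 0xEF53) are planted at offset 0x438 = 1080, then found again with od, which prints decimal offsets. On a real disk, subtract 0x438 from the matching offset to get the partition start to pass to mount or losetup.

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=16 status=none
# plant the superblock magic: 0x53 0xEF written as octal escapes \123 \357
printf '\123\357' | dd of="$img" bs=1 seek=1080 conv=notrunc status=none
# scan for the magic; -A d makes od print decimal offsets
od -A d -t x1 "$img" | grep '53 ef'
```

Expect false positives on a real disk full of data; candidate offsets still need checking (e.g. with dumpe2fs or by attempting a read-only mount).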
| How do I find the offset of an ext4 filesystem? |
1,643,300,188,000 |
I have just installed Debian 8.4 (Jessie, MATE desktop). For some reason the following command is not recognized:
mkfs.ext4 -L hdd_misha /dev/sdb1
The error I get:
bash: mkfs.ext4: command not found
I have googled and I actually can't seen to find Debian-specific instructions on how to create an ext4 filesystem. Any help much appreciated!
|
Do you have /sbin in your path?
Most likely you are trying to run mkfs.ext4 as a normal user.
Unless you've added it yourself (e.g. in ~/.bashrc or /etc/profile etc), root has /sbin and /usr/sbin in $PATH, but normal users don't by default.
Try running it from a root shell (e.g. after sudo -i) or as:
sudo mkfs.ext4 -L hdd_misha /dev/sdb1
BTW, normal users usually don't have the necessary permissions to use mkfs to format a partition (although they can format a disk-image file that they own - e.g. for use with FUSE or in a VM with, say, VirtualBox).
Formatting a partition requires root privs unless someone has seriously messed up the block device permissions in /dev.
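A quick sketch for checking, from a normal user's shell, whether the command exists but is simply off your PATH:

```shell
# Is mkfs.ext4 merely outside $PATH? Add the sbin directories for this shell.
case ":$PATH:" in
  *:/sbin:*) echo "/sbin already in PATH" ;;
  *)         PATH="$PATH:/sbin:/usr/sbin"
             echo "added sbin directories for this shell" ;;
esac
command -v mkfs.ext4 || echo "mkfs.ext4 does not seem to be installed"
```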
| mkfs.ext4 command not found in Debian (Jessie) |
1,643,300,188,000 |
The manual page says about the barrier option for ext4:
Write barriers enforce proper on-disk ordering of journal commits, making volatile disk write caches safe to use, at some performance penalty. If your disks are battery-backed in one way or another, disabling barriers may safely improve performance.
Does a laptop with a battery (and an SSD) count as having a battery-backed disk? So, is barrier=0 for ext4 safe on a laptop?
|
No, it doesn't. The issue isn't with the type of disk (spinning/non-spinning), it's with committing disk buffers from RAM to disk. If the power goes out suddenly, some of these buffers may never get committed to disk, and having barriers enabled improves your chances of recovering the filesystem.
There's also an additional issue with the disk's on-board cache never getting committed to the disk (or flash chips). That only applies if you have write caching enabled on the disk (write-back), and can bite you regardless of the setting of barriers.
A battery backed-up disk is usually taken to mean a disk unit run by a controller with a battery backup unit (BBU). They have batteries that can store uncommitted data for months, so a crash or black-out won't lose filesystem consistency. BBUs are typically options on server-grade RAID systems.
Often, a machine with a UPS guaranteed to be working properly (or other guaranteed power source) can be safe too.
I wouldn't do this on a laptop. I've never had ext[234] filesystems mess up on me, even in the ext2 days, but your mileage may vary. You're trading off some performance improvement over the cost (personal/monetary) of data loss. My suggestion: mount the filesystem with and without barriers, run benchmarks, and get an idea of the performance gain. If it's negligible or not worth the risk (which you'll have to assess yourself), leave the mount options as they are.
Addendum: Isn't a laptop battery the same as a UPS? In this case yes, a laptop battery is very similar to a UPS, but a laptop battery isn't as carefully monitored and conditioned as a UPS, because it isn't really designed as a means of redundancy. You buy a UPS for added security, so the design reflects this: the battery is conditioned, checked and monitored. All but the cheapest UPS units have ‘battery failed’ lights, alarms and even send SNMP traps to notify the administrator of the issue.
This isn't the case with laptop batteries. Your laptop battery will age and die without the laptop being aware. Mine's on its second battery, and it's failing: on occasion it just loses a lot of charge in a very short time, and the laptop is none the wiser (when the power goes out, the battery runtime indicator still says ‘30 minutes left’).
My point is that a UPS is more reliable than a laptop battery, but a better question would be...
Isn't a UPS or laptop battery the same as a disk controller BBU? And the answer to that is a resounding no. Your UPS will continue to power a computer that's just been hard-reset, but when the disk is reset, any uncommitted writeback sectors will be lost forever. With a BBU, you can unceremoniously unplug the server, store it for six months, move it to a different country, plug it back in, and the moment you hit the power on button, the uncommitted buffers are (finally) written to disk. Since this may amount to a few gigs of data, the BBU is a pretty essential piece of kit for server hardware. The controller conditions the battery backup much better than the average UPS. On our Dell servers, it runs discharge simulations every week and can send you IM/SMS/Email/SNMP traps/buzz your ears off when it detects that the charge/discharge cycle or expected battery life go out of tolerance. It'll also disable write-caching when the BBU is in a less than optimal condition. It's this sort of environment that gains something from disabling barriers.
In practice, though, any systems manager who insists on battery-backed host adaptors is unlikely to disable a filesystem safety measure. :) (I know I don't)
| Is disabling barriers for ext4 safe on a laptop with battery? |
1,643,300,188,000 |
I have an SSD disk with an ext4 filesystem on it:
$ lsblk -f /dev/sdc
NAME FSTYPE LABEL UUID MOUNTPOINT
sdc ext4 142b28fd-c886-4182-892d-67fdc34b522a
I am attempting to mount it, but it is failing:
$ sudo mkdir /mnt/data
$ sudo mount /dev/sdc /mnt/data
mount: /mnt/data: cannot mount /dev/sdc read-only.
What does the error message mean?
How can I diagnose and fix the problem?
To add additional information pertinent to an answer below:
There is only one partition on the disk.
Here is the result of executing lsblk for the boot disk:
$ lsblk /dev/sda
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 10G 0 disk
├─sda1 8:1 0 9.9G 0 part /
├─sda14 8:14 0 4M 0 part
└─sda15 8:15 0 106M 0 part /boot/efi
and here is the result of executing lsblk for the disk in question:
$ lsblk /dev/sdc
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdc 8:32 0 2G 1 disk
|
I had a similar problem with a USB thumb drive which was down to the ext4 journal recovery not working. dmesg confirmed this:
[1455125.992721] EXT4-fs (sdh1): INFO: recovery required on readonly filesystem
[1455125.992725] EXT4-fs (sdh1): write access unavailable, cannot proceed (try mounting with noload)
As it suggested, mounting with noload worked:
sudo mount -o ro,noload /dev/sdh1 /mnt/drive
I was then able to backup the content:
sudo rsync -av /mnt/drive /data/tmp/
and then use fdisk to delete and recreate the partition and then create a new filesystem with mkfs.ext4.
| mount ext4 disk: cannot mount /dev/sdc read-only |
1,643,300,188,000 |
I know that this feature dates back 20 years but I still would like to find out
What is the purpose of the reserved blocks in ext2/3/4 filesystems?
|
The man page of tune2fs gives you an explanation:
Reserving some number of filesystem blocks for use by privileged processes is done to avoid filesystem fragmentation, and to allow system daemons, such as syslogd(8), to continue to function correctly after non-privileged processes are prevented from writing to the filesystem.
It also acts as a failsafe; if for some reason normal users and their programs fill the disk up to 100%, you might not even be able to log in and/or sync files before deleting them. By reserving some blocks for root, the system ensures you can always correct the situation.
In practice, 5% is an old default and may be too much if your hard drive is big enough. You can change that value using the previously mentioned tune2fs tool, but be sure to read its man page first!
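The 5% default scales linearly with disk size, which is why it feels excessive on today's large drives; a quick sketch of the arithmetic (the tune2fs man page documents -m for lowering the percentage, e.g. tune2fs -m 1 /dev/sdXN):

```shell
# Space set aside by the 5% reserved-blocks default at various filesystem sizes
for gib in 100 500 1000 4000; do
  echo "${gib} GiB filesystem -> $(( gib * 5 / 100 )) GiB reserved for root"
done
```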
| ext2/3/4 reserved blocks percentage purpose [duplicate] |
1,643,300,188,000 |
Possible Duplicate:
How can I increase the number of inodes in an ext4 filesystem?
I have a homemade NAS with Debian Wheezy 64bit. It has three disks - 2x2TB and 1.5TB, pooled together using RAID1/5 and LVM. The result is a LVM Logical Volume, about 3.16TB in size, formatted as ext4 and mounted as /home. However I just found out that roughly 50GB of this capacity is used by Inodes (exact count being 212 459 520, with 256B in size or to put it in another way - one Inode per every 16k of the partition size).
While 50GB in 3.16TB is about 1.5% of the total capacity, it's still a lot of space. Since this is a storage NAS, mostly used for multimedia, I don't ever expect the /home partition to have 212 million files in it.
So, my question is this - is it possible to lower/change the number of Inodes without actually re-creating the whole partition? While it might be possible to do it, I'd still prefer to find a way to do so instead of moving 2TB of data around and waiting for RAID to re-sync again.
|
From the mke2fs man page:
Be warned that it is not possible to expand the number of inodes on a filesystem after it is created, so be careful deciding the correct value for this parameter.
So the answer is no.
What you could do is shrink the existing ext4 volume (this requires unmounting the filesystem), use the free space to create a new ext4 volume with fewer inodes, copy the data, remove the old volume and extend the new volume to occupy all the space.
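For reference, the overhead in the question follows directly from the numbers given: 212,459,520 inodes of 256 bytes each (one per 16 KiB of capacity). When creating the new volume, mke2fs's -i bytes-per-inode option is the knob that controls this ratio, so a larger value there means fewer inodes and less table overhead.

```shell
# Inode-table overhead for the filesystem described in the question
inodes=212459520
inode_size=256
gib=$(( inodes * inode_size / 1024 / 1024 / 1024 ))
echo "inode tables occupy about $gib GiB"
```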
| Is it possible to change Inode count on an ext4 filesystem? [duplicate] |
1,643,300,188,000 |
On my personal home computer running Kubuntu Linux 13.04 I'm having trouble mounting a partition that is very dear to me. My backup policy is to perform a backup about monthly, so I do have a backup from August :). Is there any way to recover the personal files that are on this drive?
The drive is a 1.5 year old 1000 GiB Western Digital Green drive, with home mounted on /dev/sdc2, the filesystem root on /dev/sdc6, and media files on /dev/sdc3. Therefore of course sdc2 would be the one to go! So far as I know there were no power outages or other such events during the life of the drive. I managed to get this information by running a Kubuntu LiveCD:
kubuntu@kubuntu:~$ sudo fdisk -l
Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00008044
Device Boot Start End Blocks Id System
/dev/sdc1 * 4094 88066047 44030977 5 Extended
/dev/sdc2 88066048 1419266047 665600000 83 Linux
/dev/sdc3 1419266048 1953523711 267128832 83 Linux
/dev/sdc5 4096 6146047 3070976 82 Linux swap / Solaris
/dev/sdc6 6148096 47106047 20478976 83 Linux
/dev/sdc7 47108096 88066047 20478976 83 Linux
kubuntu@kubuntu:~$ sudo mount -t ext4 /dev/sdc2 c1
mount: wrong fs type, bad option, bad superblock on /dev/sdc2,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
kubuntu@kubuntu:~$ sudo debugfs -c /dev/sdc2
debugfs 1.42.5 (29-Jul-2012)
/dev/sdc2: Attempt to read block from filesystem resulted in short read while opening filesystem
debugfs: quit
kubuntu@kubuntu:~$ sudo fsck /dev/sdc2
fsck from util-linux 2.20.1
e2fsck 1.42.5 (29-Jul-2012)
fsck.ext4: Attempt to read block from filesystem resulted in short read while trying to open /dev/sdc2
Could this be a zero-length partition?
kubuntu@kubuntu:~$ sudo fsck.ext4 -v /dev/sdc2
e2fsck 1.42.5 (29-Jul-2012)
fsck.ext4: Attempt to read block from filesystem resulted in short read while trying to open /dev/sdc2
Could this be a zero-length partition?
kubuntu@kubuntu:~$ dmesg | tail
[ 2684.532855] Descriptor sense data with sense descriptors (in hex):
[ 2684.532858] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00
[ 2684.532876] 05 3f c8 b0
[ 2684.532885] sd 5:0:0:0: [sdc]
[ 2684.532893] Add. Sense: Unrecovered read error - auto reallocate failed
[ 2684.532898] sd 5:0:0:0: [sdc] CDB:
[ 2684.532902] Read(10): 28 00 05 3f c8 b0 00 00 08 00
[ 2684.532917] end_request: I/O error, dev sdc, sector 88066224
[ 2684.532927] Buffer I/O error on device sdc2, logical block 22
[ 2684.532973] ata6: EH complete
Help me Unix & Linux, you're our only hope.
|
There might still be hope, but your drive seems to have hardware problems (my interpretation of the read error in dmesg output).
You should try to make a copy of what is recoverable from that partition onto another drive (to minimize disc access). Use ddrescue for that; it might take a while but it gets most, if not all, of the recoverable data off the partition.
If possible start from another disc, from a Live CD, or connect the drive to a different computer that has its own Linux to boot from. The reason I would do so is that the read errors encountered while doing ddrescue probably have an impact on the disc access speed on the other partitions.
Once you have that copy (let's call it the original copy) as a file on another disc, make a copy of that copy. Then try to do a filesystem check on that second copy. If the recovery attempt scrambles it, you can start again from the original copy and try something else.
| Short read while trying to open partition |
1,643,300,188,000 |
In an ext4 filesystem, suppose that file1 has inode number 1, and that file2 has inode number 2. Now, regardless of any crtime timestamp that might be available, is it wrong to assume that file1 was created earlier than file2 only because inode 1 is less than inode 2?
|
Lower inode number doesn't prove older.
A simple case that would change that sequence is deleting a file which would free the inode. That inode therefore becomes available for future use.
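A quick sketch of that recycling (whether the freed number is actually reused depends on the filesystem and its allocation state, so treat the comparison as illustrative, not guaranteed):

```shell
tmpdir=$(mktemp -d)
touch "$tmpdir/file1" "$tmpdir/file2"
ino1=$(stat -c %i "$tmpdir/file1")
ino2=$(stat -c %i "$tmpdir/file2")
rm "$tmpdir/file1"             # frees file1's inode
touch "$tmpdir/file3"          # file3 may now be handed the recycled, lower inode
ino3=$(stat -c %i "$tmpdir/file3")
printf 'file1=%s file2=%s file3=%s\n' "$ino1" "$ino2" "$ino3"
rm -r "$tmpdir"
```

If `file3` receives `file1`'s old inode number, the newest file carries the lowest number, which is exactly the counterexample described above.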
| Does inode number determine what files were created earlier than others? |
1,643,300,188,000 |
The ext2/3/4 filesystem checker has two options that seem to be very similar, -p and -y.
Both seem to perform an automatic repair, but the manpage states that -p can exit when it encounters certain errors while for -y no such thing is mentioned. Is this the only difference?
|
There is a specific difference which when we read it twice might make more sense.
-p - Automatically repair the file system without any questions.
-y - Assume an answer of `yes' to all questions.
So fsck -p ("preen" mode) tries to fix the file system automatically without any user intervention, but it only applies repairs that are safe to make without human judgement; if it discovers a problem that needs an administrator's decision, it prints a description and exits.
fsck -y, however, simply assumes the answer "yes" to every question, no matter how drastic the repair.
An example can be thought of like this:
If some changes need to be made to a partition, fsck -y will just go ahead, assume yes, and make the changes.
fsck -p, on the other hand, will make only the safe changes, and for anything riskier it will stop and ask you to run fsck manually.
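You can watch both modes on a throwaway image file — no root or real device required, assuming e2fsprogs is installed:

```shell
PATH=$PATH:/sbin:/usr/sbin            # fsck/mkfs often live outside a user PATH
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 status=none
mkfs.ext4 -q -F "$img"                # -F: operate on a regular file
fsck.ext4 -p "$img"; p_status=$?      # preen: quiet, safe fixes only
fsck.ext4 -y "$img"; y_status=$?      # answers yes to every repair question
echo "preen=$p_status yes=$y_status"
rm -f "$img"
```

On a clean filesystem both exit 0; on a damaged one, -p may exit with code 4 ("errors left uncorrected") where -y would plough ahead.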
| What is the difference between fsck options -y and -p? |
1,643,300,188,000 |
zerofree -v /dev/sda1 returned
123642/1860888/3327744.
The man page does not explain what those numbers are:
http://manpages.ubuntu.com/manpages/natty/man8/zerofree.8.html
I found some code on github:
https://github.com/haggaie/zerofree/blob/master/zerofree.c
And there's this line:
if ( verbose ) {
printf("\r%u/%u/%u\n", modified, free, fs->super->s_blocks_count);
}
So I guess the middle number was the free space (in kB?), the first one might be the amount that was written over with zeros, and the last one lost me.
What do you think?
|
I have the same tool installed on Fedora 19, and I noticed in the .spec file a URL which led to this page titled: Keeping filesystem images sparse. This page included some examples for creating test data, so I ran the commands to create the corresponding files.
Example
$ dd if=/dev/zero of=fs.image bs=1024 seek=2000000 count=0
$ /sbin/mke2fs fs.image
$ ls -l fs.image
-rw-rw-r--. 1 saml saml 2048000000 Jan 4 21:42 fs.image
$ du -s fs.image
32052 fs.image
When I ran the zerofree -v command I got the following:
$ zerofree -v fs.image
...counting up percentages 0%-100%...
0/491394/500000
Interrogating with filefrag
When I used the tool filefrag to interrogate the fs.image file I got the following.
$ filefrag -v fs.image
Filesystem type is: ef53
File size of fs.image is 2048000000 (500000 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 620: 11714560.. 11715180: 621:
1: 32768.. 32769: 11716608.. 11716609: 2: 11715181:
2: 32892.. 33382: 11716732.. 11717222: 491: 11716610:
3: 65536.. 66026: 11722752.. 11723242: 491: 11717223:
...
The s_blocks_count referenced in your source code also coincided with the source code for my version of zerofree.c.
if ( verbose ) {
printf("\r%u/%u/%u\n", nonzero, free,
current_fs->super->s_blocks_count) ;
}
So we now know that s_blocks_count is the 500,000 blocks of 4096 bytes.
Interrogating with tune2fs
We can also query the image file fs.image using tune2fs.
$ sudo tune2fs -l fs.image | grep -i "block"
Block count: 500000
Reserved block count: 25000
Free blocks: 491394
First block: 0
Block size: 4096
Reserved GDT blocks: 122
Blocks per group: 32768
Inode blocks per group: 489
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
From this output we can definitely see that the 2nd and 3rd numbers being reported by zerofree are in fact:
Free blocks: 491394
Block count: 500000
Back to the source code
The 1st number being reported is in fact the number of blocks that are found that are not zero. This can be confirmed by looking at the actual source code for zerofree.
There is a counter called, nonzero which is getting incremented in the main loop that's analyzing the free blocks.
if ( i == current_fs->blocksize ) {
continue ;
}
++nonzero ;
if ( !dryrun ) {
ret = io_channel_write_blk(current_fs->io, blk, 1, empty) ;
if ( ret ) {
fprintf(stderr, "%s: error while writing block\n", argv[0]) ;
return 1 ;
}
}
Conclusion
So after some detailed analysis it would look like those numbers are as follows:
number of nonzero free blocks encountered (which were subsequently zeroed)
number of free blocks within the filesystem
total number of blocks within the filesystem
| zerofree verbose returns what? |
1,643,300,188,000 |
I was running into performance problems with hundreds of thousands of files in single directories when I needed to do certain wildcard matches. From my application's point of view, a simple solution is to place the files in deeply nested folders.
The expected upper bound for the total number of folders across the whole hierarchy is 9^30. It can be assumed that this limit will never be reached (see comment below). The number of folders will just simply grow as files are added.
Question: Are there any implications from a filesystem perspective when vast amounts of folders are created on an ext4 filesystem? How much space is consumed by, e.g., a folder just containing another folder? Will I run into trouble because of too much metadata?
(There are certain advantages from my app's perspective with the above structure compared to, for example, hash-based folders in a simpler hierarchy; I am aware of "better" methods to organise data.)
|
Each folder consumes one inode (256 byte) and at least one block (probably 4096 byte). The bigger problem may be access time over several hierarchy layers.
The performance problem is probably not due to the folder size but to pathname expansion. Pathname expansion has two problems:
It sorts the results (which cannot be disabled) which takes a disturbingly long time for huge amounts of items.
It creates (depending on the kind of usage) illegal command lines (too many items).
You should address this on application level. Read 100 file names at a time (unsorted, with find or ls -U) and sort these small groups if necessary. This also allows for parallel reading from disk and CPU usage.
If you really need pathname expansion and/or sorting then you may speed up the process a lot (if the files change only seldom) by adding the files to their (empty) directories in sort order.
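A sketch of that batched, unsorted reading with GNU find (the path and the batch size of 100 are arbitrary here):

```shell
dir=$(mktemp -d)
for i in $(seq 1 250); do touch "$dir/item-$i"; done
# Read the first batch of 100 names in raw directory order,
# then sort only that small batch:
batch=$(find "$dir" -mindepth 1 -maxdepth 1 -printf '%f\n' | head -n 100 | sort)
count=$(printf '%s\n' "$batch" | wc -l)
echo "batch-size=$count"
rm -r "$dir"
```

Because `head` stops `find` after 100 names, no full listing (and no full sort) of the huge directory ever happens.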
| What is the "cost" of deeply nested folders in ext4? |
1,643,300,188,000 |
Today I found an "empty" directory with a size of 4MB.
It had no visible contents, so I tried ls -lah. This showed me some hidden files (not very large). Searching for the reason why the directory was so large I found that the dot file (.) had a size of 3.9MB.
What gets stored in that file? Isn't that just a kind of link to the same directory?
Here is the shell output (anonymized):
-bash# more /proc/version
Linux version 2.6.18-8.1.15.el5 ([email protected]) (gcc version 4.1.1 20070105 (Red Hat 4.1.1-52)) #1 SMP Mon Oct 22 08:32:04 EDT 2007
-bash# pwd
/data/foo/bar/tmp
-bash# ls -lah
total 4.1M
drwxrwxrwx 3 nobody nobody 3.9M Nov 21 10:02 .
drwxrwxrwx 16 nobody nobody 4.0K Aug 27 17:26 ..
-rw------- 1 root root 20K Oct 25 14:06 .bash_history
...
|
The dot file, like every directory, contains a list of names for the files in this directory and their inode numbers. So if you once had lots of files in that directory (not unlikely for a "tmp" directory) that would have made the directory entry grow to this size.
After the files are gone, the file system doesn't automatically shrink the directory file again.
You can experiment with this yourself by making a new empty directory, do ls -la in it to see the initial size (4096 on my machine) then touching a lot of files, which will make the directory size grow.
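That experiment can be run non-destructively in a scratch directory; the exact sizes depend on the filesystem, so treat the numbers as illustrative:

```shell
dir=$(mktemp -d)
size_before=$(stat -c %s "$dir")       # typically 4096 on ext4
for i in $(seq 1 2000); do
    touch "$dir/a-deliberately-long-file-name-to-fill-the-directory-$i"
done
size_after=$(stat -c %s "$dir")        # the directory entry has grown
rm -f "$dir"/a-deliberately-long-file-name-*
size_empty=$(stat -c %s "$dir")        # on ext* it does not shrink back
echo "before=$size_before after=$size_after empty=$size_empty"
rmdir "$dir"
```

On ext2/3/4 the final size stays at the grown value even though the directory is empty again, which is exactly what happened to the "tmp" directory above.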
(Yes I know that I'm glossing over/being inaccurate about a lot of details here. But the OP didn't ask for a full explanation of how EXT* file systems work.)
| Why could the size of the "dot" file "." exceed 4096? |
1,643,300,188,000 |
Faux pas: The "fast" method I mention below, is not 60 times faster than the slow one. It is 30 times faster. I'll blame the mistake on the hour (3AM is not my best time of day for clear thinking :)..
Update: I've added a summary of test times (below).
There seem to be two issues involved with the speed factor:
The choice of command used (Time comparisons shown below)
The nature of large numbers of files in a directory... It seems that "big is bad". Things get disproportionately slower as the numbers increase...
All the tests have been done with 1 million files.
(real, user, and sys times are in the test scripts)
The test scripts can be found at paste.ubuntu.com
#
# 1 million files
# ===============
#
# |time |new dir |Files added in ASCENDING order
# +---- +------- +-------------------------------------------------
# real 01m 33s Add files only (ASCENDING order) ...just for ref.
# real 02m 04s Add files, and make 'rm' source (ASCENDING order)
# Add files, and make 'rm' source (DESCENDING order)
# real 00m 01s Count of filenames
# real 00m 01s List of filenames, one per line
# ---- ------- ------
# real 01m 34s 'rm -rf dir'
# real 01m 33s 'rm filename' via rm1000filesPerCall (1000 files per 'rm' call)
# real 01m 40s 'rm filename' via ASCENDING algorithm (1000 files per 'rm' call)
# real 01m 46s 'rm filename' via DESCENDING algorithm (1000 files per 'rm' call)
# real 21m 14s 'rm -r dir'
# real 21m 27s 'find dir -name "hello*" -print0 | xargs -0 -n 1000 rm'
# real 21m 56s 'find dir -name "hello*" -delete'
# real 23m 09s 'find dir -name "hello*" -print0 | xargs -0 -P 0 rm'
# real 39m 44s 'rm filename' (one file per rm call) ASCENDING
# real 47m 26s 'rm filename' (one file per rm call) UNSORTED
#
I recently created and deleted 10 million empty test files.
Deleting files on a name-by-name basis (i.e. rm filename), I found out the hard way that there is a huge time difference between two different methods...
Both methods use the exact same rm filename command.
Update: as it turns out, the commands were not exactly the same... One of them was sending 1000 filenames at a time to 'rm'... It was a shell brace-expansion issue where I thought each filename was being written to the feeder file on a line of its own, but actually it was 1000 per line
The filenames are provided via a 'feeder file' into a while read loop...
The feeder file is the output of ls -1 -f
The methods are identical in all respects, except for one thing:
the slow method uses the unsorted feeder file direct from ls -1 -f
the fast method uses a sorted version of that same unsorted file
I'm not sure whether the sorting is the issue here, or whether the sorted feeder file just happens to match the sequence in which the files were created (I used a simple ascending integer algorithm)
For 1 million files, the fast rm filename method is 60 times faster than the slow method... again, I don't know if this is a "sorting" issue, or a behind-the-scenes hash table issue... I suspect it is not a simple sorting issue, because why would ls -1 -f intentionally give me an unsorted listing of a freshly added "sorted" sequence of filenames...
I'm just wondering what is going on here, so it doesn't take me days (yes, days) to delete the next 10 million files :) ... I say "days" because I tried so many alternatives, and the times involved increase disproportionately to the number of files involved... so I've only tested 1 million in detail
BTW: Deleting the files via the "sorted list" of names is actually faster than rm -rf by a factor of 2.
and: rm -r was 30 times slower than the "sorted list" method
... but is "sorted" the issue here? or is it more related to a hashing(or whatever) method of storage used by ext4?
The thing which quite puzzles me is that each call to rm filename is unrelated to the previous one .. (well, at least it is that way from the 'bash' perspective)
I'm using Ubuntu / bash / 'ext4' / SATA II drive.
|
rm -r is expected to be slow, as it's recursive: a depth-first traversal has to be made of the directory structure.
Now, how did you create the 10 million files? Did you use a script that loops in some order (1.txt, 2.txt, 3.txt, ...)? If yes, then those files may also have been allocated in that same order, in contiguous blocks on the HDD, so deleting in the same order will be faster.
"ls -f" enables -aU, which lists the entries in directory (on-disk) order.
| Why is deleting files by name painfully slow and also exceptionally fast? |
1,643,300,188,000 |
I'm trying to mount a device but without success.
The strange thing is that the mount command succeeds and return exit code 0, but the device is not mounted.
Any idea on why this happens or how to investigate it?
Please see the example below:
[root@mymachine ~]# blkid -o list
device fs_type label mount point UUID
-----------------------------------------------------------------------------------------
/dev/xvda1 xfs / 29342a0b-e20f-4676-9ecf-dfdf02ef6683
/dev/xvdy ext4 /vols/data 72c23c30-2704-42ec-9518-533c182e2b22
/dev/xvdb swap <swap> 990ff722-158c-4ad5-963a-0bc9e1e2b17a
/dev/xvdx ext4 (not mounted) 956b5553-d8b4-4ffe-830c-253e1cb10a2f
[root@mymachine ~]# grep /dev/xvdx /etc/fstab
/dev/xvdx /vols/data5 ext4 defaults 0 0
[root@mymachine ~]# mount -a; echo $?
0
[root@mymachine ~]# blkid -o list
device fs_type label mount point UUID
-----------------------------------------------------------------------------------------
/dev/xvda1 xfs / 29342a0b-e20f-4676-9ecf-dfdf02ef6683
/dev/xvdy ext4 /vols/data 72c23c30-2704-42ec-9518-533c182e2b22
/dev/xvdb swap <swap> 990ff722-158c-4ad5-963a-0bc9e1e2b17a
/dev/xvdx ext4 (not mounted) 956b5553-d8b4-4ffe-830c-253e1cb10a2f
[root@mymachine ~]# mount /dev/xvdx /vols/data5; echo $?
0
[root@mymachine ~]# blkid -o list
device fs_type label mount point UUID
-----------------------------------------------------------------------------------------
/dev/xvda1 xfs / 29342a0b-e20f-4676-9ecf-dfdf02ef6683
/dev/xvdy ext4 /vols/data 72c23c30-2704-42ec-9518-533c182e2b22
/dev/xvdb swap <swap> 990ff722-158c-4ad5-963a-0bc9e1e2b17a
/dev/xvdx ext4 (not mounted) 956b5553-d8b4-4ffe-830c-253e1cb10a2f
[root@mymachine ~]#
Full fstab:
[root@mymachine ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon May 1 18:59:01 2017
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=29342a0b-e20f-4676-9ecf-dfdf02ef6683 / xfs defaults 0 0
/dev/xvdb swap swap defaults,nofail 0 0
/dev/xvdy /vols/data ext4 defaults 0 0
/dev/xvdx /vols/data5 ext4 defaults 0 0
|
Normally mount doesn't return 0 if there have been problems. When I had a similar problem, the reason was that systemd unmounted the filesystem immediately after the mount.
You can try strace mount /dev/xvdx /vols/data5 to see the result of the syscall. You can also try mount /dev/xvdx /vols/data5; ls -li /vols/data5 to see whether something is mounted immediately after the mount command.
| Why is mount failing silently for me? |
1,643,300,188,000 |
I am using Linux as guest OS in VirtualBox. I deleted huge number of files from its filesystem. Now i want to shrink the filesystem image file (vdi). The shrinking works by compressing filesystem image wherever it has "null" value in disk.
It seems an application called zerofree can write "null" into free space of filesystem in such a way that it becomes sparse. But the instructions say it works only on ext2/ext3. I have ext4 on my guest OS.
Why won't it work on ext4? (The reason cited is "extents", but can someone shed more light on it?)
Will it work if I mount the ext4 as ext3 and then remount as ext4?
Are there any other tools that can do a similar thing as zerofree on ext4?
|
The page you reference (http://intgat.tigress.co.uk/rmy/uml/index.html) states:
The utility also works on ext3 or ext4 filesystems.
So I'm not sure where you're getting that it doesn't work on ext4 filesystems.
Note that the zerofree utility is different from the zerofree kernel patch that is mentioned on the same page (which indeed does not seem to have a version for ext4).
Update: At least in the case of VirtualBox, I don't think you need this utility at all. In my testing, on a stock Ubuntu 10.04 install on ext4, you can just zero out the filesystem like so:
$ dd if=/dev/zero of=test.file
...wait for the virtual disk to fill, then
$ rm test.file
and shut the VM down. Then on your VirtualBox host do:
$ VBoxManage modifyhd --compact yourImage.vdi
and you'll recover all the unused space.
| How to make ext4 filesystem sparse? |
1,643,300,188,000 |
I want to try Btrfs. I've already found that you can make a snapshot of a live system, but there are a few things I haven't found answers for. Well, as I understand it, a snapshot is basically a full copy in archive form of some sort. So can I make a snapshot of my live btrfs system and place that snapshot on my non-btrfs hard drive (ext4, for example)?
Also, I'm running full disk encryption (luks). Are snapshots going to be encrypted if I transfer them somewhere? Do snapshots copy actual data from the partition itself (in that case it's going to be encrypted obviously) or it works differently?
Also, how are btrfs snapshots protected from read access? Can other users read snapshots? Or only root? Is it manageable?
|
A snapshot (in this sense) is a part of the filesystem. In btrfs terminology, it's a subvolume — it's one of the directory trees on the volume. It isn't in “archive form”. Making a snapshot of a subvolume creates a new subvolume which contains the data of the original volume at the date the snapshot was made. Subsequent writes to the original subvolume don't affect the snapshot and vice versa. All subvolumes are part of the same volume — they designate subsets (potentially overlapping) of the data in the volume.
The parts of the snapshot that haven't been modified in either subvolume share their storage. Creating a snapshot initially requires no storage except for the snapshot control data; the amount of storage increases over time as the content of the subvolumes diverge.
The most important property of snapshot creation is that it's atomic: it takes a picture of the data at a point in time. This is useful to make backups: if the backup program copies files from the live system, it might interact poorly with modifications to the files. For example, if a file is moved from directory A to directory B, but the backup program traversed B before the move and A after the move, the file wouldn't be included in the backup. Snapshots solve this problem: the file will be in A if the snapshot is made before the move and in B if it's made after, but either way it will be there. Then the backup program can copy from the snapshot to the external media.
Since the snapshot is on the same volume as the original, it's stored in the same way, e.g. it's encrypted if the volume is encrypted.
A snapshot reproduces the original directory tree, including permissions and all other metadata. So the permissions are the same as the original. In addition, users must be able to access the snapshot directory itself. If you don't want users to be able to access a snapshot at all, create it under a directory that they can't access (you can place the snapshot anywhere you want).
If you want to make a copy of the snapshot outside the filesystem, access or mount the snapshot then make a copy with your favorite program (cp, rsync, etc.). You can find sample commands in the btrfs wiki; see the manual page for a full reference.
| Btrfs snapshot to non-btrfs disk. Encryption, read access |
1,643,300,188,000 |
My program creates many small short-lived files. They are typically deleted within a second after creation. The files are in an ext4 file system backed by a real hard disk. I know that Linux periodically flushes (pdflush) dirty pages to disk. Since my files are short-lived, most likely they are not cached by pdflush. My question is, does my program cause a lot of disk writes? My concern is my hard disk's life.
Since the files are small, let's assume the sum of their size is smaller than dirty_bytes and dirty_background_bytes.
Ext4 has journalling turned on by default, i.e. a metadata journal. I also want to know whether the metadata or the data is written to disk.
|
A simple experiment using ext4:
Create a 100MB image...
# dd if=/dev/zero of=image bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB) copied, 0.0533049 s, 2.0 GB/s
Make it a loop device...
# losetup -f --show image
/dev/loop0
Make filesystem and mount...
# mkfs.ext4 /dev/loop0
# mount /dev/loop0 /mnt/tmp
Make some kind of run with short-lived files. (Change this to any method you prefer.)
for ((x=0; x<1000; x++))
do
(echo short-lived-content-$x > /mnt/tmp/short-lived-file-$x
sleep 1
rm /mnt/tmp/short-lived-file-$x ) &
done
Umount, sync, unloop.
# umount /mnt/tmp
# sync
# losetup -d /dev/loop0
Check the image contents.
# strings image | grep short-lived-file | tail -n 3
short-lived-file-266
short-lived-file-895
short-lived-file-909
# strings image | grep short-lived-content | tail -n 3
In my case the first command listed all the file names, but the second found none of the file contents. So the file names (metadata) had been written to disk, but the data contents had not.
| Are short-lived files flushed to disk? |
1,643,300,188,000 |
I am partitioning a disk with the intent to have an ext4 filesystem on the partition. I am following a tutorial, which indicates that there are two separate steps where the ext4 filesystem needs to be specified. The first is by parted when creating the partition:
sudo parted -a opt /dev/sda mkpart primary ext4 0% 100%
The second is by the mkfs.ext4 utility, which creates the filesystem itself:
sudo mkfs.ext4 -L datapartition /dev/sda1
My question is: what exactly are each of these tools doing? Why is ext4 required when creating the partition? I would have thought the defining of the partition itself was somewhat independent of the constituent file system.
(The tutorial I'm following is here: https://www.digitalocean.com/community/tutorials/how-to-partition-and-format-storage-devices-in-linux)
|
A partition can have a type. The partition type is a hint as in "this partition is designated to serve a certain function". Many partition types are associated with certain file-systems, though the association is not always strict or unambiguous. You can expect a partition of type 0x07 to have a Microsoft compatible file-system (e.g. FAT, NTFS or exFAT) and 0x83 to have a native Linux file-system (e.g. ext2/3/4).
The creation of the file-system is indeed a completely independent and orthogonal step (you can put whatever file-system wherever you want – just do not expect things to work out of the box).
parted defines the partition as in "a part of the overall disk". It does not actually need to know the partition type (the parameter is optional). In use, however, auto-detection of the file-system and hence auto-mounting may not work properly if the partition type does not correctly hint at the file-system.
A partition is a strictly linear piece of storage space. The mkfs.ext4 and its variants create file-systems so you can have your actual directory tree where you can conveniently store your named files in.
| Why does parted need a filesystem type when creating a partition, and how does its action differ from a utility like mkfs.ext4? |
1,643,300,188,000 |
I'm wondering if this is considered safe. I know the file handles work just fine as long as a link remains, and I know the identifier is the inode rather than the name, but I am not sure how it works across different FS.
For example copying from an ext4 harddrive to a NTFS USB stick, or copying from a FAT stick to an ext4 drive.
I was just copying over a bunch of large media files, and renamed them before the copy was done. The checksums match. I wonder if it is always safe, will it work in the opposite direction, are there quirks I should know about or reasons to avoid doing this?
The OS/Distro is Ubuntu with the 5.0.0-15 Linux kernel.
|
I am not sure how it works across different FS.
The rename operation itself doesn’t operate across different file systems; there is no difference between writing to a file from say a text editor and writing to a file using cp with a source file on another file system.
On Linux, the rename system call is transparent to other links to the file, which include other hard links and open file descriptions (and descriptors). The manpage explicitly states
Open file descriptors for oldpath are also unaffected.
(I’m qualifying with “on Linux” only because I couldn’t find a reference in POSIX; I think this is common across POSIX-style operating systems.)
So when you’re copying a file across file systems, cp opens the source for reading, the target for writing, and starts copying. Rename operations don’t affect the file descriptors it’s using; you can rename the source and/or the target without affecting cp.
Another way to think of this is that the file’s name in its containing directory is part of its directory entry, which points at its inode; open file descriptions are other pointers to the inode, as are other hard links. Changing the file name doesn’t affect any other existing pointers.
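A small shell demonstration that an open file descriptor survives a rename (fd 3 in the current shell plays the role of cp's open target):

```shell
tmpdir=$(mktemp -d)
exec 3> "$tmpdir/original"                # open the file for writing
echo "first line" >&3
mv "$tmpdir/original" "$tmpdir/renamed"   # rename while fd 3 is still open
echo "second line" >&3                    # still reaches the same inode
exec 3>&-                                 # close fd 3
result=$(cat "$tmpdir/renamed")
printf '%s\n' "$result"
rm -r "$tmpdir"
```

Both lines end up in the renamed file: the write after the `mv` went through the open descriptor to the same inode, exactly as described above.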
The caveats to watch out for are that tools such as mv don’t limit themselves to what the rename system call can do; if you mv files across file systems, the rename will fail (or mv will figure out that the operation is across file systems and won’t even attempt it), and mv will then resort to manually copying the file contents and deleting the original. This won’t give good results if the file being renamed is being changed simultaneously.
| Renaming a file while it is being written |
1,643,300,188,000 |
I have a small "rescue" system (16 MB) that I boot into RAM as ramdisk. The initrd disk that I am preparing needs to be formatted. I think ext4 will do fine, but obviously, it doesn't make any sense to use journal or other advanced ext4 features.
How can I create the most minimal ext4 filesystem?
without journal
without any lazy_init
without any extended attributes
without ACL
without large files
without resizing support
without any unnecessary metadata
The most bare minimum filesystem possible?
|
Or you could simply use ext2
For ext4:
mke2fs -t ext4 -O ^has_journal,^uninit_bg,^ext_attr,^huge_file,^64bit [/dev/device or /path/to/file]
man ext4 contains a whole lot of features you can disable (using ^).
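To verify which features actually ended up disabled, you can create the filesystem in an image file and inspect it with tune2fs. A reduced feature list is used here, since exact feature names vary between e2fsprogs versions:

```shell
PATH=$PATH:/sbin:/usr/sbin
img=$(mktemp)
truncate -s 16M "$img"
mke2fs -q -F -t ext4 -O '^has_journal,^ext_attr,^huge_file' "$img"
features=$(tune2fs -l "$img" | grep -i 'filesystem features')
echo "$features"                      # has_journal should be absent
rm -f "$img"
```

Any feature you disabled with `^` should be missing from the "Filesystem features" line.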
| Minimalistic ext4 filesystem without journal and other advanced features |
1,643,300,188,000 |
How can we lock a symlink so it cannot be deleted?
With a normal file/directory chattr +i /file/location can achieve this but doing so with a symlink we get chattr: Operation not supported while reading flags on my-file.
There is a similar question, How to set `chattr +i` for my `/etc/resolv.conf `?, but without a solution that could be applied here.
|
This doesn’t provide a solution, but it explains why chattr can’t make a symlink immutable.
On Linux, immutable attributes are part of a set of flags which are controlled using the FS_IOC_SETFLAGS ioctl. Historically this was implemented first in ext2, and chattr itself is still part of e2fsprogs. When it attempts to retrieve the flags, before it can set them, chattr explicitly checks that the file it’s handling is a regular file or a directory:
if (!lstat(name, &buf) &&
!S_ISREG(buf.st_mode) && !S_ISDIR(buf.st_mode)) {
goto notsupp;
}
One might think that removing these checks, or changing them to allow symlinks too, would be a good first step towards allowing chattr to make a symlink immutable, but the next hurdle comes up immediately thereafter:
fd = open (name, OPEN_FLAGS);
if (fd == -1)
return -1;
r = ioctl (fd, EXT2_IOC_GETFLAGS, &f);
ioctl operates on file descriptors, which means the target has to be opened before its flags can be set. Symlinks can’t be opened for use with ioctl; while open supports O_NOFOLLOW and O_NOPATH on symlinks, the former on its own will fail with ELOOP, and the latter will return a file descriptor which can’t be used with ioctl.
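A quick demonstration of that code path; no root is needed, since chattr bails out at the lstat check before it ever tries to open the link:

```shell
tmpdir=$(mktemp -d)
touch "$tmpdir/target"
ln -s target "$tmpdir/link"
# Expected to fail with "Operation not supported", regardless of privileges:
msg=$(chattr +i "$tmpdir/link" 2>&1; echo "exit=$?")
printf '%s\n' "$msg"
rm -r "$tmpdir"
```

The same error appears for root and non-root alike, confirming that the refusal happens in chattr's own file-type check rather than in the kernel.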
| How to make a symlink read only (`chattr +i /location/symlink`)? |