Will a standard fresh linux (Ubuntu 11.10 to be exact) install and drive re-format (full) successfully TRIM my SSD, or do I need to do something extra?
I know that ext4 will TRIM blocks on erase when I specify the discard option, but I want to start with a completely TRIMmed drive if possible.
|
TRIM is a command that needs to be sent for individual blocks, so a fresh install and format alone won't necessarily cover the whole drive. I have asked a similar question before (What is the recommended way to empty a SSD?), and the suggestion there was to use ATA Secure Erase, a command sent to the device itself that clears all data.
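On newer systems, a hedged sketch of how one might TRIM all free space in a single pass is fstrim from util-linux, run against the freshly formatted, mounted filesystem (the mount point below is a placeholder, and the actual trim needs root):

```shell
fstrim --version   # confirm the tool is available (part of util-linux)
# One-off trim of every unused block on a mounted filesystem:
#   sudo fstrim -v /mnt/ssd    # /mnt/ssd is a placeholder mount point
# Many distributions ship a periodic fstrim.timer instead of relying on
# the 'discard' mount option, which trims on every delete.
```

This is an alternative to Secure Erase when you only need the free space trimmed rather than the whole device wiped.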
| Will formatting my drive TRIM my SSD? |
my home directory is on a separate partition which still uses ext3. Is there a way to convert this partition into ext4 in a non-destructive way?
|
Yes, you can. This is explained very nicely in the ext4-wiki at kernel.org. Basically it all boils down to
tune2fs -O extents,uninit_bg,dir_index /dev/DEV
e2fsck -fDC0 /dev/DEV
with /dev/DEV replaced by the partition in question. Although this should be non-destructive, I'd still strongly suggest backing up your data before doing it.
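If you want to see what the conversion does before touching the real partition, you can rehearse it on a throwaway file-backed image (no root needed; img is just a scratch file name):

```shell
truncate -s 64M img                         # scratch image file
mkfs.ext3 -Fq img                           # start with an ext3 filesystem
tune2fs -O extents,uninit_bg,dir_index img  # enable the ext4 features
e2fsck -fp img || true                      # fsck repairs the group checksums;
                                            # exit status 1 just means "fixed"
dumpe2fs -h img 2>/dev/null | grep '^Filesystem features'
# the features line should now include "extent"
```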
| Can I convert an ext3 partition into ext4 without formatting? |
I am currently using backintime to take "snapshots" of my file system. It is similar to rsnapshot in that it makes hard links to unchanged files. I have recently run out of inodes on my EXT4 filesystem: df -hi reveals I have used 9.4 million inodes. A rough count of the number of current directories times the number of snapshots, plus the number of current files, suggests that I may in fact be using 9.4 million inodes.
From what I understand the EXT4 filesystem can support around 2^32 inodes. I am considering reformatting the partition to use all 4 billion or so inodes, but I am concerned that this is a bad idea. What are the drawbacks of having so many inodes in an EXT4 filesystem? Is there a better choice of filesystem for an application like this?
|
That is a really bad idea. Every inode consumes 256 bytes (128 if configured that way at mkfs time). With 2^32 inodes, the inode tables alone would consume 1 TiB of space.
Other file systems like btrfs can create inodes dynamically. Use one of them instead.
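The arithmetic behind that 1 TiB figure, as a quick sanity check (2^32 inodes at 256 bytes each):

```shell
bytes=$(( 2 ** 32 * 256 ))    # one 256-byte inode table entry per inode
echo "$bytes bytes = $(( bytes / 2 ** 40 )) TiB"
# 1099511627776 bytes = 1 TiB, before a single byte of file data is stored
```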
| Drawbacks of increasing number of inodes in EXT4 |
When running this script:
#!/usr/bin/env python3
f = open("foo", "w")
f.write("1"*10000000000)
f.close()
print("closed")
I can observe the following process on my Ubuntu machine:
The memory fills with 10GB.
The Page Cache fills with 10GB of dirty pages. (/proc/meminfo)
"closed" is printed and the script terminates.
A while after, the dirty pages decrease.
However, if file "foo" already exists, close() blocks until all dirty pages have been written back.
What is the reason for this behavior?
This is the strace if the file does NOT exist:
openat(AT_FDCWD, "foo", O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0666) = 3
fstat(3, {st_mode=S_IFREG|0664, st_size=0, ...}) = 0
ioctl(3, TCGETS, 0x7ffd50dc76f0) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
ioctl(3, TCGETS, 0x7ffd50dc76c0) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
lseek(3, 0, SEEK_CUR) = 0
mmap(NULL, 10000003072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcd9892e000
mmap(NULL, 10000003072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fcb4486f000
write(3, "11111111111111111111111111111111"..., 10000000000) = 2147479552
write(3, "11111111111111111111111111111111"..., 7852520448) = 2147479552
write(3, "11111111111111111111111111111111"..., 5705040896) = 2147479552
write(3, "11111111111111111111111111111111"..., 3557561344) = 2147479552
write(3, "11111111111111111111111111111111"..., 1410081792) = 1410081792
munmap(0x7fcb4486f000, 10000003072) = 0
munmap(0x7fcd9892e000, 10000003072) = 0
close(3) = 0
write(1, "closed\n", 7closed
) = 7
rt_sigaction(SIGINT, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fcfedd5cf20}, {sa_handler=0x62ffc0, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7fcfedd5cf20}, 8) = 0
sigaltstack(NULL, {ss_sp=0x2941be0, ss_flags=0, ss_size=8192}) = 0
sigaltstack({ss_sp=NULL, ss_flags=SS_DISABLE, ss_size=0}, NULL) = 0
exit_group(0) = ?
+++ exited with 0 +++
This is the strace if it exists:
openat(AT_FDCWD, "foo", O_WRONLY|O_CREAT|O_TRUNC|O_CLOEXEC, 0666) = 3
fstat(3, {st_mode=S_IFREG|0664, st_size=0, ...}) = 0
ioctl(3, TCGETS, 0x7fffa00b4fe0) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
ioctl(3, TCGETS, 0x7fffa00b4fb0) = -1 ENOTTY (Inappropriate ioctl for device)
lseek(3, 0, SEEK_CUR) = 0
lseek(3, 0, SEEK_CUR) = 0
mmap(NULL, 10000003072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f71de68b000
mmap(NULL, 10000003072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f6f8a5cc000
write(3, "11111111111111111111111111111111"..., 10000000000) = 2147479552
write(3, "11111111111111111111111111111111"..., 7852520448) = 2147479552
write(3, "11111111111111111111111111111111"..., 5705040896) = 2147479552
write(3, "11111111111111111111111111111111"..., 3557561344) = 2147479552
write(3, "11111111111111111111111111111111"..., 1410081792) = 1410081792
munmap(0x7f6f8a5cc000, 10000003072) = 0
munmap(0x7f71de68b000, 10000003072) = 0
close(3#### strace will block exactly here until write-back is completed ####) = 0
write(1, "closed\n", 7closed
) = 7
rt_sigaction(SIGINT, {sa_handler=SIG_DFL, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7f7433ab9f20}, {sa_handler=0x62ffc0, sa_mask=[], sa_flags=SA_RESTORER, sa_restorer=0x7f7433ab9f20}, 8) = 0
sigaltstack(NULL, {ss_sp=0x1c68be0, ss_flags=0, ss_size=8192}) = 0
sigaltstack({ss_sp=NULL, ss_flags=SS_DISABLE, ss_size=0}, NULL) = 0
exit_group(0) = ?
+++ exited with 0 +++
The same behaviour can be observed when simply printing and piping into a file instead of using Python file I/O, as well as when doing the same with a small equivalent C++ program printing to cout. It seems to be the actual system call that blocks.
|
That sounds reminiscent of the O_PONIES fiasco, which just recently had its 11th birthday.
Before ext4 came, ext3 had acquired a sort of a reputation for being stable in the face of power losses. It seldom broke, it seldom lost data from files. Then, ext4 added delayed allocation of data blocks, meaning that it didn't even try to write file data to disk immediately. Normally, that's not a problem as long as the data gets there at some point, and for temporary files, it might turn out that there was no need to write the data to disk at all.
But ext4 did write metadata changes, and recorded that something had changed with the file. Now, if the system crashed, the file was marked as truncated, but the writes after that weren't stored on disk (because no blocks were allocated for them). Hence, on ext4, you'd often see recently-modified files truncated to a zero length after a crash.
That, of course, was not exactly what most users wanted, but the argument was made that application programs that cared that much about their data should have called fsync(), and if they actually cared about renames, they should fsync() (or at least fdatasync()) the containing directory too. Next to no one did that, though, partly because on ext3 an fsync() synced the whole disk, possibly including large amounts of unrelated data. (Or close enough to the whole disk that the difference doesn't matter anyway.)
Now, on one hand you had ext3, which performed poorly with fsync(), and on the other ext4, which required fsync() to not lose files. Not a nice situation, considering that most application programs were even less inclined to implement filesystem-specific behavior than to perform the rigid dance of calling fsync() at just the right moments. Apparently it wasn't even easy to figure out whether a filesystem was mounted as ext3 or ext4 in the first place.
In the end, the ext4 developers made some changes to handle the most common critical-seeming cases:
Renaming a file on top of another. On a running system, this is an atomic update and is commonly used to put a new version of a file in place.
Overwriting an existing file (your case). This isn't atomic on a running system, but usually means the application wants the file replaced, not truncated. If an overwrite is botched, you'd lose the old version of the file too, so this is a bit different from creating a completely new file where a power-out would only lose the most recent data.
As far as I can remember, XFS also exhibited similar zero-length files after a crash even before ext4. I never followed that, though, so I don't know what sorts of fixes they'd have done.
See, e.g. this article on LWN, which mentions the fixes: ext4 and data loss (March 2009)
There were other writings about that at the time, of course, but I'm not sure it's useful to link to them, as it's mostly a question of pointing fingers.
| Why does closing a file wait for sync when overwriting a file, but not when creating? |
Why don't ext2/3/4 need to be defragmented? Is there no fragmentation at all?
|
Modern filesystems, particularly those designed to be efficient in multi-user and/or multi-tasking use cases, do a fairly good job of not fragmenting data until the filesystem becomes nearly full (there is no exact figure for where the "nearly full" mark is, as it depends on how large the filesystem is, the distribution of file sizes, and your access patterns; figures between 85% and 95% are commonly quoted), the pattern of file creations and writes is unusual, or the filesystem is very old and has seen a lot of "action". This includes ext2/3/4, ReiserFS, btrfs, NTFS, ZFS, and others.
There is currently no kernel- or filesystem-level way to defragment ext3 or ext4 (see http://en.wikipedia.org/wiki/Ext3#Defragmentation for a little more info), though ext4 is planned to gain online defragmentation soon.
There are user-land tools (such as http://vleu.net/shake/ and others listed in that Wikipedia article) that try to defragment individual files or sets of files by copying/rewriting them; if there is a large enough block of free space, this generally results in the file being given a contiguous run of blocks. This in no way guarantees files end up near each other, though, so if you run shake over a pair of large files you might find it results in the two files being defragmented themselves but not anywhere near each other on the disk. In a multi-user filesystem, the locality of files to each other isn't often important (it is certainly less important than fragmentation of the files themselves), as the drive heads are flipping all over the place to serve different users' needs anyway, and this drowns out the latency bonus that locality of reference between otherwise unfragmented files would give. On a mostly single-user system, however, it can give measurable benefits.
If you have a filesystem that has become badly fragmented over time and currently has a fair amount of free space, then running something like shake over all its files could have the effect you are looking for. Another method would be to copy all the data to a new filesystem, remove the original, and then copy it back again. This helps in much the same way shake does, but may be quicker for large amounts of data.
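The copy-and-rewrite idea behind tools like shake can be sketched by hand in a few lines (file names below are placeholders; the fresh copy is allocated anew, which usually comes out contiguous when enough free space is available):

```shell
workdir=$(mktemp -d) && cd "$workdir"
seq 1 100000 > bigfile        # stand-in for a fragmented file
cp -p bigfile bigfile.tmp     # rewrite into freshly allocated blocks
mv bigfile.tmp bigfile        # swap the copy into place
wc -l < bigfile               # content is unchanged: 100000 lines
```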
For small amounts of fragmentation, just don't worry about it. I know people who spend more time sat watching defragmentation progress bars than they'll ever save (due to more efficient disk access) in several lifetimes of normal operation!
| Defragging an ext partition? |
I searched but couldn't find anything - I am looking for a breakdown of the file structure of a symlink in bytes, in a ext filesystem.
I have tried creating a symlink file and then using hexdump on the symlink, but it complains that it's a directory (the link was to a folder) so it's obviously trying to dump the file/folder the link points to rather than the link itself.
|
You didn't provide additional details, so this explanation is for the moment centered on the EXT file systems common in Linux.
If you look at the "size" of a symlink as reported by e.g. ls -l, you will notice that the size is exactly the length of the target path it points to. So you can infer that the "actual" file content is just the path to the link target as text, and the interpretation as a symbolic link is stored in the filetype metadata (in particular, the S_IFLNK flag in the i_mode field of the inode the link file is attached to, where the permission bits are also stored; see this kernel documentation reference).
In order to improve performance and reduce device IO, if the symlink is shorter than 60 bytes it will be stored in the i_block field in the inode itself (see here). Since this makes a separate block access unnecessary, these links are called "fast symlinks" as opposed to symlinks pointing to longer paths, which fall back to the "traditional" method of storing the link target as text in an external data block.
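This is easy to observe from the shell; a small sketch (the target path below is made up and doesn't need to exist). Note that readlink reads the link itself, whereas open(2)-based tools like hexdump dereference it, which is why hexdump complained:

```shell
workdir=$(mktemp -d) && cd "$workdir"
ln -s /some/target/path mylink   # dangling link to a 17-byte path
stat -c %s mylink                # size of the link itself: 17
readlink mylink                  # the stored target text: /some/target/path
```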
| Actual content of a symlink file |
In the mount man page, errors=remount-ro is listed as an option for mounting FAT, but this option doesn't appear in the ext4 options section.
I know what this option means: in case of an error, remount the partition read-only. But I don't know whether it's a valid option for ext4 or just a bug.
|
It is perfectly valid for ext4, and is defined in the ext4 manpage:
errors={continue|remount-ro|panic}
Define the behavior when an error is encountered. (Either
ignore errors and just mark the filesystem erroneous and
continue, or remount the filesystem read-only, or panic and
halt the system.) The default is set in the filesystem
superblock, and can be changed using tune2fs(8).
Some versions of the mount manpage do list this option for ext4; others refer to the manpage linked above:
Mount options for ext2, ext3 and ext4
See the options section of the ext2(5), ext3(5) or ext4(5) man page
(the e2fsprogs package must be installed).
| Why do I have "errors=remount-ro" option in my ext4 partition in my Linux? |
Can we confirm the log message "recovering journal" from fsck should be interpreted as indicating the filesystem was not cleanly unmounted / shut down the last time? Or, are there other possible reasons to be aware of?
May 03 11:52:34 alan-laptop systemd-fsck[461]: /dev/mapper/alan_dell_2016-fedora: recovering journal
May 03 11:52:42 alan-laptop systemd-fsck[461]: /dev/mapper/alan_dell_2016-fedora: clean, 365666/2621440 files, 7297878/10485760 blocks
May 03 11:52:42 alan-laptop systemd[1]: Mounting /sysroot...
May 03 11:52:42 alan-laptop kernel: EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
May 03 11:52:42 alan-laptop systemd[1]: Mounted /sysroot.
Compare fsck of /home from the same boot, which shows no such message:
(ignore the -1 hour jump, it's due to "RTC time in the local time zone")
May 03 10:52:57 alan-laptop systemd[1]: Starting File System Check on /dev/mapper/alan_dell_2016-home...
May 03 10:52:57 alan-laptop systemd-fsck[743]: /dev/mapper/alan_dell_2016-home: clean, 1469608/19857408 files, 70150487/79429632 blocks
May 03 10:52:57 alan-laptop systemd[1]: Started File System Check on /dev/mapper/alan_dell_2016-home.
May 03 10:52:57 alan-laptop audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:init_t:s0 msg='unit=systemd-fsc>
May 03 10:52:57 alan-laptop systemd[1]: Mounting /home...
May 03 10:52:57 alan-laptop systemd[1]: Mounted /boot/efi.
May 03 10:52:57 alan-laptop kernel: EXT4-fs (dm-2): mounted filesystem with ordered data mode. Opts: (null)
May 03 10:52:57 alan-laptop systemd[1]: Mounted /home.
May 03 10:52:57 alan-laptop systemd[1]: Reached target Local File Systems.
Version
$ rpm -q --whatprovides $(which fsck.ext4)
e2fsprogs-1.43.8-2.fc28.x86_64
Motivation
This happened immediately after an offline update; it was most likely triggered by a PackageKit bug:
Bug 1564462 - offline update performed unclean shutdown
where it effectively uses systemctl reboot --force. I'm concerned that there's a bug in Fedora here, because systemd forced shutdown is still supposed to kill all processes and then unmount the filesystems cleanly where possible.
The above messages are from Fedora 28, systemd-238-7.fc28.1.x86_64. Fedora 27 was using a buggy version of systemd which could have failed to unmount filesystems:
systemd-shutdown[1]: Failed to parse /proc/self/mountinfo #6796
however the fix should be included in systemd 235 and above. So I'm concerned there's yet another bug lurking somewhere.
The filesystem is on LVM.
I seem to remember that shutdown is associated with a few screenfuls of repeated messages in a few seconds immediately before the screen goes black. I think they are from inside the shutdown initrd. I don't know if this represents a problem or not.
|
The “recovering journal” message is output by e2fsck_run_ext3_journal, which is only called if ext2fs_has_feature_journal_needs_recovery indicates that the journal needs recovery. This “feature” is a flag which is set by the kernel whenever a journalled Ext3/4 file system is mounted, and cleared when the file system is unmounted, when recovery is completed (when mounting an unclean file system, or remounting a file system read-only), and when freezing the file system (before taking a snapshot).
Ignoring snapshots, this means that e2fsck only prints the message when it encounters a file system which hasn’t been cleanly unmounted, so its presence is proof of an unclean unmount (and perhaps shutdown, assuming the unmount was supposed to take place during shutdown).
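You can inspect the flag yourself with dumpe2fs; a sketch on a scratch file-backed filesystem (never mounted, so the flag is absent and fsck would not print "recovering journal"):

```shell
truncate -s 512M img              # sparse scratch image
mkfs.ext4 -Fq img
dumpe2fs -h img 2>/dev/null | grep '^Filesystem features'
# 'needs_recovery' in this list means the journal must be replayed;
# its absence means the filesystem was cleanly unmounted (or never mounted)
```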
| Does "recovering journal" prove an unclean shutdown/unmount? |
I am building a disk image for an embedded system (to be placed on a 4GB SD card). I want the system to have two partitions: a 'Root' partition (200MB) and a 'Data' partition (800MB).
I create an empty 1GB file with dd.
Then I use parted to set up the partitions.
I attach them each to a loop device, then format them: ext2 for 'Root', ext4 for 'Data'. I add my root file system to the 'Root' partition and leave 'Data' empty.
Here's where the problem is. I am now stuck with a 1GB image, with only 200MB of data on it. Shouldn't I, in theory, be able to truncate the image down to say.. 201MB and still have the file system mountable? Unfortunately I have not found this to be the case.
I recall in the past having used a build environment from Freescale that created 30MB images whose partitions could make use of an entire 4GB SD card. Unfortunately, at this time, I cannot find how they were doing that.
I have read the on-disk format for the ext file system, and if there is no data in anything past the first super block (except for backup super blocks, and unused block tables) I thought I could truncate there.
Unfortunately, when I do this, the mounting system freaks out. I can then run FSCK, restore the super blocks, and block tables, and can mount it then no problem. I just don't think that should be necessary.
Perhaps a different file system could work? Any ideas?
thanks,
edit
changed 'partition' to read 'file system'. The partition is still there and doesn't change, but the file system is getting destroyed after truncating the image.
edit
I have found that when I truncate the file to a size just larger than the first set of the 'Data' partition's superblock and inode/block tables (somewhere in the data-block range), the file system becomes unmountable without doing an fsck to restore the rest of the superblocks and block/inode tables.
|
Firstly, writing a sparse image to a disk will not result in anything but the whole of the size of that image file - holes and all - covering the disk. This is because handling of sparse files is a quality of the filesystem - and a raw device (such as the one to which you write the image) has no such thing yet. A sparse file can be stored safely and securely on a medium controlled by a filesystem which understands sparse files (such as an ext4 device) but as soon as you write it out it will envelop all that you intend it to. And so what you should do is either:
Simply store it on an fs which understands sparse files until you are prepared to write it.
Make it two layers deep...
Which is to say, write out your main image to a file, create another parent image with an fs which understands sparse files, then copy your image to the parent image, and...
When it comes time to write the image, first write your parent image, then write your main image.
Here's how to do 2:
Create a 1GB sparse file...
dd bs=1kx1k seek=1k of=img </dev/null
Write two ext4 partitions to its partition table: 1 200MB, 2 800MB...
printf '%b\n\n\n\n' n '+200M\nn\n' 'w\n\c' | fdisk img
Create two ext4 filesystems on a -Partitioned loop device and put a copy of the second on the first...
sudo sh -c '
for p in "$(losetup --show -Pf img)p"* ### the for loop will iterate
do mkfs.ext4 "$p" ### over fdisks two partitions
mkdir -p ./mnt/"${p##*/}" ### and mkfs, then mount each
mount "$p" ./mnt/"${p##*/}" ### on dirs created for them
done; sync; cd ./mnt/*/ ### next we cp a sparse image
cp --sparse=always "$p" ./part2 ### of part2 onto part1
dd bs=1kx1k count=175 </dev/zero >./zero_fill ### fill out part1 w/ zeroes
sync; cd ..; ls -Rhls . ### sync, and list contents
umount */; losetup -d "${p%p*}" ### last umount, destroy
rm -rf loop*p[12]/ ' ### loop devs and mount dirs
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 204800 1k blocks and 51200 inodes
Filesystem UUID: 2f8ae02f-4422-4456-9a8b-8056a40fab32
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 210688 4k blocks and 52752 inodes
Filesystem UUID: fa14171c-f591-4067-a39a-e5d0dac1b806
Superblock backups stored on blocks:
32768, 98304, 163840
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
175+0 records in
175+0 records out
183500800 bytes (184 MB) copied, 0.365576 s, 502 MB/s
./:
total 1.0K
1.0K drwxr-xr-x 3 root root 1.0K Jul 16 20:49 loop0p1
0 drwxr-xr-x 2 root root 40 Jul 16 20:42 loop0p2
./loop0p1:
total 176M
12K drwx------ 2 root root 12K Jul 16 20:49 lost+found
79K -rw-r----- 1 root root 823M Jul 16 20:49 part2
176M -rw-r--r-- 1 root root 175M Jul 16 20:49 zero_fill
./loop0p1/lost+found:
total 0
./loop0p2:
total 0
Now that's a lot of output - mostly from mkfs.ext4 - but notice especially the ls bits at the bottom. ls -s will show the actual -size of a file on disk - and it is always displayed in the first column.
Now we can basically reduce our image to only the first partition...
fdisk -l img
Disk img: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc455ed35
Device Boot Start End Sectors Size Id Type
img1 2048 411647 409600 200M 83 Linux
img2 411648 2097151 1685504 823M 83 Linux
There fdisk tells us there are 411647 +1 512 byte sectors in the first partition of img...
dd seek=411648 of=img </dev/null
That truncates the img file to only its first partition. See?
ls -hls img
181M -rw-r--r-- 1 mikeserv mikeserv 201M Jul 16 21:37 img
...but we can still mount that partition...
sudo mount "$(sudo losetup -Pf --show img)p"*1 ./mnt
...and here are its contents...
ls -hls ./mnt
total 176M
12K drwx------ 2 root root 12K Jul 16 21:34 lost+found
79K -rw-r----- 1 root root 823M Jul 16 21:34 part2
176M -rw-r--r-- 1 root root 175M Jul 16 21:34 zero_fill
And we can append the stored image of the second partition to the first...
sudo sh -c '
dd seek=411648 if=./mnt/part2 of=img
umount ./mnt; losetup -D
mount "$(losetup -Pf --show img)p"*2 ./mnt
ls ./mnt; umount ./mnt; losetup -D'
1685504+0 records in
1685504+0 records out
862978048 bytes (863 MB) copied, 1.96805 s, 438 MB/s
lost+found
Now that has grown our img file: it's no longer sparse...
ls -hls img
1004M -rw-r--r-- 1 mikeserv mikeserv 1.0G Jul 16 21:58 img
...but removing that is as simple the second time as it was the first, of course...
dd seek=411648 of=img </dev/null
ls -hls img
181M -rw-r--r-- 1 mikeserv mikeserv 201M Jul 16 22:01 img
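As an aside, the dd seek=411648 of=img </dev/null trick can be written equivalently with truncate(1) and explicit byte arithmetic; a sketch against a stand-in file:

```shell
truncate -s 1G img                   # stand-in for the 1 GiB sparse image
truncate -s $(( 411648 * 512 )) img  # keep sectors 0..411647: table + partition 1
stat -c %s img                       # 210763776 bytes, i.e. 201M
```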
| How do I create small disk image with large partitions |
Debian on external USB SSD drive. There was some error in dmesg log file:
...[ 3.320718] EXT4-fs (sdb2): INFO: recovery required on readonly filesystem
[ 3.320721] EXT4-fs (sdb2): write access will be enabled during recovery
[ 5.366367] EXT4-fs (sdb2): orphan cleanup on readonly fs
[ 5.366375] EXT4-fs (sdb2): ext4_orphan_cleanup: deleting unreferenced inode 6072
[ 5.366426] EXT4-fs (sdb2): ext4_orphan_cleanup: deleting unreferenced inode 6071
[ 5.366442] EXT4-fs (sdb2): 2 orphan inodes deleted
[ 5.366444] EXT4-fs (sdb2): recovery complete
...
The system boots and works normally. Is it possible to repair this fully, and what is the proper way?
|
You can instruct the filesystem to perform an immediate fsck upon being mounted like so:
Method #1: Using /forcefsck
You can usually schedule a check at the next reboot like so:
$ sudo touch /forcefsck
$ sudo reboot
Method #2: Using shutdown
You can also tell the shutdown command to do so as well, via the -F switch:
$ sudo shutdown -rF now
NOTE: The first method is the most universal way to achieve this!
Method #3: Using tune2fs
You can also make use of tune2fs, which can set the parameters on the filesystem itself to force a check the next time a mount is attempted.
$ sudo tune2fs -l /dev/sda1
Mount count: 3
Maximum mount count: 25
So you have to place the "Mount count" higher than 25 with the following command:
$ sudo tune2fs -C 26 /dev/sda1
Check the value changed with tune2fs -l and then reboot!
NOTE: Of the 3 options I'd use tune2fs, given that it can force-check any filesystem, whether it's the root filesystem (/) or some other.
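Method #3 can be rehearsed safely on a file-backed filesystem image before touching a real device (no root needed; img is just a scratch file name):

```shell
truncate -s 64M img
mkfs.ext4 -Fq img
tune2fs -c 25 img >/dev/null      # set maximum mount count to 25
tune2fs -C 26 img >/dev/null      # set current mount count above it
tune2fs -l img | grep -E '^(Mount count|Maximum mount count)'
# with count > maximum, the filesystem is checked at the next mount/boot
```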
Additional notes
You'll typically see the "Maximum mount count:" and "Check interval:" parameters associated with a partition that's been formatted as ext2/3/4. Often they're configured like so:
$ tune2fs -l /dev/sda5 | grep -E "Mount count|Maximum mount|interval"
Mount count: 178
Maximum mount count: -1
Check interval: 0 (<none>)
When the parameters are set this way, the device will never perform an fsck during mounting. This is fairly typical with most distros.
There are 2 forces that drive a check: either the number of mounts or elapsed time. The "Check interval" is the time-based one. You can set it to, for example, every 2 weeks with the argument 2w. See the tune2fs man page for more info.
NOTE: Also make sure to understand that tune2fs is a filesystem command, not a device command, so it doesn't work with just any old device. Unless there's an ext2/3/4 filesystem on /dev/sda, the command tune2fs is meaningless; it has to be used against a partition that's been formatted with one of those types of filesystems.
References
Linux Force fsck on the Next Reboot or Boot Sequence
| How to repair a file system corruption? |
Recently I accidentally formatted an EXT4 partition as FAT. I got into a panic. After a long journey through a dark wood in which my hope was fading, I managed to recover my partition, and it seems OK. After sudo mke2fs -n /dev/sdx listed some superblock locations, I picked one and ran sudo e2fsck -b a_block_number /dev/sdxy and bingo! All my files and directories were put in a lost+found folder.
The first question is: are all backup superblocks the same, or is it possible for one to be more up to date than another?
The second question is: does reformatting an EXT4 partition as EXT4 overwrite the backup superblocks? (Between ourselves, I reformatted the FAT partition back to EXT4 before trying mke2fs and e2fsck.)
|
All backup superblocks are the same. They are all a copy of the superblock, and are scattered throughout the disk to provide redundancy in case a large contiguous part of the disk is corrupted.
Formatting a partition, even with the same filesystem type, clears the superblock. (It makes sense: the purpose of formatting is to create a clean slate on the partition, so all filesystem metadata is erased.) However, it does not erase the backup superblocks, since there is no need to do so (and your experience confirms this).
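You can list where the backup superblocks live without modifying anything, using mke2fs -n as in the question; a sketch on a scratch image (-n prints what would be done but writes nothing):

```shell
truncate -s 1G img
mkfs.ext4 -Fq img                   # a real filesystem to inspect
mke2fs -n -F img 2>/dev/null | grep -A1 'Superblock backups'
# prints the block numbers where the redundant copies are kept
```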
EDIT: to answer your comment questions:
How I could recover my partition with the help of e2fsck if formatting it clears the superblocks?
The first format to FAT cleared the superblock, but all the files and directories were still there, just unavailable because they were no longer referenced in the filesystem. (Inexperienced users are often surprised by the fact that after formatting a disk, 99% of the content is still there. Therefore, if you plan to sell a used disk, never do a simple format -- securely wipe all content bit by bit!)
mke2fs -n displayed the location of the backup superblock for the ext4 filesystem, which was the filesystem you had before formatting to FAT; that superblock was therefore the "correct" superblock. e2fsck -b applied the superblock found at that location. This allowed the data fragments to be recovered in /lost+found.
How formatting say EXT4 to FAT clear the superblock but not the other parts of a filesystem like inodes?
Formatting clears the superblock but not the inodes, because inodes are scattered throughout the disk; where exactly they are scattered depends on the filesystem type. For instance, space in EXT2/EXT3 filesystems is split up into blocks, grouped into block groups; inodes are stored just before the data blocks in each block group. And as I said before, formatting leaves a very large part of the disk untouched.
when are backup superblocks created? According to SUSE Linux 9 Bible by Justin Davies, "backup superblocks are created when an EXT2 or EXT3 filesystem is created.". So I expect when I reformat my partition the backup superblocks be reformatted.
No, only the main superblock is erased. The backup superblocks reside in other locations of the disk and, like other metadata (inodes...) and file data, are not wiped out by the format, as said before. They might be overwritten by the backup superblocks of the new filesystem, however; this depends on the new filesystem type.
| Difference between backup superblocks |
We have four identical Linux servers with a large (5T) hard disk partition. We have Scientific Linux with this kernel:
Linux s3.law.di.unimi.it 2.6.32-358.18.1.el6.x86_64 #1 SMP Tue Aug 27 14:23:09 CDT 2013 x86_64 x86_64 x86_64 GNU/Linux
The servers are identically configured, installed, etc. But one, and only one, of the servers is ridiculously slow when writing with ext4. If I do a
dd if=/dev/zero of=/mnt/big/analysis/test
l -tr
total 11M
-rw-r--r-- 1 root root 11M Apr 20 10:01 test
10:01:42 [s3] /mnt/big/analysis
l -tr
total 16M
-rw-r--r-- 1 root root 16M Apr 20 10:02 test
10:02:13 [s3] /mnt/big/analysis
So 5MB in 30s. All other servers write more than an order of magnitude faster.
The machines have 64GB of RAM and 32 cores, and show no I/O or CPU activity, albeit 90% of memory is filled by a large Java process doing nothing. Only one machine is writing slowly.
SMART says everything is OK
# smartctl -H /dev/sdb
smartctl 5.42 2011-10-20 r3458 [x86_64-linux-2.6.32-358.18.1.el6.x86_64] (local build)
Copyright (C) 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net
SMART Health Status: OK
hdparm reads with no problems:
# hdparm -t /dev/sdb
/dev/sdb:
Timing buffered disk reads: 1356 MB in 3.00 seconds = 451.60 MB/sec
The partition is mounted as follows:
/dev/sdb1 on /mnt/big type ext4 (rw,noatime,data=writeback)
and we set
tune2fs -o journal_data_writeback /dev/sdb1
for performance.
I tried to find anything that could explain why this specific server writes so slowly, that is, any difference in the output of a diagnostic tool, with no results.
Just to complete the picture: we started a crawl on all the servers, with the partitions essentially empty. The crawl created a number of files on the partition, and in particular a 156G file (the store of the crawl). The crawl started OK, but after a few hours we saw slowdowns (apparently, as the store was growing). When we checked, we noticed that writing to disk was getting slower, and slower, and slower.
We stopped everything--no CPU activity, no I/O--but still dd was showing the behaviour above. The other three servers, in identical conditions, same files, etc., work perfectly, both during the crawl and using dd.
Frankly, I don't even know where to look. Does this ring a bell for anyone? Which diagnostic tools might I use to understand what's happening, or which tests should I try?
Update
Besides the link posted, I thought it would be a good idea to run the same tests on two servers running the crawler. It's interesting. For instance, vmstat 10 gives
Good
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
4 0 68692 9009864 70832 29853400 0 0 15 214 9 2 11 1 88 0 0
10 0 68692 8985620 70824 29883460 0 0 48 7402 79465 62898 12 1 86 0 0
11 0 68692 8936780 70824 29928696 0 0 54 6842 81500 66665 15 1 83 0 0
10 2 68692 8867280 70840 30000104 0 0 65 36578 80252 66272 14 1 85 0 0
15 0 68692 8842960 70840 30026724 0 0 61 3667 81245 65161 14 1 85 0 0
Bad
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
13 0 8840 14015100 92972 25683108 0 0 11 104 3 9 4 1 94 0 0
2 0 8840 14015800 92984 25696108 0 0 49 16835 38619 54204 2 2 96 0 0
1 0 8840 14026004 93004 25699940 0 0 33 4152 25914 43530 0 2 98 0 0
1 0 8840 14032272 93012 25703716 0 0 30 1164 25062 43230 0 2 98 0 0
2 0 8840 14029632 93020 25709448 0 0 24 5619 23475 40080 0 2 98 0 0
And iostat -x -k 5
Good
Linux 2.6.32-358.18.1.el6.x86_64 (s0.law.di.unimi.it) 04/21/2014 _x86_64_ (64 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
10.65 0.00 1.02 0.11 0.00 88.22
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.11 3338.25 17.98 56.52 903.55 13579.19 388.79 7.32 98.30 1.23 9.18
sda 0.39 0.72 0.49 0.76 11.68 5.90 28.25 0.01 11.25 3.41 0.43
avg-cpu: %user %nice %system %iowait %steal %idle
15.86 0.00 1.33 0.03 0.00 82.78
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 1106.20 9.20 31.00 36.80 4549.60 228.18 0.41 10.09 0.39 1.58
sda 0.00 2.20 0.80 3.00 4.80 20.80 13.47 0.04 10.53 3.21 1.22
avg-cpu: %user %nice %system %iowait %steal %idle
15.42 0.00 1.23 0.01 0.00 83.34
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 1205.40 8.00 33.60 40.80 4956.00 240.23 0.39 9.43 0.33 1.38
sda 0.00 0.60 0.00 1.00 0.00 6.40 12.80 0.01 5.20 4.20 0.42
Bad
Linux 2.6.32-358.18.1.el6.x86_64 (s2.law.di.unimi.it) 04/21/2014 _x86_64_ (64 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
4.37 0.00 1.41 0.06 0.00 94.16
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.06 1599.70 13.76 38.23 699.27 6551.73 278.96 3.12 59.96 0.99 5.16
sda 0.46 3.17 1.07 0.78 22.51 15.85 41.26 0.03 16.10 2.70 0.50
avg-cpu: %user %nice %system %iowait %steal %idle
11.93 0.00 2.99 0.60 0.00 84.48
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 14885.40 13.60 141.20 54.40 60106.40 777.27 34.90 225.45 1.95 30.14
sda 0.00 0.40 0.00 0.80 0.00 4.80 12.00 0.01 7.00 3.25 0.26
avg-cpu: %user %nice %system %iowait %steal %idle
11.61 0.00 2.51 0.16 0.00 85.71
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sdb 0.00 2245.40 10.60 51.20 42.40 9187.20 298.69 3.51 56.80 2.04 12.58
sda 0.00 0.40 0.00 0.80 0.00 4.80 12.00 0.01 6.25 3.25 0.26
So (if I understand correctly the output) it appears that, yes, as it was apparent from the JVM stack traces the slow server is taking forever to do I/O. It remains to understand why :(.
I also ran a strace -c ls -R /. I didn't realize it had to run for a while, so the previous data is not very meaningful. The command was run while the crawler was running, so with massive I/O ongoing.
Good
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
99.62 14.344825 114 126027 getdents
0.25 0.036219 1 61812 12 open
0.07 0.009891 0 61802 close
0.06 0.007975 0 61801 fstat
0.01 0.000775 0 8436 write
0.00 0.000043 22 2 rt_sigaction
0.00 0.000000 0 12 read
0.00 0.000000 0 1 stat
0.00 0.000000 0 3 1 lstat
0.00 0.000000 0 33 mmap
0.00 0.000000 0 16 mprotect
0.00 0.000000 0 4 munmap
0.00 0.000000 0 15 brk
0.00 0.000000 0 1 rt_sigprocmask
0.00 0.000000 0 3 3 ioctl
0.00 0.000000 0 1 1 access
0.00 0.000000 0 3 mremap
0.00 0.000000 0 1 execve
0.00 0.000000 0 1 fcntl
0.00 0.000000 0 1 getrlimit
0.00 0.000000 0 1 statfs
0.00 0.000000 0 1 arch_prctl
0.00 0.000000 0 3 1 futex
0.00 0.000000 0 1 set_tid_address
0.00 0.000000 0 1 set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00 14.399728 319982 18 total
Bad
% time seconds usecs/call calls errors syscall
------ ----------- ----------- --------- --------- ----------------
99.81 24.210936 181 133618 getdents
0.14 0.032755 1 63183 14 open
0.03 0.006637 0 63171 close
0.02 0.005410 0 63170 fstat
0.00 0.000434 0 15002 write
0.00 0.000000 0 12 read
0.00 0.000000 0 1 stat
0.00 0.000000 0 4 1 lstat
0.00 0.000000 0 33 mmap
0.00 0.000000 0 16 mprotect
0.00 0.000000 0 4 munmap
0.00 0.000000 0 25 brk
0.00 0.000000 0 2 rt_sigaction
0.00 0.000000 0 1 rt_sigprocmask
0.00 0.000000 0 3 3 ioctl
0.00 0.000000 0 1 1 access
0.00 0.000000 0 3 mremap
0.00 0.000000 0 1 execve
0.00 0.000000 0 1 fcntl
0.00 0.000000 0 1 getrlimit
0.00 0.000000 0 1 statfs
0.00 0.000000 0 1 arch_prctl
0.00 0.000000 0 3 1 futex
0.00 0.000000 0 1 set_tid_address
0.00 0.000000 0 1 set_robust_list
------ ----------- ----------- --------- --------- ----------------
100.00 24.256172 338259 20 total
|
It was the kernel. We were using 2.6.32, which is quite old, but it is the one supported by Red Hat EL and Scientific Linux.
Today I had lunch with a friend (Giuseppe Ottaviano) who had a similar experience tuning high-performance indexing algorithms. After upgrading everything (compiler, libraries, etc.) to the newest versions, he changed the kernel to the 3.10 line and suddenly everything worked fine.
It worked for us, too. With the 3.10 kernel (courtesy of http://elrepo.org), all problems vanished.
Giuseppe suspects a pernicious interaction between NUMA and kernel paging, which leads to the kernel loading and saving the same data like crazy, slowing the machine almost to a halt.
| Large ext4 partition ridiculously slow when writing |
1,397,407,725,000 |
I need to shrink a large ext4 volume, and I would like to do it with as little downtime as possible. With the testing I've done so far it looks like it could be unmounted for the resize for up to a week. Is there any way to defragment the filesystem online ahead of time so that resizefs won't have to move so many blocks around?
Update:
It's taken some time to get to this point, moved quite a few TB of data around in preparation for the shrink, and I've been experimenting using the information in the answer below. I finally came up with the following command-line which could be useful to others in a similar situation with only minor modifications. Also note, it should be run as root for the filefrag and e4defrag commands to work properly - it won't affect the file ownership. It does also work properly on files with multiple hard-links, which I have lots of.
find -type f -print0 | xargs -0 filefrag -v | grep '\.\.[34][0-9]\{9\}.*eof' -A 1 | awk '/extents found/ {match($0, /^(.*): [0-9]+ extents found/, res); print res[1]}' | xargs -n 1 -d '\n' e4defrag
A quick explanation to make it easier for others to modify/use:
The first 'find' command builds the list of files to work with. Possibly redundant now or could be done a better way, but while testing I had other filters there and I've left it as a handy place to modify the scope of the rest of the command.
Next pass each file through 'filefrag -v' to get a list of all physical blocks used by each file.
The grep looks for the last block used by each file (line ending in 'eof'), and where that block is a 10-digit number starting with 3 or 4. In my case my new filesystem size will be 2980024320 blocks long so that does a good-enough job of only working on files that are on the area of disk to be removed. Having grep also include the following line (the '-A 1') also includes the filename in the output for the next section. This is where anyone else doing this will have to modify the command depending on the size of their filesystem. It could also probably be done in a much better way but this is working for me now and I'm lazy.
awk pulls just the filenames out of all the other garbage that grep left in the filefrag output.
And finally e4defrag is called - I don't care about the actual fragment count, but it has the side effect of moving the physical blocks around (hopefully into an early part of the drive), and it works against files with multiple hard-links with no extra effort.
If you only want to know which files it would defrag without actually moving any data around, just leave the last piece of the command off.
find -type f -print0 | xargs -0 filefrag -v | grep '\.\.[34][0-9]\{9\}.*eof' -A 1 | awk '/extents found/ {match($0, /^(.*): [0-9]+ extents found/, res); print res[1]}'
|
From what I can tell, ext4fs supports online defragmentation (it's listed under "done", but the status field is empty; the original patch is from late 2006) through e4defrag in e2fsprogs 1.42 or newer which when running on Linux 2.6.28 or newer allows you to query status for directories or possibly file systems, and at least defragment individual files. e2fsprogs as of today is at version 1.42.8.
I'm not sure whether or not this helps you, though, as what you want to do doesn't seem to be so much defragment the data as consolidate the data on disk. The two are often done together, but they are distinctly different operations.
A simple way to consolidate the data, which might work, assuming you have a reasonable amount of free space, is to copy each file to some other logical location on the same file system, and then use mv to replace the data pointed to by the inode with the new copy. It would depend heavily on exactly how the ext4 allocator works in detail, but it might be worth an attempt and it should be fairly easy to script. Just watch out for files that are hardlinked from more than one place (with a scheme like this it might be easiest to simply ignore any files with link count > 1, and let resizefs deal with those).
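A rough sketch of that copy-and-replace scheme (a hypothetical helper, not tested against ext4's allocator internals; note that `shutil.copy2` preserves mode and timestamps but not ownership unless run as root, and the hard-link skip follows the suggestion above):

```python
import os
import shutil
import tempfile

def consolidate(path):
    """Re-copy a file within the same filesystem so the allocator places
    its blocks afresh, then atomically replace the original.
    Returns False for files with more than one hard link, which are
    skipped as suggested above."""
    st = os.stat(path)
    if st.st_nlink > 1:
        return False
    dirname = os.path.dirname(path) or "."
    # Temp file in the same directory => same filesystem, same allocator.
    fd, tmp = tempfile.mkstemp(dir=dirname)
    os.close(fd)
    shutil.copy2(path, tmp)   # copy data plus mode/timestamps
    os.replace(tmp, path)     # atomic rename over the original entry
    return True
```

Whether the new copy actually lands in an earlier region of the disk is up to the allocator, which is exactly the caveat in the answer.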
| Decrease time to shrink ext4 filesystem |
1,397,407,725,000 |
I am aware that there are several questions concerning how much space to reserve on a filesystem using tune2fs -m, but some of the advice is contradictory, some seems to be relevant only to the filesystem where root is mounted, and none seems to be specifically for ext4.
The drive I'm enquiring about is a 3 TB hybrid SSD/Hard Disk with one partition formatted using ext4 and which is ONLY used for media files. Root, home, and swap are all in their own partitions on a SSD drive which I will be leaving well alone.
At the moment on the 3 TB ext4 filesystem, 5% of disk space is reserved (the default), but that's a whopping 150 GB. If safe to do so I'd like to reduce this to 1%, which would be 30 GB, and in so doing free up 120 GB. Please note that the filesystem is 92% full, 5% of the remaining is the reserved space.
The advice in this answer, suggests that setting the reserved space to 5% is sensible on nearly full ext3 filesystems to avoid fragmentation. It then states that ext4 is more efficient, explicitly stating that: "ext4's multi-block allocator is much more fragmentation resistant". It does NOT then go on to advise what percentage would be sensible for ext4.
I'd like to know whether it would be safe to reduce the reserved drive space to 1% on my 3 TB ext4 filesystem, while still maintaining adequate filesystem fragmentation protection?
If the 30 GB reserved space at 1% is not enough, then how little would be safe?
Thanks.
|
This reserve is primarily for the core system partitions so that root can still log in if a regular user manages to fill the drive and clog up the works. The space is needed for temp files, copying, and general elbow room for shell commands.
None is strictly needed on simple user data volumes. 5% on large modern drives is way too much; I use 2% just to be safe, but even that is likely overkill. (100 MiB would likely be enough for emergency mode on many systems.)
However, the secondary reason for reserved space is that it leaves gaps between the end of one file and the beginning of the next. A little space can help prevent fragmenting of frequently altered files, essentially the modifications can be kept within the same physical area as the original file. Media files are rarely modified, unless of course you are editing said media. On SSDs fragmenting doesn't matter, as all segments are accessed at equal speed.
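To put rough numbers on the figures in the question (a sketch; tune2fs actually works in whole filesystem blocks, and the sizes here use decimal TB/GB as drive vendors do):

```python
TB = 10**12  # decimal terabyte, as used for drive sizes
GB = 10**9   # decimal gigabyte

def reserved_bytes(fs_bytes, percent):
    """Approximate space withheld from non-root users by
    `tune2fs -m <percent>` on a filesystem of the given size."""
    return fs_bytes * percent // 100
```

So on the 3 TB volume, `-m 5` withholds about 150 GB and `-m 1` about 30 GB. If even 1% feels arbitrary, tune2fs also accepts an absolute reserved block count via `-r`, which lets you pin the reserve to a fixed size regardless of the filesystem's capacity.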
| tune2fs - how much space to reserve on large ext4 filesystem |
1,397,407,725,000 |
Here is the relevant line from my /etc/fstab file:
UUID=f51aa298-9ce4-4a19-a323-10df333f34f5 / ext4 data=writeback,noatime,barrier=0,errors=remount-ro,commit=100,nobh,nouser_xattr 0 1
Here is what happens when I type the command "mount":
/dev/sda1 on / type ext4 (rw,noatime,data=writeback,barrier=0,errors=remount-ro,commit=100,nobh,nouser_xattr,commit=0)
Why does it specify "commit=0" at the end? Does it mean that my commit=100 option is not used?
I am using Ubuntu 10.10, 32-bit with the latest updates.
|
Got it.
It seems the problem was with the /usr/lib/pm-utils/power.d/journal-commit file. I edited the above file as root and changed the line
JOURNAL_COMMIT_TIME_AC=${JOURNAL_COMMIT_TIME_AC:-0}
to be
JOURNAL_COMMIT_TIME_AC=${JOURNAL_COMMIT_TIME_AC:-100}
And that's all!
P.S - I have no idea why the script ignores conflicting mount options. I believe it should check for user-specified options and not override them.
| ext4 overrides my commit=100 mount option with commit=0 |
1,397,407,725,000 |
I need to enable the case insensitive filesystem feature (casefold) on ext4 of a Debian 11 server with a backported 6.1 linux kernel with the required options compiled in.
The server has a swap partition of 2GB and a big ext4 partition for the filesystem, which it also boots from. I only have ssh access as root and cannot access the physial/virtual host itself, so I don't have access to (virtual) usb sticks or cdrom media.
What is the fastest way to enable the casefold feature? tune2fs doesn't want to do it because the filesystem is mounted.
Idea: Drop the swap, install a small rescue system in it, reboot into said rescue system, change the filesystem options of the root partition, reboot into the live partition and restore the swap. For this to work however I need to prepare an extra linux system just to do the tune2fs command needed.
Is there a better way? Any rescue systems I can already use and preconfigure for the required network settings after a reboot?
|
I like your approach; it's clean in that it doesn't require modification of the data on your main system.
And, yes, if you want to run tune2fs then by a large margin the easiest solution is to run it from another running Linux; there's no real way around running it while the main file system is unmounted.
I don't think your network setup is of any significance – you know exactly what you want your system to do; preconfiguring the network to give you an SSH shell into it is going to be harder than just running tune2fs … /dev/disk/by-partuuid/… in a script that's autonomously executed (and which then moves on to do what is needed to boot your normal system).
Now, two options:
Your debian currently boots using an initrd containing an initramfs (I expect it does)
It doesn't.
In the first case, modifying that initrd generation process to just include the necessary tune2fs invocation, generate a new initrd, booting with that, is probably the easiest. Mind you, initrds are really what you want to avoid building: custom fully-fledged Linux systems (which just happen to be Linux distro's ways to initialize the system before mounting the root file system and continuing the main boot process). It's just that debian already builds these for you, anyways :)
I must admit it's been a decade (or more) since I did something like that for a debianoid Linux, so I'm not terribly much of a help on how; check out debian's (sadly seemingly a bit sparse/outdated) documentation on it, and see what you have in /etc/mkinitrd.
In the second case, your approach seems sensible.
| How to change the casefold ext4 filesystem option of the root partition, if I only have ssh access |
1,397,407,725,000 |
I have Linux installed on a Dell XPS 9343 with an Samsung PM851 SSD.
I recently read that many SSDs don't support TRIM operations.
So I'd like to check if discard option effectively works on my system.
As a first step, I tried to simply run sudo fstrim --verbose --all and it reported 41GB trimmed; this worries me, because I was expecting a really small value since I have continuous TRIM enabled (see above). In fact, if I re-run that command I get 0 bytes trimmed. Is that normal, even though I have the discard option in /etc/fstab?
PS: I tried to follow the proposed solution here but it gets stuck on the second command due to trim.test: FIBMAP unsupported.
PS2: it's a flat SSD (no LVM or RAID) with GPT and EXT4 filesystem
|
As @meuh pointed out in the comments, I needed to run the test on my EXT4 partition, whereas I had tried it on /tmp.
SOLVED!
PS: following the test result, I can confirm that the drive on my XPS 9343 (Samsung PM851 M.2 2280 256GB, firmware revision: EXT25D0Q) supports TRIM command, even if dmesg reports NCQ Send/Recv Log not supported
| How do I check TRIM? |
1,397,407,725,000 |
I wanted to create live bootable USB of gparted via unetbootin. But by mistake I specified the device of USB as a partition of external HDD rather than the USB drive. I deleted all the files that unetbootin created in that HDD partition except one named "ldlinux.sys". I failed to do that via root user also. I'm unable to delete that file. You can please see the screen shot of the file in the HDD below.
Please see below for the message I'm getting while trying to delete the file via terminal.
ravi@ravi-Aspire-5315:/media/ravi/MyPassport Linux$ ll
total 92
drwxr-xr-x 12 ravi ravi 4096 Nov 6 11:04 ./
drwxr-x---+ 5 root root 4096 Nov 8 09:28 ../
drwxrwxr-x 3 ravi ravi 4096 Jun 24 13:44 15GB_rsync/
drwxrwxr-x 3 ravi ravi 4096 Jun 24 15:13 3.5GB_rsync/
drwxrwxr-x 3 ravi ravi 4096 Jun 24 15:09 7.3GB_rsync/
drwx------ 5 ravi ravi 4096 Nov 6 10:24 asus_21.06.2014/
drwxrwxr-x 4 ravi ravi 4096 Sep 24 09:18 asus_camera_27.09.14/
drwxrwxr-x 3 ravi ravi 4096 Oct 4 15:46 Dusherra_mau/
-r--r--r-- 1 root root 32768 Nov 6 09:59 ldlinux.sys
drwx------ 2 ravi ravi 16384 Apr 24 2014 lost+found/
drwx------ 5 ravi ravi 4096 Jun 23 09:43 .Trash-1000/
drwxr-xr-x 3 ravi ravi 4096 Aug 3 12:31 ubuntu13.10_encripted_home_data/
drwxrwxr-x 3 ravi ravi 4096 Jun 24 15:15 ubuntu_home_rsync/
ravi@ravi-Aspire-5315:/media/ravi/MyPassport Linux$ sudo rm ldlinux.sys
rm: cannot remove ‘ldlinux.sys’: Operation not permitted
ravi@ravi-Aspire-5315:/media/ravi/MyPassport Linux$
Then I noticed that the file isn't having executable permission. I felt that was the reason. So, to change the permissions of the file, I used chmod but it didn't happen & the error message was thrown as below.
ravi@ravi-Aspire-5315:/media/ravi/MyPassport Linux$ sudo chmod 777 ldlinux.sys
chmod: changing permissions of ‘ldlinux.sys’: Operation not permitted
ravi@ravi-Aspire-5315:/media/ravi/MyPassport Linux$ sudo chmod 555 ldlinux.sys
chmod: changing permissions of ‘ldlinux.sys’: Operation not permitted
ravi@ravi-Aspire-5315:/media/ravi/MyPassport Linux$ sudo chmod 666 ldlinux.sys
chmod: changing permissions of ‘ldlinux.sys’: Operation not permitted
Why it's happening so & how to delete the file?
|
Could be that:
The immutable flag is set. As PM 2Ring pointed out - you can use the lsattr ldlinux.sys command and look for the 'i' flag. If this is the case, a chattr -i should remove it.
The filesystem is mounted read only (take at look at the output for the mount command)
Reference:
chattr wikipedia page
| Unable to delete file "ldlinux.sys" from a partition |
1,397,407,725,000 |
Is there any limit on the maximum depth of nested directories in the ext4 filesystem? For example, the ISO-9660 filesystem AFAIK cannot have more than 7 levels of sub-directories.
|
There isn’t any limit inherent in the file system design itself, and experimentation (thanks ilkkachu) shows that directories can be nested to a depth exceeding limits one might naïvely expect (PATH_MAX, 4096 on Linux, although that limits the length of paths passed to system calls and can be worked around with relative paths).
Part of the implementation apparently assumes that the overall path length, inside a given file system, never goes above PATH_MAX; see the directory hashing functions which allocate PATH_MAX bytes.
The only directory-related limit which seems to be checked in the file system implementation is the length of an individual path component, which is limited to 255 bytes; but that doesn’t have any bearing on the nested depth.
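The relative-path workaround mentioned above can be exercised directly; the sketch below (`nest` is our own helper, not part of any library) uses openat()-style `dir_fd` calls so that no single path handed to the kernel comes near PATH_MAX, even though the resulting tree's full path does:

```python
import os
import tempfile

def nest(depth, name="subdir"):
    """Create `depth` nested directories using relative, dir_fd-based
    calls; with a 7-byte component name, 1000 levels yields a full path
    of roughly 7000 bytes, well past Linux's PATH_MAX of 4096."""
    base = tempfile.mkdtemp()
    fd = os.open(base, os.O_RDONLY)
    made = 0
    try:
        for _ in range(depth):
            os.mkdir(name, dir_fd=fd)      # relative to the open directory
            nfd = os.open(name, os.O_RDONLY, dir_fd=fd)
            os.close(fd)
            fd = nfd
            made += 1
    finally:
        os.close(fd)
    return made
```

Note that each component still obeys the 255-byte name limit; it is only the overall path that can grow without bound this way.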
| Nested directory depth limit in ext4 |
1,397,407,725,000 |
Does anyone have documentation on ext4-rsv-conver?
$ pgrep -a -f ext4-rsv-conver
153 ext4-rsv-conver
161 ext4-rsv-conver
7451 ext4-rsv-conver
$ dpkg -S ext4-rsv-conver
dpkg-query: no path found matching pattern *ext4-rsv-conver*
I can't find anything about ext4-rsv-conver in Google.
My system is Debian 9.
|
These processes are kernel threads, used by the ext4 implementation to handle conversion work from writeback, i.e. “completed IOs that need unwritten extents handling and have transaction reserved”.
That probably doesn’t explain much, but it does mean they’re nothing to worry about. Basically the kernel ends up with work which needs to be dealt with “out of band”, and uses a work queue with dedicated threads to handle it (instead of blocking the calling process or interrupt).
| What is "ext4-rsv-conver" process? |
1,397,407,725,000 |
When rsyncing a directory to a freshly plugged-in external USB flash drive, via
rsync -av /source/ /dest/
all files get transferred (i.e. rewritten) despite no changes in the files.
Note that overwriting the files only takes place once the USB is un- and replugged. Doing the rsync command twice in a row without unplugging the drive in-between does successfully skip the whole directory contents.
Including the -u update option and explicitly adding the -t option did not change anything.
The mount point remains the same (i.e. /media/user/<UUID>, the drive is automouted by xfce, the /dev/sdxy obviously changes)
The hard drive source is ext4, while the USB is vfat with utf8 character encoding.
What could be the reason for this behaviour? Is it the change in the /dev/ name entry? How can I make rsync properly recognize file changes? My backup should take just seconds without this, while it now always takes minutes due to the large amount of data being overwritten repeatedly; nor is all that rewriting the best thing for the flash drive's life expectancy.
|
Your FAT drive can store timestamps only to two second accuracy. When you unplug and replug the drive you effectively break all the file times. See the --modify-window option for a workaround:
rsync -av --modify-window=1 /source/ /dest/
Secondly, you're never going to see fast backups with rsync like this, because when copying locally it behaves much like cp.
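A toy model of the comparison (the function names are ours, not rsync's) shows why an odd-second mtime from the ext4 source can never match what vfat stored:

```python
def fat_mtime(t):
    """vfat records modification times with 2-second resolution."""
    return t - (t % 2)

def same_mtime(src, dst, modify_window=0):
    """rsync-style quick check: timestamps match within +/- modify_window
    seconds (the default window is 0)."""
    return abs(src - dst) <= modify_window

src = 1_700_000_001        # odd-second mtime on the ext4 source
dst = fat_mtime(src)       # the best the vfat destination can record
```

With the default window the times differ by one second, so rsync re-copies the file; `--modify-window=1` absorbs the rounding and the quick check passes.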
| rsync to USB flash drive always transferring all data |
1,397,407,725,000 |
I need to detect a filesystem type from a C/C++ program using the filesystem superblock. However, I don't see much differences between superblocks for ext2 and ext4. The s_rev_level field is the same (=1), the s_minor_rev_level is the same (=0).
I could check some features from s_feature_compat (and the other feature fields) and try to find features that aren't supported by ext2. But the person formatting a partition could disable some features on purpose. So this method can detect ext4, but it can't distinguish between ext2 and an ext4 whose ext4-specific features have been disabled.
So, how to do that?
|
After looking at the code for various utilities and the kernel code for some time, it does seem that what @Hauke suggested is true - whether a filesystem is ext2/ext3/ext4 is purely defined by the options that are enabled.
From the Wikipedia page on ext4:
Backward compatibility
ext4 is backward compatible with ext3 and ext2, making it possible to mount ext3 and ext2 as ext4. This will slightly improve performance, because certain new features of ext4 can also be used with ext3 and ext2, such as the new block allocation algorithm.
ext3 is partially forward compatible with ext4. That is, ext4 can be mounted as ext3 (using "ext3" as the filesystem type when mounting). However, if the ext4 partition uses extents (a major new feature of ext4), then the ability to mount as ext3 is lost.
As most probably already know, there is similar compatibility between ext2 and ext3.
After looking at the code which blkid uses to distinguish different ext filesystems, I was able to turn an ext4 filesystem into something recognised as ext3 (and from there to ext2). You should be able to repeat this with:
truncate -s 100M testfs
mkfs.ext4 -O ^64bit,^extent,^flex_bg testfs <<<y
blkid testfs
tune2fs -O ^huge_file,^dir_nlink,^extra_isize,^mmp testfs
e2fsck testfs
tune2fs -O metadata_csum testfs
tune2fs -O ^metadata_csum testfs
blkid testfs
./e2fsprogs/misc/tune2fs -O ^has_journal testfs
blkid testfs
First blkid output is:
testfs: UUID="78f4475b-060a-445c-a5d2-0f45688cc954" SEC_TYPE="ext2" TYPE="ext4"
Second is:
testfs: UUID="78f4475b-060a-445c-a5d2-0f45688cc954" SEC_TYPE="ext2" TYPE="ext3"
And the final one:
testfs: UUID="78f4475b-060a-445c-a5d2-0f45688cc954" TYPE="ext2"
Note that I had to use a new version of e2fsprogs than was available in my distro to get the metadata_csum flag. The reason for setting, then clearing this was because I found no other way to affect the underlying EXT4_FEATURE_RO_COMPAT_GDT_CSUM flag. The underlying flag for metadata_csum (EXT4_FEATURE_RO_COMPAT_METADATA_CSUM) and EXT4_FEATURE_RO_COMPAT_GDT_CSUM are mutually exclusive. Setting metadata_csum disables EXT4_FEATURE_RO_COMPAT_GDT_CSUM, but un-setting metadata_csum does not re-enable the latter.
Conclusions
Lacking a deep knowledge of the filesystem internals, it seems either:
Journal checksumming is meant to be a defining feature of a filesystem created as ext4 that you are really not supposed to disable and that fact that I have managed this is really a bug in e2fsprogs. Or,
All ext4 features were always designed to be disabled, and disabling them does make the filesystem, to all intents and purposes, an ext3 filesystem.
Either way, a high level of compatibility between the filesystems is clearly a design goal; compare this to ReiserFS and Reiser4, where Reiser4 is a complete redesign. What really matters is whether the features present are supported by the driver that is used to mount the system. As the Wikipedia article notes, the ext4 driver can be used with ext3 and ext2 as well (in fact there is a kernel option to always use the ext4 driver and ditch the others). Disabling features just means that the earlier drivers will have no problems with the filesystem, and so there is no reason to stop them from mounting it.
Recommendations
To distinguish between the different ext filesystems in a C program, libblkid seems to be the best thing to use. It is part of util-linux and this is what the mount command uses to try to determine the filesystem type. API documentation is here.
If you have to do your own implementation of the check, then testing the same flags as libblkid seems to be the right way to go. Although notably the file linked has no mention of the EXT4_FEATURE_RO_COMPAT_METADATA_CSUM flag which appears to be tested in practice.
If you really wanted to go the whole hog, then looking at for journal checksums might be a surefire way of finding if a filesystem without these flags is (or perhaps was) ext4.
Update
It is actually somewhat easier to go in the opposite direction and promote an ext2 filesystem to ext4:
truncate -s 100M test
mkfs.ext2 test
blkid test
tune2fs -O has_journal test
blkid test
tune2fs -O huge_file test
blkid test
The three blkid outputs:
test: UUID="59dce6f5-96ed-4307-9b39-6da2ff73cb04" TYPE="ext2"
test: UUID="59dce6f5-96ed-4307-9b39-6da2ff73cb04" SEC_TYPE="ext2" TYPE="ext3"
test: UUID="59dce6f5-96ed-4307-9b39-6da2ff73cb04" SEC_TYPE="ext2" TYPE="ext4"
The fact that ext3/ext4 features can so easily by enabled on a filesystem that started out as ext2 is probably the best demonstration that the filesystem type really is defined by the features.
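As a sketch of the blkid-style classification described above: the field offsets are from the ext4 on-disk layout, but the particular set of "ext4-only" flags chosen here is illustrative rather than blkid's exact list, which checks more flags:

```python
import struct

# Field offsets inside the 1024-byte ext superblock (which itself
# starts at byte 1024 of the device/partition).
MAGIC_OFF = 0x38       # s_magic
COMPAT_OFF = 0x5C      # s_feature_compat
INCOMPAT_OFF = 0x60    # s_feature_incompat
RO_COMPAT_OFF = 0x64   # s_feature_ro_compat

EXT_MAGIC = 0xEF53
COMPAT_HAS_JOURNAL = 0x0004
# A few ext4-only feature flags (illustrative, not exhaustive):
INCOMPAT_EXT4 = 0x0040 | 0x0080 | 0x0200    # extents, 64bit, flex_bg
RO_COMPAT_EXT4 = 0x0008 | 0x0010 | 0x0020   # huge_file, gdt_csum, dir_nlink

def ext_flavour(sb: bytes) -> str:
    """Classify a raw superblock by its feature flags, blkid-style."""
    (magic,) = struct.unpack_from("<H", sb, MAGIC_OFF)
    if magic != EXT_MAGIC:
        raise ValueError("not an ext superblock")
    (compat,) = struct.unpack_from("<I", sb, COMPAT_OFF)
    (incompat,) = struct.unpack_from("<I", sb, INCOMPAT_OFF)
    (ro,) = struct.unpack_from("<I", sb, RO_COMPAT_OFF)
    if (incompat & INCOMPAT_EXT4) or (ro & RO_COMPAT_EXT4):
        return "ext4"
    if compat & COMPAT_HAS_JOURNAL:
        return "ext3"
    return "ext2"
```

This mirrors the tune2fs experiments: flipping exactly these flags moves the same on-disk filesystem between the ext2/ext3/ext4 classifications.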
| Reliable way to detect ext2 or ext3 or ext4? |
1,397,407,725,000 |
I understand that I can list the location of a filesystem's superblocks using the following commands.
Example
First get the device handle for the current directory.
$ df -h .
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/fedora_greeneggs-home 402G 146G 236G 39% /home
Then use this command to list the superblocks for handle /dev/mapper/fedora_greeneggs-home.
$ sudo dumpe2fs /dev/mapper/fedora_greeneggs-home | grep -i superblock
dumpe2fs 1.42.7 (21-Jan-2013)
Primary superblock at 0, Group descriptors at 1-26
Backup superblock at 32768, Group descriptors at 32769-32794
Backup superblock at 98304, Group descriptors at 98305-98330
Backup superblock at 163840, Group descriptors at 163841-163866
Backup superblock at 229376, Group descriptors at 229377-229402
Backup superblock at 294912, Group descriptors at 294913-294938
Backup superblock at 819200, Group descriptors at 819201-819226
Backup superblock at 884736, Group descriptors at 884737-884762
Backup superblock at 1605632, Group descriptors at 1605633-1605658
Backup superblock at 2654208, Group descriptors at 2654209-2654234
Backup superblock at 4096000, Group descriptors at 4096001-4096026
Backup superblock at 7962624, Group descriptors at 7962625-7962650
Backup superblock at 11239424, Group descriptors at 11239425-11239450
Backup superblock at 20480000, Group descriptors at 20480001-20480026
Backup superblock at 23887872, Group descriptors at 23887873-23887898
Backup superblock at 71663616, Group descriptors at 71663617-71663642
Backup superblock at 78675968, Group descriptors at 78675969-78675994
Backup superblock at 102400000, Group descriptors at 102400001-102400026
But how does one actually examine the contents of one of these superblocks?
|
I'm not sure how you can examine any one particular superblock, but you can use dumpe2fs to examine the general contents that all the superblocks share, like so:
$ sudo dumpe2fs /dev/mapper/fedora_greeneggs-home | less
Example
$ sudo dumpe2fs /dev/mapper/fedora_greeneggs-home | less
Filesystem volume name: <none>
Last mounted on: /home
Filesystem UUID: xxxxxxx-xxxx-xxxx-xxxx-88c06ecdd872
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 26722304
Block count: 106857472
Reserved block count: 5342873
Free blocks: 67134450
Free inodes: 25815736
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 998
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Sat Dec 7 20:41:58 2013
Last mount time: Sun Dec 22 21:31:01 2013
...
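One hedged addition (worth verifying against your dumpe2fs version): dumpe2fs also accepts -o superblock= and -o blocksize=, which make it read the filesystem through a specific backup superblock rather than the primary one. A sketch on a scratch image file, so no real device is touched:

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 status=none
mkfs.ext4 -q -F "$img"
# a 16 MiB image gets 1 KiB blocks and 8192 blocks per group,
# so the first backup superblock lands at block 8193:
dumpe2fs "$img" 2>/dev/null | grep -i 'superblock at'
# now examine the filesystem through that backup copy instead of the primary:
dumpe2fs -h -o superblock=8193 -o blocksize=1024 "$img" 2>/dev/null | head -n 4
from_backup=$(dumpe2fs -h -o superblock=8193 -o blocksize=1024 "$img" 2>/dev/null | grep -c 'Filesystem magic number')
rm -f "$img"
```

The same two -o options also work with e2fsck, which is the usual reason to reach for a backup superblock in the first place.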
References
Superblock Definition
| How can I dump the contents of a filesystem's superblock? |
1,397,407,725,000 |
I'm running Debian/Testing, with kernel 4.4:
# uname -a
Linux shaula 4.4.0-1-amd64 #1 SMP Debian 4.4.6-1 (2016-03-17) x86_64 GNU/Linux
So I want to use the lazytime mount option, which is why I put the following in my /etc/fstab:
# grep vg_crypt-root /etc/fstab
/dev/mapper/vg_crypt-root / ext4 lazytime,errors=remount-ro 0 1
However, now the filesystem seems to be mounted with both relatime and lazytime:
# grep vg_crypt-root /etc/mtab
/dev/mapper/vg_crypt-root / ext4 rw,lazytime,relatime,errors=remount-ro,data=ordered 0 0
How can this be?
|
Good news: it's expected.
The lazytime flag is independent of strictatime/relatime/noatime. And
the default is relatime. So when you replaced noatime with lazytime,
it's not surprising that you saw the relatime mount option being set.
-- Ted Ts'o
Unfortunately this doesn't explain what it means.
A literal reading of the manpage suggests that relatime suppresses both in-memory updates and disk writes. lazytime only suppresses disk writes (and applies to mtime as well as atime). This makes sense to me given the discussions that led to the implementation of lazytime. IOW it would be very easy to write a test for relatime. But the effect of lazytime is only visible if you look at disk writes, or test what happens with unclean shutdowns.
Personally the effect of lazytime on mtime sounds a bit odd. Maybe it's a nice optimization for systems with high uptime, but I don't know about the average desktop... And nowadays that's actually a laptop; we're not supposed to be so gung-ho about undefined or weirdly partially-defined behaviour on powerfail. It's even more special-case if you consider copy-on-write filesystems like btrfs; the "inode" is likely to be updated even when the filesize doesn't change. By contrast relatime is lovely and deterministic.
And the mtime optimization only seems to be helpful if you have writes to a large number of files which don't change their size. I'm not sure there's even a common benchmark for that. Some very non-trivial database workload, I suppose.
Seriously Ted, why didn't we get lazyatime?
| Why is EXT4 filesystem mounted with both relatime and lazytime |
1,397,407,725,000 |
Recently I installed Debian Squeeze, first using ext3 and then again using ext4 on the same machine. The automatic fsck done after a certain number of mounts is much faster using ext4 (about 1 min) than ext3 (about 5 min).
What are the reasons for this significant difference in speed? If ext4 is much faster why does the Debian installer default to using ext3?
|
That's one of the most advertised benefits of ext4 (see it mentioned in the Features on Wikipedia).
The reason? Filesystem developers worked hard to achieve this.
Here's a short summary quoted from Wikipedia:
Faster file system checking
In ext4, unallocated block groups and sections of the inode table are marked as such. This enables e2fsck to skip them entirely on a check and greatly reduces the time it takes to check a file system of the size ext4 is built to support.
| Significant difference in speed between fsck using ext3 and ext4 on Debian Squeeze |
1,397,407,725,000 |
Is it possible to change a file "Birth date" (according to the stat file "Birth" field)?
I can change the modification/access time with touch -t 200109110846 file, but can't find the corresponding option for "Birth".
|
Like the last change time, the birth time isn’t externally controllable. On file systems which support it, the birth timestamp is set when a file is created, and never changes after that.
If you want to control it, you need to change the system’s notion of the current date and time, and create a new file.
| Change file "Birth date" for ext4 files? |
1,397,407,725,000 |
I have a problem with removing empty dir, strace shows error:
rmdir("empty_dir") = -1 ENOTEMPTY (Directory not empty)
And ls -la empty_dir shows nothing. So I connected to the ext4 filesystem with debugfs and saw a hidden file inside this directory:
# ls -lia empty_dir/
total 8
44574010 drwxr-xr-x 2 2686 2681 4096 Jan 13 17:59 .
44573990 drwxr-xr-x 3 2686 2681 4096 Jan 13 18:36 ..
debugfs: ls empty_dir
44574010 (12) . 44573990 (316) ..
26808797 (3768) _-----------------------------------------------------------.jpg
Why could this happen? And any chance to solve this problem without unmounting and full checking fs?
Additional information:
The "hidden" file is just a normal jpg file and can be opened by the image viewer:
debugfs: dump empty_dir/_-----------------------------------------------------------.jpg /root/hidden_file
# file /root/hidden_file
/root/hidden_file: JPEG image data, JFIF standard 1.02
rm -rf empty_dir is not working with the same error:
unlinkat(AT_FDCWD, "empty_dir", AT_REMOVEDIR) = -1 ENOTEMPTY (Directory not empty)
find empty_dir/ -inum 26808797 shows nothing.
|
I straced ls and got more information to dig into (unimportant syscalls stripped):
open("empty_dir", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
getdents(3, /* 3 entries */, 32768) = 80
write(1, ".\n", 2.) = 2
write(1, "..\n", 3..) = 3
Hmm, we see that the getdents syscall works correctly and returned all 3 entries ('.', '..' and '_---*'), but ls wrote out only '.' and '..'. That means there is a problem in the wrapper around getdents that coreutils uses; coreutils uses glibc's readdir wrapper for getdents. To prove that getdents itself is fine, I also tested the little program from the example section of getdents' man page; it showed all the files.
Maybe we just found a bug in glibc? So I updated the glibc package to the latest version in my distro, but that didn't help. I also didn't find any related reports in bugzilla.
So let's go deeper:
# gdb ls
(gdb) break readdir
(gdb) run
Breakpoint 1, 0x00007ffff7dfa820 in readdir () from /lib64/libncom.so.4.0.1
(gdb) info symbol readdir
readdir in section .text of /lib64/libncom.so.4.0.1
Wait, what? libncom.so.4.0.1? Not libc? Yes: we are looking at a malicious shared library that replaces libc functions in order to hide malicious activity:
# LD_PRELOAD=/lib64/libc.so.6 find / > good_find
# find / > injected_find
# diff good_find injected_find
10310d10305
< /lib64/libncom.so.4.0.1
73306d73300
< /usr/bin/_-config
73508d73501
< /usr/bin/_-pud
73714d73706
< /usr/bin/_-minerd
86854d86845
< /etc/ld.so.preload
What followed: removing the rootkit's files, verifying all packages' files (rpm -Va in my case), checking auto-start scripts, preload/prelink configs and system files (find / plus rpm -qf in my case), changing the affected passwords, and finding and killing the rootkit's processes:
# for i in /proc/[1-9]*; do name=$(</proc/${i##*/}/comm); ps -p ${i##*/} > /dev/null || echo $name; done
_-minerd
Finally, a full system update and reboot, and the problem was solved. The reason the box was hacked in the first place: an IPMI interface with very old firmware that had unexpectedly become reachable from the public network.
| rmdir failed to remove empty directory |
1,397,407,725,000 |
So en route from my old laptop to a new one my old laptop's hard drive got some physical damage. badblocks reports 64 bad sectors. I had a two-month-old Ubuntu GNOME setup with a split / and /home partitions. From what I can tell, a few sectors in / were damaged, but that's not an issue. On the other hand, /home's partition gives me this annotated ddrescue log:
# Rescue Logfile. Created by GNU ddrescue version 1.17
# Command line: ddrescue -d -r -1 /dev/sdb2 home.img home.log
# current_pos current_status
0x6788008400 -
# pos size status
0x00000000 0x6788000000 +
0x6788000000 0x0000A000 -
first 10 sectors of the ext4 journal
0x678800A000 0x2378016000 +
0x8B00020000 0x00001000 -
inode table entries for /pietro (my $HOME) and a few folders within
0x8B00021000 0x00006000 +
0x8B00027000 0x00001000 -
unknown (inode table?)
0x8B00028000 0x00004000 +
0x8B0002C000 0x00001000 -
unknown (inode table?)
0x8B0002D000 0x001DC000 +
0x8B00209000 0x00001000 -
unknown (inode table?)
0x8B0020A000 0x00090000 +
0x8B0029A000 0x00001000 -
unknown (inode table?)
0x8B0029B000 0x4420E65000 +
I made the annotations with use of debugfs's icheck and testb commands; all the damaged blocks are marked used. Some superblock stats:
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 972
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
So my questions are:
Can I find out exactly what those five unknown blocks were, if not inode entries? My suspicion is that they are inode table entries, but icheck doesn't want to say. If they are, can I find out which inodes?
Can I still recover these inode table entries from the journal by hand, even though the first 10 blocks of the journal are lost?
I'd rather not do this data recovery with fsck, which would just dump all my files in /lost+found in a giant mess of flattened directory structure and duplicate files...
Thanks.
|
All right, so for the first question it turns out the debugfs stats command tells what the starting blocks for every section of a group are. In addition, I guessed that inumbers had to be consecutive and increasing, so basic addition of the offset into the inode table and the imap command gave me the first inumbers; it also confirmed my suspicion about the last bad sector, where my block group calculations indicated it was in the wrong group.
byte address block group what first inumber
0x8B00020000 145752096 4448 inode table block 0 36438017
0x8B00027000 145752103 4448 inode table block 7 36438129
0x8B0002C000 145752108 4448 inode table block 12 36438209
0x8B00209000 145752585 4448 inode table block 489 36445841
0x8B0029A000 145752730 4449 inode table block 122 36448161
Since a block is 4096 bytes and each inode table entry is 256 bytes, there are 16 inodes per block. So I now have all 80 lost inode table entries by inumber.
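The arithmetic above can be spelled out explicitly. A sketch for the first damaged block (note the table index cannot be derived from the byte address alone; it comes from the inode table start block reported by debugfs's stats command, as described above):

```shell
block_size=4096 inode_size=256
inodes_per_group=8192 blocks_per_group=32768
inodes_per_block=$((block_size / inode_size))        # 16
# first damaged byte address -> filesystem block -> block group:
addr=$((0x8B00020000))
block=$((addr / block_size))                         # 145752096
group=$((block / blocks_per_group))                  # 4448
# index of this block within the group's inode table, from debugfs 'stats':
table_index=0
first_inum=$((group * inodes_per_group + table_index * inodes_per_block + 1))
echo "block=$block group=$group first_inum=$first_inum"   # first_inum=36438017
```

Repeating this with table indices 7, 12 and 489 (and index 122 in group 4449) gives the other first inumbers in the table.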
Now let's turn to the journal. I wrote a small tool that dumps information in each block of the journal. Since the journal superblock was missing, there were two pieces of information that I needed for this that were lost:
whether the journal held 64-bit block numbers
whether the journal used version 3 checksums
Fortunately, if I forced one (or both) of these switches on, some of the descriptor blocks in the journal overflowed their blocks, proving that those flags were not set.
One awk script (fulllog.awk) later, I have a log of the form
0x0002A000 - descriptors
0x0002B000 -> block 159383670
0x0002C000 -> block 159383671
0x0002D000 -> block 0
0x0002E000 -> block 155189280
0x0002F000 -> block 195559440
0x00030000 -> block 47
0x00031000 -> block 195559643
0x00032000 -> block 195568036
0x00033000 -> block 159383672
0x0002B000 - invalid/data block
0x0002C000 - invalid/data block
0x0002D000 - invalid/data block
0x0002E000 - invalid/data block
0x0002F000 - invalid/data block
0x00030000 - invalid/data block
0x00031000 - invalid/data block
0x00032000 - invalid/data block
0x00033000 - invalid/data block
0x00034000 - commit record
commit time: 2014-12-25 16:53:13.703902604 -0500 EST
With this, another awk script (dumpallfor.awk) dumps all the blocks:
byte address block number of journaled blocks
0x8B00020000 145752096 6
0x8B00027000 145752103 10
0x8B0002C000 145752108 206
0x8B00209000 145752585 1
0x8B0029A000 145752730 0
So that last block is truly lost :( With any luck I can find out what files were there with debugfs's ncheck command.
So I have a bunch of blocks. And they all appear to differ! Now what?
I could go by the revocation records, but I can't seem to parse that structure meaningfully. I could go by the commit record timestamps, but before I try that, I want to see just how each inode table block differs. So I wrote another quick program (diff.go) to find that out.
For the most part, files that do differ differ only in timestamps, so we can just choose the file with the latest timestamps. We'll do that later. For all other files, we get this:
36438023 - size differs
36438139 - OSD1 (file version high dword) differs
36438209 - OSD1 differs
Hm, that's not good... The file with differing size will be a problem, and I have no idea what to do about the two OSD1 files. I also tried using debugfs's ncheck to see what the files were, but we don't have a match.
I then found out which block dumps have the latest timestamps for now (same repo, latest.go). The important thing to note is that I had the blocks scanned in chronological order by commit time. This is not necessarily the same as numerical order by block number; the journal is not always stored in chronologically increasing order.
As it turns out, however, the newest block (by commit time) is indeed the one with the latest timestamps!
Let's try these latest blocks and see if we can recover anything from them.
sudo dd if=BLOCKFILE of=DDRESCUEIMG bs=1 seek=BYTEOFFSET conv=notrunc
After that my home directory is back!
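For reference, the seek=BYTEOFFSET in the dd line is just the filesystem block number multiplied by the block size; for the first patched block:

```shell
block=145752096 block_size=4096
seek=$((block * block_size))
echo "$seek"    # 597000585216, i.e. 0x8B00020000, the byte address in the table
```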
Now let's find out what those three differing files were...
Inode Pathname
36438023 /pietro/.cache/gdm/session.log
36438209 /pietro/.config/liferea
36438139 /pietro/.local/share/zeitgeist/fts.index
The only important thing there is Liferea's configuration directory, but I don't think that was corrupted; it was one of the OSD1-differing ones.
And let's find out about those 16 inodes in the final block, the one that we could not recover:
Inode Pathname
36448176 /pietro/k2
36448175 /pietro/Downloads/sOMe4P7.jpg
36448174 /pietro/Downloads/picture.png
36448164 /pietro/Downloads/tumblr_nfjvg292T21s4pk45o1_1280.png
36448169 /pietro/Downloads/METROID Super Zeromission v.2.3+HARD_v2.4.zip
36448165 /pietro/Downloads/tumblr_mrfex1kuxa1sbx6kgo1_500.jpg
36448173 /pietro/Downloads/1*-vuzP4JAoPf9S6ZdHNR_Jg.jpeg
36448162 /pietro/.cache/upstart/gnome-settings-daemon.log.6.gz
36448163 /pietro/.cache/upstart/dbus.log.7.gz
36448171 /pietro/.cache/upstart/gnome-settings-daemon.log.3.gz
36448161 /pietro/.local/share/applications/Knytt Underground.desktop
36448166 /pietro/Documents/Screenshots/Screenshot from 2014-12-03 15:47:29.png
36448170 /pietro/Documents/Screenshots/Screenshot from 2014-12-03 16:51:26.png
36448172 /pietro/Documents/Screenshots/Screenshot from 2014-12-03 19:08:54.png
36448168 /pietro/Documents/transactions/premiere to operating transaction 4305747926.pdf
36448167 /pietro/Documents/transactions/transaction 4315883542.pdf
In short:
a text file with only one or two things in it, which I could get back by brute force since I know it has a date stamp and something that's also in my chat logs
some images downloaded from the internet; if I can't get the URLs back from Firefox's history then I can use photorec
a ROM hack that I can easily get on the Internet again =P
log files; no loss here
the .desktop file for a Steam game
screenshots; I can get these back with photorec assuming gnome-screenshot added the datestamp as metadata
bank account transaction records; if I can't get them from the bank I could probably use them with photorec
So not casualtyless but not a total loss, and I learned more about ext4 in the process. Thanks anyway!
UPDATE
Might as well put this out there:
NOT YET /pietro/k2
FOUND /pietro/Downloads/sOMe4P7.jpg
NOT YET /pietro/Downloads/picture.png
FOUND /pietro/Downloads/tumblr_nfjvg292T21s4pk45o1_1280.png
GOOGLEIT /pietro/Downloads/METROID Super Zeromission v.2.3+HARD_v2.4.zip
FOUND /pietro/Downloads/tumblr_mrfex1kuxa1sbx6kgo1_500.jpg
FOUND /pietro/Downloads/1*-vuzP4JAoPf9S6ZdHNR_Jg.jpeg
UNNEEDED /pietro/.cache/upstart/gnome-settings-daemon.log.6.gz
UNNEEDED /pietro/.cache/upstart/dbus.log.7.gz
UNNEEDED /pietro/.cache/upstart/gnome-settings-daemon.log.3.gz
UNNEEDED /pietro/.local/share/applications/Knytt Underground.desktop
NOT YET /pietro/Documents/Screenshots/Screenshot from 2014-12-03 15:47:29.png
NOT YET /pietro/Documents/Screenshots/Screenshot from 2014-12-03 16:51:26.png
NOT YET /pietro/Documents/Screenshots/Screenshot from 2014-12-03 19:08:54.png
NOT YET /pietro/Documents/transactions/premiere to operating transaction 4305747926.pdf
NOT YET /pietro/Documents/transactions/transaction 4315883542.pdf
And in case I'm not weird enough, the downloaded pictures were:
sOMe4P7.jpg (a parody of the Law & Order title card with "& KNUCKLES" added to it)
tumblr_nfjvg292T21s4pk45o1_1280.png (screenshot of this tweet from J. K. Rowling)
tumblr_mrfex1kuxa1sbx6kgo1_500.jpg (picture of a "Windows did not shut down successfully." error message on a billboard at what appears to be some sporting event)
1*-vuzP4JAoPf9S6ZdHNR_Jg.jpeg (this comic)
These were all shared by friends in chats.
I guess I'll keep this updated? (Not like it would make a difference...) I know I can recover everything; the only question is when =P
| Can I find out if a given ext4 block is in the inode table, and if so, can I pick it out of a journal with no header by hand? |
1,397,407,725,000 |
I'm trying to move around 4.5 million files (sizes range from 100 to 1000 bytes) from one partition to another. The total size of the folder is ~2.4 GB.
First I tried to zip the folder and move the archive to the new location. Only ~800k files could be extracted before an "out of space" error appeared.
Next I tried the mv command, which ended in the same condition.
Using rsync also resulted in the same error, with only ~800k files moved.
I checked the free disk space and it is way under the limit (the new partition has ~700 GB free and the required space is only ~2.4 GB).
I also checked the free inodes on that partition: only ~800k of the maximum possible 191 M inodes are in use. (I had formatted the partition with 'mkfs.ext4 -T small /dev/sdb3'.)
I have no idea what is going wrong here. Every time, it is only able to copy or move ~800k files.
|
I have found the reason for the error (found it on a different forum).
The error was due to the directory hashing used by ext4, which is enabled by the "dir_index" feature flag. There were too many hash collisions in my case, so I disabled it with the following command:
tune2fs -O "^dir_index" /dev/sdb3
The downside is that directory lookups on my partition are slower than before, since there is no longer an index.
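The same tune2fs invocation can be tried risk-free on a scratch image first (a sketch; the image is just a temp file, and dumpe2fs is used to confirm the feature flag is gone):

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 status=none
mkfs.ext4 -q -F "$img"                      # dir_index is on by default
tune2fs -O '^dir_index' "$img" >/dev/null 2>&1
features=$(dumpe2fs -h "$img" 2>/dev/null | grep 'Filesystem features')
echo "$features"                            # dir_index should be gone from the list
rm -f "$img"
```

An alternative that keeps the index is to spread the files over many subdirectories, so that no single directory's hash tree gets big enough to collide.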
For more information on the problem :
ext4: Mysterious “No space left on device”-errors
| Moving millions of small files results in "out of space" error |
1,397,407,725,000 |
I am trying to format an sdcard following this guide. I am able to successfully create the partition table, but attempting to format the Linux partition with mkfs yields the following output:
mke2fs 1.42.9 (4-Feb-2014)
Discarding device blocks: 4096/1900544
where it appears to hang indefinitely. I have left the process running for a while but nothing changes. If I eject the sdcard, mkfs unblocks and writes the following output to the terminal:
mke2fs 1.42.9 (4-Feb-2014)
Discarding device blocks: failed - Input/output error
Warning: could not erase sector 2: Attempt to write block to filesystem resulted in short write
warning: 512 blocks unused.
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
476064 inodes, 1900544 blocks
95026 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1946157056
58 block groups
32768 blocks per group, 32768 fragments per group
8208 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Warning: could not read block 0: Attempt to read block from filesystem resulted in short read
Warning: could not erase sector 0: Attempt to write block to filesystem resulted in short write
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: 0/58
Warning, had trouble writing out superblocks.
Why is mkfs reporting that we are "discarding" blocks and what might be causing the hangup?
EDIT
I am able to successfully create two partitions -- one at 100MB and the other 7.3GB. I then can format, and mount, the 100MB partition as FAT32 -- it's the ext4 7.3GB partition that is having this trouble.
dmesg is flooded with:
[ 9350.097112] mmc0: Got data interrupt 0x02000000 even though no data operation was in progress.
[ 9360.122946] mmc0: Timeout waiting for hardware interrupt.
[ 9360.125083] mmc_erase: erase error -110, status 0x0
[ 9360.125086] end_request: I/O error, dev mmcblk0, sector 3096576
EDIT 2
It appears the problem manifests when I am attempting to format as ext4. If I format the 7.3GB partition as FAT32, as an example, the operation succeeds.
EDIT 3
To conclude the above, interestingly: I inserted the sdcard into a BeagleBone and formatted it in exactly the same way as I had on Mint, and everything worked flawlessly. I then removed the sdcard, reinserted it into my main machine and finished copying the data over to the newly created and formatted partitions.
|
I actually suspect you are being bitten by a much-discussed ext4 corruption bug in kernels 3 and 4. Have a look at this thread:
http://bugzilla.kernel.org/show_bug.cgi?id=89621
There have been constant reports of corruption bugs with ext4 file systems, with varying setups. Lots of people complaining in forums. The bug seems to affect more people with RAID configurations.
However, they are supposedly fixed in 4.0.3.
"4.0.3 includes a fix for a critical ext4 bug that can result in major data loss."
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=785672
There are other ext4 bugs, including bugs fixed as of the 30th of November [of 2015].
https://lists.ubuntu.com/archives/foundations-bugs/2015-November/259035.html
There is also here a very interesting article talking about configuration options in ext4, and possible corruption with it with power failures.
http://www.pointsoftware.ch/en/4-ext4-vs-ext3-filesystem-and-why-delayed-allocation-is-bad/
I would test the card with a filesystem other than ext4, maybe ext3.
Those systematic bugs with ext4 are one of the reasons I am using linux-image-4.3.0-0.bpo.1-amd64 from the debian backports repository in Jessie in my server farm at work.
Your version in particular, kernel 3.13 seems to be more affected by the bug.
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1298972
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1389787
I would not rule out that some combination of configuration and hardware on your side is triggering the bug more than usual.
SD cards also go bad with wear and tear, and because of its journaling, an ext4 filesystem is not ideal for an SD card. As a curiosity, I am using a Lamobo R1 with the SD card only for booting the kernel, and an SSD disk for everything else.
http://linux-sunxi.org/Lamobo_R1
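On the literal question of what "Discarding device blocks" means (a hedged aside): mke2fs first issues a discard (TRIM/ERASE) request for the whole device, and the mmc_erase timeouts in dmesg suggest that this step is what hangs with this card reader. One thing worth trying is skipping the discard pass with -E nodiscard; the mmcblk path in the comment below is an assumption, and the command is demonstrated here on a scratch image file:

```shell
# on the real card this would be e.g.: mkfs.ext4 -E nodiscard /dev/mmcblk0p2
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 status=none
mkfs.ext4 -q -F -E nodiscard "$img"   # no discard request is sent at all
status=$?
rm -f "$img"
echo "mkfs exit status: $status"
```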
| Formatting an sdcard with mkfs hangs indefinitely |
1,397,407,725,000 |
As you can read here, the ext4 file system has an extent feature that groups blocks into extents. Each extent can cover up to 128MiB of contiguous space. In e4defrag, there are lines similar to the following:
[325842/327069]/file: 100% extents: 100 -> 10 [ OK ]
The size of the file is around 150MiB. So according to the wiki page, there should be 2 extents instead of 10.
Does anyone know why the extents are 15MiB instead of 128MiB?
Is there a tool that can check the exact extent size?
How can I change the size so it could be 128MiB?
|
I think I know how it works.
I connected another disk to my machine because it has a big, almost empty partition (~458G). I checked its free space with e2freefrag:
HISTOGRAM OF FREE EXTENT SIZES:
Extent Size Range : Free extents Free Blocks Percent
64M... 128M- : 6 146233 0.12%
128M... 256M- : 5 322555 0.27%
256M... 512M- : 3 263897 0.22%
512M... 1024M- : 6 1159100 0.98%
1G... 2G- : 228 116312183 98.40%
These are just runs of contiguous free blocks. Because the partition is almost empty there's lots of free space, so you get 228 chunks of 1-2G.
I placed a big 2.5G file on the partition, and the table above changed a little bit:
HISTOGRAM OF FREE EXTENT SIZES:
Extent Size Range : Free extents Free Blocks Percent
2M... 4M- : 5 5114 0.00%
64M... 128M- : 7 170777 0.14%
128M... 256M- : 1 64511 0.05%
256M... 512M- : 4 361579 0.31%
512M... 1024M- : 5 930749 0.79%
1G... 2G- : 227 116025495 98.16%
This doesn't say anything about the allocated extents, but it gave me some ideas. When I looked at the file with e4defrag, I saw something like this:
# e4defrag -cv file
<File>
[ext 1]: start 34816: logical 0: len 32768
[ext 2]: start 67584: logical 32768: len 30720
[ext 3]: start 100352: logical 63488: len 32768
[ext 4]: start 133120: logical 96256: len 30720
[ext 5]: start 165888: logical 126976: len 32768
[ext 6]: start 198656: logical 159744: len 30720
[ext 7]: start 231424: logical 190464: len 32768
[ext 8]: start 264192: logical 223232: len 30720
[ext 9]: start 296960: logical 253952: len 32768
[ext 10]: start 329728: logical 286720: len 32768
[ext 11]: start 362496: logical 319488: len 32768
[ext 12]: start 395264: logical 352256: len 32768
[ext 13]: start 428032: logical 385024: len 32768
[ext 14]: start 460800: logical 417792: len 32768
[ext 15]: start 493568: logical 450560: len 30720
[ext 16]: start 557056: logical 481280: len 32768
[ext 17]: start 589824: logical 514048: len 32768
[ext 18]: start 622592: logical 546816: len 32768
[ext 19]: start 655360: logical 579584: len 32768
[ext 20]: start 688128: logical 612352: len 32768
[ext 21]: start 720896: logical 645120: len 622
The number 32768 is a count of blocks (4K each), which equals 128MiB. Some extents have fewer blocks, and I don't know why, since the filesystem is empty and I would expect all the extents to have 32768 blocks.
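The conversion, spelled out for 4 KiB blocks:

```shell
blocks=32768 block_size=4096
extent_mib=$((blocks * block_size / 1024 / 1024))
echo "$extent_mib MiB"    # -> 128 MiB, the maximum extent size
tail_kib=$((622 * block_size / 1024))
echo "$tail_kib KiB"      # the final 622-block extent is only about 2.4 MiB
```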
Anyway I checked the main partition to see its free space, and there was something like this:
HISTOGRAM OF FREE EXTENT SIZES:
Extent Size Range : Free extents Free Blocks Percent
4K... 8K- : 3955 3955 0.06%
8K... 16K- : 3495 8194 0.13%
16K... 32K- : 2601 13165 0.20%
32K... 64K- : 2622 28991 0.45%
64K... 128K- : 2565 58267 0.90%
128K... 256K- : 1576 71371 1.11%
256K... 512K- : 1331 118346 1.83%
512K... 1024K- : 1058 190532 2.95%
1M... 2M- : 1202 444210 6.89%
2M... 4M- : 1211 884489 13.71%
4M... 8M- : 1249 1803998 27.97%
8M... 16M- : 622 1643226 25.48%
16M... 32M- : 198 1024999 15.89%
32M... 64M- : 16 163082 2.53%
As you can see, there are no contiguous free runs that could provide 128M (or more) of space, and that's why the wiki says extents can be "up to" 128M.
I'm still not sure why the file in question has 10 extents, though, because there are still 16 chunks of at least 32M.
| How to change the extent size in the ext4 file system? |
1,397,407,725,000 |
Usually, block device drivers report the correct size of the device and it is possible to actually use all of the "available" blocks, so the filesystem knows in advance how much it can write to such a device.
But in some special cases, as with dm-thin or dm-vdo devices, this is false. Such block devices can return an ENOSPC error at any moment if their underlying storage (which the upper-level filesystem knows nothing about) fills up.
Therefore, my question is: what happens in the following scenario? An EXT4 filesystem is mounted read/write in async mode (the default) and is doing a massive amount of writes. The disk cache (dirty memory) is involved too, so at any given moment there is a lot of data that would be written out if the user ran sync.
But suddenly, the underlying block device of that EXT4 filesystem starts to refuse any writes due to "no space left". What will the filesystem's behavior be?
Will it print errors and go read-only, aborting all writes and possibly causing data loss? If not, will it just wait for space, periodically retrying writes and refusing new ones? In that case, what happens to the huge disk cache if other processes try to allocate lots of RAM? (On Linux, dirty memory is counted as available, isn't it?)
Considering the worst case: if the disk cache was taking up most of the RAM at the moment of the ENOSPC error (because the admin set vm.dirty_ratio too high), can the kernel crash or lock up? Or will it just make all processes that want to allocate memory wait/hang? Finally, does the behavior differ across filesystems?
Thanks in advance.
|
When the block device overcommits its available capacity, as with thin provisioning, or for some other reason cannot accept more writes (for example, a full snapshot), it has to report this to whatever is writing to it. ENOSPC would make no sense in this context, so the error chosen is usually EIO (Input/output error).
UPDATE: actually LVM has configurable behavior. For Thin provisioned LV:
--errorwhenfull n (default): queues writes for up to a (configurable) 60 seconds, just as the OP considered, then errors out. Unless some automatic action is taken during those 60 seconds, chances are the result will be the same as an immediate error.
Note also that if the timeout is completely disabled:
Disabling timeouts can result in the system running out of resources,
memory exhaustion, hung tasks, and deadlocks. (The timeout applies to
all thin pools on the system.)
--errorwhenfull y: immediately returns an error
If the "user" is a filesystem, it will react to I/O error the same as if this was caused by an actual media error, possibly depending on mount options (eg, for ext4 possible options are errors={continue|remount-ro|panic}). I can't tell for sure what happens to dirty data still in cache when one of the non-panic options is chosen. One could imagine it's either left in cache or will be lost, but one should assume it will be lost anyway.
As this is a severe result, such disk space should be actively monitored and once a threshold is reached, there should be either data freed or more actual space added so the overcommitted space never gets full. Same for snapshots, especially the non-thin-provisioned kind which uses more space over time: it should be removed when not needed anymore. There are even options to auto-increase the thin-provisioned space for emergencies (when the layer providing space to the thin provisioning layer can still provide more).
further references:
Automatically extend thin pool LV
Managing free space on VDO volumes
| How does EXT4 handle sudden lack of space in the underlying storage? |
1,397,407,725,000 |
If I kill a running e4defrag, is there a risk of data loss/corruption? Is there a safe way to interrupt it?
For example, running e4defrag on a large partition (such as the root filesystem) or on a large file (such as a squashfs system image) is very slow, so sometimes e4defrag needs to be stopped before it is done, but I'm not sure whether killing it (by sending SIGINT, SIGTERM, SIGKILL, etc.) is safe.
I'm running Debian Stretch and the filesystem is ext4.
My kernel version: 4.14.13
My e2fsprogs version: 1.43.4-2
|
I haven't checked the code itself, but since e4defrag is only working on a single file at a time, it definitely can't corrupt the whole filesystem.
In any case, the actual data movement is done in the kernel in the context of a journal transaction, so it should be immune to whatever you do in userspace. It shouldn't even be able to cause a problem if you reboot in the middle.
| How to kill/terminate running e4defrag without damaging my data? |
1,397,407,725,000 |
Ext4 has a maximum filesystem size of 1EB and maximum filesize of 16TB.
However, is it possible to make the maximum file size smaller at the filesystem level? For example, I would like to disallow creating files greater than a specified value (e.g. 1MB). How can this be achieved on ext4?
If not with ext4, does any other modern filesystem support such a feature?
|
ext4 has a max_dir_size_kb mount option to limit the size of directories, but no similar option for regular files.
A process however can be prevented from creating a file bigger than a limit using limits as set by setrlimit() or the ulimit or limit builtin of some shells. Most systems will also let you set those limits system-wide, per user.
When a process exceeds that limit, it receives a SIGXFSZ signal. And when it ignores that signal, the operation that would have caused that file size to be exceeded (like a write() or truncate() system call) fails with a EFBIG error.
To move that limit to the file system, one trick you could do is use a fuse (file system in user space) file system, where the user space handler is started with that limit set. bindfs is a good candidate for that.
If you run bindfs dir dir (that is bind dir over itself), with bindfs started as (zsh syntax):
(limit filesize 1M; trap '' XFSZ; bindfs dir dir)
Then any attempt to create a file bigger than 1M in that dir will fail. bindfs forwards the EFBIG error to the process writing the file.
Note that that limit only applies to regular files; it won't stop directories from growing past that limit (for instance by creating a large number of files in them).
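A quick illustration of the process-level limit (the path and numbers here are just for demonstration; note that the unit of ulimit -f is 512-byte blocks in POSIX sh but 1024-byte blocks in bash):

```shell
# Attempt to write 4 MiB while the file-size limit is 8 blocks
# (4 KiB or 8 KiB depending on the shell's block unit).
sh -c '
  ulimit -f 8
  dd if=/dev/zero of=/tmp/capped bs=4096 count=1024 2>/dev/null
' || true                # the writer is killed by SIGXFSZ at the cap
wc -c < /tmp/capped      # at most 8192 bytes, far short of 4 MiB
```

The file stops growing exactly at the limit; only the offending write fails, not the writes before it.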
| limit the maximum size of file in ext4 filesystem |
1,397,407,725,000 |
From some foggy memories I thought I would "improve" the default settings when creating a Linux partition and increased the inode size to 1024, and also turned on -O bigalloc ("This ext4 feature enables clustered block allocation").
Now, though, I can't find any concrete benefits to these settings cited on the net, and I see that with 20% disk usage I'm already using 15% of the inodes.
So should I simply reformat the partition, or is there an upside to point to (or to use as justification)? E.g. for directories with lots of files?
|
Larger inodes are useful if you have many files with a large amount of metadata. The smallest inode size has room for classical metadata: permissions, timestamps, etc., as well as the address of a few blocks for regular files, or the target of short symbolic links. Larger inodes can store extended attributes such as access control lists and SELinux contexts. If there is not enough room for the extended attributes in the inode, they have to be stored in a separate block, which makes opening the file or reading its metadata slower.
Hence you should use a larger inode size if you're planning on having large amounts of extended attributes such as complex ACLs, or if you're using SELinux. SELinux is the primary motivation for larger inodes.
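As an illustration, the inode size is fixed at format time and can be checked afterwards. This sketch uses a throwaway filesystem image rather than a real device (the path is hypothetical):

```shell
# Create a small scratch image and format it with 512-byte inodes.
truncate -s 16M /tmp/inodes.img
mkfs.ext4 -F -q -I 512 /tmp/inodes.img
# Verify the inode size that was chosen.
tune2fs -l /tmp/inodes.img | grep 'Inode size'
```

The same -I option applies when formatting a real partition; there is no way to change the inode size after the fact without reformatting.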
| any large inode size benefits? (ext4) |
1,397,407,725,000 |
I was reading about Linux 5.2 patch note released at last year, I noticed that they started to optional support for case-insensitive names in ext4 file system.
So... what I am wondering is why the case-insensitive option (including casefold and normalization) was needed in the kernel. I found another article written by Krisman, the author of the kernel code supporting case-folding, but the claim that a case-insensitive file system allows us to resolve important bottlenecks for applications being ported from other operating systems does not really convince me, and I cannot understand how normalization and casefolding allow us to optimize our disk storage.
I'd appreciate any help!
|
case-insensitive file system allows us to resolve important bottlenecks for applications being ported from other operating systems
does not reach my heart and I cannot understand how the process of normalization and casefolding allow us to optimize our disk storage.
Wine, Samba, and Android have to provide case-insensitive filesystem semantics. If the underlying filesystem is case-sensitive, every time a case-sensitive lookup fails, Wine et al. has to scan each directory to prove there are no case-insensitive matches (e.g. if looking up /foo/bar/readme.txt fails, you have to perform a full directory listing and case-folded comparison of all files in foo/bar/* and all directories in foo/*, and /*).
There are a few problems with this:
It can get very slow with deeply nested paths (which can generate hundreds of FS calls) or directories with tens of thousands of files (i.e. storing incremental backups over SMB).
These checks introduce race conditions.
It's fundamentally unsound: if both readme.txt and README.txt exist but an application asks for README.TXT, which file is returned is undefined.
Android went so far as to emulate case-insensitivity using FUSE/wrapfs and then the in-kernel SDCardFS. However, SDCardFS just made everything faster by moving the process into kernel space†. It still had to walk the filesystem (and was thus IO bound), introduced race conditions, and was fundamentally unsound. Hence why Google funded† development of native per-directory case-insensitivity in F2FS and have since deprecated SDCardFS.
There have been multiple attempts in the past to enable case-insensitive lookups via VFS. The most recent attempt in 2018 allowed mounting a case-insensitive view of the filesystem. Ted Tso specifically cited the issues with wrapfs for adding this functionality, as it would at least be faster and (I believe) free of race conditions. However, it was still unsound (requesting README.TXT could return readme.txt or README.txt). This was rejected in favor of just adding per-directory support for case-insensitivity and is unlikely to ever make it into VFS††.
Furthermore, users expect case-insensitivity, thus any consumer-oriented operating system has to provide it. Unix couldn't support it natively because Unicode didn't exist and strings were just bags of bytes. There are plenty of valid criticisms of how case-folding was handled in the past, but Unicode provides an immutable case-fold function that works for all but a single locale (Turkic, and even then it's just two codepoints). And the filesystem b-tree is the only reasonable place to implement this behavior.
†AFAICT
††I emailed Krisman, the author of both the VFS-based case-insensitive lookups and per-directory case-insensitive support on EXT4 and F2FS.
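For reference, a sketch of how the resulting ext4 feature is enabled (shown on a scratch image file; enabling it per-directory with chattr +F requires the filesystem to be mounted, which is omitted here, and mkfs.ext4 -O casefold needs e2fsprogs 1.45 or newer):

```shell
# Format a scratch image with Unicode case-folding support.
truncate -s 16M /tmp/ci.img
mkfs.ext4 -F -q -O casefold /tmp/ci.img
# The feature shows up in the superblock's feature list.
dumpe2fs -h /tmp/ci.img 2>/dev/null | grep -o casefold
# Once mounted, individual directories opt in with: chattr +F <dir>
```

Note that the feature is opt-in per directory, so the rest of the filesystem keeps normal case-sensitive semantics.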
| Why case-insensitive option in ext4 was needed? |
1,433,980,862,000 |
I want to create an ext4 filesystem, and add some files to it, then "freeze" it so it is henceforth read-only.
I know it's possible to use the ro mount option. But is there some way to indicate in the filesystem itself that it is read-only?
I see that tune2fs has an option -o to set default mount options, but -o ro is not a valid option.
I also see that tune2fs has an option -E mount_opts. I tried -E mount_opts=ro on a loopback filesystem (Ubuntu 14.10):
dd if=/dev/zero of=ext4test bs=1M count=32
mkfs.ext4 -L test ext4test
tune2fs -E mount_opts=ro ext4test
mkdir ext4testmnt
sudo mount ext4test ext4testmnt
However, the file system is still mounted as read-write.
|
This is supported in recent kernels (4.0 and later) and, since late February 2015, in e2fsprogs (available since version 1.42.13).
With the appropriate kernel and tools, you can flag an ext4 filesystem read-only using tune2fs:
tune2fs -O read-only ext4test
and clear the flag as always with
tune2fs -O ^read-only ext4test
| Mark an ext4 filesystem as read-only |
1,433,980,862,000 |
File system: ext4
I changed the owner of files to apache: with the command:
chown -R apache: wp.localhost
Then I could not change the permissions of the directories in wp.localhost, nor of wp.localhost itself.
I used the command chmod +w wp.localhost, for example, and I do not see any permission change on it.
I also changed the group of the folders with the command below, but that did not solve the problem.
chown -R apache:users wp.localhost
Commands and permissions before and after:
#ls -ld wp.localhost
drwxr-xr-x 6 apache users 4096 Mar 28 15:26 wp.localhost/
# chmod +w wp.localhost
# ls -ld wp.localhost
drwxr-xr-x 6 apache users 4096 Mar 28 15:26 wp.localhost/
|
If you want to grant global write permission on that directory, you have to do
chmod a+w wp.localhost [1]
This is because omitting the 'who is affected' letter (u, g, o or a) implies a, but won't set bits that are set in your current umask. So, for example, if your umask was 0022, the 'write' bit is set in the 'group' and 'other' positions, and chmod will ignore it if you don't specify a explicitly.
The chmod man page is explicit about this:
If none of these ['who is affected' letters] are given, the effect is
as if a were given, but bits that are set in the umask are not
affected.
[1] Think carefully before doing this!
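The behaviour is easy to reproduce in a scratch directory (path hypothetical; assumes GNU stat):

```shell
umask 022                         # common default: masks group/other write
mkdir -p /tmp/permdemo
chmod 755 /tmp/permdemo
chmod +w /tmp/permdemo            # no 'who' letter: umask filters the bits
stat -c %a /tmp/permdemo          # still 755
chmod a+w /tmp/permdemo           # explicit 'a' bypasses the umask
stat -c %a /tmp/permdemo          # now 777
```

With umask 022 the group and other write bits are masked, so a bare +w only (re)sets the owner's write bit and the mode appears unchanged.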
| chmod does not change permissions of certain directories |
1,433,980,862,000 |
Using an ext4 filesystem, I was able to read out the creation time of a file using the approach here. As a result I am indeed provided with a table featuring the crtime (creation time) of the inode (i.e. the respective file) in question.
What confuses me, and what I could not find an answer to in man debugfs, is why it shows me two lines with crtime, which moreover do not even show the same time.
This is the output I get
[user ~] $ sudo debugfs -R "stat <274742>" /dev/sda2
debugfs 1.43.1 (08-Jun-2016)
Inode: 274742 Type: regular Mode: 0644 Flags: 0x80000
Generation: 3666549610 Version: 0x00000000:00000001
User: 1000 Group: 1000 Project: 0 Size: 0
File ACL: 0 Directory ACL: 0
Links: 0 Blockcount: 0
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x57b4c632:1e30ee34 -- Wed Aug 17 22:16:50 2016
atime: 0x57b4c4c0:afa082b0 -- Wed Aug 17 22:10:40 2016
mtime: 0x57b4c632:1e30ee34 -- Wed Aug 17 22:16:50 2016
crtime: 0x57b4c4c0:afa082b0 -- Wed Aug 17 22:10:40 2016
crtime: 0x57b4c632:(1e30ee34) -- Wed Aug 17 22:16:50 2016
Size of extra inode fields: 32
Also note that the second (and not really correct) crtime is in brackets and equals the mtime, since I obviously saved to the file twice.
|
This is the result of an editing error in the e2fsprogs patch debugfs: add support to properly set and display extended timestamps. The second crtime: line ought to be dtime:.
if (inode->i_dtime)
fprintf(out, "%scrtime: 0x%08x:(%08x) -- %s", prefix,
large_inode->i_dtime, large_inode->i_ctime_extra,
inode_time_to_string(inode->i_dtime,
large_inode->i_ctime_extra));
I submitted a bug report.
| Why does my file have multiple crtime entries? |
1,433,980,862,000 |
My partition table looks like this:
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 32505855 16251904 83 Linux
/dev/sda2 32505856 33554431 524288 83 Linux
When I went to lay down a filesystem on sda2, it threw this error:
sudo mkfs -t ext4 /dev/sda2
mke2fs 1.42.9 (4-Feb-2014)
mkfs.ext4: inode_size (128) * inodes_count (0) too big for a
filesystem with 0 blocks, specify higher inode_ratio (-i) or lower inode count (-N).
I have tried both with an extended partition and a primary partition and get the same error. I have Ubuntu 14.04 LTS. What should I do?
|
1: It doesn't have anything to do with primary/extended/logical partitions.
2: I think you meant to say "logical" partition instead of "extended".
3: mkfs thinks your partition size is 0 bytes. That is almost certainly because the kernel wasn't able to update its partition table after the repartitioning. After you edited the partition table, didn't you get a warning that a reboot is needed?
On Linux, there are two different partition tables: one on the zeroth block of the hard disk, and one in kernel memory. You can read the first with fdisk -l /dev/sda, and the second with cat /proc/partitions. These two need to be in sync, but that is not always possible. For example, you can't change the limits of a partition that is currently in use; in that case, the kernel's partition table won't change.
You can make the kernel re-read the disk's partition table with blockdev --rereadpt /dev/sda. Most partitioning tools execute this command after they write out your changed partition table to the disk.
The problem is that only newer Linux kernels are capable of re-reading the partition table of a hard disk that is in use. From this viewpoint, a disk is considered "used" if any partition on it is in use, either by a tool, as a mount point, or as an active swap partition.
And even these newer kernels aren't able to change the limits of a partition that is currently in use.
I think your root filesystem is on /dev/sda, so you need to reboot after repartitioning.
| "inode_size (128) * inodes_count (0) too big for a filesystem with 0 blocks" while creating a file system |
1,433,980,862,000 |
ext4.wiki.kernel.org makes it sound like e2fsck was simply renamed to e4fsck so that e4fsprogs and e2fsprogs could coexist without overlapping. However, there is no mention of any difference in the code of the command.
The e2fsck man page makes no mention of ext4, but does mention that it works with ext3 (i.e. ext2 with journaling turned on).
For Ubuntu, apparently e2fsck can handle ext2, 3 and 4 filesystems.
And of course there's good ol' vanilla fsck which itself makes no mention of ext4.
If I need to fsck an ext4 file system on a RHEL based system, which tool do I use? e4fsck? But if it's just a rename of e2fsck, can I just use that instead? Why does Ubuntu mention ext4 in its e2fsck man page but no one else seems to? And what about plain fsck on ext4?
EDIT:
On a Fedora 14 machine there are fsck.ext4, fsck.ext3 and fsck.ext2 in /sbin/. They all have exactly the same file size. Curiouser and curiouser.
EDIT 2:
Furthermore, when running fsck.ext4, you see that it appears to be e2fsck running. For example, I see this line when running fsck.ext4: e2fsck: aborted Tricksters!
|
e4fsprogs on RHEL5 is just a newer version of e2fsprogs. Red Hat has a policy of not upgrading to newer, binary-incompatible versions of things, so they "had to" stay on the old e2fsprogs they were using, and the solution they came up with to support ext4 was to introduce the newer version as e4fsprogs (with s/2/4/ on all of the command names). To make matters worse, ext4 support on RHEL 5 is technically a "technology preview".
I understand why they did it, but it is annoying, since you won't find e4fs* on any other distribution, including RHEL 6.
| Is there any difference between e2fsck and e4fsck on CentOS / RHEL systems? |
1,433,980,862,000 |
I have a raid array; in fact, two raid arrays which are very similar. However, one is being written to constantly (by jbd2 it seems) and the other is not. Here are the arrays:
md9 : active raid5 sdl4[4] sdk4[2] sdh4[1] sdb4[0]
11626217472 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 2/29 pages [8KB], 65536KB chunk
md8 : active raid5 sdf3[2] sdc3[1] sda3[0] sdi3[3]
11626217472 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/29 pages [0KB], 65536KB chunk
As you can see, no "checking" or anything special is going. Both arrays are 4x 4TB.
So far so good.
Both of these arrays (/dev/md8 and /dev/md9) contain data only, no root filesystem. In fact, they're rarely used by anything at all. Both have a single ext4 partition mounted with noatime and are "bcache" ready (but there is no cache volume yet):
df -h:
/dev/bcache0 11T 7.3T 3.6T 67% /mnt/raid5a
/dev/bcache1 11T 7.4T 3.5T 68% /mnt/raid5b
cat /proc/mounts:
/dev/bcache0 /mnt/raid5a ext4 rw,nosuid,nodev,noexec,noatime,data=ordered 0 0
/dev/bcache1 /mnt/raid5b ext4 rw,nosuid,nodev,noexec,noatime,data=ordered 0 0
However, iostat reports that there is constant writing going to /dev/bcache1 (and it's backing volume /dev/md9), while nothing similar is happening to the identical array /dev/md8...
Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn
md8 0.00 0.00 0.00 0 0
bcache0 0.00 0.00 0.00 0 0
md9 1.50 0.00 18.00 0 36
bcache1 1.00 0.00 12.00 0 24
md8 0.00 0.00 0.00 0 0
bcache0 0.00 0.00 0.00 0 0
md9 2.50 0.00 18.00 0 36
bcache1 2.50 0.00 18.00 0 36
This has been going on for hours.
What I tried:
Killed anything gvfs related. ps ax |grep gvfs gives zero results now. Writes keep happening.
Checked with lsof if anything is happening. It shows nothing.
Used iotop. I see a process called [jbd2/bcache1-8] that is often at the top. Nothing similar for the other array.
I tried unmounting the volume. This works without a hitch and iostat reports no further accesses (seemingly indicating that nobody is using it). Remounting it however triggers these low volume writes again immediately...
I'm very curious what could possibly be writing to this array. As I said, it only contains data, literally one folder and lost+found, which is empty...
|
Looks like I already found the culprit after typing a full question...
Even though the volume is already over a week old (vs the other array which is two weeks old), another process ext4lazyinit is still busy initializing inodes (which I even limited to a very sane 4 million, instead of the insane 4 gazillion mkfs.ext4 normally would create for such a large volume).
df -h -i:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/bcache1 4.1M 2.1K 4.1M 1% /mnt/raid5b
After remounting the volume yet again with init_itable=0, iostat shows the same writes except in a much higher volume:
md8 0.00 0.00 0.00 0 0
bcache0 0.00 0.00 0.00 0 0
md9 101.50 0.00 584.00 0 1168
bcache1 101.50 0.00 584.00 0 1168
...which seems to confirm that it is indeed still busy initializing inodes.
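If the background initialization is undesirable, it can also be avoided entirely at format time (at the cost of a slower mkfs). A sketch on a scratch image (path hypothetical; -N is the mke2fs option for an explicit inode count, as mentioned above):

```shell
# Zero the inode tables (and journal) up front, so no ext4lazyinit
# kernel thread runs after the first mount; also cap the inode count.
truncate -s 64M /tmp/lazy.img
mkfs.ext4 -F -q -N 4096 -E lazy_itable_init=0,lazy_journal_init=0 /tmp/lazy.img
tune2fs -l /tmp/lazy.img | grep 'Inode count'
```

With lazy_itable_init=0 all inode tables are written during mkfs, which is exactly the work ext4lazyinit would otherwise do in the background.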
| Constant low volume writes to raid array (jbd2), what's causing it? |
1,433,980,862,000 |
I have a folder (watch) that got filled with a lot of temporary files by mistake. I've cleared out all of those files but the folder is still 356 kB in size. In the past I've moved the folder out of the way, created a new folder with the same name, and copied all the files into it to get it back down to its former small size. Is there any way to get it back down to a small size without recreating the folder?
drwxr-xr-x 2 apache apache 4096 Nov 29 2014 details
drwxr-xr-x 2 apache apache 364544 Jan 21 17:24 watch
drwxr-xr-x 3 apache apache 4096 Jan 21 17:19 settings
watch has two small files: an .htaccess and an index.php.
I have an ext4 filesystem.
|
e2fsck supports a -D flag which seems to do what you want:
try to optimize all directories, either by reindexing them if the filesystem supports directory indexing, or by sorting and compressing directories for smaller directories, or for filesystems using traditional linear directories.
Of course, you'll need to unmount the filesystem to use fsck, meaning downtime for your server.
You'll want to use the -f option to make sure e2fsck processes the file system even if it is clean.
Testing:
# truncate -s1G a; mkfs.ext4 -q ./a; mount ./a /mnt/1
# mkdir /mnt/1/x; touch /mnt/1/x/{1..4000}
# ls -ld /mnt/1/x
drwxr-xr-x 2 root root 69632 Nov 22 12:54 /mnt/1/x/
# rm -f /mnt/1/x/*
# ls -ld /mnt/1/x
drwxr-xr-x 2 root root 69632 Nov 22 12:55 /mnt/1/x/
# umount /mnt/1
# e2fsck -f -D ./a
e2fsck 1.43.3 (04-Sep-2016)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 3A: Optimizing directories
Pass 4: Checking reference counts
Pass 5: Checking group summary information
./a: ***** FILE SYSTEM WAS MODIFIED *****
./a: 12/65536 files (0.0% non-contiguous), 12956/262144 blocks
# mount ./a /mnt/1
# ls -ld /mnt/1/x
drwxr-xr-x 2 root root 4096 Nov 22 12:55 /mnt/1/x/
| How do I reset the folder metadata size without recreating the folder? |
1,433,980,862,000 |
I imagine that adding n xattrs of length l to f files and d directories may incur costs in:
storage
path resolution time / access time ?
iteration over directories? (a recursive find over a not-yet-cached filesystem, e.g. fresh after reboot?)
I wonder what are those costs?
E.g. would tagging all files significantly impact storage and performance?
What are the critical values below which the cost is negligible, and above which it hammers the filesystem?
For such an analysis, it would obviously be nice to consider the limits on xattrs -> how many and how big xattrs we can put on different filesystems.
(Be welcome to include bits regarding other filesystems than just ext4 and btrfs if you find it handy - Thank you)
|
For ext4 (I can't speak for BtrFS), small xattrs are stored directly in the inode and do not affect path resolution or directory iteration performance.
The amount of space available for "small" xattrs depends on what size the inodes are formatted as. Newer ext4 filesystems use a default inode size of 512 bytes, older ext4 filesystems used 256 bytes, less about 192 bytes for the inode itself and xattr header. The rest can be used for xattrs, though typically there are already xattrs for SELinux and possibly others ("getfattr -d -m - -e hex /path/to/file" will dump all xattrs on an inode). Any xattrs that do not fit into this space will be stored in an external block, or if they are larger than 4KB and you have a new kernel (4.18ish or newer) they can be stored in an external inode.
It is possible to change the inode size at format time with the "mke2fs -I <size>" option to provide more space for xattrs if xattr performance is important for your workload (e.g. Samba).
| What are costs of storing xattrs on ext4, btrfs filesystems? |
1,433,980,862,000 |
Suppose I have a sparse file F on a Linux ext4 filesystem, and process P1 is writing to a disjoint 50% subset of F while P2 writes to the other 50% of F. I would like to minimize fragmentation while the file "grows". (I put "grows" in quotes because the file is pre-allocated as a sparse file, but as the blocks get written they fill in phantom blocks with actual data.)
I realize since P1 and P2 are running in parallel that one may get ahead of the other, but barring this, is it best to have P1 write blocks 1,3,5,7,... while P2 writes 2,4,6,...? Or better to have P1 write 1,2,3,4,...n/2 and P2 write n/2+1, ...., n?
|
The kernel caches the writes and lazily flushes them to disk in the background, allocating disk space as it does so in a way that minimizes fragmentation. In other words, you're overthinking things -- don't worry about it.
More specifically, when it goes to flush some dirty cache buffers, ext4 allocates enough disk space to hold all of the dirty buffers in the cache, as well as reserving additional space for further growth.
The load you are describing sounds a lot like BitTorrent. I recently downloaded the Ubuntu 11.10 ISO via BitTorrent, and checking it with filefrag shows that it is only broken into 3 fragments, which is not bad at all for a 700 MB file.
| Fragmentation and ext4 |
1,433,980,862,000 |
Is the quota approach still in use to limit disk space usage and/or arbitrate between users?
Quota works with aquota.user files in the directories concerned AND some settings in /etc/fstab with options like usrquota…
But sometimes, for journaled filesystems, these options change to usrjquota=aquota.user,jqfmt=vfsv1.
Is this abstract still correct?
https://wiki.archlinux.org/index.php/Disk_quota
I'm very surprised to see both the quota and jquota sets of options. Are they backward compatible, deprecated, replaced???
Could another approach use cgroups to limit space access? It seems not: How to set per process disk quota?
Are there other methods nowadays?
|
Is the quota approach still in use?
Yes it is. Since disks have grown in size, quotas might not be worth much to common users, but they still find use in multi-user environments, e.g. on servers. Android uses quotas on ext4 and f2fs to clear caches and control per-app disk usage. The in-kernel implementations as well as the userspace tools are up-to-date.
Quota works with aquota.user files in the concerned directories AND some settings in /etc/fstab with options like usrquota.
Linux disk quota works on a per-filesystem basis, so the aquota.user (and aquota.group) files are created in the root of the filesystem concerned. The usrquota (or usrjquota=) mount option has to be passed when mounting the filesystem. Alternatively, the quota filesystem feature has to be enabled when formatting, or later using tune2fs.
I'm very surprised to see both quota and jquota set of options
jquota is an evolution of quota. From ext4(5): "Journaled quotas have the advantage that even after a crash no quota check is required." jqfmt= specifies the quota database file format. See more details in Difference between journaled and plain quota.
Are they backward compatible, deprecated, replaced?
No, they are two different sets of mount options, neither deprecated nor replaced. The mount options are different and not compatible; either one of the two can be used. Journaled quota is only supported by version 2 quota files (vfsv0 and vfsv1), which can also be hidden files (associated with reserved inodes 3 and 4 on ext4) if the quota filesystem feature is enabled. The version 1 quota file format (vfsold) works with both. Also, upgrading to journaled quota is not very complex, so backward compatibility doesn't matter much.
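Putting the journaled options together, a typical fstab entry might look like this (the device, mount point and the use of group quotas are hypothetical):

```
# /etc/fstab
/dev/sda3  /home  ext4  defaults,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv1  0  2
```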
Could another approach use cgroups to limit space access?
No. Control groups limit resource usage (e.g. processor, RAM, disk I/O, network traffic) on a per-process basis, while files are saved on filesystems with UID/GID information. When a process accesses a file for reading or writing, the kernel enforces DAC, allowing or denying access by comparing the process's UID/GID with the filesystem UID/GID. So it's quite simple to enforce quota limits at the same time, as the filesystem always maintains total space usage on a per-UID basis (when quota is enabled).
Are there other methods nowadays?
No. Or at least not very commonly known.
| What is the most recent technique to implement quotas? |
1,433,980,862,000 |
I'm assuming that this means that if the average file stored (including directories etc) is less than 16384 bytes, it may be possible to run out of inodes before using the full storage capacity of the filesystem. However, should the files being stored consume over 16384 bytes, on average, a physical space storage limit should be reached before one would run out of inodes.
|
Yes that is about right. A couple of minor points to note are:
As far as I can see, the overhead of the filesystem itself isn't considered when calculating the number of inodes from this ratio, so the actual average file size will be slightly lower than 16384 when you account for the overhead of the superblock, inode table, etc. Each inode itself is 256 bytes by default on ext4, so if this ratio is very low, the size of the inodes themselves is substantial.
Symlinks also count as inodes, so remember that a large number will bring down the average file size.
16384 is the default inode_ratio on Linux and should suit most needs. Only change it if you have a good reason to. There are other values defined in /etc/mke2fs.conf for specific usage types. Consider whether one of these suits your needs (specify it with the -T option to mkfs.ext4) before defining your own.
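The arithmetic behind the ratio can be sketched as follows (the 100 GiB filesystem size is just an example):

```shell
# Number of inodes mkfs would create for a hypothetical 100 GiB filesystem
# at the default inode_ratio, and the space the inode table itself needs.
fs_bytes=$((100 * 1024 * 1024 * 1024))
inodes=$((fs_bytes / 16384))
echo "$inodes inodes"                      # 6553600 inodes
echo "$((inodes * 256)) bytes of table"    # ~1.6 GB at 256 bytes per inode
```

So at this ratio the inode table consumes roughly 1.5% of the filesystem, which is part of the overhead mentioned above.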
| What are the implications of using an inode_ratio of 16384 in terms of storage use on ext4? |
1,433,980,862,000 |
A directory inode isn't substantially different from that of a regular file's inode, what I comprehend from Ext4 Disk Layout is that:
Directory Entries:
Therefore, it is more accurate to say that a directory is a series of data blocks and that each block contains a linear array of directory entries.
The directory entry stores the filename together with a pointer to its inode. So if the documentation says each block contains directory entries, why does debugfs report something different, namely filenames seemingly stored in the directory's inode? This is a debugging session on an ext4-formatted flash drive:
debugfs: cat /sub
�
.
..�
spam�spam2�spam3��spam4
I don't think inode_i_block can store those filenames, I've created files with really long filenames, more than 60 bytes in size. Running cat on the inode from debugfs displayed the filenames too, so the long filenames were in the inode again!
The Contents of inode.i_block:
Depending on the type of file an inode describes, the 60 bytes of storage in inode.i_block can be used in different ways. In general, regular files and directories will use it for file block indexing information, and special files will use it for special purposes.
Also, there's no reference to the inode storing the filenames in Hash Tree Directories
section which is the newer implementation. I feel I missed something in that document.
The main question is: if a directory's inode contains filenames, what do its data blocks store?
|
Directory entries are stored both in inode.i_block and in the data blocks. When the inline_data feature is enabled, small directories can be stored inline in inode.i_block; once they outgrow that space, the entries move to regular data blocks. See "Inline Data" and "Inline Directories" in the document you linked to.
| How come that inodes of directories store filenames in ext4 filesystem? |
1,433,980,862,000 |
Since kernel 5.10 there is a new ext4 feature called fast_commit. The Arch wiki https://wiki.archlinux.org/title/ext4 says it can be enabled on an existing filesystem with:
tune2fs -O fast_commit /dev/drivepartition
but in https://lwn.net/Articles/842385/ there is:
Fast commits are activated at filesystem creation time, so users will
have to recreate their filesystems to use this feature.
So does tune2fs -O fast_commit truly enable this feature on an existing filesystem?
|
tune2fs -O fast_commit is supported since e2fsprogs 1.46.0, which was released two weeks after the LWN article was published. So the article was correct at the time of publication, and the Arch wiki is correct now.
tune2fs -O fast_commit doesn’t just set the corresponding flag, it creates all the required data structures; the required functions were added in late January 2021, and tune2fs was then updated to use them. You can run it even on a mounted system, and check with dumpe2fs that the feature was indeed enabled (look for “Fast commit length” and check that it’s non-zero).
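This can be rehearsed without touching a real disk by using a scratch image file (paths hypothetical; needs e2fsprogs 1.46+):

```shell
truncate -s 64M /tmp/fc.img
mkfs.ext4 -F -q /tmp/fc.img               # created without fast_commit
tune2fs -O fast_commit /tmp/fc.img        # enable it on the existing fs
# The feature now shows up in the superblock.
dumpe2fs -h /tmp/fc.img 2>/dev/null | grep fast_commit
```

The same tune2fs invocation works on a real partition, as described above.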
| Ext4 fast_commit feature |
1,433,980,862,000 |
First, I have created the directory that I want to mount to.
mkdir /mnt/ramdisk
Now, I could easily turn this into a ramdisk using ramfs or tmpfs via
mount -t tmpfs -o size=512m tmpfs /mnt/ramdisk
I've found a tutorial on how to create a ramdisk which breaks this syntax down as:
mount -t [TYPE] -o size=[SIZE] [FSTYPE] [MOUNTPOINT]
The tutorial indicates that I can replace [FSTYPE] with ext4 to change the FS to ext4. However, I am not convinced this is correct; I suspect the author has misjudged what changing the [FSTYPE] argument actually does.
UPDATE: For those interested, G-Man and Johan Myréen have weighed in on my speculations about [FSTYPE]. Essentially, the [FSTYPE] argument acts as a necessary (but ignored) placeholder used by mount. See this post's comments for more details.
I would like to know the proper way to create an ext4 ramdisk. That is, I want a temporary directory in memory that uses the ext4 file system. How can this be achieved?
|
I have combined an idea given to me by Ipor Sircer's answer with Stephen Kitt's suggestion of using a RAM disk block device.
First, I compiled CONFIG_BLK_DEV_RAM into my kernel. I changed the default number of RAM disks from 16 to 8 (BLK_DEV_RAM_COUNT), though that is based on preference and not necessity.
Next, I created the folder I want to mount to.
mkdir /mnt/ext4ramdisk
Finally, I formatted my RAM disk block device with ext4 and mounted it.
mkfs.ext4 /dev/ram0
mount -t ext4 /dev/ram0 /mnt/ext4ramdisk
| How can I create an ext4 ramdisk? |
1,433,980,862,000 |
I want to grow an ext4 volume on a host, but I noticed that there is no valid partition table to delete and remake:
fdisk -u /dev/vdb
/dev/vdb: device contains a valid 'ext4' signature; it is strongly recommended to wipe the device with wipefs(8) if this is unexpected, in order to avoid possible collisions
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xd2971c02.
root@host:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 20G 0 disk
`-vda1 253:1 0 20G 0 part /
vdb 253:16 0 1T 0 disk /mnt/redacted
vdc 253:32 0 64M 0 disk
If I grow the size of the underlying disk to add a few hundred GB, how am I supposed to let the OS know about the increase before resize2fs? I'm not seeing a partition table to grow in the first place.
Could I essentially just grow the disk, then create a new partition of the entire disk, write the changes, and resize2fs?
|
I'm not seeing a partition table to grow in the first place.
Because there isn't one. In general a partition table is not needed; you can format a disk to ext4 (or another filesystem) and use it directly, without partitions. It's a perfectly valid use case if you want to use the entire disk without partitioning it. Just resize the disk, reboot the VM (or disconnect and reconnect the disk), and resize the filesystem using resize2fs without the size parameter to grow it to the size of the disk.
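If you want to rehearse the sequence without touching the real disk, resize2fs also operates on regular files, so the grow-then-resize steps can be tried on a scratch image first (the file names here are just examples):

```shell
# Stand-in for the real device: on the VM you would grow the virtual
# disk, reboot (or re-attach it), then simply run `resize2fs /dev/vdb`.
truncate -s 64M scratch.img
mkfs.ext4 -q -F scratch.img    # filesystem fills the whole "device"
truncate -s 128M scratch.img   # simulate growing the underlying disk
e2fsck -f -p scratch.img       # resize2fs wants a recently checked fs
resize2fs scratch.img          # no size argument: grow to the device size
```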
| Device does not contain a recognized partition table |
1,433,980,862,000 |
We have Linux Redhat version 7.2 , with xfs file system.
from /etc/fstab
/dev/mapper/vgCLU_HDP-root                /     xfs  defaults  0 0
UUID=7de1dc5c-b605-4a6f-bdf1-f1e869f6ffb9 /boot xfs  defaults  0 0
/dev/mapper/vgCLU_HDP-var                 /var  xfs  defaults  0 0
The machines are used for hadoop clusters.
I'm wondering which file system is best for this purpose.
So which is better for machines used as a Hadoop cluster: ext4 or XFS?
|
This is addressed in this knowledge base article; the main consideration for you will be the support levels available: Ext4 is supported up to 50TB, XFS up to 500TB. For really big data, you’d probably end up looking at shared storage, which by default means GFS2 on RHEL 7, except that for Hadoop you’d use HDFS or GlusterFS.
For local storage on RHEL the default is XFS and you should generally use that unless you have specific reasons not to.
| big data + what is the right filesystem ext4 or xfs? |
1,433,980,862,000 |
This will still write to /dev/foo if there is a journal:
mount -oro /dev/foo /mnt/disk
How can I treat /dev/foo as read-only?
|
mount -o ro,noload /dev/foo /mnt/disk

The noload option tells ext4 not to load (replay) the journal at mount time, so nothing is written to the device.
| Mount ext4 read-only |
1,433,980,862,000 |
I have some files with special characters like accented letters.
They are valid names, but for some reason when they are copied across the network to a drive, maybe in another format, the name still looks the same but it is not the same.
I can copy the file back and now I have two files that appear to have the exact same name in the exact same path.
My guess is there are two different values represented the accented letter so they appear the same. Is there any way to view the hex of the name itself, not the file?
This is important because one of my synching apps is getting confused and creating duplicates.
|
Pipe the file names to od or a similar tool:
printf '%s\n' * | od -t x1 -a
$ ls
Accentué bar foo
$ printf '%s\n' * | od -t x1 -a
0000000 41 63 63 65 6e 74 75 c3 a9 0a 62 61 72 0a 66 6f
A c c e n t u C ) nl b a r nl f o
0000020 6f 0a
o nl
0000022
Many characters can have different representations, even in the same encoding; for example, in UTF-8, 0xC3 0xA9 represents é, and 0x65 0xCC 0x81 represents e followed by “combining acute accent”, which is also displayed as é. Such strings need to be normalised if they are to be compared, but even normalisation has different variants, and different operating systems can store the same string in different ways.
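As a sketch of the normalisation issue, here are the two Unicode forms of the same visible name side by side (byte values written as octal escapes so plain printf handles them):

```shell
nfc=$(printf 'Accentu\303\251')     # U+00E9: precomposed "é" (UTF-8 c3 a9)
nfd=$(printf 'Accentue\314\201')    # "e" + U+0301 combining acute (cc 81)
# They display identically but are different byte strings:
[ "$nfc" = "$nfd" ] && echo same || echo different
printf '%s' "$nfc" | od -An -t x1   # ends in ... 75 c3 a9
printf '%s' "$nfd" | od -An -t x1   # ends in ... 75 65 cc 81
```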
| View file names in hex? |
1,433,980,862,000 |
What is the difference between disabling journal on ext4 file system using:
tune2fs -O ^has_journal /dev/sda1
and using data=writeback when mounting? I thought ext4 - journal = ext2, meaning that when we remove the journal from an ext4 file system it is automatically converted to ext2 (and thus we cannot benefit from other ext4 features).
|
The two are in no way equivalent. Disabling the journal does exactly that: turns journaling off. Setting the journal mode to writeback, on the other hand, turns off certain guarantees about file data while assuring metadata consistency through journaling.
The mount(8) man page says about the data=writeback option:
Data ordering is not preserved - data may be written into the main
filesystem after its metadata has been committed to the journal. This is
rumoured to be the highest-throughput option. It guarantees internal
filesystem integrity, however it can allow old data to appear in files
after a crash and journal recovery.
Setting data=writeback may make sense in some circumstances when throughput is more important than file contents. Journaling only the metadata is a compromise that many filesystems make, but don't disable the journal entirely unless you have a very good reason.
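The asymmetry is easy to demonstrate on a scratch image file (a sketch; substitute your real, unmounted partition): removing the journal is a permanent feature change, while data=writeback is only a mount option and leaves the journal in place.

```shell
truncate -s 64M scratch.img
mkfs.ext4 -q -F scratch.img
tune2fs -l scratch.img | grep -q has_journal && echo "journal present"
tune2fs -O ^has_journal scratch.img    # permanently drop the journal
tune2fs -l scratch.img | grep -q has_journal || echo "journal removed"
# data=writeback, by contrast, is chosen at mount time:
#   mount -o data=writeback /dev/sdXN /mnt
```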
| disabling journal vs data=writeback in ext4 file system |
1,433,980,862,000 |
I saw that kernel 5.2 got handling of ext4 case-insensitivity per directory by flipping a +F bit in inode.
This EXT4 case-insensitive file-name lookup feature works on a
per-directory basis when an empty directory is enabled by flipping the
+F inode attribute.
https://www.phoronix.com/scan.php?page=news_item&px=EXT4-Case-Insensitive-Linux-5.2
But how do I do that? Does chattr handle it? It doesn't look like it on my distribution.
So how do I use this feature?
|
First you need recent enough software:
Linux kernel >= 5.2 for the kernel-side support in EXT4
userland tools: e2fsprogs >= 1.45 (eg: on Debian 10 which ships only version 1.44 this requires buster-backports). Provides among others mke2fs (alias mkfs.ext4), tune2fs and chattr.
UPDATE:
e2fsprogs >= 1.45.7 needed to allow enabling casefold using tune2fs on an unmounted filesystem after it was created without it.
e2fsprogs >= 1.46.6 needed to allow disabling casefold using tune2fs after it was enabled, and only if no directory still has the +F flag.
to also use filesystem encryption, this requires Linux kernel >= 5.13.
With this installed, the documentation from man ext4 does reflect the existence of this feature:
casefold
This ext4 feature provides file system level character encoding support for directories with the casefold (+F) flag enabled. This
feature is name-preserving on the disk, but it allows applications to
lookup for a file in the file system using an encoding equivalent
version of the file name.
The casefold feature must first be enabled as a filesystem-wide ext4 option. Sadly, I couldn't manage to enable it on an already formatted filesystem, so I'm using a sparse file created with dd if=/dev/zero of=/tmp/image.raw bs=1 count=1 seek=$((2**32-1)) to test on a newly created filesystem.
# tune2fs -O casefold /tmp/image.raw
tune2fs 1.45.3 (14-Jul-2019)
Setting filesystem feature 'casefold' not supported.
#
UPDATE: Since this commit it's possible to use tune2fs to enable casefold on an unmounted filesystem. When this answer was written this feature was not yet available:
# tune2fs -O casefold /tmp/image.raw
tune2fs 1.47.0 (5-Feb-2023)
#
So when formatting, this will enable the feature:
# mkfs.ext4 -O casefold /tmp/image.raw
or to specify an other encoding rather than default (utf8). It appears that currently there is only utf8-12.1, of which utf8 is an alias anyway:
# mkfs.ext4 -E encoding=utf8-12.1 /tmp/image.raw
You can verify what was done with tune2fs:
# tune2fs -l /tmp/image.raw |egrep 'features|encoding'
Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent 64bit flex_bg casefold sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Character encoding: utf8-12.1
Now to use the feature:
# mount -o loop /tmp/image.raw /mnt
# mkdir /mnt/caseinsensitivedir
# chattr +F /mnt/caseinsensitivedir
# touch /mnt/caseinsensitivedir/camelCaseFile
# ls /mnt/caseinsensitivedir/
camelCaseFile
# ls /mnt/caseinsensitivedir/camelcasefile
/mnt/caseinsensitivedir/camelcasefile
# mv /mnt/caseinsensitivedir/camelcasefile /mnt/caseinsensitivedir/Camelcasefile
mv: '/mnt/caseinsensitivedir/camelcasefile' and '/mnt/caseinsensitivedir/Camelcasefile' are the same file
| How to enable new in kernel 5.2 case-insensitivity for ext4 on a given directory? |
1,433,980,862,000 |
My root-partition is formatted as ext4-filesystem.
I notice that, whenever my machine crashes and I have to hard-reset it, when booting up again and the root filesystem is checked this step takes a bit (like one to two seconds) longer than when booting from a cleanly shut down system, but it is reported as "clean" (and nothing like /dev/<rootpartition> was not cleanly unmounted, check forced). The filesystem is 92% full (352 GiB).
My question: I wonder whether this is normal, safe behaviour of ext4, or a bug in the startup scripts. I know that ext4 has a much faster fsck than ext3, but I am worried that the filesystem is reported as "clean" after a system crash.
When I run e2fsck -f manually on that partition, the check takes about as long as on an ext2/ext3 filesystem. Being worried, I tuned my filesystem to be checked at every boot (tune2fs -c 1), which results in a full check, taking as long as e2fsck -f, on every boot.
Edit, just to clarify:
After a non-clean reset, fsck usually replays journal entries on /var, which is reiserfs; on /boot, which is ext2, fsck runs, displays a progress bar, and reports "clean" afterwards. Only on the root filesystem do no "check forced" message and no fsck progress appear, even though they appear for the other file systems when those turn out to be clean. That is the worrying difference!
|
The fsck already takes place within the initrd/initramfs (after an unclean shutdown this stage takes several seconds longer, with a lot of disk activity, while the journal is apparently replayed), and thus, by the time the normal, more verbose file system checks are run from the main system, the filesystem is already clean.
| ext4 reported as clean by fsck after hard reset: Is that normal? |
1,433,980,862,000 |
I would like to reduce the size of an ext4 partition from my disk and I would like to know if it is possible that my files become corrupted during the operation ? I learn that ext4 file system use large extents for each file, so is it possible that a file is located at the end of the partition and become corrupted/deleted during the process ?
|
Yes, it is safe
As long as the process is not interrupted by, e.g., a power loss, your data will be fine. This is what resize2fs is made for. It will move data around so nothing is lost, and it will warn you if you attempt something potentially harmful. I have used resize2fs numerous times for offline shrinking and never experienced any problems (except human error).
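If you want extra reassurance, the shrink can be rehearsed on a throwaway image file first, since resize2fs treats a regular file like a device (the sizes here are arbitrary examples):

```shell
truncate -s 128M demo.img
mkfs.ext4 -q -F demo.img
e2fsck -f -p demo.img    # resize2fs insists on a recently checked filesystem
resize2fs demo.img 64M   # shrink; data is moved out of the trimmed region
```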
| Is it safe to resize partition in ext4? |
1,433,980,862,000 |
“Multiply claimed blocks” is an error reported by fsck when blocks appear to belong to more than one file. This causes data corruption since both files change when one of the files are written.
But what can be the original causes of multiply claimed blocks? How are they created and how can I avoid them?
|
As stated very early on by Theodore Ts'o himself, there can be two immediate reasons for “multiply claimed blocks” to be reported by fsck:
One is that one or more blocks in the inode table get written to the
wrong place, overwriting another block(s) in the inode table.
This is most often triggered by some kernel bug. (Ts'o is able to describe some easily recognizable patterns: not random, unlike whatever spurious corruption caused from outside would generate.)
This has occurred, exceptionally, in the early days of new features for the EXT family of filesystems, mainly because of rare race conditions:
with bigalloc
delayed allocation,
more recently, as pointed out by frostschutz in the OP's comments, the fast_commit feature.
The second case is one where the block allocation bitmap gets
corrupted, such that some blocks which are in use are marked as free,
and then the file system is remounted and files are written to the
file system, such that the blocks are reallocated for new files.
These appear much more at random, following some corruption whose root cause is not likely to be a kernel bug.
This includes unclean shutdowns, badly written applications, mount options that make no sense for the hardware environment, and miscellaneous memory and other hardware faults.
Of course, one should not forget the possible responsibility of fsck itself, producing erroneous reports or even being the root cause of the problem when badly trying to fix some other file system inconsistency; such cases have actually occurred.
How can you avoid them? Well, from the above, you can only expect to limit the probability of their occurrence:
Stay low-tech ;-) Avoid enabling brand-new features as soon as they become available,
Use ECC memory and reliable storage devices,
Fine-tune your filesystem options (offered at mkfs time) and select mount options wisely (in coherence with the environment),
Run all untrusted software sandboxed.
Ultimately, do as Ts'o does: in case of a crash, run e2croncheck:
What I'm actually doing right now is after every crash, I'm rebooting,
logging in, and running e2croncheck right after I log in. This
allows me to notice any potential file system corruptions before it
gets nasty … E2croncheck is much more convenient, since I can be doing
other things while the e2fsck is running in one terminal window.
| What can cause “multiply claimed blocks” on an ext4 drive? |
1,433,980,862,000 |
My VirtualBox filesystem looks like:
# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda2 29799396 5467616 22795012 20% /
devtmpfs 1929980 0 1929980 0% /dev
tmpfs 1940308 12 1940296 1% /dev/shm
tmpfs 1940308 8712 1931596 1% /run
tmpfs 1940308 0 1940308 0% /sys/fs/cgroup
/dev/sdb 31441920 1124928 30316992 4% /srv/node/d1
/dev/sdc 31441920 49612 31392308 1% /srv/node/d2
/dev/sdd 31441920 34252 31407668 1% /srv/node/d3
/dev/sda1 999320 253564 676944 28% /boot
tmpfs 388064 0 388064 0% /run/user/0
Disks /dev/sdb, /dev/sdc, /dev/sdd are VDI data disks. I removed some data from them (not everything) and would like to use zerofree to compress them afterwards.
Looks like I can't use zerofree on those disks. Here is an execution:
# zerofree -v /dev/sdb
zerofree: failed to open filesystem /dev/sdb
Is it possible to use zerofree on such disks? If not, is there any alternative solution? I need to keep the existing data on those disks, but use zerofree (or anything else) to fill removed data with zeros.
|
I didn't find an answer for how to use zerofree on such disks, but I found an alternative solution that works well.
Mount your disk somewhere (in my case 3 disks are mounted to locations: /srv/node/d1, /srv/node/d2, /srv/node/d3).
Enter the directory where your disk is mounted (cd /srv/node/d1).
Perform the command: dd if=/dev/zero of=zerofillfile bs=1M
Remove the created file: rm -f zerofillfile
Perform the above operations for all disks.
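The steps above can be wrapped in a small helper; this is a sketch, and the optional MiB limit is only there so you can try it on a scratch directory instead of filling a whole disk (omit it for real use):

```shell
zerofill() {
    # $1 = a directory on the target filesystem
    # $2 = optional limit in MiB; omit it to fill all free space
    if [ -n "${2:-}" ]; then
        dd if=/dev/zero of="$1/zerofillfile" bs=1M count="$2" 2>/dev/null || true
    else
        # dd exits non-zero once the disk is full -- expected here
        dd if=/dev/zero of="$1/zerofillfile" bs=1M 2>/dev/null || true
    fi
    sync
    rm -f "$1/zerofillfile"
}

# real use: zerofill /srv/node/d1; zerofill /srv/node/d2; zerofill /srv/node/d3
```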
P.S. Not related to this question, but for VirtualBox disk compaction, run the following command after performing the above steps:
VBoxManage modifyhd --compact /path/to/my/disks/disk1.vdi
| How to use zerofree on a whole disk? |
1,433,980,862,000 |
I have replaced my hard drive with an SSD and have installed Fedora on it exactly the same as my HDD. I'm attempting to read the data from the hard drive, but since both of the LVM partitions have partitions named fedora-home I can't mount it and it causes this error:
mount: /media: wrong fs type, bad option, bad superblock on /dev/mapper/fedora-home, missing codepage or helper program, or other error.
|
LVM requires each VG / LV to have its own unique name. It will refuse to activate duplicate names. If these are coming from separate installs, they'll each have their own unique VG UUID as shown in vgdisplay output.
Using this UUID you can rename one of them...
vgrename $VGUUID homburg
...and that should resolve the problem.
What the linked answer seems to be discussing is an even more problematic case, when a VG has been cloned outright so UUIDs of all layers (from partition through PV, VG, LV, down to the filesystem) are identical and so you have to re-generate them ALL.
However that does not seem to be your situation. If it's coming from separate installs, your UUIDs are fine, only a clash of regular names.
| How to mount lvm partitions with duplicate names |
1,433,980,862,000 |
summary
Suppose one is setting up an external drive to be a "write-once archive": one intends to reformat it, copy some files that will (hopefully) never be updated, then set it aside until I need to read something (which could be a long while or never) from the archive from another linux box. I also want to be able to get as much filespace as possible onto the archive; i.e., I want the filesystem to consume as little freespace as possible for its own purposes.
specific question 1: which filesystem would be better for this usecase: ext2, or ext4 without journaling?
Since I've never done the latter before (I usually do this sort of thing with GParted), just to be sure:
specific question 2: is "the way" to install journal-less ext4 mke2fs -t ext4 -O ^has_journal /dev/whatever ?
general question 3: is there a better filesystem for this usecase? or Something Completely Different?
details
I've got a buncha files from old projects on dead boxes (which will therefore never be updated) saved on various external drives. Collectively size(files) ~= 250 GB. That's too big for DVDs (i.e., would require too many--unless I'm missing something), and I don't have a tape drive. Hence I'm setting up an old USB2 HFS external drive to be their archive. I'd prefer to use a "real Linux" filesystem, but would also prefer a filesystem that
consumes minimum space on the archive drive (since it's just about barely big enough to hold what I want to put on it.
will be readable from whatever (presumably Linux) box I'll be using in future.
I had planned to do the following sequence with GParted: [delete old partitions, create single new partition, create ext2 filesystem, relabel]. However, I read here that
recent Linux kernels support a journal-less mode of ext4
which provides benefits not found with ext2
and noted the following text in man mkfs.ext4
"mke2fs -t ext3 -O ^has_journal /dev/hdXX"
will create a filesystem that does not have a journal
So I'd like to know
Which filesystem would be better for this usecase: ext2, or ext4 without journaling?
Presuming I go ext4-minus-journal, is the commandline to install it mke2fs -t ext4 -O ^has_journal /dev/whatever ?
Is there another, even-better filesystem for this usecase?
|
I don't agree with the squashfs recommendations. You don't usually write a squashfs to a raw block device; think of it as an easily-readable tar archive. That means you would still need an underlying filesystem.
ext2 has several severe limitations that limit its usefulness today; I would therefore recommend ext4. Since this is meant for archiving, you would create compressed archives to go on it; that means you would have a small number of fairly large files that rarely change. You can optimize for that:
specify -I 128 to reduce the size of individual inodes, which reduces the size of the inode table.
You can play with the -i option too, to reduce the size of the inode table even further. If you increase this value, there will be less inodes created, and therefore the inode table will also be smaller. However, that would mean the filesystem wastes more space on average per file. This is therefore a bit of a trade-off.
You can indeed switch off the journal with -O ^has_journal. If you go down that route, though, I recommend that you set default options to mount the filesystem read-only; you can do this in fstab, or you could use tune2fs -E mount_opts=ro to record a default in the filesystem (you cannot do this at mkfs time)
you should of course compress your data into archive files, so that the inode wastage isn't as bad a problem as it could be. You could create squashfs images, but xz compresses better, so I would recommend tar.xz files instead.
You could also reduce the number of reserved blocks with the -m option to either mkfs or tune2fs. This sets the percentage (set to 5 by default) which is reserved for root only. Don't set it to zero; the filesystem requires some space for efficient operation.
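Putting those suggestions together, a possible invocation might look like the sketch below, demonstrated on an image file; the label, image name, and the exact -i value are assumptions to adapt to your drive:

```shell
# -I 128: smallest inode size; -i 65536: one inode per 64 KiB of space;
# -m 1: only 1% reserved blocks; -O ^has_journal: no journal.
truncate -s 64M archive.img    # image file standing in for /dev/whatever
mkfs.ext4 -q -F -I 128 -i 65536 -m 1 -O ^has_journal -L archive archive.img
tune2fs -l archive.img | grep -E 'Filesystem features|Inode size'
```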
| "write-once archive": ext2 vs ext4^has_journal vs |
1,433,980,862,000 |
I'm trying to play around with OS development, and I started with a boot loader, where phase 0 loads phase 1 from a file (specified by inode) on an ext4 partition (specified by first LBA). Of course, I need something to boot from, so I grabbed QEMU. Now what?
What has worked fine so far is this:
truncate -s64M /tmp/SomeVolume
/sbin/mke2fs -t ext4 -F /tmp/SomeVolume
yasm phase0.asm
dd if=phase0 of=/tmp/SomeVolume conv=notrunc
I make a volume of about 64 MB, format it as ext4, and overwrite the first 1024 octets with phase0 (which is always 1024 bytes in size). This works fine.
But now I want to make a properly partitioned file, to test it for more realistic scenarios. I know I could /sbin/cfdisk my volume file, but mke2fs doesn't have a parameter that lets me choose a span within the file.
Now I'm aware of solutions using loop, but unfortunately, it doesn't seem to work for me (it seems I'm not able to change max_part in Debian jessie). There seems to be another module called nbd, but I don't have the server and client for that module installed. And it's getting a little ridiculous that I need root privileges for something that could clearly be done in userland.
How can I do this as a user? Or should I just build the MBR/GPT-partitioned volume around the ext4-formatted file I created?
|
The long way around. But for the fun of it:
1. Create a temporary image:
$ truncate -s64MiB tmp.img
2. Create two partitions using fdisk:
Rather detailed, but OK.
$ fdisk tmp.img
First partition:
: n <Enter>
: <Enter> (p)
: <Enter> (1)
: <Enter> (2048)
: +40M <Enter>
Second partition:
: n <Enter>
: <Enter> (p)
: <Enter> (2)
: <Enter> (83968)
: <Enter> (131071)
Print what we are about to write:
: x
: p
Nr AF Hd Sec Cyl Hd Sec Cyl Start Size ID
1 00 32 33 0 57 52 5 2048 81920 83
2 00 57 53 5 40 32 8 83968 47104 83
Write and exit:
:w (Don't forget! ;-) )
We have two partitions of 40 and 23 MiB:
81920 * 512 / 1024 / 1024 = 40MiB
47104 * 512 / 1024 / 1024 = 23MiB
3. Create two file systems:
truncate -s40MiB ext4.img
truncate -s23MiB ext3.img
mke2fs -t ext4 -F -L part_ext4 ext4.img
mke2fs -t ext3 -F -L part_ext3 ext3.img
4. Stitch it all together:
Extract first 2048*512 bytes from temporary image:
dd if=tmp.img of=disk.img bs=512 count=2048
Combine them:
cat ext4.img ext3.img >> disk.img
Fine.
| How can I partition a volume in a regular file without loop? |
1,433,980,862,000 |
I'm trying to keep a bunch of plain text files compressed using the extended attribute option - c on a debian ppc64 system. I ran the following commands:
# mkfs.ext4 /dev/test/compressed
# mount /dev/test/compressed /mnt/compressed/
# mkdir /mnt/compressed/some/txts/
# chattr +c /mnt/compressed/some/txts/
# df -l
# cp /some/txts/* /mnt/compressed/some/txts/
# sync
# df -l
To my surprise, the output of df -l tells me the files I copied weren't compressed at all. I also tried to mount the test file system with the option user_xattr and I tried creating it with mkfs.ext4dev, but neither worked. I also checked the output of the commands lsattr /mnt/compressed/some/txts/; every line has a c in it.
Did I miss something? How come the xattr option c doesn't work as expected?
|
It makes sense to have a look at the man page of the programs you use:
BUGS AND LIMITATIONS
The 'c', 's', and 'u' attributes are not honored by the ext2 and ext3 filesystems as implemented in the current mainline Linux kernels.
This is not supposed to mean "ext4 works" I guess.
| What does the command "chattr +c /some/dir/" do? |
1,433,980,862,000 |
I need to identify the last sector used by an ext4 filesystem so that I can move it to another device.
The filesystem has been shrunk (with resize2fs) and is smaller than the partition that contains it, so I am not asking how to find the last sector in the partition.
I have done tune2fs -l and identified that
Block count: 48934
First block: 0
Block size: 4096
From that I would postulate that the filesystem uses 48934 * 4096 / 512 = 391472 sectors and that I can move that many sectors with dd starting at the first sector of the partition (as reported by gdisk).
I am uncertain whether that block count includes any ext4 overhead or if there is additional size that needs to be considered. I read this question which implies there is additional space to be considered.
|
You are right. There shouldn't be any problem.
To avoid some calculations you could use the bs option and use the partition name of the device rather than starting at an offset.
dd count=48934 bs=4096 if=/dev/sdxN of=...
To be 100% sure about the size you could test it before. "Simulate" a smaller partition:
umount /dev/XYZ
losetup --offset N-BYTES --sizelimit $(( 48934 * 4096 )) /dev/loop1 /dev/XYZ
mount or fsck of /dev/loop1 should tell you if you made it too small. resize2fs would tell if the partition is still too large but there is no dry-run. You could also play around with fsadm -v --dry-run check/resize ... which I have never used yet. If paranoid you should use losetup --read-only. Don't forget losetup --detach when done.
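The arithmetic itself can be scripted from dumpe2fs output; this is a sketch using a scratch image, so point it at your real partition instead:

```shell
truncate -s 64M fs.img
mkfs.ext4 -q -F fs.img
blocks=$(dumpe2fs -h fs.img 2>/dev/null | awk '/^Block count:/{print $3}')
bsize=$(dumpe2fs -h fs.img 2>/dev/null | awk '/^Block size:/{print $3}')
echo "copy $(( blocks * bsize / 512 )) sectors of 512 bytes with dd"
```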
| How can I find the last sector used by an ext4 filesystem? |
1,433,980,862,000 |
I ran tune2fs -l /dev/sda on my production server today and got the following output:
tune2fs 1.42.9 (4-Feb-2014)
Filesystem volume name: <none>
Last mounted on: /
Filesystem UUID: a5b1c696-aa59-43db-a252-88b2e6d8212c
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: journal_data user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 60923904
Block count: 243670272
Reserved block count: 12183513
Free blocks: 223441953
Free inodes: 60799595
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 965
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Fri May 9 19:48:11 2014
Last mount time: Fri Jun 6 20:17:28 2014
Last write time: Fri Jun 6 20:17:01 2014
Mount count: 1
Maximum mount count: -1
Last checked: Fri Jun 6 20:17:01 2014
Check interval: 0 (<none>)
Lifetime writes: 194 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
First orphan inode: 17301533
Default directory hash: half_md4
Directory Hash Seed: 1fbb5b3a-79fe-42b3-b69d-0f8073618d27
Journal backup: inode blocks
What stood out to me was this line:
First orphan inode: 17301533
I've always understood orphan inodes to mean inodes that are left over after a crash. However the file system in question has always been cleanly unmounted and the system is on a UPS and has never shutdown uncleanly.
Is there a reason why there are orphaned inodes and does it indicate a problem?
|
An orphaned inode is one that has been unlinked but is still open in another process. For example running tail -f {file} in one shell followed by rm {file} in another. The filesystem keeps track of these so they can be cleaned up when the process quits.
See this note on Ext4 Disk Layout.
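You can reproduce the unlinked-but-open situation straight from the shell; this sketch relies on Linux's /dev/fd/ to re-open the inode that the shell still holds open:

```shell
tmp=$(mktemp)
exec 3<>"$tmp"              # keep a file descriptor open on the file
printf 'still readable' >&3
rm "$tmp"                   # the name is gone; the inode is now orphaned
head -c 14 /dev/fd/3        # data survives until the last descriptor closes
exec 3>&-                   # now the kernel can actually free the inode
```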
| "First orphan inode" in tune2fs output |
1,433,980,862,000 |
I want to encrypt the content of a directory in a container with an ext4 filesystem using cryptsetup. The size of the container should be as small as possible and as big as necessary, because I only want to write once and then backup.
First try: setting the size of the container to the size of the content.
dirsize=$(du -s -B512 "$dir" | cut -f 1)
dd if=/dev/zero of=$container count=$dirsize
losetup /dev/loop0 $container
fdisk /dev/loop0 # 1 Partition with max possible size
cryptsetup luksFormat --key-file $keyFile /dev/loop0
cryptsetup luksOpen --key-file $keyFile /dev/loop0 container
mkfs.ext4 -j /dev/mapper/container
mkdir /mnt/container
mount /dev/mapper/container /mnt/container
rsync -r "$dir" /mnt/container
Rsync returns that there is not enough space for the data. Seems reasonable as there has to be some overhead for the encryption and the file system.
I tried it with a relative offset:
dirsize=$(($dirsize + ($dirsize + 8)/9))
This fixes the problem for dirs with > 100 MB, but not for dirs with < 50 MB.
How can I determine the respective amount of bytes the container has to be bigger than the directory?
|
LUKS by default uses 2 MiB for its header, mainly due to data alignment reasons. You can check this with cryptsetup luksDump (Payload offset: in sectors). If you don't care about alignment, you can use the --align-payload=1 option.
As for ext4, it's complicated. Its overhead depends on the filesystem size, inode size, journal size and such. If you don't need a journal, you might prefer ext2. It may be that other filesystems have less overhead than ext*, might be worth experimenting. Also some of the mkfs flags (like -T largefile or similar) might help, depending on what kind of files you're putting on this thing. E.g. you don't need to create the filesystem with a million inodes if you're only going to put a dozen files in it.
If you want the container to be minimal size, you could start out with a larger container, and then use resize2fs -M to shrink it to the minimum size. You can then truncate the container using that size plus the Payload offset: of LUKS.
That should be pretty close to small, if you need it even smaller, consider using tar.xz instead of a filesystem. While tar isn't that great for hundreds of GB of data (need to extract everything to access a single file), it should be okay for the sizes you mentioned and should be smaller than most filesystems...
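The shrink-to-minimum step is easy to rehearse on a plain image before layering LUKS on top (a sketch; the final container size would be this minimum plus the LUKS payload offset):

```shell
truncate -s 256M fs.img
mkfs.ext2 -q -F fs.img
e2fsck -f -p fs.img
resize2fs -M fs.img    # shrink the filesystem to its minimum size
blocks=$(dumpe2fs -h fs.img 2>/dev/null | awk '/^Block count:/{print $3}')
bsize=$(dumpe2fs -h fs.img 2>/dev/null | awk '/^Block size:/{print $3}')
echo "minimum filesystem size: $(( blocks * bsize )) bytes"
```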
| How much storage overhead comes along with cryptsetup and ext4? |
1,433,980,862,000 |
I fear I may have to revert to system defaults if I can't get this sorted out.
I'm trying to set various system configurations for more robust ext4 for a single-user desktop environment. Trying to assign desired configuration settings where they will take effect properly.
I understand that some of these should be included in the file mke2fs.conf so that filesystems are initially created with the proper settings, but I will address that later and keep the distro's default file for the following.
I understand that the EXT4 options I wanted could be set in /etc/fstab. This following entry shows what I would typically want:
UUID=00000000-0000-0000-0000-000000000000 /DB001_F2 ext4 defaults,nofail,data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard 0 0
where each DB001_F{p} is a partition on the root disk ( p = [2-8] ).
I repeat those options here, in the same sequence as a list, in case that makes it more easy to assimilate:
defaults
nofail
data=journal
journal_checksum
journal_async_commit
commit=15
errors=remount-ro
journal_ioprio=2
block_validity
nodelalloc
data_err=ignore
nodiscard
Mounting during boot, the below syslog shows all as reporting what I believe to be acknowledged acceptable settings:
64017 Sep 4 21:04:35 OasisMega1 kernel: [ 21.622599] EXT4-fs (sda7): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
64018 Sep 4 21:04:35 OasisMega1 kernel: [ 21.720338] EXT4-fs (sda4): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
64019 Sep 4 21:04:35 OasisMega1 kernel: [ 21.785653] EXT4-fs (sda8): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
64021 Sep 4 21:04:35 OasisMega1 kernel: [ 22.890168] EXT4-fs (sda12): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
64022 Sep 4 21:04:35 OasisMega1 kernel: [ 23.214507] EXT4-fs (sda9): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
64023 Sep 4 21:04:35 OasisMega1 kernel: [ 23.308922] EXT4-fs (sda13): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
64024 Sep 4 21:04:35 OasisMega1 kernel: [ 23.513804] EXT4-fs (sda14): mounted filesystem with journalled data mode. Opts: data=journal,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,block_validity,nodelalloc,data_err=ignore,nodiscard
But mount shows that some drives are not reporting as expected, even after reboot, and this is inconsistent as seen below:
/dev/sda7 on /DB001_F2 type ext4 (rw,relatime,nodelalloc,journal_checksum,journal_async_commit,errors=remount-ro,commit=15,data=journal)
/dev/sda8 on /DB001_F3 type ext4 (rw,relatime,nodelalloc,journal_checksum,journal_async_commit,errors=remount-ro,commit=15,data=journal)
/dev/sda9 on /DB001_F4 type ext4 (rw,relatime,nodelalloc,journal_checksum,journal_async_commit,errors=remount-ro,commit=15,data=journal)
/dev/sda12 on /DB001_F5 type ext4 (rw,relatime,nodelalloc,journal_async_commit,errors=remount-ro,commit=15,data=journal)
/dev/sda13 on /DB001_F6 type ext4 (rw,relatime,nodelalloc,journal_async_commit,errors=remount-ro,commit=15,data=journal)
/dev/sda14 on /DB001_F7 type ext4 (rw,relatime,nodelalloc,journal_async_commit,errors=remount-ro,commit=15,data=journal)
/dev/sda4 on /DB001_F8 type ext4 (rw,relatime,nodelalloc,journal_async_commit,errors=remount-ro,commit=15,data=journal)
I read somewhere about a limitation regarding the length of the option string in fstab, so I used tune2fs to pre-set some parameters at a lower level. Those applied via tune2fs are:
journal_data,block_validity,nodelalloc
which is confirmed when using tune2fs -l:
Default mount options: journal_data user_xattr acl block_validity nodelalloc
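(As a side check, the same tune2fs -o behaviour can be exercised safely on a throwaway image file instead of a real partition — a sketch, assuming a reasonably recent e2fsprogs; the paths are scratch files, not my actual devices:)

```shell
# create a scratch ext4 image (no root needed for a regular file)
img=$(mktemp)
truncate -s 64M "$img"
mkfs.ext4 -F -q "$img"

# pre-set default mount options at the filesystem level, as done on the real partitions
tune2fs -o journal_data,block_validity,nodelalloc "$img" >/dev/null

# confirm what was recorded in the superblock
tune2fs -l "$img" | grep 'Default mount options'
```

On the real system, the same grep against `tune2fs -l /dev/sda7` etc. is what produced the "Default mount options" line quoted above.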
With that in place, I modified the fstab for entries to show as
UUID=00000000-0000-0000-0000-000000000000 /DB001_F2 ext4 defaults,nofail,journal_checksum,journal_async_commit,commit=15,errors=remount-ro,journal_ioprio=2,data_err=ignore,nodiscard 0 0
I did a umount for all my DB001_F? (/dev/sda*), then I did a mount -av, which reported the following:
/ : ignored
/DB001_F2 : successfully mounted
/DB001_F3 : successfully mounted
/DB001_F4 : successfully mounted
/DB001_F5 : successfully mounted
/DB001_F6 : successfully mounted
/DB001_F7 : successfully mounted
/DB001_F8 : successfully mounted
No errors reported for the options string for each of the drives.
I tried using journal_checksum_v3, but with that setting mount -av failed for all partitions. I used the mount command to see what was reported.
I also did a reboot and repeated that mount again for these reduced settings, and mount shows again that the drives are not reporting as expected, and this is still inconsistent as seen here:
/dev/sda7 on /DB001_F2 type ext4 (rw,relatime,journal_checksum,journal_async_commit,commit=15)
/dev/sda8 on /DB001_F3 type ext4 (rw,relatime,journal_checksum,journal_async_commit,commit=15)
/dev/sda9 on /DB001_F4 type ext4 (rw,relatime,journal_checksum,journal_async_commit,commit=15)
/dev/sda12 on /DB001_F5 type ext4 (rw,relatime,journal_async_commit,commit=15)
/dev/sda13 on /DB001_F6 type ext4 (rw,relatime,journal_async_commit,commit=15)
/dev/sda14 on /DB001_F7 type ext4 (rw,relatime,journal_async_commit,commit=15)
/dev/sda4 on /DB001_F8 type ext4 (rw,relatime,journal_async_commit,commit=15)
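To make the discrepancy explicit, I compared the option strings mechanically. This is just a sketch with the strings hard-coded from the listings above; in practice the "have" string would be pulled from /proc/mounts (e.g. `awk '$1=="/dev/sda12"{print $4}' /proc/mounts`):

```shell
want="journal_checksum,journal_async_commit,commit=15"
have="rw,relatime,journal_async_commit,commit=15"    # as reported for /dev/sda12

missing=$(
  for o in $(printf '%s' "$want" | tr ',' ' '); do
    case ",$have," in
      *",$o,"*) ;;                       # option present in the live mount
      *) printf 'missing: %s\n' "$o" ;;  # requested but not reported
    esac
  done
)
printf '%s\n' "$missing"
```

For sda12 this prints `missing: journal_checksum`, matching what the eye sees in the listing above.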
Since these are all ext4 filesystems, all on the same physical drive, I don't understand why journal_checksum is not uniformly actioned! I also find it interesting that there is a clean dividing line between the two classes of behaviour: the order listed above is the order specified in the fstab (by /DB001_F?), which is presumably the mounting order ... so what "glitch" is causing the "downgrading" of the remaining mount actions?
My thinking (possibly baseless) is that some properties might be better set at filesystem-creation time, which would make them more "persistent/effective" than otherwise. But when I tried to shift some of the property settings by pre-defining them in mke2fs.conf, mkfs.ext4 failed again — I suspect because the option string is restricted to a limited length (64 characters?). So ... I have backed away from making any changes to mke2fs.conf.
Ignoring the mke2fs.conf issue for now, and focusing on the fstab and tune2fs functionality, can someone please explain to me what I am doing wrong that is preventing mount from correctly reporting what is the full range of settings currently in effect?
At this point, I don't know what I can rely on to provide the actual real state of the ext4 behaviour and am considering simply reverting to distro defaults, which leaves me wanting.
Is it possible that all is well and that the system is simply not reporting correctly? I am not sure that I could comfortably accept that viewpoint. It is counter-intuitive.
Can someone please assist?
Environment
UbuntuMATE 20.04 LTS
Linux OasisMega1 5.4.0-124-generic #140-Ubuntu SMP Thu Aug 4 02:23:37 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
RAM = 4GB
DSK = 2TB (internal, 8 data partitions, 3 1GB swap partitions) [ROOT]
DSK = 500GB (internal, 2 data partitions, 1 1GB swap partitions)
DSK = 4TB (external USB, 16 data partitions) [BACKUP drive]
This is what is being reported by debugfs:
Filesystem features:
has_journal
ext_attr
resize_inode
dir_index
filetype
needs_recovery
extent
flex_bg
sparse_super
large_file
huge_file
dir_nlink
extra_isize
metadata_csum
Not very useful for additional insights into the problem.
debugfs shows following supported features:
debugfs 1.45.5 (07-Jan-2020)
Supported features: (...snip...) journal_checksum_v2 journal_checksum_v3
Noteworthy is that debugfs shows journal_checksum_v2 and journal_checksum_v3 as available, but not the plain journal_checksum that is referenced in the manual pages.
Does that mean that I should be using v2 or v3, instead of journal_checksum?
|
Given the discussion that has transpired as comments on my original post, I am prepared to conclude that the many changes to the Kernel over the 2+ years since my original install of the UbuntuMATE 20.04 LTS distro are the source of the differences in behaviour observed by the set of 8 ext4 filesystems that were created at different times, notwithstanding the fact that they reside on the same physical device.
Consequently, the only way to ensure that all filesystems of a given type (i.e. ext4) react identically to mount options and tune2fs options, and behave and report identically under debugfs/dumpe2fs or mount, is to ensure that they are created with the same frozen versions of the OS kernel and of the filesystem utilities used to create and tune them.
So, to answer my original question, there is no problem with the filesystems reporting differently because they are reporting correctly, each for their own historical context leading to their current state.
Looking forward to my pending upgrade to UbuntuMATE 22.04 LTS (why I was digging into all this to begin with): to avoid these discrepancies — since the install disk does not carry the latest kernel or utilities — my defined process must be to:
upgrade to newer OS,
reboot,
apply all updates,
create backup image of the upgraded+updated OS now residing on the root partition,
re-create root partition with latest Kernel and utilities (using a duplicate fully-updated OS residing on secondary internal disk, which is the reason for existence of my 500 GB drive, namely testing, proving, confirming final desired install before rolling over into "production"),
recover the primary fully-updated OS from backup image to its proper ROOT partition,
reboot, then
backup all other partitions on the primary disk, recreate those partitions, then restore the data for each of those partitions.
Only in this manner can all the partitions be created as "equals", with the latest and best offered at a single snapshot in time. Otherwise, the root partition is out of step with all the other partitions that are created after the post-installation updates.
Also, having a script similar to the one I created ensures the required actions are applied uniformly, avoiding errors that might slip in from the tedium of performing them manually many times.
For those who want to be able to manage and review these options in a consistent fashion with a script, here is the script I created for myself:
#!/bin/sh
####################################################################################
###
### $Id: tuneFS.sh,v 1.2 2022/09/07 01:43:18 root Exp $
###
### Script to set consistent (local/site) preferences for filesystem treatment at boot-time or mounting
###
####################################################################################
TIMESTAMP=`date '+%Y%m%d-%H%M%S' `
BASE=`basename "$0" ".sh" `
###
### These variables will document hard-coded 'mount' preferences for filesystems
###
BOOT_MAX_INTERVAL="-c 10" ### max number of boots before fsck [10 boots]
TIME_MAX_INTERVAL="-i 2w" ### max calendar time between boots before fsck [2 weeks]
ERROR_ACTION="-e remount-ro" ### what to do if error encountered
#-m reserved-blocks-percentage
###
### This OPTIONS string should be updated manually to document
### the preferred and expected settings to be applied to ext4 filesystems
###
OPTIONS="-o journal_data,block_validity,nodelalloc"
ASSIGN=0
REPORT=0
VERB=0
SINGLE=0
while [ $# -gt 0 ]
do
case ${1} in
--default ) REPORT=0 ; ASSIGN=0 ; shift ;;
--report ) REPORT=1 ; ASSIGN=0 ; shift ;;
--force ) REPORT=0 ; ASSIGN=1 ; shift ;;
--verbose ) VERB=1 ; shift ;;
--single ) SINGLE=1 ; shift ;;
* ) echo "\n\t Invalid parameter used on the command line. Valid options: [ --default | --report | --force | --single | --verbose ] \n Bye!\n" ; exit 1 ;;
esac
done
workhorse()
{
case ${PARTITION} in
1 )
DEVICE="/dev/sda3"
OPTIONS=""
;;
2 )
DEVICE="/dev/sda7"
;;
3 )
DEVICE="/dev/sda8"
;;
4 )
DEVICE="/dev/sda9"
;;
5 )
DEVICE="/dev/sda12"
;;
6 )
#UUID="0d416936-e091-49a7-9133-b8137d327ce0"
#DEVICE="UUID=${UUID}"
DEVICE="/dev/sda13"
;;
7 )
DEVICE="/dev/sda14"
;;
8 )
DEVICE="/dev/sda4"
;;
esac
PARTITION="DB001_F${PARTITION}"
PREF="${BASE}.previous.${PARTITION}"
reference=`ls -t1 "${PREF}."*".dumpe2fs" 2>/dev/null | grep -v 'ERR.dumpe2fs'| tail -1 `
if [ ! -s "${PREF}.dumpe2fs.REFERENCE" ]
then
mv -v ${reference} ${PREF}.dumpe2fs.REFERENCE
fi
reference=`ls -t1 "${PREF}."*".verify" 2>/dev/null | grep -v 'ERR.verify'| tail -1 `
if [ ! -s "${PREF}.verify.REFERENCE" ]
then
mv -v ${reference} ${PREF}.verify.REFERENCE
fi
BACKUP="${BASE}.previous.${PARTITION}.${TIMESTAMP}"
rm -f ${PREF}.*.tune2fs
rm -f ${PREF}.*.dumpe2fs
### reporting by 'tune2fs -l' is a subset of that from 'dumpe2fs -h'
if [ ${REPORT} -eq 1 ]
then
### No need to generate report from tune2fs for this mode.
( dumpe2fs -h ${DEVICE} 2>&1 ) | awk '{
if( NR == 1 ){ print $0 } ;
if( index($0,"revision") != 0 ){ print $0 } ;
if( index($0,"mount options") != 0 ){ print $0 } ;
if( index($0,"features") != 0 ){ print $0 } ;
if( index($0,"Filesystem flags") != 0 ){ print $0 } ;
if( index($0,"directory hash") != 0 ){ print $0 } ;
}'>${BACKUP}.dumpe2fs
echo "\n dumpe2fs REPORT [$PARTITION]:"
cat ${BACKUP}.dumpe2fs
else
### Generate report from tune2fs for this mode but only as sanity check.
( tune2fs -l ${DEVICE} 2>&1 ) >${BACKUP}.tune2fs
( dumpe2fs -h ${DEVICE} 2>&1 ) >${BACKUP}.dumpe2fs
if [ ${VERB} -eq 1 ] ; then
echo "\n tune2fs REPORT:"
cat ${BACKUP}.tune2fs
echo "\n dumpe2fs REPORT:"
cat ${BACKUP}.dumpe2fs
fi
if [ ${ASSIGN} -eq 1 ]
then
tune2fs ${BOOT_MAX_INTERVAL} ${TIME_MAX_INTERVAL} ${ERROR_ACTION} ${OPTIONS} ${DEVICE}
rm -f ${PREF}.*.verify
( dumpe2fs -h ${DEVICE} 2>&1 ) >${BACKUP}.verify
if [ ${VERB} -eq 1 ] ; then
echo "\n Changes:"
diff ${BACKUP}.dumpe2fs ${BACKUP}.verify
fi
else
if [ ${VERB} -eq 1 ] ; then
echo "\n Differences:"
diff ${BACKUP}.tune2fs ${BACKUP}.dumpe2fs
fi
rm -f ${BACKUP}.verify
fi
fi
}
if [ ${SINGLE} -eq 1 ]
then
for PARTITION in 2 3 4 5 6 7 8
do
echo "\n\t Actions only for DB001_F${PARTITION} ? [y|N] => \c" ; read sel
if [ -z "${sel}" ] ; then sel="N" ; fi
case ${sel} in
y* | Y* ) DOIT=1 ; break ;;
* ) DOIT=0 ;;
esac
done
if [ ${DOIT} -eq 1 ]
then
workhorse
fi
else
for PARTITION in 2 3 4 5 6 7 8
do
workhorse
done
fi
exit 0
For those who are interested, there is a modified/expanded script in a follow-on posting.
Thank you all for your input and feedback.
| OS seems to apply ext4 filesystem options in arbitrary fashion |
1,433,980,862,000 |
Disk/Partition Backup
What are the backup options and good practice to make a solid and easy to use full system backup?
With the following requirement:
Live backup
Image backup
Encrypted backup
Incremental backups
Mount/access the backup disk/files easily
Full system backup, restorable in one shot
Can be scheduled automatically (with cron or else)
Encrypted or classic backup source (luks, dm-crypt, ext3/ext4/btrfs).
|
Linux system backup
When targeting a true full system backup, disk image backup (as asked) offers substantial advantages (detailed below) compared to file-based backup.
With file-based backup the disk/partition structure is not saved; a full restore is usually a huge time consumer involving many separate steps (such as reinstalling the system); and backing up installed applications can be tricky. Disk image backup avoids all these drawbacks, and the restore process is a single step.
Tools like clonezilla and fsarchiver are not suitable for this question because they are missing one or more of the requested features.
As a reminder, luks-encrypted partitions do not depend on the file system used inside (ext3/ext4/etc.), but keep in mind that performance differs depending on the chosen file system (details); also note that btrfs (video-1, video-2) may be a very good option because of its snapshot feature and data structure. That is only an additional protection layer, though, because btrfs snapshots are not true backups! (classic snapshots reside on the same partition).
As a side note, in addition to disk image backup we may want a simple file-sync backup of some particular locations; tools like rsync/grsync (or btrfs-send in the btrfs case) can be used in combination with cron (if required) and an encrypted backup destination (like a luks partition/vault/truecrypt). File-based backup tools include: rsync/grsync, rsnapshot, cronopete, dump/restore, timeshift, deja-dup, systemback, freefilesync, realtimesync, luckybackup, vembu.
Annotations
lsblk --fs output:
sda is the main disk
sda1/sda2 are the encrypted partitions
crypt_sda1/crypt_sda2 virtual (mapped) un-encrypted partitions
sda
├─sda1 crypto_LUKS f3df6579-UUID...
│ └─crypt_sda1 ext4 bc324232-UUID... /mount-location-1
└─sda2 crypto_LUKS c3423434-UUID...
└─crypt_sda2 ext4 a6546765-UUID... /mount-location-2
Method #1
Backup the original luks disk/partition (sda or sda1) encrypted as it is to any location
bdsync/bdsync-manager is an amazing tool that can do image backup (full/incremental) by fast block device syncing; This can be used along with luks directly on the encrypted partition, incremental backups works very well in this case as well. This tool support mounting/compression/network/etc.
dd: classic method for disk imaging; can be used with a command similar to dd if=/dev/sda1 of=/backup/location/crypted.img bs=128K status=progress. Note that imaging a mounted partition with dd may lead to data corruption for files in use while the backup runs — sql databases, X config files, documents being edited — so to guarantee data integrity, closing all running applications and databases is recommended; we can also mount the image after its creation and check its integrity with fsck.
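That fsck integrity check can be rehearsed on a scratch image (a sketch — here mkfs stands in for the dd copy, since imaging a real partition needs the actual device):

```shell
img=$(mktemp)
truncate -s 16M "$img"
mkfs.ext4 -F -q "$img"         # in real use: dd if=/dev/sda1 of="$img" bs=128K

# -f forces a full check, -n opens read-only and answers "no" to any fix
e2fsck -fn "$img" >/dev/null 2>&1
echo "e2fsck exit status: $?"  # 0 = clean image
```

A non-zero status would suggest the image was taken while files were changing and should be redone.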
Cons for #1: backup size, compression, and incremental backups can be tricky
Method #2
This method is for disks without encryption, or to back up the mapped un-encrypted luks partitions crypt_sda1/crypt_sda2... An encrypted backup destination (like a luks partition/vault/truecrypt), or an encrypted archive/image if the backup tool supports such a feature, is recommended.
Veeam: free/paid professional backup solution (on linux, command line and TUI only); the kernel module is opensource. This tool cannot be used for the first method; backups can be encrypted and incremental, and mounting backups is supported.
bdsync/bdsync-manager same as in the first method but the backup is made from the un-encrypted mapped partition (crypt_sda1/crypt_sda2).
dd: classic method for disk imaging; can be used with a command similar to dd if=/dev/mapper/crypt_sda1 of=/backup/location/un-encrypted-sda1.img bs=128K status=progress. Note that imaging a mounted partition with dd may lead to data corruption for files in use while the backup runs — sql databases, X config files, documents being edited — so to guarantee data integrity, closing all running applications and databases is recommended; we can also mount the image after its creation and check its integrity with fsck.
Cons for #2: disk headers, mbr, partition structure, uuid etc. are not saved; additional backup steps (detailed below) are required for a full backup
Backup luks headers: cryptsetup luksHeaderBackup /dev/sda1 --header-backup-file /backup/location/sda1_luks_heanders_backup
Backup mbr: dd if=/dev/sda of=/backup/location/backup-sda.mbr bs=512 count=1
Backup partitions structure: sfdisk -d /dev/sda > /location/backup-sda.sfdisk
Backup disk uuid
Note:
Images done with dd can be mounted with commands similar to:
fdisk -l -u /location/image.img
kpartx -l -v /location/image.img
kpartx -a -v /location/image.img
cryptsetup luksOpen /dev/mapper/loop0p1 imgroot
mount /dev/mapper/imgroot /mnt/backup/
Alternatives:
Bareos: open source backup solution (demo-video)
Bacula: open source backup solution (demo-video)
Weresync: disk image solution with incremental feature.
Other tools can be found here, here, here or here
There is a Wikipedia page comparing disk cloning software
An analyse by Gartner of some professional backup solutions is available here
Other tools
Acronis backup may be used for both methods, but their kernel module is always updated very late (not working with current/recent kernel versions), and mounting backups was not working as of 02/2020.
Partclone: used by clonezilla, this tool backs up only the used disk blocks; it supports image mounting but supports neither live/hot backup nor encryption/luks.
Partimage: dd alternative with a TUI; it supports live/hot backups, but images cannot be mounted and it does not support luks (though it does ext4/btrfs).
Doclone: very nice live/hot backup imaging solution, supporting many systems (but not luks...), ext4 etc.; supports network; mounting is not possible.
Rsnapshot: snapshot file backup system using rsync, used in many distros (like mageia); the backup jobs are scheduled with cron; when running in the background the backup status is not automatically visible.
Rsync/Grsync: sync folders with the rsync command; grsync is the gui...
Cronopete: file backup alternative to rsync (the application is limited in how it works compared to modern solutions)
Simple-backup: file backup solution with a tray icon and incremental feature; backups are made to tar archives
Backintime: python app for file-based backup (the app has many unsolved issues)
Shadowprotect: acronis alternative with mount feature... luks support is not obvious.
Datto: professional backup solution; luks support is not obvious; the linux agent needs to be networked to a backup server... the kernel module is opensource on github... the interface is web-based and does not use a modern design.
FSArchiver: live/hot image backup solution; backups cannot be mounted.
Dump: image backup system; mount is not supported.
| Serious backup options for linux disk (dmcrypt, luks, ext4, ext3, btrfs) normal and encrypted system |
1,433,980,862,000 |
Instead of configuring my Nextcloud (Linux/Nginx/PGsql/PHP) server to look for a folder on my spinning hard drive mounted at /mnt/HDDfs/, I Sym-Linked /var/Nextcloud_Data so it points to /mnt/HDDfs/Nextcloud_Data and then pointed my Nextcloud config to /var/Nextcloud_Data. This way, if I ever decide to change the name of my mountpoint, I don't have to touch the DB, as I can simply edit the Symbolic Link.
At first it seemed like a great idea, but then I remembered that my root / drive is an SSD, which can only withstand limited wear compared to a traditional magnetic platter; even if wear from usage is marginal on today's drives, hammering specific cells of a drive over and over isn't exactly the best idea.
What I'm asking is: when a program loads and/or writes to a location with a symlink in it, does the OS load the symlink every single time from the source location and then follow it to the real target and perform actions there or does it "cache" symlinks and translate /var/Nextcloud_Data/filename to /mnt/HDDfs/Nextcloud_Data/filename directly?
Additional info:
Operating system: Ubuntu Server 18.04 LTS with all latest patches and upgrades.
Disk Drives: a WD RED Hard Disk connected via SATA and a PCIe M.2 (Samsung 960 EVO) SSD.
File Systems: both the drives are GPT formatted with Ext4 file systems.
Motherboard: Asus Z170-Deluxe (a desktop board)
|
It's fine, for many reasons.
First, the concern with flash drives is the number of writes, not the number of reads.
Second, this concern applies to older or cheaper drives with poor firmware or poor drivers, but not to modern drives on modern operating systems. Modern SSDs have good enough wear leveling, and modern OSes have drivers that distinguish overwrite from erase (TRIM), so it takes a very long time before the number of writes starts to become a concern. By that age, magnetic drives have often already died for mechanical reasons, such as humidity or dust in the wrong place, or physical damage.
Reading through a symbolic link may update its access time depending on the system configuration. Linux defaults to updating a file's access time only once a day. So even if there was a concern over the number of writes to the drive, one write would be one day, not one access through the symbolic link.
The kernel keeps information about the symbolic link in its disk cache like any other information it reads from the disk. It doesn't keep a cache that says “/var/Nextcloud_Data/filename redirects to /mnt/HDDfs/Nextcloud_Data/filename”, but it maintains a cache that says “/var/Nextcloud_Data is a symbolic link whose target is /mnt/HDDfs/Nextcloud_Data”. This means that as long as the cache entry is still present, it won't read from the drive. This has no bearing on how often the access time is updated: that's a function of when the file is accessed, not of when the information about the file is transferred from the drive.
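The distinction is easy to see from userspace: the link itself stores only a path, and every open that follows it resolves through the (cached) directory entries. A small illustration with scratch paths:

```shell
tmp=$(mktemp -d)
echo data > "$tmp/target"
ln -s "$tmp/target" "$tmp/link"

readlink "$tmp/link"   # prints the stored target path, no data access
cat "$tmp/link"        # follows the link and reads the target's contents
```

readlink reads only the link; cat triggers full path resolution, which the dentry cache serves from memory on repeated accesses.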
| Am I hammering my SSD by keeping a symlink on it? |
1,433,980,862,000 |
When I was installing Mint Debian edition, the installer — unlike the classic edition's — automatically formatted my home partition even though I did not tell it to format.
The filesystem was ext4 before, as it is now. I believe the data is still there, as it was a quick format.
I have now booted the computer from a live USB to prevent writing to it.
I ran TestDisk.
Is there anyway to recover to a previous superblock so i can recover my data?
|
Take a look at the e2fsprogs package. It seems that you can get all your backup superblocks from dumpe2fs /dev/sd<partition-id> | grep -i superblock and then have e2fsck check the FS for you, or just try to do mount -o sb=<output-of-dumpe2fs> /dev/sd<partition-id> /your/mountpoint with a backup superblock. See this for reference: http://www.cyberciti.biz/faq/linux-find-alternative-superblocks/.
testdisk works well to recover partition tables, not clobbered file systems. Photorec is a last resort when you have really messed things up and can't get any of the filesystem structure recovered.
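The dumpe2fs step can be rehearsed safely on a scratch image before touching the real partition (a sketch — substitute /dev/sd&lt;partition-id&gt; for the image file in real use):

```shell
img=$(mktemp)
truncate -s 64M "$img"
mkfs.ext4 -F -q -b 1024 "$img"   # small fs; 1 KiB blocks give several backup groups

# list primary and backup superblock locations
dumpe2fs "$img" 2>/dev/null | grep -i superblock
```

Note that mount's sb=N option expects the location in 1 KiB units, so for a backup superblock at 4 KiB block 32768 one would pass sb=131072.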
| Data recovery from an accidental format on ext4 partition |
1,507,018,455,000 |
Environment:
- Virtual machine on VMWare ESX 4.0
- OS: fully up to date RHEL 5.8
After adding a new (virtual) disk I want to create an ext4 partition on LVM on this disk.
Steps taken so far:
$ sudo /sbin/fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel. Changes will remain in memory only,
until you decide to write them. After that, of course, the previous
content won't be recoverable.
The number of cylinders for this disk is set to 10443.
There is nothing wrong with that, but this is larger than 1024,
and could in certain setups cause problems with:
1) software that runs at boot time (e.g., old versions of LILO)
2) booting and partitioning software from other OSs
(e.g., DOS FDISK, OS/2 FDISK)
Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)
Command (m for help): n
p
Partition number (1-4): 1
First cylinder (1-10443, default 1):
Using default value 1
Last cylinder or +size or +sizeM or +sizeK (1-10443, default 10443):
Using default value 10443
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Changed system type of partition 1 to 8e (Linux LVM)
Command (m for help): w
The partition table has been altered!
Calling ioctl() to re-read partition table.
Syncing disks.
$ sudo /usr/sbin/pvcreate /dev/sdb1
Writing physical volume data to disk "/dev/sdb1"
Physical volume "/dev/sdb1" successfully created
$ sudo /usr/sbin/vgcreate VGora /dev/sdb1
/dev/hdc: open failed: No medium found
Volume group "VGora" successfully created
$ sudo /usr/sbin/lvcreate -l "100%FREE" -n oradata VGora
Logical volume "oradata" created
Creating an ext4 partition fails:
$ sudo /sbin/mke2fs -t ext4 /dev/VGora/oradata
mke2fs 1.39 (29-May-2006)
mke2fs: invalid blocks count - /dev/VGora/oradata
More information on the created partition:
$ sudo /usr/sbin/vgdisplay VGora
--- Volume group ---
VG Name VGora
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 80.00 GB
PE Size 4.00 MB
Total PE 20479
Alloc PE / Size 20479 / 80.00 GB
Free PE / Size 0 / 0
VG UUID ggjERV-sGbG-1nCv-HW61-LJk4-I7cX-Z3Infh
$ sudo /usr/sbin/lvdisplay VGora
--- Logical volume ---
LV Name /dev/VGora/oradata
VG Name VGora
LV UUID nia1PK-7JJ2-jg5T-uN4X-ggYH-R0mS-oqCooY
LV Write Access read/write
LV Status available
# open 0
LV Size 80.00 GB
Current LE 20479
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4
$ sudo /sbin/fdisk -l /dev/sdb
Disk /dev/sdb: 85.8 GB, 85899345920 bytes
255 heads, 63 sectors/track, 10443 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 10443 83883366 8e Linux LVM
I can't find any relevant information on this error and don't know how to resolve this.
How can I check the block count? and
How can I correct it to a valid number for ext4?
|
It's bad error checking in the argument parsing, I think, that causes that message. The version of mke2fs on RHEL 5 doesn't support the -t type argument, so it somehow parses the /dev/VGora/oradata path as the last (optional) block-count argument.
Anyway, the way you'll want to do it is to ensure you have e4fsprogs installed and then use mkfs.ext4 /dev/VGora/oradata or mke4fs.
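If you ever did need to supply the block count explicitly, it's just the device size divided by the block size — a sketch on an image file standing in for the LV:

```shell
img=$(mktemp)
truncate -s 80M "$img"                  # stand-in for the 80 GB logical volume
bytes=$(stat -c %s "$img")
blocks=$((bytes / 4096))                # count for a 4096-byte block size
echo "$blocks"

mkfs.ext4 -F -q -b 4096 "$img" "$blocks"
dumpe2fs -h "$img" 2>/dev/null | grep 'Block count'
```

On the real LV, `blockdev --getsize64 /dev/VGora/oradata` gives the byte size.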
| Creating an ext4 partition fails with "invalid blocks count" |
1,507,018,455,000 |
I have a 2TB Western Digital external hard disk.
Its original filesystem was NTFS, but I formatted it to EXT4.
I had no problem in Linux; but today after I mounted it using ext2fsd in a Windows box, I can't mount it in Linux anymore!
The drive had no partition, but after that Windows mount, Disk utility shows it has a 1KB partition and 2TB unallocated space!!!
My data is not corrupted (I still can view my files using ext2fsd in Windows).
Trying to mount using mount -t ext4 fails and dmesg says:
EXT4-fs (sdb): VFS: Can't find ext4 filesystem
also fsck gives:
e2fsck 1.41.11 (14-Mar-2010)
e2fsck: Bad magic number in super-block while trying to open /dev/sdb
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
I think that Western Digital's bundled application started in Windows and corrupted the filesystem by creating a partition for itself.
How can I fix it?
I also tried e2fsck -b 8193 /dev/sdb and got the same result:
e2fsck 1.41.11 (14-Mar-2010)
e2fsck: Bad magic number in super-block while trying to open /dev/sdb
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
So I tried to find backup superblocks using testdisk; it gave:
superblock 0, blocksize=4096 [Ariyan2T]
superblock 32768, blocksize=4096 [Ariyan2T]
superblock 98304, blocksize=4096 [Ariyan2T]
superblock 163840, blocksize=4096 [Ariyan2T]
superblock 229376, blocksize=4096 [Ariyan2T]
superblock 294912, blocksize=4096 [Ariyan2T]
superblock 819200, blocksize=4096 [Ariyan2T]
superblock 884736, blocksize=4096 [Ariyan2T]
superblock 1605632, blocksize=4096 [Ariyan2T]
superblock 2654208, blocksize=4096 [Ariyan2T]
I tried to repair it using fsck.ext4 -b 32768 -B 4096 /dev/sdb and the result was:
e2fsck 1.41.11 (14-Mar-2010)
fsck.ext4: Bad magic number in super-block while trying to open /dev/sdb
The superblock could not be read or does not describe a correct ext2
filesystem. If the device is valid and it really contains an ext2
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
fdisk -l /dev/sdb returns:
Warning: invalid flag 0x0000 of partition table 5 will be corrected by w(rite)
Disk /dev/sdb: 2000.4 GB, 2000365289472 bytes
255 heads, 63 sectors/track, 243197 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00021365
Device Boot Start End Blocks Id System
/dev/sdb1 1 243198 1953480704 85 Linux extended
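For reference, the skip= values I use with dd below are derived from testdisk's superblock block numbers (filesystem blocks of 4096 bytes, dd sectors of 512 bytes):

```shell
fs_block=32768     # backup superblock block number reported by testdisk
block_size=4096
sector_size=512

offset_bytes=$((fs_block * block_size))
skip_sectors=$((offset_bytes / sector_size))
echo "byte offset: $offset_bytes, dd skip (bs=512): $skip_sectors"
```

That is how block 32768 maps to skip=262144.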
sudo dd if=/dev/sdb bs=512 count=1 skip=262144 | xxd -a:
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.42691 s, 1.2 kB/s
0000000: ae61 5cc1 3be6 6d8d ceed 0cc8 293b fa1b .a\.;.m.....);..
0000010: 5931 fa58 7420 550a b40e 7b1c b0a6 ad60 Y1.Xt U...{....`
0000020: 1f29 dcae af2a a935 7185 f1d9 6b64 7f29 .)...*.5q...kd.)
0000030: fed0 4c79 fc3b 1544 becd bda0 e7a4 836b ..Ly.;.D.......k
0000040: ea50 1800 0868 89ac 592d 63a2 05e5 116d .P...h..Y-c....m
0000050: 4654 9870 671a e11d 7ae0 6bdd dd23 bf5a FT.pg...z.k..#.Z
0000060: 94ac a20d 695b d010 d8f2 4620 5930 561b ....i[....F Y0V.
0000070: af93 7d8c 06c3 72c7 3757 7815 e955 3278 ..}...r.7Wx..U2x
0000080: 5773 22b3 2908 52b5 f7e9 59ea b618 5830 Ws".).R...Y...X0
0000090: b29f d244 9a72 ead9 5a77 d3ce e83a 8c44 ...D.r..Zw...:.D
00000a0: 96d9 a89f dd82 b72a f624 10a8 0f44 31a5 .......*.$...D1.
00000b0: 29b6 811a f9cd 175a c00b 670a 5051 ce87 )......Z..g.PQ..
00000c0: 5b00 bd80 20d5 c6e5 f0d0 593e f923 005d [... .....Y>.#.]
00000d0: 1a6f 83ea 7f28 3305 dc72 7d92 4258 cb4e .o...(3..r}.BX.N
00000e0: 00de 6a6c 4575 d355 3682 28dd f765 e099 ..jlEu.U6.(..e..
00000f0: 1193 d0cc 64ad a841 ecd7 2c24 08e2 96f5 ....d..A..,$....
0000100: 0fb2 e4fd ef04 1914 f63c 30ce 0df9 3470 .........<0...4p
0000110: 166f 080d 7872 dfce a854 ef20 a237 447a .o..xr...T. .7Dz
0000120: 05b1 653f 109b 52c3 553b 966c 9733 838e ..e?..R.U;.l.3..
0000130: c2c9 52cd 4b8f 1e85 cd70 abf4 f9b6 c0c5 ..R.K....p......
0000140: 1412 0f2f 8389 9f4b 94af a523 c6c5 6e04 .../...K...#..n.
0000150: 25d4 d049 fde8 cd9d 94bd 608e e08a f6c6 %..I......`.....
0000160: 389a 5571 9182 d642 7680 f905 9fb6 179a 8.Uq...Bv.......
0000170: 9c6c 5290 ec62 a44f 3f05 fa39 f2a1 18c7 .lR..b.O?..9....
0000180: ba96 297f 2d04 a646 8cc8 e50c ee90 76c0 ..).-..F......v.
0000190: f9ae e586 0f89 6227 35bb b390 9477 8720 ......b'5....w.
00001a0: 2a6c c2b1 9f15 ecdd 8216 523c 2b61 731e *l........R<+as.
00001b0: 1b1f 0d24 5914 7e8a 7c32 957b 4f24 a464 ...$Y.~.|2.{O$.d
00001c0: ccb4 ecd9 7d1e 967d 9d6b ee20 fa02 9e65 ....}..}.k. ...e
00001d0: 593c 640e fbd2 4f6e e0f8 53b8 4b4a b3fa Y<d...On..S.KJ..
00001e0: a630 30f1 8170 55a4 dd91 805c d522 9412 .00..pU....\."..
00001f0: 7c0f 1afa ff47 ab23 9721 5a3d f87a 181f |....G.#.!Z=.z..
and sudo dd if=/dev/sdb bs=512 count=1 skip=2 | xxd -a:
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.000880215 s, 582 kB/s
0000000: 0000 0000 0000 0000 0000 0000 0000 0000 ................
*
00001f0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
a question: Can I repair it with mkfs.ext4 -S /dev/sdb without losing data?
At skip=2050 there is a 53ef at position 0x38 (sudo dd if=/dev/sdb bs=512 count=1 skip=2050 | xxd -a):
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.000688801 s, 743 kB/s
0000000: 0000 4707 00f0 1b1d cc98 7401 769e 2a1a ..G.......t.v.*.
0000010: 9df7 4607 0000 0000 0200 0000 0200 0000 ..F.............
0000020: 0080 0000 0080 0000 0020 0000 ce17 0750 ......... .....P
0000030: ce17 0750 1a00 2600 53ef 0100 0100 0000 ...P..&.S.......
0000040: a26d 9d4f 004e ed00 0000 0000 0100 0000 .m.O.N..........
0000050: 0000 0000 0b00 0000 0001 0000 3c00 0000 ............<...
0000060: 4202 0000 7b00 0000 8160 9a1f f334 4827 B...{....`...4H'
0000070: b6df 00c2 8981 7b36 4172 6979 616e 3254 ......{6Ariyan2T
0000080: 0000 0000 0000 0000 2f6d 6564 6961 2f41 ......../media/A
0000090: 7269 7961 6e32 5400 4bb7 7001 80fd 39e9 riyan2T.K.p...9.
00000a0: 607c e2c0 8098 6ced 94be 7fed 1529 21c0 `|....l......)!.
00000b0: 6026 a0c5 c280 0000 6026 a0c5 2042 b0c5 `&......`&.. B..
00000c0: 0000 0000 2042 b0c5 0000 0000 0000 8b03 .... B..........
00000d0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
00000e0: 0800 0000 0000 0000 0000 0000 b536 a950 .............6.P
00000f0: 02a9 455e 9fa8 b9a3 0f2b 61b1 0101 0000 ..E^.....+a.....
0000100: 0000 0000 0000 0000 a26d 9d4f 0af3 0200 .........m.O....
0000110: 0400 0000 0000 0000 0000 0000 ff7f 0000 ................
0000120: 0080 880e ff7f 0000 0100 0000 ffff 880e ................
0000130: 0000 0000 0000 0000 0000 0000 0000 0000 ................
0000140: 0000 0000 0000 0000 0000 0000 0000 0008 ................
0000150: 0000 0000 0000 0000 0000 0000 1c00 1c00 ................
0000160: 0100 0000 0000 0000 0000 0000 0000 0000 ................
0000170: 0000 0000 0400 0000 5962 660c 0000 0000 ........Ybf.....
0000180: 0000 0000 0000 0000 0000 0000 0000 0000 ................
*
00001f0: 0000 0000 0000 0000 0000 0000 0000 0000 ................
mount -t ext4 /dev/sdb1 /media/tmpmp/ returns:
mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
file -s /dev/sdb1 returns:
/dev/sdb1: data
|
It is quite unusual to format a filesystem on a hard disk without a partition table. It appears that you did in fact have a partition table before, and that the partition started at sector 2048, which is the usual starting location for the first partition on a disk these days. Run fdisk, use the u command to change its units to sectors, create a new partition with the n command, set the starting sector to 2048, then press w to save and exit. Verify the result with e2fsck -f on the partition.
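If you'd rather script the recreation than step through fdisk interactively, sfdisk accepts the same layout as a one-liner. A sketch, run against a throwaway image file so nothing touches the real disk until the dump looks right (only then substitute /dev/sdb):

```shell
# Scratch image standing in for /dev/sdb
truncate -s 100M disk.img
# One DOS-label partition starting at sector 2048, type 83 (Linux),
# taking the rest of the disk
printf 'label: dos\nstart=2048, type=83\n' | sfdisk disk.img
sfdisk -d disk.img   # dump: the partition should start at sector 2048
rm disk.img
```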
| Can't mount EXT4 hard drive after mounting it in Windows |
1,507,018,455,000 |
I'm about to setup my new USB key with Grub or Grub2. In the old days I used ext2 for the boot partition.
I'm wondering if I could use ext4 for Grub2?
And if I use Grub 0.9x, what about ext3 support?
|
Grub legacy (0.9x) supports ext2 and ext3 (ext3 is backward compatible with ext2) but not ext4 (unless you've turned off the backward-incompatible features, which doesn't leave much additional goodness compared with ext3). The development of Grub legacy stopped before ext4 was mature. There are unofficial patches to support ext4 on Grub legacy; the discussion on Debian bug #511121 has a pointer to two patches (one of which is in some versions of Ubuntu).
Grub2 (1.9x, more precisely since 1.97) supports ext2, ext3 and ext4, with the same module (ext2.mod).
None of the new features of ext4 are particularly useful for a separate /boot partition, so if that's what you have, you might as well stick to ext2. But if you keep your kernel and Grub configuration on the root partition, if it's ext4, make sure your Grub version is recent enough or patched.
| Ext4 support in Grub 0.9X (legacy) and Grub 1.9X (Grub2) |
1,507,018,455,000 |
I am writing 4 * 4KB blocks to a file. It is consistently around 50% slower if I have used fallocate() to pre-allocate the file with 9 blocks, instead of only pre-allocating the 4 blocks. Why?
There seems to be a cut-off point between pre-allocating 8 and 9 blocks. I'm also wondering why the 1st and 2nd block writes are consistently slower.
This test is boiled down from some file copy code I'm playing with. Inspired by this question about dd, I am using O_DSYNC writes so that I can measure the real progress of the disk writes. (The full idea was to start copying a small block to measure minimum latency, then adaptively increase block size to improve throughput).
I am testing Fedora 28, on a laptop with a spinning hard disk drive. It was upgraded from an earlier Fedora, so the filesystem is not brand-new. I don't think I've been fiddling with the filesystem defaults.
Kernel: 4.17.19-200.fc28.x86_64
Filesystem: ext4, on LVM.
Mount options: rw,relatime,seclabel
Fields from tune2fs -l
Default mount options: user_xattr acl
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Block size: 4096
Free blocks: 7866091
Timings from strace -s3 -T test-program.py:
openat(AT_FDCWD, "out.tmp", O_WRONLY|O_CREAT|O_TRUNC|O_DSYNC|O_CLOEXEC, 0777) = 3 <0.000048>
write(3, "\0\0\0"..., 4096) = 4096 <0.036378>
write(3, "\0\0\0"..., 4096) = 4096 <0.033380>
write(3, "\0\0\0"..., 4096) = 4096 <0.033359>
write(3, "\0\0\0"..., 4096) = 4096 <0.033399>
close(3) = 0 <0.000033>
openat(AT_FDCWD, "out.tmp", O_WRONLY|O_CREAT|O_TRUNC|O_DSYNC|O_CLOEXEC, 0777) = 3 <0.000110>
fallocate(3, 0, 0, 16384) = 0 <0.016467>
fsync(3) = 0 <0.000201>
write(3, "\0\0\0"..., 4096) = 4096 <0.033062>
write(3, "\0\0\0"..., 4096) = 4096 <0.013806>
write(3, "\0\0\0"..., 4096) = 4096 <0.008324>
write(3, "\0\0\0"..., 4096) = 4096 <0.008346>
close(3) = 0 <0.000025>
openat(AT_FDCWD, "out.tmp", O_WRONLY|O_CREAT|O_TRUNC|O_DSYNC|O_CLOEXEC, 0777) = 3 <0.000070>
fallocate(3, 0, 0, 32768) = 0 <0.019096>
fsync(3) = 0 <0.000311>
write(3, "\0\0\0"..., 4096) = 4096 <0.032882>
write(3, "\0\0\0"..., 4096) = 4096 <0.010824>
write(3, "\0\0\0"..., 4096) = 4096 <0.008188>
write(3, "\0\0\0"..., 4096) = 4096 <0.008266>
close(3) = 0 <0.000012>
openat(AT_FDCWD, "out.tmp", O_WRONLY|O_CREAT|O_TRUNC|O_DSYNC|O_CLOEXEC, 0777) = 3 <0.000050>
fallocate(3, 0, 0, 36864) = 0 <0.022417>
fsync(3) = 0 <0.000260>
write(3, "\0\0\0"..., 4096) = 4096 <0.032953>
write(3, "\0\0\0"..., 4096) = 4096 <0.033265>
write(3, "\0\0\0"..., 4096) = 4096 <0.033317>
write(3, "\0\0\0"..., 4096) = 4096 <0.033237>
close(3) = 0 <0.000019>
test-program.py:
#! /usr/bin/python3
import os
# Required third party module,
# install with "pip3 install --user fallocate".
from fallocate import fallocate
block = b'\0' * 4096
for alloc in [0, 4, 8, 9]:
# Open file for writing, with implicit fdatasync().
fd = os.open("out.tmp", os.O_WRONLY | os.O_DSYNC |
os.O_CREAT | os.O_TRUNC)
# Try to pre-allocate space
if alloc:
fallocate(fd, 0, alloc * 4096)
os.write(fd, block)
os.write(fd, block)
os.write(fd, block)
os.write(fd, block)
os.close(fd)
|
The reason for the difference between 8 and 9 4KB blocks is because ext4 has a heuristic when converting an unallocated extent created by fallocate() to an allocated extent. For unallocated extents 32KB or less, it just fills the whole extent with zeroes and rewrites the whole thing, while larger extents are split into two or three smaller extents and written out.
In the 8-block case, the whole 32KB extent is converted to a normal extent, the first 16KB is written with your data and the remainder is zero-filled and written out. In the 9-block case, the 36KB extent is split (because it is over 32KB), and you are left with a 16KB extent for your data and a 20KB unwritten extent.
Strictly speaking, the 20KB unwritten extent should also just be zero filled and written out, but I suspect it doesn't do that. However, that would just change the break-even point a bit (to 16KB+32KB = 12 blocks in your case), but wouldn't change the underlying behavior.
You could use filefrag -v out.tmp after the first write to see the block allocation layout on disk.
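Incidentally, the third-party module used in the test script below isn't strictly needed: the standard library exposes the same call. A minimal sketch of the 9-block preallocation from the slow case (the temporary file name is arbitrary):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
try:
    # Preallocate 9 x 4 KB, as in the question's slow case.
    os.posix_fallocate(fd, 0, 9 * 4096)
    # posix_fallocate extends the file size to cover the allocation:
    print(os.stat(path).st_size)  # 36864
finally:
    os.close(fd)
    os.unlink(path)
```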
That said, you could just avoid fallocate and O_DSYNC completely and let the filesystem do its job to write out the data as quickly as possible instead of making the file layout worse than it needs to be....
| Why is it slower to write the same data to a *larger* pre-allocated file? |
1,507,018,455,000 |
We have beaglebone black based custom board with 256MB RAM and 4GB eMMC.
Board runs Linux kernel 4.9
We are running into a situation where we create a file in tmpfs and then, after validation, have to move it to the ext4 partition of the eMMC. The file is nothing but a certificate.
In some situations we have multiple certs in a directory, so we have to move the whole directory from tmpfs to the ext4 partition on the eMMC.
One of the problems we are worried about is the atomicity of the mv (move) operation.
As per rename system call Linux man page renaming file is an atomic operation.
http://man7.org/linux/man-pages/man2/rename.2.html
However, we are not sure whether atomicity still holds when the rename operation has to move files between two filesystems. So the question is:
Is moving file from tmpfs to ext4 atomic?
One possible solution is to keep the files in a different folder on the same partition (and thus the same filesystem) and rename them using mv.
For a directory, we use the following renaming approach:
SRC_dir --> TMP_DEST_dir
DEST_dir --> BAK_DEST_dir
TMP_DEST_dir --> DEST_dir
delete BAK_DEST_dir
Any suggestions for alternatives?
EDIT
After I got a reply, I tried the following test code on the board:
#include <stdio.h>
#include <stdlib.h>  /* for system() */
#include <errno.h>
int main()
{
int retcode = 0;
system("touch /tmp/rename_test");
retcode = rename("/tmp/rename_test", "/home/fs_rename_test");
if ( retcode < 0) {
printf("errno : %d\n",errno );
perror("Error occurred while renaming file");
return 1;
}
return 0;
}
This returned the following output, confirming that rename doesn't work across filesystems:
errno : 18
Error occurred while renaming file: Invalid cross-device link
|
Is moving file from tmpfs to ext4 atomic?
No. Renames as such only work within a filesystem. The manual page for rename(2) explicitly mentions the error that is returned if trying to rename across mount points:
EXDEV oldpath and newpath are not on the same mounted filesystem.
Moves across file systems need to be done as a combination of a copy and a delete. mv will do this for you if the rename() doesn't work, but it will not be atomic in that case.
The simple way to work around that would indeed be to first copy the file to a temporary location on the same filesystem. In general, it's simplest to place the temporary file in the same directory as the final destination, since that's the only place that's guaranteed to be on the same filesystem.
Of course that requires that any process working on the files there will have some logic to ignore the temporary based on its name.
Roughly, something like this should work for one file:
cp /src/filename /dst/filename.tmp &&
mv /dst/filename.tmp /dst/filename &&
rm /src/filename
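In code, the single-file copy-then-rename recipe looks roughly like this (a sketch, not a drop-in mv replacement; the temporary-name prefix is an arbitrary choice your consumers would need to ignore):

```python
import os
import shutil
import tempfile

def move_in(src, dst):
    """Copy src into dst's directory, fsync, then atomically rename over dst."""
    dst_dir = os.path.dirname(dst) or "."
    # Temporary file in the destination directory: guaranteed same filesystem.
    fd, tmp = tempfile.mkstemp(dir=dst_dir, prefix=".incoming-")
    try:
        with os.fdopen(fd, "wb") as out, open(src, "rb") as inp:
            shutil.copyfileobj(inp, out)
            out.flush()
            os.fsync(out.fileno())   # data durable before the rename
        os.rename(tmp, dst)          # atomic: same filesystem
    except BaseException:
        os.unlink(tmp)
        raise
    # For full durability one would also fsync the destination directory here.
    os.unlink(src)                   # only now drop the tmpfs copy
```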
Note that the process you describe for a directory is essentially this:
cp -r /src/dir /dst/dir.tmp &&
mv /dst/dir /dst/dir.bak &&
mv /dst/dir.tmp /dst/dir &&
rm -r /dst/dir.bak
Which is not bad, but is not atomic. There's a moment of time between the two runs of mv (or calls to rename()), when /dst/dir does not exist. That could be worked around by accessing the directory through a symlink, since the link can be atomically replaced with a rename.
| is there a way to atomically move a file or directory from tmpfs to an ext4 partition on eMMC
1,507,018,455,000 |
On my Linux system, if I display the current date "t1", touch a file "f", and then display the modification time "t2" of "f", I would expect t1 < t2.
But that's not what I always get when I execute this on my system:
date +'%Y-%m-%d %H:%M:%S.%N'; \
touch f; \
stat -c %y f
Example output:
2017-09-18 21:47:48.855229801
2017-09-18 21:47:48.853831698 +0200
Notice the second timestamp (stat) is before the first one (date): 855229801 > 853831698
My fs is ext4, but I also tried with a file on tmpfs, same effect.
Why is this the case?
Thanks
Some info about the setup
% which date
/usr/bin/date
% which touch
/usr/bin/touch
% pacman -Qo /usr/bin/date /usr/bin/touch
/usr/bin/date is owned by coreutils 8.28-1
/usr/bin/touch is owned by coreutils 8.28-1
% uname -a
Linux machine 4.12.12-1-ARCH #1 SMP PREEMPT Sun Sep 10 09:41:14 CEST 2017 x86_64 GNU/Linux
% findmnt
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda1 ext4 rw,relatime,data=ordered
└─/tmp tmpfs tmpfs rw,nosuid,nodev
|
Per https://stackoverflow.com/questions/14392975/timestamp-accuracy-on-ext4-sub-millsecond :
ext4 filesystem code calls current_fs_time() which is the current cached kernel time truncated to the time granularity specified in the file system's superblock which for ext4 is 1ns.
The current time within the Linux kernel is cached, and generally only updated on a timer interrupt. So if your timer interrupt is running at 10 milliseconds, the cached time will only be updated once every 10 milliseconds. When an update does occur, the accuracy of the resulting time will depend on the clock source available on your hardware.
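You can read the full nanosecond timestamp field from Python to observe this granularity yourself (a quick illustration, nothing ext4-specific):

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)
st = os.stat(path)
# st_mtime_ns exposes the full nanosecond field the filesystem stores;
# successive timestamps only advance when the kernel's cached time does.
print(st.st_mtime_ns)
os.unlink(path)
```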
| linux: touch date precision |
1,507,018,455,000 |
After a crash, I've got an Ext4 filesystem (on an LVM LV) that gives the following error when running fsck.ext4 -nf:
e2fsck 1.42.12 (29-Aug-2014)
Corruption found in superblock. (blocks_count = 0).
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
I've run dumpe2fs to find the other copies of the superblock, but no matter which of them I add after fsck.ext4s -b option, I get the exact same output.
Moreover, dumpe2fs sees the correct block count (Block count: 4294967296, a 16TB filesystem). Here's the (truncated) output:
Filesystem volume name: <none>
Last mounted on: /storage
Filesystem UUID: fef00ffc-5341-4158-9279-88cad6cc211f
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 268435456
Block count: 4294967296
Reserved block count: 42949672
Free blocks: 534754162
Free inodes: 268391425
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 2048
Inode blocks per group: 128
Flex block group size: 16
Filesystem created: Wed Jan 16 11:07:07 2013
Last mount time: Sun Feb 1 21:21:31 2015
Last write time: Sun Feb 1 21:21:45 2015
Mount count: 18
Maximum mount count: -1
Last checked: Wed Jan 16 11:07:07 2013
Check interval: 0 (<none>)
Lifetime writes: 14 TB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 28
Desired extra isize: 28
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: c7ec9ee0-002b-431d-a37c-33db922c6057
Journal backup: inode blocks
Journal features: journal_incompat_revoke journal_64bit
Journal size: 128M
Journal length: 32768
Journal sequence: 0x0000e3fe
Journal start: 0
Group 0: (Blocks 0-32767) [ITABLE_ZEROED]
Checksum 0x4623, unused inodes 2034
Primary superblock at 0, Group descriptors at 1-2048
Block bitmap at 2049 (+2049), Inode bitmap at 2065 (+2065)
Inode table at 2081-2208 (+2081)
28637 free blocks, 2036 free inodes, 1 directories, 2034 unused inodes
Free blocks: 4130-4133, 4135-32767
Free inodes: 11, 14-2048
Group 1: (Blocks 32768-65535) [INODE_UNINIT, ITABLE_ZEROED]
Checksum 0xfd95, unused inodes 2048
Backup superblock at 32768, Group descriptors at 32769-34816
Block bitmap at 2050 (bg #0 + 2050), Inode bitmap at 2066 (bg #0 + 2066)
Inode table at 2209-2336 (bg #0 + 2209)
1522 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes
Free blocks: 34817, 35343-36863
Free inodes: 2049-4096
Group 2: (Blocks 65536-98303) [INODE_UNINIT, ITABLE_ZEROED]
Checksum 0x95d0, unused inodes 2048
Block bitmap at 2051 (bg #0 + 2051), Inode bitmap at 2067 (bg #0 + 2067)
Inode table at 2337-2464 (bg #0 + 2337)
115 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes
Free blocks: 85901-86015
Free inodes: 4097-6144
Group 3: (Blocks 98304-131071) [INODE_UNINIT, ITABLE_ZEROED]
Checksum 0x6e40, unused inodes 2048
Backup superblock at 98304, Group descriptors at 98305-100352
Block bitmap at 2052 (bg #0 + 2052), Inode bitmap at 2068 (bg #0 + 2068)
Inode table at 2465-2592 (bg #0 + 2465)
1505 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes
Free blocks: 100895-102399
Free inodes: 6145-8192
Group 4: (Blocks 131072-163839) [INODE_UNINIT, ITABLE_ZEROED]
Checksum 0x4788, unused inodes 2048
Block bitmap at 2053 (bg #0 + 2053), Inode bitmap at 2069 (bg #0 + 2069)
Inode table at 2593-2720 (bg #0 + 2593)
1808 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes
Free blocks: 141552-143359
Free inodes: 8193-10240
Group 5: (Blocks 163840-196607) [INODE_UNINIT, ITABLE_ZEROED]
Checksum 0x0d39, unused inodes 2048
Backup superblock at 163840, Group descriptors at 163841-165888
Block bitmap at 2054 (bg #0 + 2054), Inode bitmap at 2070 (bg #0 + 2070)
Inode table at 2721-2848 (bg #0 + 2721)
2023 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes
Free blocks: 165913-167935
Free inodes: 10241-12288
Group 6: (Blocks 196608-229375) [INODE_UNINIT, ITABLE_ZEROED]
Checksum 0xc119, unused inodes 2048
Block bitmap at 2055 (bg #0 + 2055), Inode bitmap at 2071 (bg #0 + 2071)
Inode table at 2849-2976 (bg #0 + 2849)
1755 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes
Free blocks: 198541-198655, 223640-225279
Free inodes: 12289-14336
Group 7: (Blocks 229376-262143) [INODE_UNINIT, ITABLE_ZEROED]
Checksum 0xf858, unused inodes 2048
Backup superblock at 229376, Group descriptors at 229377-231424
Block bitmap at 2056 (bg #0 + 2056), Inode bitmap at 2072 (bg #0 + 2072)
Inode table at 2977-3104 (bg #0 + 2977)
1796 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes
Free blocks: 231676-233471
Free inodes: 14337-16384
Group 8: (Blocks 262144-294911) [INODE_UNINIT, ITABLE_ZEROED]
Checksum 0x6a75, unused inodes 2048
Block bitmap at 2057 (bg #0 + 2057), Inode bitmap at 2073 (bg #0 + 2073)
Inode table at 3105-3232 (bg #0 + 3105)
1700 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes
Free blocks: 278876-280575
Free inodes: 16385-18432
Group 9: (Blocks 294912-327679) [INODE_UNINIT, ITABLE_ZEROED]
Checksum 0x3840, unused inodes 2048
Backup superblock at 294912, Group descriptors at 294913-296960
Block bitmap at 2058 (bg #0 + 2058), Inode bitmap at 2074 (bg #0 + 2074)
Inode table at 3233-3360 (bg #0 + 3233)
1986 free blocks, 2048 free inodes, 0 directories, 2048 unused inodes
Free blocks: 297022-299007
Free inodes: 18433-20480
... truncated ...
The strange thing is that I can mount the filesystem without any (apparent) problems (although I haven't yet dared to write to it).
Any suggestions/pointers/ideas for a solution that allows me to finish the fsck?
|
Your device has exactly 4294967296 blocks, which is 2^32, so this smells like a variable-size problem... If you’re running a 32-bit e2fsck, that could explain the error message; the error you’re seeing comes from e2fsck/super.c:
check_super_value(ctx, "blocks_count", ext2fs_blocks_count(sb),
MIN_CHECK, 1, 0);
where check_super_value() is defined as
static void check_super_value(e2fsck_t ctx, const char *descr,
unsigned long value, int flags,
unsigned long min_val, unsigned long max_val)
So on a 32-bit system where unsigned long is four bytes, your blocks_count will end up being 0 and fail the minimum-value check, without it indicating an actual problem with the filesystem.
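The truncation is easy to illustrate with plain arithmetic (this mimics the 4-byte wraparound; it is not e2fsck code):

```python
blocks_count = 2**32            # the filesystem's actual block count
# On a 32-bit build, unsigned long keeps only the low 4 bytes:
truncated = blocks_count & 0xFFFFFFFF
print(truncated)                # 0, which fails the min_val=1 check
```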
The reason you’d only see this after a crash is that fsck is only run after a crash or if the filesystem hasn’t been checked for too long.
The answer to your question would then be, if you are running a 32-bit e2fsck, to try a 64-bit version...
| ext4 corruption found in superblock, but filesystem can be mounted |
1,507,018,455,000 |
How should I mount my ext4 partition in fstab so that it appears in Thunar's sidebar under the name 'Schijf-2'?
I am running Linux Mint 13 Xfce and this has been a headache for the last couple of days.
This is the output from blkid, showing the partition's UUID:
/dev/sda2: UUID="913aedd1-9c06-46fa-a26e-32bf5ef0a150" TYPE="ext4"
How should I enter this in fstab so that it mounts to this directory:
/media/Schijf-2/
I have tried so many things, I have read so many stackexchange questions, but I still have not succeeded.
Edit:
without an entry in fstab, the drive is shown as Schijf-2 in the file manager now.
But this partition is not automatically mounted at startup.
Which causes links to be not working, Dropbox asking for a new location etc.
And to have this automatically mounted, I need an entry in fstab. Right?
Or is there an other place where I can set to mount it automatically at startup/login?
edit 2:
After adding it again to fstab as @jasonwryan suggested, the partition shows up in Thunar when I am logged in into my own account. After logging in into my dad's account, it does not show up. Which again confirms my thoughts that somehow my dad's account has got messed up.
Which files or directory from my account should I copy paste to my dad's account to have the same settings as my own account?
I already tried removing my dad's account and adding again, but that got me into totally different trouble. (but this is a different question and has nothing to do with mounting my /dev/sda2 in fstab).
|
By default, if your fstab entry is:
UUID=913aedd1... /media/Schijf-2 ext4 rw,relatime 0 2
your partition will not be shown as Schijf-2 in your sidebar, unless it is labelled Schijf-2. You have two options:
Leave the fstab entry as is and label your partition (e.g. if sda2 is your partition):
e2label /dev/sda2 Schijf-2
Leave the partition as is and add x-gvfs-name=Schijf-21 to your mount options in fstab:
UUID=913aedd1 /media/Schijf-2 ext4 rw,relatime,x-gvfs-name=Schijf-2 0 2
1 This works even if the partition has a different label and you want it to be shown as Schijf-2.
| how should I mount my ext4 partition in fstab |
1,507,018,455,000 |
This is mostly a theoretical question without real practical use.
As you may know, filenames are stored in the directory's data blocks (its directory entries). That means the more files we have, and the longer their names are, the more space the directory uses.
Unfortunately, if files are deleted from the directory, the space used by the directory is not freed; it stays allocated.
$ mkdir test ; cd test
# next command will take a while ; for me it was about 6 minutes
$ for ((i=1;i<103045;i++)); do touch very_long_name_to_use_more_space_$i ; done
$ ls -lhd .
drwxr-xr-x 2 user user 8.6M Nov 9 22:36 .
$ find . -type f -delete
$ ls -l
total 0
$ ls -lhd .
drwxr-xr-x 2 user user 8.6M Nov 9 22:39 .
Why isn't the space used by the directory reclaimed after the files are removed?
Is there a way to free the space without recreating the directory?
|
You can optimize the directory using fsck.ext4 -D on an unmounted filesystem:
-D Optimize directories in filesystem. This option causes e2fsck
to try to optimize all directories, either by reindexing them if
the filesystem supports directory indexing, or by sorting and
compressing directories for smaller directories, or for filesys‐
tems using traditional linear directories.
The option is also valid on ext3 and ext2.
Why it isn't done on the fly, I can't say. Maybe for performance reasons?
| How to update directory size after file removal? |
1,507,018,455,000 |
Where does ext4 store directory sizes? Are they stored in the directory inode?
For example, when I run du -h, it returns directory sizes instantly, so I don't believe it calculates them at that time.
I'm using ext4 on Linux.
|
Using strace would seem to indicate that the file sizes are indeed calculated by querying the files within the directory.
Example
Say I fill a directory with 3 1MB files.
$ mkdir adir
$ fallocate -l 1M adir/afile1.txt
$ fallocate -l 1M adir/afile2.txt
$ fallocate -l 1M adir/afile3.txt
Now when we trace the du -h command:
$ strace -s 2000 -o du.log du -h adir/
3.1M adir/
Looking at the resulting strace log file du.log:
...
newfstatat(AT_FDCWD, "adir/", {st_mode=S_IFDIR|0775, st_size=4096, ...}, AT_SYMLINK_NOFOLLOW) = 0
fcntl(3, F_DUPFD, 3) = 4
fcntl(4, F_GETFD) = 0
fcntl(4, F_SETFD, FD_CLOEXEC) = 0
getdents(3, /* 5 entries */, 32768) = 144
getdents(3, /* 0 entries */, 32768) = 0
close(3) = 0
newfstatat(4, "afile2.txt", {st_mode=S_IFREG|0644, st_size=1048576, ...}, AT_SYMLINK_NOFOLLOW) = 0
newfstatat(4, "afile3.txt", {st_mode=S_IFREG|0644, st_size=1048576, ...}, AT_SYMLINK_NOFOLLOW) = 0
newfstatat(4, "afile1.txt", {st_mode=S_IFREG|0644, st_size=1048576, ...}, AT_SYMLINK_NOFOLLOW) = 0
brk(0) = 0x231a000
...
Notice the newfstatat system calls? These are getting the size of each file in turn.
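The same per-entry loop is easy to mirror in Python. Note that du itself reports allocated blocks (st_blocks) rather than st_size unless given --apparent-size, so this sketch sums apparent sizes:

```python
import os

def apparent_bytes(path):
    # One stat per directory entry, just like du's newfstatat calls above.
    with os.scandir(path) as entries:
        return sum(e.stat(follow_symlinks=False).st_size for e in entries)
```

For the adir example this returns 3 * 1048576 bytes (the directory's own size is left out for simplicity).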
Additional Background
If you're interested here's a bit more on the subject.
This behavior has nothing to do with EXT4. This is just how filesystems work in Unix.
The stat command reports only the size of the filesystem object itself (directory or file), not the total size of its contents.
$ stat adir/
File: ‘adir/’
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fd02h/64770d Inode: 11539929 Links: 2
Access: (0775/drwxrwxr-x) Uid: ( 1000/ saml) Gid: ( 1000/ saml)
Context: unconfined_u:object_r:user_home_t:s0
Access: 2014-04-15 22:29:25.289639888 -0400
Modify: 2014-04-15 22:29:44.977638542 -0400
Change: 2014-04-15 22:29:44.977638542 -0400
Birth: -
Notice it's 4096 bytes. That's the actual size of the directory itself, not what it contains.
References
Some new system calls
| Where does ext4 store directory sizes? |
1,507,018,455,000 |
I see this in my dmesg log
EXT4-fs (md1): re-mounted. Opts: commit=0
EXT4-fs (md2): re-mounted. Opts: commit=0
EXT4-fs (md3): re-mounted. Opts: commit=0
I think that means that dealloc is disabled? does mdadm not support dealloc?
|
mdadm supports delayed allocation (the ext4 option is spelled delalloc); RAID has no bearing on it.
commit=sec is the interval at which the filesystem syncs its data and metadata. Setting it to 0 has the same effect as using the default value of 5 seconds.
So commit=0 says nothing about delayed allocation, and there is no link between mdadm and commit=0.
| what is commit=0 for ext4? does mdadm not support it? |
1,507,018,455,000 |
I have an SSD with 2 partitions formatted with ext4. On the second partition, I enabled discard as a default option at the filesystem level with this command:
$ sudo tune2fs -o discard /dev/sda2
tune2fs 1.45.5 (07-Jan-2020)
$ sudo tune2fs -l /dev/sda2 | grep 'mount options'
Default mount options: user_xattr acl discard
I also added the discard option to both partitions on /etc/fstab:
/dev/sda2 / ext4 rw,relatime,discard,stripe=8191 0 1
/dev/sda1 /boot ext4 rw,relatime,discard,stripe=8191 0 2
However, when I look into the output of mount, only the one without the discard fs-level default mount option seems to have it enabled:
$ mount | grep '^/dev'
/dev/sda2 on / type ext4 (rw,relatime,stripe=8191)
/dev/sda1 on /boot type ext4 (rw,relatime,discard,stripe=8191)
I notice that the other options mentioned by tune2fs are also missing from the output.
So, can I trust that discard is enabled in the current mount of /dev/sda2 despite mount not mentioning it? Is there some way to verify it? I mean, even tune2fs's output isn't about the current mount.
EDIT: I should mention that I also tried mounting with mount -o discard in the command line and it still doesn't show in mount output:
$ sudo tune2fs -o discard /dev/sda1
tune2fs 1.45.5 (07-Jan-2020)
$ sudo umount /boot
$ sudo mount -o discard /boot
$ mount | grep sda1
/dev/sda1 on /boot type ext4 (rw,relatime,stripe=8191)
|
/proc/mounts and mount don’t show options that are part of the defaults, including defaults set in the filesystem’s superblock using tune2fs, so unfortunately this is normal.
To determine whether discard is enabled, you need to check the defaults, check the mount options, and combine the two sets of information.
| Is it normal for tune2fs default mount options to not appear in mount output? |
1,507,018,455,000 |
If I create a small filesystem, and grow it when I need to, will the number of inodes increase proportionally?
I want to use Docker with the overlay storage driver. This can be very inode-hungry because it uses hardlinks to merge lower layers. (The original aufs driver effectively stacked union mounts, which didn't require extra inodes, but caused extra directory lookups at runtime.) EDIT: hardlinks don't use extra inodes themselves, so I can only assume the issue is the extra directories that have to be created.
(Closed question here. I believe the answer is incorrect; however, it says the question is closed and that I need to create a new one.)
|
Yes. See man mkfs.ext4:
-i bytes-per-inode
Specify the bytes/inode ratio. mke2fs creates an inode for
every bytes-per-inode bytes of space on the disk. The larger
the bytes-per-inode ratio, the fewer inodes will be created.
This value generally shouldn't be smaller than the blocksize of
the filesystem, since in that case more inodes would be made
than can ever be used. Be warned that it is not possible to
change this ratio on a filesystem after it is created, so be
careful deciding the correct value for this parameter. Note
that resizing a filesystem changes the number of inodes to maintain this ratio.
I verified this experimentally, resizing from 1G to 10G and looking at tune2fs /dev/X | grep Inode. The inode count went from 64K to about 640K.
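The ratio arithmetic behind that experiment, assuming the common 16384-byte default from /etc/mke2fs.conf (real counts are rounded to whole block groups, so they come out slightly different):

```python
def inode_count(fs_bytes, bytes_per_inode=16384):
    # mke2fs allocates one inode per bytes-per-inode bytes, and
    # resize2fs keeps the ratio as block groups are added.
    return fs_bytes // bytes_per_inode

print(inode_count(1 * 2**30))    # 65536  -> the 64K observed at 1G
print(inode_count(10 * 2**30))   # 655360 -> the ~640K observed at 10G
```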
I believe it's a natural consequence of Unix filesystems which use "block groups". The partition is divided into block groups, each of which has their own inode table. When you extend the filesystem, you're adding new block groups.
| If I grow an ext4 partition, will it increase the number of inodes available? |
1,507,018,455,000 |
I bought a new 2.5-inch external hard drive of 5TB in size from Seagate.
On my Linux Mint 21.1, I now need to format the partition I created with gdisk:
Model: Expansion HDD
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 52CB8F84-EFAF-4EC9-B65D-6F8541A65F53
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 9767541133
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 9767541133 4.5 TiB 8300 Seagate_5TB_Ext4
visible now with fdisk as:
Disk /dev/sdb: 4.55 TiB, 5000981077504 bytes, 9767541167 sectors
Disk model: Expansion HDD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 52CB8F84-EFAF-4EC9-B65D-6F8541A65F53
Device Start End Sectors Size Type
/dev/sdb1 2048 9767541133 9767539086 4.5T Linux filesystem
and I want to use Ext4 as a file system.
The question is, is there some data checksumming in place by default, or do I need to use some option like explicitly:
mkfs.ext4 -O metadata_csum,64bit /dev/path/to/disk
as is stated on Ext4 Metadata Checksums Linux Kernel page?
Thank you.
Note that I, thus far, used these command-line options for Ext4:
mkfs.ext4 -L Seagate_5TB_Ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0 /dev/sdX
|
Defaults change over time, and might also depend on your distro.
You can check it yourself with tune2fs -l. Format in different ways and compare tune2fs output.
For testing only, you can also create sparse files of identical size. This avoids having to format your existing filesystems for testing.
The size should be similar (or identical) to your intended target device size, as some flags might also depend on size.
truncate -s 1T a.img b.img
Format them with different flags.
mkfs.ext4 a.img
mkfs.ext4 -O metadata_csum,64bit b.img
Compare tune2fs -l output with diff -u:
tune2fs -l a.img > a.img.tune2fs
tune2fs -l b.img > b.img.tune2fs
diff -u a.img.tune2fs b.img.tune2fs
Result:
# diff -U 0 a.img.tune2fs b.img.tune2fs
--- a.img.tune2fs 2023-02-19 14:08:59.338434366 +0100
+++ b.img.tune2fs 2023-02-19 14:09:03.321859687 +0100
@@ -4 +4 @@
-Filesystem UUID: 88952b27-467d-4232-a310-030eaf463d7c
+Filesystem UUID: b6720761-1fd9-45e6-afd4-2ec7fe63cafb
@@ -29 +29 @@
-Filesystem created: Sun Feb 19 14:08:35 2023
+Filesystem created: Sun Feb 19 14:08:39 2023
@@ -31 +31 @@
-Last write time: Sun Feb 19 14:08:35 2023
+Last write time: Sun Feb 19 14:08:39 2023
@@ -34 +34 @@
-Last checked: Sun Feb 19 14:08:35 2023
+Last checked: Sun Feb 19 14:08:39 2023
@@ -45 +45 @@
-Directory Hash Seed: efda347d-032b-4d84-81f0-8e86591be3c4
+Directory Hash Seed: 30a3a1b1-682f-4bcc-87f1-909fd577e2fa
@@ -48,2 +48,2 @@
-Checksum: 0x2fbbf9c2
-Checksum seed: 0x683d2fee
+Checksum: 0x40316d8f
+Checksum seed: 0x58dc22cf
In this case, there was no difference other than UUID, Timestamp, Hash/Checksums. Those are always different, so that's expected. So in my case on my system, specifying -O metadata_csum,64bit seems unnecessary.
Adding -m 0 -E lazy_itable_init=0,lazy_journal_init=0 results in:
@@ -15 +15 @@
-Reserved block count: 13421772
+Reserved block count: 0
Using mkfs.ext2 instead of mkfs.ext4 (just to illustrate that it does show when there are different flags active):
@@ -7 +7 @@
-Filesystem features: has_journal ext_attr resize_inode dir_index orphan_file filetype extent 64bit flex_bg metadata_csum_seed sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
+Filesystem features: ext_attr resize_inode dir_index filetype sparse_super large_file
(and many other changes)
So this is an example how to use tune2fs to check what kind of filesystem mke2fs actually made for you.
For final confirmation, you'll also have to check on the actual device, rather than test files as shown above (mkfs might pick some settings depending on device type).
If it turns out that you picked the wrong flags at mkfs time, some of them can also be changed on the fly (using either tune2fs or resize2fs) without needing to re-format.
| Checksumming on Ext4 (crc32c-intel) while formatting (5TB external HDD) |
1,507,018,455,000 |
I am on a test VM where I am trying to convert a second disk to btrfs.
The conversion fails with the error missing data block for bytenr 1048576 (see below).
I couldn't find any information about the error. What can I do to fix this?
$ fsck -f /dev/sdb1
fsck from util-linux 2.35.2
e2fsck 1.45.6 (20-Mar-2020)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/sdb1: 150510/4194304 files (0.5% non-contiguous), 2726652/16777216 blocks
$ btrfs-convert /dev/sdb1
create btrfs filesystem:
blocksize: 4096
nodesize: 16384
features: extref, skinny-metadata (default)
checksum: crc32c
creating ext2 image file
ERROR: missing data block for bytenr 1048576
ERROR: failed to create ext2_saved/image: -2
WARNING: an error occurred during conversion, filesystem is partially created but not finalized and not mountable
|
It was a bug
Now we have pinned down the bug, it's a bit overflow for multiplying
unsigned int.
Also see:
https://github.com/kdave/btrfs-progs/commit/c9c4eb1f3fd343512d50b075b40bba656cbd02cb
https://www.spinics.net/lists/linux-btrfs/msg103379.html
As a workaround you can resize your filesystem to something smaller/larger before the conversion.
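The linked commit describes an unsigned 32-bit multiplication overflowing while computing a byte offset on large filesystems. A rough Python illustration of that class of bug (a simulation only, not the actual btrfs-progs code):

```python
# The fix (btrfs-progs commit c9c4eb1) widens to a 64-bit type before the
# multiply; here % (1 << 32) stands in for C's unsigned-int truncation.
U32 = 1 << 32

def buggy_byte_offset(block_nr, block_size=4096):
    return (block_nr * block_size) % U32   # product wraps at 32 bits, as in the bug

def fixed_byte_offset(block_nr, block_size=4096):
    return block_nr * block_size           # widened before multiplying

blk = 1 << 20                              # hypothetical block number (~4 GiB in)
print(buggy_byte_offset(blk))              # 0 -- wrapped to a bogus offset
print(fixed_byte_offset(blk))              # 4294967296
```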
| "missing data block" when converting ext4 to btrfs |
1,507,018,455,000 |
Today I went through my desktop stations running Linux Mint 17.3 Cinnamon and did a file system check of root partitions with Ext4 file systems as follows:
# fsck.ext4 -fn /dev/sdb2
The problem is that on all of the computers I see something similar to this:
e2fsck 1.42.9 (4-Feb-2014)
Warning! /dev/sdb2 is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.
Pass 1: Checking inodes, blocks, and sizes
Deleted inode 524292 has zero dtime. Fix? no
Inodes that were part of a corrupted orphan linked list found. Fix? no
Inode 524293 was part of the orphaned inode list. IGNORED.
Inode 524294 was part of the orphaned inode list. IGNORED.
Inode 524299 was part of the orphaned inode list. IGNORED.
Inode 524300 was part of the orphaned inode list. IGNORED.
Inode 524301 was part of the orphaned inode list. IGNORED.
Inode 524302 was part of the orphaned inode list. IGNORED.
Inode 524310 was part of the orphaned inode list. IGNORED.
Inode 524321 was part of the orphaned inode list. IGNORED.
Inode 524322 was part of the orphaned inode list. IGNORED.
Inode 524325 was part of the orphaned inode list. IGNORED.
Inode 2492565 was part of the orphaned inode list. IGNORED.
Inode 2622677 was part of the orphaned inode list. IGNORED.
Inode 2622678 was part of the orphaned inode list. IGNORED.
Inode 2883748 was part of the orphaned inode list. IGNORED.
Inode 2884069 was part of the orphaned inode list. IGNORED.
Inode 2885175 was part of the orphaned inode list. IGNORED.
Pass 2: Checking directory structure
Entry 'Default_keyring.keyring' in /home/vlastimil/.local/share/keyrings (2495478) has deleted/unused inode 2498649. Clear? no
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Unattached inode 2491790
Connect to /lost+found? no
Pass 5: Checking group summary information
Block bitmap differences: -(34281--34303) -11650577 -(11650579--11650580) -11650591 -(11650594--11650595) -(13270059--13270073) -(13272582--13272583) -(20542474--20542475) +(26022912--26023347) -(26029568--26030003)
Fix? no
Free blocks count wrong (14476802, counted=14476694).
Fix? no
Inode bitmap differences: -(524292--524294) -(524299--524302) -524310 -(524321--524322) -524325 +2491790 -2492565 -2498649 -(2622677--2622678) -2883748 -2884069 -2885175
Fix? no
Free inodes count wrong (7371936, counted=7371916).
Fix? no
/dev/sdb2: ********** WARNING: Filesystem still has errors **********
/dev/sdb2: 443232/7815168 files (0.1% non-contiguous), 16757502/31234304 blocks
What I have tried:
# touch /forcefsck
This results in a check lasting 2-3 seconds on startup, obviously not repairing anything.
That is most probably because my root file system is marked clean.
# fsck.ext4 -n /dev/sdb2
e2fsck 1.42.9 (4-Feb-2014)
Warning! /dev/sdb2 is mounted.
Warning: skipping journal recovery because doing a read-only filesystem check.
/dev/sdb2: clean, 443232/7815168 files, 16757502/31234304 blocks
As I can't find almost anything about file system checks on boot other than sudo touch /forcefsck, I tried these steps:
echo u > /proc/sysrq-trigger
umount /dev/sdb2
fsck -fy /dev/sdb2
This showed that the errors had been repaired; to make sure, I ran fsck again and it reported no errors. BUT, the errors are back once I reboot. I am confused right now. Please don't suggest things like "create a flash drive and boot from it and ...". I want a solution that works on or before reboot, without booting from a flash drive. Thank you.
|
First off, fsck'ing a mounted filesystem is expected to produce errors. The filesystem isn't consistent because the journal hasn't been replayed (nor has it been cleanly unmounted), and you can't replay the journal because that (like any other change) would corrupt the filesystem. If you're using LVM, you could take a snapshot and fsck the snapshot.
If you're on an SSD, fsck can be pretty fast. You could also try using tune2fs -C to set the mount count higher than the maximum (which you can get from dumpe2fs -h).
touch /forcefsck should work.
Editor's notes:
touch /forcefsck does not work.
Please refer to this answer for crystal clear evidence and a solution.
| How to fsck root file system before boot or on reboot [duplicate] |
1,507,018,455,000 |
I know there is the --bind option to mount for handling such cases. But an ext4 filesystem can be directly mounted at different mount points, without --bind. So, I wonder whether it is safe to mount an ext4 filesystem at different mount points. And I found that ext4 supports a feature called "mmp" (multiple mount protection); is it meant to handle this case?
|
Yes, it's perfectly safe. It's mentioned in the manpage for mount().
Since Linux 2.4 a single filesystem can be visible at multiple mount
points, and multiple mounts can be stacked on the same mount point.
I think mmp is something else. Something about mounting a block device which is shared between multiple computers.
So it hasn't always been possible. IIRC mount used to check for this and give you a nice error message, because that's very easy to implement, and if it did the wrong thing, you could very easily cause massive data loss. It sounds like the new system was implemented in tandem with bind mounts:
MS_BIND (Linux 2.4 onward)
Perform a bind mount
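For example, a hypothetical /etc/fstab (UUID and paths are placeholders) that mounts the same ext4 filesystem at two places with plain mounts, no --bind needed:

```
# hypothetical /etc/fstab: the same ext4 filesystem, two plain mounts
# (second entry uses fsck pass 0 so the fs is only checked once)
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /srv/data     ext4  defaults  0  2
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /export/data  ext4  defaults  0  0
```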
| Is it safe to mount same ext4 filesystem at different mount points? |
1,507,018,455,000 |
Is there a way to set atime writes to be cached for a very long time? I need atime (that is to say, relatime won't cut it), but I don't want it to affect performance so much. Loss of atimes (and atimes only) is acceptable in some cases (e.g. power failure).
|
I found lazytime, a mount option for ext4, that solves this satisfactorily for me.
https://lwn.net/Articles/620086/
This mode causes atime, mtime, and ctime updates to only be made to the in-memory version of the inode. The on-disk times will only get updated (a) when the inode table block for the inode needs to be updated for some non-time-related change involving any inode in the block, (b) if userspace calls fsync(), or (c) when the refcount on an undeleted inode goes to zero (in most cases, when the last file descriptor associated with the inode is closed).
This option is available since kernel 4.0.
As well, it is necessary to override the default of relatime, otherwise you get relatime functionality in addition to the caching functionality of lazytime. To do this, mount with strictatime AND lazytime.
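For example, a hypothetical /etc/fstab entry (UUID and mount point are placeholders) combining the two options:

```
# strictatime overrides the relatime default; lazytime keeps the resulting
# timestamp updates in memory until something else flushes the inode
UUID=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee  /data  ext4  strictatime,lazytime  0  2
```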
| Cache atime writes |
1,507,018,455,000 |
The default reserved blocks percentage for ext filesystems is 5%. On a 4TB data drive this is 200GB which seems excessive to me.
Obviously this can be adjusted with tune2fs:
tune2fs -m <reserved percentage> <device>
however the man page for tune2fs states that one of the reasons for these reserved blocks is to avoid fragmentation.
So given the following (I have tried to be specific to avoid wildly varying opinions):
~4TB HDD
Used to store large files (all >500mb)
Once full, Very few writes (maybe once a month 1-5 files are replaced)
Data only (no OS or applications running from the drive)
Moderate reads (approx 20tb a week and the whole volume read every 3 months)
HDD wear is of concern and killing a HDD for the sake of saving 20GB is not the desired outcome (Is this even a concern?)
What is the maximum percentage that the drive can be filled to without causing (noticeable from a performance and/or hdd wear perspective) fragmentation?
Are there any other concerns with filling a large data hdd to a high percentage and/or setting the reserved blocks count to say 0.1%?
|
The biggest problem with fragmentation is free space fragmentation, which means that when your filesystem gets full and there are no longer big chunks of free space left, your filesystem performance falls off a cliff. Each new file can allocate only small chunks of space at a time, so is very fragmented. Even when other files are deleted, the previously written files are splattered all over the disk, causing new files to be fragmented again.
In the usage case you describe above (~500MB files, relatively few overwrites or new files being written, old ~500MB files being deleted periodically, I'm assuming some kind of video storage system) you will get relatively little fragmentation - assuming your file size remains relatively constant. This is especially true if your writes are single-threaded, since multiple write threads will not be competing for the small amount of free space and interleaving their block allocations. For every old file deleted from disk, you will get a few hundred MB of contiguous space (assuming the file was not fragmented to begin with), and it would be filled up again.
If you do have multiple concurrent writers, then using fallocate() to reserve large chunks of space for each file (and truncate() at the end to free up any remaining space) will avoid fragmentation as well. Even without this, ext4 will try to reserve (in memory) about 8MB of space for a file while it is being written, to avoid the worst fragmentation.
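A minimal sketch of that reserve-then-trim pattern using Python's stdlib wrappers for fallocate() and ftruncate() (Linux-specific; the reservation size and payload are just examples, not part of the original answer):

```python
# Reserve the file's expected size up front with fallocate(), stream the data,
# then trim the unused tail with ftruncate() once the real size is known.
import os
import tempfile

RESERVE = 16 * 1024 * 1024                 # expected file size; example value

fd, path = tempfile.mkstemp()
try:
    if hasattr(os, "posix_fallocate"):     # Linux; not available on all platforms
        os.posix_fallocate(fd, 0, RESERVE) # allocate the blocks in one chunk now
    total = os.write(fd, b"...payload...") # stream the real data, tracking bytes
    os.ftruncate(fd, total)                # give back the unallocated tail
    final_size = os.fstat(fd).st_size
finally:
    os.close(fd)
    os.unlink(path)

print(final_size == total)  # True: only the bytes actually written remain
```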
I'd recommend that you keep at least a decent multiple of your file size free (e.g. 16GB or more) so that you don't ever get to the point of consuming all the dribs and drabs of free blocks and introducing permanent free space fragmentation.
| Recommended maximum percentage to fill a large ext4 data drive |
1,507,018,455,000 |
I am adding a second drive to my Ubuntu Server. It was previously in a FreeNas system, but I got rid of the XFS partition and created an ext4 partition (in an older Ubuntu system). I then backed up all my data onto it, then installed the disk in my Ubuntu Server.
dmesg | tail
[ 294.570830] EXT4-fs (sdb): VFS: Can't find ext4 filesystem
[ 365.523173] exe (1269): /proc/1269/oom_adj is deprecated, please use /proc/1269/oom_score_adj instead.
[ 516.249248] EXT4-fs (sdb): VFS: Can't find ext4 filesystem
[ 518.965799] EXT3-fs (sdb): error: can't find ext3 filesystem on dev sdb.
I also have a testdisk.log file as follows
Thu Jul 28 19:40:00 2011
Command line: TestDisk
TestDisk 6.11, Data Recovery Utility, April 2009
Christophe GRENIER <[email protected]>
http://www.cgsecurity.org
OS: Linux, kernel 2.6.38-8-server (#42-Ubuntu SMP Mon Apr 11 03:49:04 UTC 2011)
Compiler: GCC 4.5 - Oct 17 2010 19:13:58
ext2fs lib: 1.41.14, ntfs lib: 10:0:0, reiserfs lib: none, ewf lib: none
/dev/sda: LBA, HPA, LBA48, DCO support
/dev/sda: size 3907029168 sectors
/dev/sda: user_max 3907029168 sectors
/dev/sda: native_max 18446744073321613488 sectors
/dev/sda: dco 18446744073321613488 sectors
/dev/sdb: LBA, HPA, LBA48, DCO support
/dev/sdb: size 3907029168 sectors
/dev/sdb: user_max 3907029168 sectors
/dev/sdb: native_max 18446744073321613488 sectors
/dev/sdb: dco 18446744073321613488 sectors
Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512
Hard disk list
Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63, sector size=512 - ATA ST32000542AS
Disk /dev/sdb - 2000 GB / 1863 GiB - CHS 243201 255 63, sector size=512 - ATA ST32000542AS
Partition table type (auto): EFI GPT
/dev/sdb: Host Protected Area (HPA) present.
Disk /dev/sdb - 2000 GB / 1863 GiB - ATA ST32000542AS
Partition table type: EFI GPT
Analyse Disk /dev/sdb - 2000 GB / 1863 GiB - CHS 243201 255 63
hdr_size=92
hdr_lba_self=1
hdr_lba_alt=3907029167 (expected 3907029167)
hdr_lba_start=34
hdr_lba_end=3907029134
hdr_lba_table=2
hdr_entries=128
hdr_entsz=128
Current partition structure:
1 P FreeBSD Swap 128 4194431 4194304 [swap-ada0]
2 P Unknown 4194432 3907029134 3902834703 [ada0]
search_part()
Disk /dev/sdb - 2000 GB / 1863 GiB - CHS 243201 255 63
recover_EXT2: s_block_group_nr=0/14888, s_mnt_count=5/25, s_blocks_per_group=32768, s_inodes_per_group=8192
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 487854337
recover_EXT2: part_size 3902834696
MS Data 4194432 3907029127 3902834696 [D1]
EXT4 Large file Sparse superblock, 1998 GB / 1861 GiB
Results
P MS Data 4194432 3907029127 3902834696 [D1]
EXT4 Large file Sparse superblock, 1998 GB / 1861 GiB
interface_write()
1 P MS Data 4194432 3907029127 3902834696 [D1]
write!
No extended partition
You will have to reboot for the change to take effect.
TestDisk exited normally.
I don't mean to just dump log files and ask someone else to "make it work", but I find that in this case the testdisk.log would provide better insight than my explanation.
I would really like to be able to use this drive without having to reformat it. Any help would be greatly appreciated!
|
It looks like you have an EFI GPT partition table there. You'll need support for that in your kernel. As a quick-check, do zgrep CONFIG_EFI_PARTITION /proc/config.gz. Here is a HOWTO on mounting partitions of such a disk.
| Unable to mount a second hard drive in Ubuntu Server |
1,507,018,455,000 |
I have been having several issues with a CentOS 9 VM related to file permissions. I've never had this much trouble before, and I'm wondering if it has something to do with the security options and file systems I selected during install (GUI STIG and ext4).
Example issue 1:
Two python files in the same directory, with the same permissions displayed by ls and stat
$ls -al config.py run_app.py
-rwx------. 1 myuser myuser 20K Aug 4 19:33 config.py
-rwx------. 1 myuser myuser 50K Jul 8 10:51 run_app.py
$stat config.py run_app.py
File: config.py
Size: 19873 Blocks: 40 IO Block: 4096 regular file
Device: fd05h/64773d Inode: 1971283 Links: 1
Access: (0700/-rwx------) Uid: ( 1000/myuser) Gid: ( 1000/myuser)
Context: unconfined_u:object_r:user_home_t:s0
File: run_app.py
Size: 51016 Blocks: 104 IO Block: 4096 regular file
Device: fd05h/64773d Inode: 1969096 Links: 1
Access: (0700/-rwx------) Uid: ( 1000/myuser) Gid: ( 1000/myuser)
Context: unconfined_u:object_r:user_home_t:s0
But lsattr doesn't work right:
$lsattr config.py run_app.py
--------------e------- config.py
lsattr: Operation not permitted While reading flags on run_app.py
$sudo lsattr run_app.py
--------------e------- run_app.py
I also cannot cat/edit/run run_app.py. While all three operations work just fine on config.py. Doing anything with run_app.py requires sudo/root.
Example issue 2:
I cannot install python packages into a virtual environment, but I can install them to the local user environment.
myuser@COS9-VM:~/sandbox
$python3 -m venv myvenv
myuser@COS9-VM:~/sandbox
$. myvenv/bin/activate
(myvenv) myuser@COS9-VM:~/sandbox
$python3 -m pip install pyyaml
Traceback (most recent call last):
File "/usr/lib64/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib64/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/__main__.py", line 29, in <module>
from pip._internal.cli.main import main as _main
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/cli/main.py", line 9, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/cli/autocompletion.py", line 10, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/cli/main_parser.py", line 8, in <module>
from pip._internal.cli import cmdoptions
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/cli/cmdoptions.py", line 23, in <module>
from pip._internal.cli.parser import ConfigOptionParser
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/cli/parser.py", line 12, in <module>
from pip._internal.configuration import Configuration, ConfigurationError
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/configuration.py", line 21, in <module>
from pip._internal.exceptions import (
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_internal/exceptions.py", line 7, in <module>
from pip._vendor.pkg_resources import Distribution
File "/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_vendor/pkg_resources/__init__.py", line 80, in <module>
from pip._vendor import appdirs
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 846, in exec_module
File "<frozen importlib._bootstrap_external>", line 982, in get_code
File "<frozen importlib._bootstrap_external>", line 1039, in get_data
PermissionError: [Errno 1] Operation not permitted: '/home/myuser/sandbox/myvenv/lib64/python3.9/site-packages/pip/_vendor/appdirs.py'
(myvenv) myuser@COS9-VM:~/sandbox
$deactivate
myuser@COS9-VM:~/sandbox
$python3 -m pip install pyyaml
Defaulting to user installation because normal site-packages is not writeable
Collecting pyyaml
Using cached PyYAML-6.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (661 kB)
Installing collected packages: pyyaml
WARNING: Value for scheme.platlib does not match. Please report this to <https://github.com/pypa/pip/issues/10151>
distutils: /home/myuser/.local/lib/python3.9/site-packages
sysconfig: /home/myuser/.local/lib64/python3.9/site-packages
WARNING: Additional context:
user = True
home = None
root = None
prefix = None
Successfully installed pyyaml-6.0
I am out of ideas... What am I missing?
|
After scouring the internet, I have an answer. Of course the answer was on Stack Overflow/Stack Exchange already (here), but it took me days to track it down.
My VM was running fapolicyd as part of the STIG compliance configuration I enabled at installation. This daemon inserts itself via hooks into the file-permissions decision-making process. It has rules files that by default disable access to certain executable files in certain non-system binary/executable directories. It does this based on its determination of the MIME type of the file, as far as I can tell. In my example, config.py had no shebang, whereas run_app.py did. This was enough to get the latter classified as text/x-python, while leaving the former alone.
Once I stopped/disabled the fapolicyd service, I was able to use files according to their displayed permissions/ACLs.
| File permissions not matching allowed operations...? |
1,507,018,455,000 |
ext4 has failed me again! the most unstable fs
I tried to fix it by restoring the superblock from a backup, but without luck...
↪ sudo fsck.ext4 -v /dev/sdd
e2fsck 1.45.6 (20-Mar-2020)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/sdd
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
/dev/sdd contains DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 4294967295 sectors, extended partition table (last) data
↪ sudo mke2fs -n /dev/sdd
mke2fs 1.45.6 (20-Mar-2020)
/dev/sdd contains DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 4294967295 sectors, extended partition table (last) data
Proceed anyway? (y,N) y
Creating filesystem with 976754646 4k blocks and 244195328 inodes
Filesystem UUID: 0e4124ad-a390-4c60-bb4a-4f7c48dac23b
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544
↪ sudo e2fsck -b 32768 /dev/sdd
e2fsck 1.45.6 (20-Mar-2020)
e2fsck: Bad magic number in super-block while trying to open /dev/sdd
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
/dev/sdd contains DOS/MBR boot sector; partition 1 : ID=0xee, start-CHS (0x0,0,2), end-CHS (0x3ff,255,63), startsector 1, 4294967295 sectors, extended partition table (last) data
I've tried backup superblocks up to 11239424
↪ sudo fdisk -l /dev/sdd
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Disk /dev/sdd: 3.65 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: TOSHIBA MD04ABA4
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 7D5C7ECA-C305-3C44-AA4F-8503EB53A54F
Device Start End Sectors Size Type
/dev/sdd1 2048 7814031359 7814029312 3.7T Linux filesystem
↪ sudo e2fsck -b 32768 /dev/sdd1
e2fsck 1.45.6 (20-Mar-2020)
e2fsck: No such file or directory while trying to open /dev/sdd1
Possibly non-existent device?
|
Since the backup partition table stored on the HDD is still OK:
↪ sudo fdisk /dev/sdb
Welcome to fdisk (util-linux 2.36.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
you can simply dump it and restore it right after dumping:
sudo sfdisk -d /dev/sdb > sdb.dump
sudo sfdisk /dev/sdb < sdb.dump
and voilà =)
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
Checking that no-one is using this disk right now ... OK
Disk /dev/sdb: 3.64 TiB, 4000787030016 bytes, 7814037168 sectors
Disk model: TOSHIBA MD04ABA4
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 7D5C7ECA-C305-3C44-AA4F-8503EB53A54F
Old situation:
Device Start End Sectors Size Type
/dev/sdb1 2048 7814031359 7814029312 3.6T Linux filesystem
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new GPT disklabel (GUID: 7D5C7ECA-C305-3C44-AA4F-8503EB53A54F).
/dev/sdb1: Created a new partition 1 of type 'Linux filesystem' and of size 3.6 TiB.
Partition #1 contains a ext4 signature.
/dev/sdb2: Done.
New situation:
Disklabel type: gpt
Disk identifier: 7D5C7ECA-C305-3C44-AA4F-8503EB53A54F
Device Start End Sectors Size Type
/dev/sdb1 2048 7814031359 7814029312 3.6T Linux filesystem
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Only 2 commands in the terminal,
only a few seconds.
It's that easy; don't waste your time and money on a new HDD and R-Studio recovery ;)
| ext4 - Bad magic number in super-block |
1,507,018,455,000 |
I was brushing up and diving deeper into filesystem anatomy and in numerous resources it is said to be a requirement that the very first superblock start at an offset of 1024 bytes. I started looking for any sort of documentation as to why 1024 was chosen, it just seemed pretty arbitrary. All I could find was the following:
"For the special case of block group 0, the first 1024 bytes are unused, to allow for the installation of x86 boot sectors and other oddities. The superblock will start at offset 1024 bytes, whichever block that happens to be (usually 0). However, if for some reason the block size = 1024, then block 0 is marked in use and the superblock goes in block 1. For all other block groups, there is no padding."
Ext4 Disk Layout
I figured this region had something to do with the later stages of grub, so I did some more digging and came across this article:
Details of GRUB on the PC
Which, in the DOS compatibility region section, states that the entire first "cylinder" is reserved, which can be up to 63 sectors, far more than a 1024-byte offset, so now I'm just confused.
My Question:
Can someone please explain, from byte 0 to the first superblock of an EXT filesystem, how a disk is laid out?
|
The master boot record (MBR) at the beginning of a disk contains only 446 bytes of code, so it is tiny and cannot do much. Therefore, a common booting technique is to do what is called "chain loading," where the MBR loads code at the beginning of the active partition and jumps to that code. By leaving the first two sectors free, the EXT file system allows the beginning of the partition to be used for such chain-loading code, when your EXT file system is on the active partition. More information on how this booting process works can be found here:
http://wiki.osdev.org/Boot_Sequence#The_Traditional_Way
| EXT filesystem family: Why does the first superblock start at offset 1024? |
1,507,018,455,000 |
Let's suppose I create a file, map it to /dev/loop0, and create a LUKS partition inside
dd if=/dev/zero of=myfile bs=1M count=1000
losetup /dev/loop0 myfile
cryptsetup -c aes-cbc-essiv:sha256 -s 256 -v -y luksFormat /dev/loop0
I then open the encrypted partition
cryptsetup luksOpen /dev/loop0 myfile
Now, I have my encrypted file appear as a disk /dev/mapper/myfile. I need to create a filesystem before I can use it.
Here is my question:
Given that this new filesystem-inside-a-file resides on another ext4 filesystem (which already uses a journal), what options would be best for the new filesystem?
Specifically, if I format my new filesystem-inside-a-file as ext4, should I use a journal? Somehow, the idea of a journaled filesystem inside another journaled filesystem seems intuitively wrong to me.
|
From my experience running an encrypted reiserfs with private information you should not put that on an journalling filesystem like ext3. I switched back from ext3 to having the file on an ext2 partition after I had to restore from a backup.
Over the years ( I have had this file for 5 years ), I had to run recovery several times, and when hosted on ext3 this was the only time reiserfsck could not recover. I think that was because ext3 did a recover which confused the internals of the encrypted disk.
I never tried the opposite combination, a non-journal filesystem inside a file on a journalling filesystem (e.g. an encrypted ext2 file on reiserfs); for me it is the important data (i.e. the encrypted data) that should be journalled.
I am still running reiserfs and have never used ext4 for this (but I am considering btrfs; I just need to check some time whether it is stable enough).
If you put your home directory on there, be prepared for it to feel a bit sluggish. I don't think any fine-tuning with parameters could have helped that, and I don't think the ext4 options will influence things much, given that encryption is a performance hit in all directions.
| File containing ext filesystem |
1,507,018,455,000 |
Some preamble: I'm taking bitwise copies of disk devices (via the dd command) from twin hosts (i.e. with the same virtualized hardware layout and software packages, but with different histories of usage). To optimize image size I filled all empty space on the partitions with zeroes (e.g. from /dev/zero). I'm also aware of the reserved blocks per partition and temporarily lowered that value to 0% before zero-filling.
But I'm curious about the discrepancy in the final compressed (by bzip2) image sizes. All hosts have almost the same tar-gzipped size of files, but the compressed dd images vary significantly (by up to 20%). So how could that be? Could the reason be data in the filesystem journals that I was unable to purge? There are over ten partitions on the host and each reports a 128 MB journal size. (I also checked defragmentation; it's all OK: 0 or 1 according to the e4defrag tool report.)
So, my question: is it possible somehow to clean ext3/ext4 filesystem journals? (Safely for the stored data, of course :)
CLARIFICATION
I definitely asked a question about how to clean (purge/refresh) the journals in an ext3/ext4 filesystem, if that is possible. Maybe I'm mistaken and there is no such feature as reclaiming the disk space occupied by filesystem journals, so all solutions are welcome. My motivation for asking is given in the preamble, and the answer to this question would help me investigate the issue I encountered.
|
You can purge the journal by either unmounting, or remounting read-only (arguably a good idea when cloning). With ext4 you can also turn off the journal altogether (tune2fs -O ^has_journal); the .journal magic immutable file will be removed automatically. The journal data will still be on the underlying disk of course, so removing the journal and then zero-filling the free space might get the best results.
The comments above hit the nail on the head though: dd sees the bits underneath the filesystem; how they came to be in any particular arrangement depends on all the things that have happened to the filesystem, rather than just the final contents of files. Features such as pre-allocation, delayed allocation, multi-block allocation, nanosecond timestamps and of course the journal itself all contribute to this. Also, there is one potentially random allocation strategy: the Orlov allocator can fall back to random allocation (see fs/ext4/ialloc.c).
For completeness the secure deletion feature with random scrubbing would also contribute to differences (assuming you deleted your zero-filled ballast files), though that feature is not (yet) mainline.
On many systems the dump and restore commands can be used for a similar cloning method, for various reasons it never quite caught on in Linux.
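As an illustration of the tune2fs step above, here is a sketch against a throwaway image file rather than a real disk (it assumes e2fsprogs is installed; the /tmp path and image size are arbitrary):

```shell
# Build a scratch ext4 image large enough for mkfs to create a journal,
# then remove the journal from the unmounted image with tune2fs.
dd if=/dev/zero of=/tmp/clone-demo.img bs=1M count=64 status=none
mkfs.ext4 -q -F /tmp/clone-demo.img
tune2fs -O ^has_journal /tmp/clone-demo.img
# The feature list should no longer mention has_journal.
dumpe2fs -h /tmp/clone-demo.img 2>/dev/null | grep -i 'features'
```

On a real system the filesystem must be unmounted (or mounted read-only) before tune2fs will drop the journal.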
| How to clean journals in ext3/ext4 filesystem? [closed] |
1,507,018,455,000 |
My setup is the following:
Linux kernel 2.6.28
e2fsprogs 1.42.7
64 GB class 10 SD card
I am attempting to reduce the time it takes to format the entire card with an ext4 filesystem. My research has pointed me towards the lazy_itable_init=1 option of mkfs.ext4. If I understand correctly, this option improves the speed of formatting the SD card partition considerably; however, this is achieved by deferring the initialization of the inode tables to when the filesystem is first mounted. This initialization is then performed in the background by the kernel (v2.6.27+ only).
The man pages have the following sentence about this option:
This [flag] speeds up filesystem initialization noticeably, but it requires the kernel to finish initializing the filesystem in the background when the filesystem is first mounted.
My question is, what happens if the kernel does not finish initializing the filesystem in the background?
I have tested this by formatting with the lazy_itable_init=1 option, mounting the filesystem and then removing the SD card shortly after. When I insert the card again, I can mount the partition without problems, and I wrote several 100 MB files containing zeros. These were read back correctly.
Is this just a fluke, or would I be guaranteed to see this behavior after such a sequence of events?
|
The reason the inode tables are initialized with zeros is to make sure that any garbage that happened to be there before does not get misinterpreted as a valid inode by e2fsck. Normally it won't make any difference, but if e2fsck detects errors, it may try to recover by heuristically recognizing inodes whether or not the bitmap indicates they are in use, and so it may try to recover invalid inodes that you will then have to remove from /lost+found.
| Risks involved with lazy_itable_init=1 for ext4 fs on SD card |
1,445,260,167,000 |
I'm trying to understand how inode numbers (as displayed by ls -i) work with ext4 partitions.
I'm trying to understand whether they are a construct of the linux kernel and mapped to inodes on disk, or if they actually are the same numbers stored on disk.
Questions:
Do inode numbers change when a computer is rebooted?
When two partitions are mounted, can ls -i produce the same inode number for two different files as long as they are on different partitions?
Can inode numbers be recycled without rebooting or re-mounting partitions?
Why I'm asking...
I want to create a secondary index on a USB hard drive with 1.5TB of data and around 20 million files (filenames). Files range from 10s of bytes to 100s of GB. Many of them are hard linked multiple times, so a single file (blob on disk) might have anything up to 200 file names.
My task is to save space on disk by detecting duplicates and replacing the duplication with even more hard links.
Now, as a single exercise, I think I can create a database of every file on disk, its shasum, permissions etc. Once built, detecting duplication should be trivial. But I need to be certain I am using the right unique key. Filenames are inappropriate due to the large number of existing hard links. My hope is that I can use inode numbers.
What I would like to understand is whether or not the inode number is going to change when I next reboot my machine. Or if they are even more volatile (will they change while I'm building my database?)
All the documentation I read fudges the distinction between inode numbers as presented by the kernel and inodes on disk. Whether or not these are the same thing is unclear based on the articles I've already read.
|
I'm trying to understand how inode numbers (as displayed by ls -i) work with ext4 partitions.
Essentially, an inode is a reference within a filesystem(!), a bridge between the actual data on disk (the bits and bytes) and the name associated with that data (/etc/passwd for instance). Filenames are organized into directories, where a directory entry is a filename with its corresponding inode.
The inode then contains the actual information: permissions, which blocks are occupied on disk, owner, group, etc. In How are directory structures stored in UNIX filesystem, there is a very nice diagram that explains the relation between files and inodes a bit better.
And when you have a file in another directory pointing to the same inode number, you have what is known as hard link.
Now, notice I've emphasized that an inode is a reference specific to a filesystem, and here's the reason to be mindful of that:
The inode number of any given file is unique to the filesystem, but not necessarily unique to all filesystems mounted on a given host. When you have multiple filesystems, you will see duplicate inode numbers between filesystems, this is normal.
This is in contrast to devices. You may have multiple filesystems on the same device, such as /var filesystem and /, and yet they're on the same drive.
Now, can an inode number change? Sort of. The filesystem is responsible for managing inodes, so unless there are underlying issues with the filesystem, an inode number shouldn't change. There are certain tricky cases, though: the vim text editor, for example,
renames the old file, then writes a new file with the original name, if it thinks it can re-create the original file's attributes. If you want to reuse the existing inode (and so risk losing data, or waste more time making a backup copy), add set backupcopy yes to your .vimrc.
The key point to remember is that while the data might look the same to the user, under the hood it is actually written to a new location on disk, hence the change in inode number.
So, to make things short:
Do inode numbers change when a computer is rebooted?
Not unless there's something wrong with filesystem after reboot
When two partitions are mounted, can ls -i produce the same inode number for two different files as long as they are on different partitions?
Yes, since two different partitions will have different filesystems. I don't know a lot about LVM, but under that type of storage management two physical volumes can be combined into a single logical volume, which, at my theoretical guess, would be the case where ls -i would produce one inode number per file.
Can inode numbers be recycled without rebooting or re-mounting partitions?
The filesystem does that when a file is removed (that is, when all links to the file are removed and there's nothing pointing to that inode).
My task is to save space on disk by detecting duplicates and replacing the duplication with even more hard links.
Well, detecting duplication can be done via md5sum or another checksum command. In such a case you're examining the actual data, which may or may not live under different inodes on disk. One example, from heemayl's answer:
find . ! -empty -type f -exec md5sum {} + | sort | uniq -w32 -dD
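To complement the checksum approach, the inode numbers themselves can be used to skip names that are already hard-linked to each other before any hashing is done. A sketch using only coreutils/findutils (the demo paths and the `%D:%i` device:inode key are illustrative, not from the answer):

```shell
demo=/tmp/inode-demo
rm -rf "$demo" && mkdir -p "$demo"
echo "payload" > "$demo/a"
ln "$demo/a" "$demo/b"        # hard link: same inode, so already deduplicated
echo "payload" > "$demo/c"    # same content but a separate inode: a real duplicate
# Group names by device:inode - names sharing a key need no re-hashing.
find "$demo" -type f -printf '%D:%i %p\n' | sort
```

Only one checksum per unique device:inode pair is needed; the duplicate detection then runs over those unique keys.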
| How do inode numbers from ls -i relate to inodes on disk |
1,445,260,167,000 |
I'm working with Tensorflow's TFRecords format, which serializes a bunch of data points into a single big file. Typical values here would be 10KB per data point, 10,000 data points per big file, and a big file around 100MB. TFRecords are typically written just once - they are not appended. I think this means they will not be very fragmented.
I believe TFRecords were based on Google's internal RecordIO format.
Usually people run Tensorflow and TFRecords on Ubuntu 18.04 or 20.04, which I think is usually an ext4 filesystem.
And usually, deep learning engineers run on SSD/NVME disks. The cost delta over magnetic spinning platter disks is immaterial compared to the massive cost of the GPU itself.
Question 1:
In an ext4 filesystem, if I know that a specific datapoint is say 9,000,000 bytes into a file, can I seek to that location and start reading the datapoint in constant time? By constant time, I mean solely as a function of the depth of the seek. I'm not worried about the effect the total size of the file has.
If this is true, it would imply there's some kind of lookup table / index for each file in an ext4 file system that maps seek locations to disk sectors.
I haven't looked at filesystems in decades, but I seem to recall that FAT filesystems are linked lists - you have to read a disk sector to know what the next disk sector is. This implies that to seek to 9,000,000 bytes into a file, I need to read all the disk sectors from the first 8,999,999 bytes. E.g. seek time is linear to the "depth" of the seek. I'm hoping ext4 is constant time, not linear.
Question 2:
My ultimate goal is to perform random access into a TFRecord. For reasons I assume are related to optimizing read speed on magnetic spinning platter disks, TFRecords are designed for serial reading, not random access.
Regardless of whether the seek function is a constant time or not (as a function of the depth of the seek), would random access into a big file on an ext4 filesystem be "fast enough"? I don't honestly know exactly what fast enough would be, but for simplicity, let's say a very fast deep learning model might be able to pull 10000 data points per second, where each data point is around 10KB and randomly pulled from a big file.
|
Maybe, if the file is not fragmented on the disk. But it probably doesn't matter whether it's strictly constant time.
ext2 and ext3 stored the locations of the data blocks in trees that had 1 to 4 levels, so lookups couldn't be constant-time. Also, the blocks of the tree could in principle be anywhere along the filesystem, so some disk seeks may be required.
ext4 stores a tree of extents, which then describe multiple contiguous data blocks. So if it's known a file has only one extent, the lookup would be constant time. But if it's fragmented (or larger than 128 MiB, requiring more than one extent), it would not.
(source: https://www.kernel.org/doc/html/latest/filesystems/ext4/dynamic.html#the-contents-of-inode-i-block)
Though I might be more worried about whether the lookups are fast enough than whether they're constant time. That's a far easier goal, since the trees aren't going to be too deep anyway, and if you're accessing the same file repeatedly, they'll soon all be loaded into memory anyway, eliminating any disk seeks (which aren't that big a problem on SSDs, but anyway). There's also the system call overhead on each access, doubled if you seek before each read/write. Though I think there are some more advanced system calls that can alleviate that.
that FAT filesystems are linked lists - you have to read a disk sector to know what the next disk sector is. This implies that to seek to 9,000,000 bytes into a file, I need to read all the disk sectors from the first 8,999,999 bytes. E.g. seek time is linear to the "depth" of the seek.
FAT filesystems have a table of block pointers (the FAT table), and those pointers form the linked list. Not the data blocks themselves. So, with e.g. a 4 kB block size, you only need to read 9000000 / 4096 ~= 2000 pointers, a few kB worth. It's still a linked list though, and iterating it requires a number of steps proportional to the seek location (unless the fs driver has some smarts to reduce that). But the FAT table is contiguous and also likely to be in memory so there are no disk seeks involved.
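The arithmetic can be sketched in shell (the 4 kB cluster size and the 4-byte FAT32 entry size are assumptions for illustration):

```shell
offset=9000000     # seek target, bytes into the file
cluster=4096       # assumed cluster size
entries=$(( offset / cluster ))   # FAT entries walked to reach the offset
table_bytes=$(( entries * 4 ))    # FAT32 uses 4-byte table entries
echo "$entries entries, ~$table_bytes bytes of FAT table touched"
```

So the walk touches only a few kB of the (contiguous, usually cached) FAT table, not the file's data blocks.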
Typical values here would be 10KB per data point, 10,000 data points per big file, and a big file around 100MB.
let's say a very fast deep learning model might be able to pull 10000 data points per second, where each data point is around 10KB and randomly pulled from a big file.
A 100 MB file should easily fit fully in memory (multiple times over), and keeping it there would get rid of the system call overhead of the seeks too. If you're only reading, that's that.
If you were to write, too, note that not all writes would immediately hit the disk flash anyway without some special care that would probably slow the whole thing down. (At least you'd need to call fsync() on every occasion, and trust the drive to not cheat you.)
With the file in memory, you could either manually write it back every once in a while, or map it to memoery with mmap() and call msync() to ask it to be written back from time to time.
| Do files in an ext4 filesystem allow constant time seeking? |
1,445,260,167,000 |
I have the following devices with Linux Mint 18.1 on laptops and GNU/Linux Debian 9 on the server.
(All are 64-bit and with Cinnamon desktop.)
All drive devices are formatted with ext4 filesystem; RAID 1 is done utilizing mdadm.
Laptop with 1 SSHD (not to be confused with HDD).
Laptop with 3 drives: 2 x consumer HDDs in RAID 1 and 1 x SSD.
Server with 5 drives: 4 x enterprise HDDs in two times RAID 1 and 1 x SSD.
I have the system on those SSDs and I would never defragment an SSD.
The question is about HDDs and an SSHD.
I found an old PDF outlining a few more features to e4defrag.
Why must the filesystem be mounted, as per this error message when trying to defragment an unmounted filesystem? I want to understand why that is:
Filesystem is not mounted
I would like to have free-space defragmentation implemented. AFAIK it is now under review. Is it possible for me to, e.g., compile e4defrag from source with these options available, or enable them some other way?
e4defrag -f /deviceOrDirectory
I would also like to use the relevant-data feature:
e4defrag -r /deviceOrDirectory
I have many relevant reasons to believe that fragmentation on these machines is slowing down read speeds; for example:
Taken from the server with RAID 1 HDDs:
[2556/30987]/raid1a/bitcoind/blocks/rev00820.dat: 100% extents: 16 -> 1 [ OK ]
Taken from the laptop with RAID 1 HDDs:
[29405/50810]/raid1/movies/SGA-HEVC/S04E01 - Adrift.mp4: 100% extents: 31 -> 6 [ OK ]
As you can see, the defragmentation was not even able to put the 31-extent file into one piece. Of course you might argue it is a movie file, so it does not matter. True, but only in this case.
The command I use to start the defragmentation:
On the server:
sudo e4defrag -v /dev/md1
On the laptop:
sudo e4defrag -v /raid1/
It does not seem to matter whether I invoke the command using a device name or a directory.
Can you point me to the right direction?
|
e4defrag needs the filesystem to be mounted because it asks the kernel's filesystem driver to perform the defragmentation; it doesn't do it itself.
As for free space defragmentation and relevant file defragmentation, the patches were never completed; the last mention on the relevant mailing list dates back to 2014:
The e4defrag is in e2fsprogs, and the code is still getting maintained
and improved. Dmitry Monakhov has in particular added a lot of
“torture tests”, and found a number of race conditions in the
underlying kernel code. He recently also sent a code refactor of the
kernel code which significantly improved it (and shrank the size of ext4 by 550 lines of code).
That being said, there hasn't been any real feature development for
e4defrag in quite some time. There has been some discussion about
what the kernel APIs might be to support this feature, but there has
never been a finalized API proposal, let alone an implementation.
So I doubt there’s anything worth testing currently.
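As a related sketch (not mentioned in the answer): filefrag from e2fsprogs reports a file's extent count, which is the same number e4defrag's "extents: 31 -> 6" output is tracking, and it works on ordinary files without root:

```shell
# Write a few MB so the file occupies at least one real extent,
# then ask filefrag how many extents it is split across.
seq 1 200000 > /tmp/frag-demo.txt
filefrag /tmp/frag-demo.txt    # prints the extent count for the file
```

This gives a quick way to check whether a file is fragmented at all before bothering with a full e4defrag run.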
| Defragmentation options on Ext4 filesystem |