| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,445,260,167,000 |
Introduction
Until recently, I thought that on ext file systems, inodes have reference counters which count the number of times the file is referenced by a directory entry or a file descriptor.
Then I learned that the reference counter only counts the number of directory entries referencing the inode. To test this, I read the reference count of a video file using ls -l. It was 1, as I expected, because I hadn't created any additional hard links to it. I then opened the video file with a video player and executed the same command again. To my surprise, the reference count was still 1, so my attempt at falsifying the claim failed.
However, I can definitely continue watching the video after removing its only directory entry. When opening a big video file and deleting its directory entry, the amount of free storage space on the file system does not change. It only changes (by the size of the video file) when the player reaches the end of the video and closes the file descriptor, or when the player terminates (depending on the video player used).
Question
What are the exact conditions for a file to be freed on an ext file system? I'm interested in how it is handled in ext2, ext3, and ext4. Are there differences depending on the kernel used or other parts of the operating system?
|
You are confusing two different counters: the file system link counter and the file descriptor reference counter.
The file system link counter counts how many links to an inode are in the file system itself. The inode is the structure that contains the file metadata. In ext* file systems this counter is stored in the file system itself.
You can check how many links an inode has using ls -l. In addition, you can use ls -i to get the inode number of a file. E.g., create multiple links to a file using ln and verify that all links have the same inode number.
andcoz@tseenfoo:~/refcount> ls -li
total 40
2248813 -rw-r--r-- 1 andcoz users 40960 7 feb 21.34 test
andcoz@tseenfoo:~/refcount> ln test test2
andcoz@tseenfoo:~/refcount> ln test test3
andcoz@tseenfoo:~/refcount> ls -li
total 120
2248813 -rw-r--r-- 3 andcoz users 40960 7 feb 21.34 test
2248813 -rw-r--r-- 3 andcoz users 40960 7 feb 21.34 test2
2248813 -rw-r--r-- 3 andcoz users 40960 7 feb 21.34 test3
The file descriptor reference counter counts how many times a file is open by a process or, more formally, how many file descriptors reference that inode. This information is stored in kernel memory.
You can get an approximation of this value using the fuser command. This command lists all the processes that have a file open. Note that a single process could open the same file multiple times, so the fuser list size is less than or, usually, equal to the reference counter.
andcoz@tseenfoo:~/refcount> tail -f test &
[3] 4226
andcoz@tseenfoo:~/refcount> fuser test
/home/andcoz/refcount/test: 4226
andcoz@tseenfoo:~/refcount> tail -f test2 &
[4] 4354
andcoz@tseenfoo:~/refcount> fuser test
/home/andcoz/refcount/test: 4226 4354
A file is removed from the file system when both the counters are zero.
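The interplay of the two counters can be demonstrated with a few shell commands (a sketch; file names here are just for illustration, and a bash-like shell is assumed):

```shell
cd "$(mktemp -d)"                 # scratch directory
echo data > f
ln f f2                           # filesystem link counter: now 2
links_before=$(stat -c %h f)
rm f2                             # back to 1
exec 3< f                         # open descriptor: in-kernel reference
rm f                              # link counter drops to 0 ...
content=$(cat <&3)                # ... but the open fd still reads the data
exec 3<&-                         # last reference closed: blocks are freed
echo "$links_before $content"
```

Until `exec 3<&-` runs, the blocks stay allocated even though no directory entry points at the file, which is exactly the video-player behavior described in the question.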
| When is a file freed in an ext file system? |
1,445,260,167,000 |
I have an old /home partition that dates back to previous Linux systems, and it is still in ext3 format, whereas the rest of my system, / and some other mount points, are devices formatted in ext4.
I have found some sites on the net that describe how to convert an ext3 partition to ext4.
In this UL.SE question Can I convert an ext3 partition into ext4 without formatting?, there are also warnings recommending a backup of the data before conversion, just in case.
So I wonder if it is generally a good idea to convert an existing ext3 partition to ext4. I know it's possible, and I know there is a little risk that makes a backup advisable. Are there enough benefits that I should do it?
|
Both ext3 and ext4 are journaling filesystems. In addition, this list of ext4 features describes several differences; the most relevant are:
Maximum individual file size can be from 16 GB to 16 TB
Overall maximum ext4 file system size is 1 EB (exabyte). 1 EB = 1024 PB (petabyte). 1 PB = 1024 TB (terabyte).
Directory can contain a maximum of 64,000 subdirectories (as opposed to 32,000 in ext3)
Several other new features are introduced in ext4: multiblock allocation, delayed allocation, journal checksums, fast fsck, etc. All you need to know is that these new features have improved the performance and reliability of the filesystem when compared to ext3.
The interesting thing for you might be the faster fsck, the others are probably of less significance in this particular situation (unless your disk gets a growth spurt and magically can contain much larger files).
If you are not going to use that partition intensively I would not recommend converting (at least not without a backup).
| Convert old /home from ext3 to ext4 |
1,445,260,167,000 |
On my Debian Linux system I decided during the install to use disk encryption (the one offered during a regular Debian install). When the system boots up I need to enter a password and then the "real" boot begins.
Could someone explain how this encryption is performed? Does it happen before or after the filesystem's laid out? Can I use any filesystem available for Linux with the disk encryption?
My /etc/mtab is more complicated than what I was used to with Linux, and I take it that's related to the disk encryption, but I'm really not sure. Here are (what I think are) the relevant bits from my /etc/mtab:
/dev/sda1 /boot ext2 rw,relatime,errors=continue 0 0
/dev/mapper/archon-root / ext4 rw,noatime,errors=remount-ro,user_xattr,commit=300,barrier=1,data=ordered 0 0
rootfs / rootfs rw 0 0
I don't really understand why /boot is ext2 and why / is ext-4 and using a /dev/mapper.
Could /boot be itself using ext4?
Could / be using, say, ZFS and yet still offer encryption?
|
/boot is not encrypted (the BIOS would have no way to decrypt it...). It could be ext4, but there really isn't any need for it to be. It usually doesn't get written to. The BIOS reads GRUB from the MBR, then GRUB reads the rest of itself, the kernel, and the initramfs from /boot. The initramfs prompts you for the passphrase. (Presumably, it's using cryptsetup and LUKS headers.)
The encryption is performed at a layer below the filesystem. You're using something called dm-crypt (that's the low-level in-kernel backend that cryptsetup uses), where "dm" means "Device Mapper". You appear to also be using LVM, which is also implemented by the kernel Device Mapper layer. Basically, you have a storage stack that looks something like this:
1. /dev/sda2 (guessing it's 2, could be any partition other than 1)
2. /dev/mapper/sda2_crypt (dm-crypt layer; used as a PV for VG archon)
3. LVM (volume group archon)
4. /dev/mapper/archon-root (logical volume in group archon)
5. ext4
You can find all this out with the dmsetup command. E.g., dmsetup ls will list the Device Mapper devices. dmsetup info will give some details, and dmsetup table will give technical details of the translation the mapping layer is doing.
The way it works is that the dm-crypt layer (#2, above) "maps" the data by performing crypto. So anything written to /dev/mapper/sda2_crypt is encrypted before being passed to /dev/sda2 (the actual hard disk). Anything coming from /dev/sda2 is decrypted before being passed out of /dev/mapper/sda2_crypt.
So any upper layers use that encryption, transparently. The upper layer you have using it first is LVM. You're using LVM to carve up the disk into multiple logical volumes. You've got (at least) one, called root, used for the root filesystem. It's a plain block device, so you can use it just like any other—you can put any filesystem you'd like there, or even raw data. The data gets passed down, so it will be encrypted.
Things to learn about (check manpages, etc.):
/etc/crypttab
LVM (some important commands: lvs, pvs, lvcreate, lvextend)
cryptsetup
| Encrypted disk filesystem compatibilities |
1,445,260,167,000 |
I’m running a linux server at home which is mostly a file and e-mail server and a digital video recorder.
All the data goes on an ext4 partition on a software raid-6.
Every now and then (sometimes twice a day, sometimes twice a month) the whole server locks up. Sometimes I have a kernel report in the syslog which I cannot understand:
------------[ cut here ]------------
kernel BUG at fs/ext4/inode.c:2118!
invalid opcode: 0000 [#1] SMP
last sysfs file: /sys/devices/virtual/net/ppp0/uevent
CPU 0
Modules linked in: ppp_async crc_ccitt nvidia(P) fcpci(P) scsi_wait_scan
Pid: 27841, comm: mythbackend Tainted: P 2.6.39-gentoo-r3 #2 System manufacturer System Product Name/M2N-E
RIP: 0010:[<ffffffff8116f580>] [<ffffffff8116f580>] mpage_da_submit_io+0x268/0x3bf
RSP: 0018:ffff88004262bba8 EFLAGS: 00010286
RAX: ffffea000048b650 RBX: ffffea000051d118 RCX: 0000000000000000
RDX: 0000000000000000 RSI: ffff880000826890 RDI: 0000000000005d38
RBP: ffff88004262bcf8 R08: 000000000d654538 R09: 0100000000002820
R10: 0000000000005d0d R11: 0000000000000000 R12: ffff88004262bde8
R13: ffff88004262bd28 R14: ffff88005ef46150 R15: 0000000000005d37
FS: 00007fbeb053f700(0000) GS:ffff88007fc00000(0000) knlGS:00000000f74aa8e0
CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
CR2: 00007fdcb7a36000 CR3: 000000006b721000 CR4: 00000000000006f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process mythbackend (pid: 27841, threadinfo ffff88004262a000, task ffff88007fb83330)
Stack:
ffff88007b193b88 ffff88004262bc98 ffff88004741c138 000004ac00001424
ffff88004262bc28 0000000000005d70 ffff88005ef46298 00000000811a337f
0000000000005d70 000000010000000e ffff88004262bc30 0000100000000000
Call Trace:
[<ffffffff811731df>] mpage_da_map_and_submit+0x2c6/0x2dc
[<ffffffff8117390a>] ext4_da_writepages+0x2d4/0x465
[<ffffffff810aafd6>] do_writepages+0x1c/0x26
[<ffffffff810a3bc0>] __filemap_fdatawrite_range+0x4b/0x4d
[<ffffffff810a3bea>] filemap_write_and_wait_range+0x28/0x51
[<ffffffff810fcba1>] vfs_fsync_range+0x30/0x75
[<ffffffff810fcc3b>] vfs_fsync+0x17/0x19
[<ffffffff810fcc66>] do_fsync+0x29/0x3e
[<ffffffff810fcc89>] sys_fdatasync+0xe/0x12
[<ffffffff8155f4fb>] system_call_fastpath+0x16/0x1b
Code: c1 00 02 00 00 74 09 f0 80 60 01 fd 4c 89 40 18 4c 8b 08 41 f7 c1 00 10 00 00 75 09 4c 8b 08 41 80 e1 20 74 0a 4c 39 40 18 74 04 <0f> 0b eb fe 41 f6 45 12 80 74 05 f0 80 48 02 80 f0 80 60 01 ef
RIP [<ffffffff8116f580>] mpage_da_submit_io+0x268/0x3bf
RSP <ffff88004262bba8>
---[ end trace c228cd85b8ef2f99 ]---
|
kernel BUG at fs/ext4/inode.c:2118!
invalid opcode: 0000 [#1] SMP
Appears to be an issue with the ext4 driver in your kernel.
Process mythbackend (pid: 27841, threadinfo ffff88004262a000, task ffff88007fb83330)
mythbackend is triggering it.
[<ffffffff811731df>] mpage_da_map_and_submit+0x2c6/0x2dc
[<ffffffff8117390a>] ext4_da_writepages+0x2d4/0x465
this is the call stack: the chain of kernel calls (here triggered by an fdatasync system call) that led to the bug.
| What does this Linux kernel trace mean? |
1,445,260,167,000 |
If I dd my disk and compress the image with lzma or lzo, the image is still big. The partition has 10GB used, 90GB available, but the image is still around 20GB. I believe that is because I have copied and deleted many files on that disk, and the filesystem doesn't zero the unused blocks from those deletions.
How can I zero the unused blocks in order to minimize the disk image? So that dirty bytes don't add up on my image. I'm using ext4.
|
The tool you think you're looking for is zerofree, as described in this duplicate question Clear unused space with zeros (ext3,ext4), and already available in most distributions.
However, you seem to be asking how to take an image backup of a filesystem that excludes unused blocks. In this instance use fsarchiver, as described in this answer over on the AskUbuntu site.
| How can I zero the unused blocks on my filesystem in order to minimize the compressed disk image size? [duplicate] |
1,445,260,167,000 |
I have a USB flash drive with ext4 file system and its files are owned by my user on my local machine, for example by myuser@myhost with 700 permissions.
If I unplug my flash drive and plug it in other Linux machine, can users of that machine have access to files in the flash drive?
What if there is also a user named myuser, can he access those files?
|
Filesystems designed for unix, such as ext4, track the user via a number, the user ID. The user name is not recorded. You can see your own user ID with the command id -u. You can see the user ID who owns a file with ls -ln /path/to/file.
If you take an ext4 filesystem to a different machine, the files will still have the same permissions, and they will have the same user ID. This may or may not be the right user. In general, different machines don't have the same user IDs for the same users unless this requirement was taken into account when creating the users or the machines share a common user database.
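A quick sketch showing that only the numeric ID is stored (the file name and scratch directory here are just for illustration):

```shell
cd "$(mktemp -d)"
touch file
uid=$(id -u)                  # numeric ID of the current user
owner=$(stat -c %u file)      # numeric owner ID recorded for the file
echo "$uid $owner"            # the two numbers match; no user name involved
```

On another machine, the same stored number would be mapped to whatever account happens to have that ID there.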
Permissions on a file only protect that file inside one system. Permissions on a removable drive have no effect for someone who pops the drive into their own computer.
If you want to exchange files via USB, FAT32 is usually the filesystem of choice. It's what most flash drives are formatted for when they're sold. If you need to store files with names or attributes that FAT32 doesn't support, create an archive (e.g. .tar.gz).
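For instance, a tarball round-trip keeps permissions that FAT32 would drop (a sketch with made-up file names):

```shell
cd "$(mktemp -d)"
mkdir data
echo secret > data/private
chmod 700 data/private
tar -czf archive.tar.gz data       # names and modes are stored in the archive
mkdir out
tar -xpzf archive.tar.gz -C out    # -p restores the recorded permissions
mode=$(stat -c %a out/data/private)
echo "$mode"
```

The archive itself is a plain file, so it can sit on a FAT32 drive without losing the Unix metadata inside it.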
| Permissions on an ext4 filesystem on a removable drive used in different machines |
1,445,260,167,000 |
I tried looking at the differences; the main ones seem to be that ext4 supports more subdirectories per directory, supports larger files, and has delayed allocation, which I would rather avoid as I don't want data loss. I also see timestamps are more accurate, but it is mentioned that there is no support in glibc, so no apps would use them. I just need it to be as accurate as NTFS; I don't need anything more accurate.
I'm thinking I should go with ext3 because it's more likely to be stable. What should I look at when choosing between the two?
|
These days ext4 is considered the stable standard, and you should use it. Also, all filesystems use delayed writing; ext4 just delays deciding where the blocks go until they are actually written, which helps reduce fragmentation. It also uses extents to track the blocks, which makes it more efficient.
| How do I choose between ext 3 and 4? |
1,445,260,167,000 |
Suppose you have an ext3 partition which was unfortunately reformatted as an ext4 partition (and which now contains some, but not a lot of, new files). Is there any way to recover (some) files from the old ext3 partition?
|
You can use a tool like PhotoRec to read the blocks and try to recover files. It actually recovers a lot of file types, not just images like the name may suggest.
http://www.cgsecurity.org/wiki/PhotoRec
| Recover formatted ext3 partition |
1,445,260,167,000 |
I'm using ext4 encryption. https://wiki.archlinux.org/index.php/Ext4#Using_file-based_encryption
Before I decrypt a directory, I can see lots of encrypted filenames in it.
I would like to copy the encrypted files so that I can decrypt them on a different machine.
I could do this with ecryptfs. How do I do this with ext4 encryption?
|
You can see encrypted & padded filenames, but you should be unable to read file contents. So trying to copy the files unencrypted will result in errors such as:
cp: cannot open 'vault/YgI8PdDi8wY33ksRNQJSvB' for reading: Required key not available
So you are pretty much not supposed to do this. The practical answer is to decrypt it, then copy it. The copy will be re-encrypted if you picked an encrypted location as the target directory. Over the network with rsync/ssh the transfer will be encrypted also. So most things work, just storing it in the cloud is probably out of the question. Filesystem specific encryption does not work outside of the filesystem.
Circumventing the read barrier is not sufficient: unlike ecryptfs where all metadata is regular files, the ext4 encryption involves metadata hidden in the filesystem itself, not visible to you, so you cannot easily copy it.
The closest I found is e4crypt get_policy, e4crypt set_policy which allows you to encrypt a directory with an existing key without knowing the actual key in clear text. But it only works for empty directories, not for files.
You can also encrypt a vault directory, populate it with files, then hardlink those files to the root directory, then delete the vault directory. You end up with encrypted files (contents) in the root directory (which you are not supposed to be able to encrypt). The filesystem just knows that the file is encrypted. (Not recommended to actually do this.)
If you must make a copy anyway, I guess you can do it the roundabout way:
make a raw dd copy of the entire filesystem
change filesystem UUID
delete the files you didn't want
Otherwise I guess you'd need a specialized tool that knows how to replicate an encrypted directory + metadata from one ext4 filesystem to another, but I didn't see a way to do so with e4crypt or debugfs.
debugfs in particular seems to be devoid of policy / crypt related features except for ls -r which shows encrypted filenames in their full glory as \x1e\x5c\x8d\xe2\xb7\xb5\xa0N\xee\xfa\xde\xa66\x8axY which means the ASCII representation regular ls shows is encoded in some way to be printable.
Actual filename is [padded to and actually stored in the filesystem as] 16 random bytes, but regular ls shows it as 22 ASCII characters instead. Copying such a file the traditional way would create a file stored as its ASCII character representation when you really need to store it as random bytes. So that's just bound to fail in so many layers.
tl;dr if there is a way to do it then I don't know about it :-}
| Copying ext4 encrypted files |
1,445,260,167,000 |
While doing an Ubuntu netinst, a question came into my head: is the reserved 5% used at run time? I mean, when doing something like sudo apt install, is this 5% being used by root at that moment? Does the system use this 5% at run time? Should I increase it to 10-15%, for example? I have a 300 GB hard drive. Usually I create only swap and / partitions (not using a separate /home, /var or whatever).
|
I mean, when doing something like sudo apt install - this 5% is being used by root at this moment?
Yes. No. Maybe. It doesn't quite work that way.
When you hear the term root reserve, you might think there is a specific area where only root may store files in. Like in a parking lot, you might find spots designated for people with disabilities, or spots for electric cars with charging stations next to them, or spots for parents with children. And no one else is allowed to park there.
However, the root reserve is not like that. There is no designated free space. No, it's all just the same regular free space. So where is the root reserve? It's nowhere. Nowhere specific.
Instead, anything that goes in or out has to go through the entrance/exit gate (filesystem) and is counted doing so. So the filesystem knows how many free spots there are.
And then, if you are not root and there is less than the root reserve worth of space left, it will simply deny you entry: sorry, not enough space left on device, please leave. (Yes, I know there is still free space. But I have to keep at least X free space for root.)
Root on the other hand won't be denied entry unless there's really nothing left.
The location of the free blocks doesn't matter. It also doesn't matter who is already using which blocks. Regarding the blocks already in use, you can't say which is using the root reserve and which isn't. They all are. None of them are. Blame whoever leaves and frees up some space first.
You can delete either regular user files, or root files, to free up enough space so that regular users may be allowed to write again.
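To put the question's 300 GB drive in perspective, the default 5% reserve works out as follows (a back-of-the-envelope sketch using decimal gigabytes):

```shell
disk_gb=300
reserve_gb=$(( disk_gb * 5 / 100 ))   # space held back for root: 15 GB
avail_gb=$(( disk_gb - reserve_gb ))  # what non-root users can fill: 285 GB
echo "$reserve_gb $avail_gb"
```

Whether 15 GB of headroom is worth it depends on whether root daemons (logging, mail, apt) ever need to write while the disk is otherwise full; increasing the reserve to 10-15% mostly just shrinks usable space.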
| Root reserved blocks |
1,445,260,167,000 |
With NTFS you can enable or disable case sensitivity. Is there a way to do it with ext4 in Linux?
|
There were patches under development to implement case insensitivity for ext4:
https://lwn.net/Articles/762826/
https://marc.info/?l=linux-ext4&m=154430575726827&w=2
They were included in the Linux 5.2 kernel, and also require e2fsprogs-1.45 to work. See How to enable new in kernel 5.2 case-insensitivity for ext4 on a given directory?
| Is it possible to disable ext4 case sensitivity? |
1,445,260,167,000 |
I am using:
debugfs -R 'stat <7473635>' /dev/sda7
to get the file creation time (crtime).
Inode: 7473635 Type: regular Mode: 0664 Flags: 0x80000
Generation: 1874934325 Version: 0x00000000:00000001
User: 1000 Group: 1000 Size: 34
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 8
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x55b65ebc:98040bc4 -- Mon Jul 27 22:09:24 2015
atime: 0x55da0168:60b33f74 -- Sun Aug 23 22:52:48 2015
mtime: 0x55b65ebc:98040bc4 -- Mon Jul 27 22:09:24 2015
crtime: 0x55b65ebc:970fe7cc -- Mon Jul 27 22:09:24 2015
Size of extra inode fields: 28
EXTENTS:
(0):29919781
Why am I not getting crtime in nanoseconds even though ext4 supports nanosecond resolution?
|
It does show the timestamp (with nanoseconds precision) but in hex; it's the field after crtime:, e.g. in your output 0x55b65ebc:970fe7cc. The part after the colon is the nanoseconds.
This article gives more details and explains how to calculate the timestamp/nanoseconds. So, e.g. to convert the hex values to a timestamp a la stat you could run:
date -d @$(printf %d 0x55b65ebc).$(( $(printf %d 0x970fe7cc) / 4 )) +'%F %T.%N %z'
2015-07-27 19:39:24.633600499 +0300
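The division by 4 works because the extra 32-bit field packs two epoch-extension bits into the low end and the nanoseconds into the upper 30 bits, so shifting right by two (equivalently, dividing by 4) recovers the nanosecond count. Using the value from the output above (bash arithmetic assumed):

```shell
extra=0x970fe7cc                 # the field after the colon in the debugfs output
ns=$(( extra >> 2 ))             # upper 30 bits: the nanoseconds, 633600499
epoch_bits=$(( extra & 3 ))      # low 2 bits: epoch extension (0 for dates before 2038)
echo "$ns $epoch_bits"
```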
| Why debugfs doesn't show crtime in nanoseconds? |
1,445,260,167,000 |
I was compiling a custom linux kernel for a newly installed machine, and after booting into the new kernel (3.12), the init process fails to find a root device, which I traced to the system getting an unknown partition table error on the device in question (/dev/sda). The generic kernel boots up and mounts the root partition just fine. I cannot seem to find anything that looks relevant in the kernel config, what could it be missing?
|
There are a bunch of options mostly named CONFIG_.*_PARTITION, you probably didn't set the one you need. These may only show up if you answer yes to CONFIG_PARTITION_ADVANCED (Advanced partition selection).
You're going to want (on a PC) at least:
CONFIG_MSDOS_PARTITION=y # traditional MS-DOS partition table
CONFIG_EFI_PARTITION=y # EFI GPT partition table
and maybe:
CONFIG_LDM_PARTITION=y # Windows logical (dynamic) disks
You may also want a few more (such as CONFIG_MAC_PARTITION and CONFIG_BSD_DISKLABEL) to read partition tables from other operating systems' disks you may actually run into.
You can see all of the partition table options in your kernel source tree (in block/partitions/Kconfig) or at Linux Cross Reference.
| "unknown partition table" - misconfigured kernel |
1,445,260,167,000 |
I'm trying to enable journaled usrquota on Debian 11 Kernel 5.10. All information I find uses external files which leads to the following deprecation warning:
quotaon: Your kernel probably supports ext4 quota feature but you are using external quota files. Please switch your filesystem to use ext4 quota feature as external quota files on ext4 are deprecated.
My fstab entry uses the options errors=remount-ro,usrjquota=aquota.user,jqfmt=vfsv1
Which, as far as I understand, should enable the ext4 quota feature. However, after a reboot, when I run sudo quotaon -v / I get the deprecation warning and a complaint about a missing aquota.user file.
What confuses me is: Why do I have to specify a file name for usrjquota... As far as I understand the point of journaled quota is that we don't need a file any more.
If someone could provide the steps to enable journaled ext4 quotas it would be really appreciated.
|
To enable journaled quota, tune2fs is used; no mount options in /etc/fstab are needed. E.g., assuming you want quotas enabled for /home, which is on /dev/sda2, you do:
umount /home
tune2fs -O quota /dev/sda2
mount -a
quotaon -va
If you want to turn quota on for the root file system you need to boot from a live disk and use tune2fs on the related partition.
| How to enable journaled quota on Debian 11 |
1,445,260,167,000 |
I have downloaded a Debian GNU/Hurd disk image. However, while the virtual machine was running, my PC crashed along with the virtual machine. I tried to start the virtual machine again, but many things were not working because the filesystem was damaged. As far as I know, ext4 is a journaling filesystem, so damage to the filesystem should be recoverable if the filesystem is ext4. Now I want to convert the root filesystem (from a backup copy of the disk image) from ext2 to ext4. I know that's possible, but I'm not sure whether Debian GNU/Hurd supports ext4-formatted filesystems.
|
As far as I’m aware, there’s no translator for Ext4, whether in Debian specifically or in the Hurd in general. The existing ext2fs translator doesn’t support Ext4, and the version packaged in Debian doesn’t either.
| Does Debian GNU/Hurd support ext4 filesystem? |
1,445,260,167,000 |
According to this page, filesystems like ext4 have journaling for both blocks and metadata, and it's used to prevent data corruption:
A journaling file system is a file system that keeps track of changes
not yet committed to the file system's main part by recording the
intentions of such changes in a data structure known as a "journal",
which is usually a circular log. In the event of a system crash or
power failure, such file systems can be brought back online more
quickly with a lower likelihood of becoming corrupted.
Btrfs doesn't seem to have journaling according to this page.
Yet, this page quotes ext4 primary developer and maintainer Theodore Ts'o as saying that btrfs is better than ext4:
Despite the fact that Ext4 adds a number of compelling features to the
filesystem, Ts'o doesn't see it as a major step forward. He dismisses
it as a rehash of outdated "1970s technology" and describes it as a
conservative short-term solution. He believes that the way forward is
Oracle's open source Btrfs filesystem, which is designed to deliver
significant improvements in scalability, reliability, and ease of
management.
So, how does btrfs prevent data corruption without journaling?
|
Btrfs uses copy on write (CoW), so the existing data is not overwritten when modified but copied to a new location, and the copy is changed. So a journal is not needed, because in case of a power failure or system crash you still have the original data. Btrfs also uses checksums to detect random data corruption, so it knows whether both data and metadata are valid or corrupted.
More information about copy on write is available here or more general description on wikipedia.
| How does btrfs prevent data corruption without journaling? |
1,445,260,167,000 |
Formatting a disk for purely large video files, I calculated what I thought was an appropriate bytes-per-inode value, in order to maximise usable disk space.
I was greeted, however, with:
mkfs.ext4: invalid inode ratio [RATIO] (min 1024/max 67108864)
I assume the minimum is derived from what could even theoretically be used - no point having more inodes than could ever be utilised.
But where does the maximum come from? mkfs doesn't know the size of files I'll put on the filesystem it creates - so unless it was to be {disk size} - {1 inode size} I don't understand why we have a maximum at all, much less one as low as 67MB.
|
Because of the way the filesystem is built. It's a bit messy, and by default, you can't even get the ratio as low as one inode per 64 MB.
From the Ext4 Disk Layout document on kernel.org, we see that the file system internals are tied to the block size (4 kB by default), which controls both the size of a block group, and the amount of inodes in a block group. A block group has a one-block sized bitmap of the blocks in the group, and a minimum of one block of inodes.
Because of the one-block bitmap (one bit per block), the maximum block group size is 8 × (block size in bytes) blocks, so on an FS with 4 kB blocks, the block groups are 32768 blocks or 128 MB in size. The inodes take one block at minimum, so with 4 kB blocks you get at least (4096 B/block) / (256 B/inode) = 16 inodes/block
or 16 inodes per 128 MB, or 1 inode per 8 MB.
At 256 B/inode, that's 256 B per 8 MB, or 1 byte per 32 kB, or about 0.003% of the total size, for the inodes.
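The arithmetic above can be checked directly (a sketch using the default 4 kB block and 256 B inode sizes):

```shell
block=4096
inode=256
inodes_per_block=$(( block / inode ))         # 16 inodes fit in one inode block
blocks_per_group=$(( 8 * block ))             # one-block bitmap -> 32768 blocks/group
group_bytes=$(( blocks_per_group * block ))   # 134217728 B = 128 MiB per group
bytes_per_inode=$(( group_bytes / inodes_per_block ))  # 8388608 B = 8 MiB/inode
echo "$inodes_per_block $bytes_per_inode"
```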
Decreasing the number of inodes would not help, you'd just get a partially-filled inode block. Also, the size of an inode doesn't really matter either, since the allocation is done by block. It's the block group size that's the real limit for the metadata.
Increasing the block size would help, and in theory, the maximum block group size increases in the square of the block size (except that it seems to cap at a bit less than 64k blocks/group). But you can't use a block size greater than the page size of the system, so on x86, you're stuck with 4 kB blocks.
However, there's the bigalloc feature that's exactly what you want:
for a filesystem of mostly huge files, it is desirable to be able to allocate disk blocks in units of multiple blocks to reduce both fragmentation and metadata overhead. The bigalloc feature provides exactly this ability.
The administrator can set a block cluster size at mkfs time (which is stored in the s_log_cluster_size field in the superblock); from then on, the block bitmaps track clusters, not individual blocks. This means that block groups can be several gigabytes in size (instead of just 128MiB); however, the minimum allocation unit becomes a cluster, not a block, even for directories.
You can enable that with mkfs.ext4 -Obigalloc, and set the cluster size with -C<bytes>, but mkfs does note that:
Warning: the bigalloc feature is still under development
See https://ext4.wiki.kernel.org/index.php/Bigalloc for more information
There are mentions of issues in combination with delayed allocation on that page and the ext4 man page, and the words "huge risk" also appear on the Bigalloc wiki page.
None of that has anything to do with that 64 MB / inode limit set by the -i option. It appears to just be an arbitrary limit set at the interface level. The number of inodes can also be set directly with the -N option, and when that's used, there are no checks. Also, the upper limit is based on the maximum block size of the file system, not the block size actually chosen as the structural limits are.
Because of the 64k blocks/group limit, without bigalloc there's no way to get as few inodes as the ratio of 64 MB / inode would imply, and with bigalloc, the number of inodes can be set much lower than it.
| Why is 67108864 the maximum bytes-per-inode ratio? Why is there a max? |
1,445,260,167,000 |
I recently got a "device full" warning for a 2 Tb external ext4 drive. I deleted a bunch of files, about 90-100 Gb of old system backups, and since I did not want to empty all trash, I deleted the trash folders from the drive. No disk space was freed however, and I am still showing only about 5 Gb free after deleting 90-100Gb.
I first tried rebooting to make sure it was not files being held open for some reason. I tried running sudo e2fsck -fp /dev/sde1 and sudo e2fsck -f -D -C0 -E discard /dev/sde1, but neither of these freed any disk space. I checked inode usage, and am using something like 0.3% of the total. When I run sudo xdiskusage, it says that inodes are using up 93 GB; the man page says this is the overhead used by the file system. 93 GB seems like a lot of overhead, and given that deleting about the same amount of files resulted in no freed disk space, I am guessing I fouled something up when I deleted the trash folders. Is there any way I can reclaim the space that I thought I would get from deleting the files?
|
There seem to be two possibilities here:
You're being confused by the (admittedly confusing) behavior of df and root-reserved space
You deleted (unlinked) one hardlink to the files, but there are more.
Personally, I suspect you're seeing #1. Details below, along with some concluding remarks.
Confusing df behavior
If you fill up a filesystem fully, as a non-root user, this is what it looks like:
Filesystem Size Used Avail Use% Mounted on
/dev/md10 248M 236M 1.0K 100% /boot
but there is space reserved for root, typically 5%. If root fills it up, this is what df looks like (in the case of this tiny filesystem, it's another 13 MB):
Filesystem Size Used Avail Use% Mounted on
/dev/md10 248M 248M 0 100% /boot
Note it went from 100% used to... 100% used. Despite actually being another 5% used. The Used field went up as expected, but the avail field just changed from 1K to 0.
And what happens when you remove the first 13MB of data? Well, you get back to the first output—you've freed 5%, but still at 100% in use and almost none available.
Conclusion: when you want to look at how much space you're actually freeing, look at the Used column—not Avail, not Use%.
Wasn't the last hardlink
rm doesn't actually delete files. It unlinks them—that is, removes hardlinks to them. Each hardlink gives the file one name, basically. When a file has no links left (and isn't open, etc.) then, only then, is the file actually deleted.
A file is actually uniquely identified on a filesystem, regardless of the number of names it has, by its inode number. If you knew the inode numbers for these files, you could use find -inum to find all the hardlinks to them—but you probably don't. If you have some related files to clean up, you can get the inode numbers from those using stat. You can then use find /path/to/mount -inum NUMBER to find all the hardlinks to that file (including the name you just stat'd). Also, inode numbers can be re-used once a file is actually deleted.
Remember: inode numbers are per filesystem. So two different files can be inode 42 on two different filesystems. Only on the same filesystem is inode 42 guaranteed to always be the same file. Also, inode numbers do not always work right with network filesystems or non-Unix filesystems. But you're using ext4, where they definitely do.
Other than that, you'll just have to find any other names to remove the normal ways (e.g., by looking for large things with xdiskusage as you're already doing)
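As a minimal sketch of the stat/find -inum technique described above (all paths here are hypothetical throwaway ones, not your actual files):

```shell
# Create a file with two hardlinks, then locate every name by inode number.
tmp=$(mktemp -d)
touch "$tmp/original"
ln "$tmp/original" "$tmp/copy"       # second hardlink, same inode
inum=$(stat -c %i "$tmp/original")   # inode number of the file
find "$tmp" -inum "$inum"            # prints both names
stat -c %h "$tmp/original"           # link count: 2
rm -r "$tmp"
```

Only once every name is removed (and no process holds the file open) does the link count reach zero and the space actually show up as freed in df's Used column.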
General remarks
Trash folders are just directories. If they were full of junk that you didn't manage to delete, they'd show in xdiskusage.
You should consider a backup system which can better handle deleting old backups for you—doing it by hand is error-prone. Worse, it can also be forgotten, leading to backup failures—and restores are generally of recent data (e.g, accidental deletion, corrupted file, disk failure), not old data ("oh yeah, I did need that thing I deleted last year..."), so "disk full backup failed" means you're actually discarding the most valuable data (the new backup) to preserve the least valuable data (that backup from two years ago).
| ext4 disk space not reclaimed after deleting files |
1,445,260,167,000 |
According to this Red Hat bug report (which I am trying to reproduce) it looks like the Netapp filer is able to store data directly in the inode, in case of very small files.
Considering I had a FS with large inodes, would it be possible to store data in such a way on a Unix / Linux file system?
|
ext4 since kernel 3.8 supports this: it can store (very) small files within the inode, as described in the filesystem layout documentation.
Other filesystems support this on Linux too, or variants of the idea; for example Btrfs stores small files in the parent extent.
| Is it possible to store data directly inside an inode on a Unix / Linux filesystem? |
1,445,260,167,000 |
I've been having a fairly serious issue on a high traffic web server. PHP pages are slowing down considerably, and it only seems to be an issue on pages where sessions are accessed, or where a certain table within a database is being referenced. In the '/var/log/messages' log file, I see hundreds of thousands of the following error:
'kernel: EXT4-fs warning (device dm-0): ext4_dx_add_entry: Directory index full!'
I suspect there is a bottleneck in '/var/lib/php/sessions' because I cannot open the folder in Filezilla, and cannot count the number of files/sub-directories with grep. While it is quite possibly a case of hard drive corruption, I'd like to verify a hunch of mine first by checking the number of files inside of this directory.
How would you go about finding the number of files within a folder without actually counting the files in said folder?
|
The size of the directory (as seen with ls -ld /var/lib/php/sessions) can give an indication. If it's small, there aren't many files. If it's large, there may be many entries in there, or there may have been many in the past.
Listing the content, as long as you don't stat individual files, shouldn't take much longer than reading a file of the same size.
What might happen is that you have an alias for ls that does ls -F or ls --color. Those options cause an lstat system call to be performed on every file to see for instance if they are a file or directory.
You'll also want to make sure that you list dot files and that you leave the file list unsorted. For that, run:
command ls -f /var/lib/php/sessions | wc -l
Provided not too many filenames have newline characters, that should give you a good estimate.
$ ls -lhd 1
drwxr-xr-x 2 chazelas chazelas 69M Aug 15 20:02 1/
$ time ls -f 1 | wc -l
3218992
ls -f 1 0.68s user 1.20s system 99% cpu 1.881 total
wc -l 0.00s user 0.18s system 9% cpu 1.880 total
$ time ls -F 1 | wc -l
<still running...>
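If filenames containing newlines are a concern, a hedged alternative (GNU find assumed; the temporary directory below stands in for /var/lib/php/sessions) is to count entries by printing one byte per entry instead of one line per name:

```shell
# Count directory entries without listing names at all: print a single
# character per entry and count bytes, which is immune to newlines in
# filenames. The temp dir here is just a stand-in for the real directory.
tmp=$(mktemp -d)
touch "$tmp/a" "$tmp/b" "$tmp/c"
find "$tmp" -mindepth 1 -maxdepth 1 -printf x | wc -c
rm -r "$tmp"
```

Against the real directory that would be find /var/lib/php/sessions -mindepth 1 -maxdepth 1 -printf x | wc -c.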
You can also deduce the number of files there by subtracting the number of unique files elsewhere in the file system from the number of used inodes in the output of df -i.
For instance, if the file system is mounted on /var, with GNU find:
find /var -xdev -path /var/lib/php/sessions -prune -o \
-printf '%i\n' | sort -u | wc -l
To find the number of files not in /var/lib/php/sessions. If you subtract that from the IUsed field in the output of df -i /var, you'll get an approximation (because some special inodes are not linked to any directory in a typical ext file system) of the number of files linked to /var/lib/php/sessions that are not otherwise linked anywhere else (note that /var/lib/php/sessions could very well contain one billion entries for the same file (well, actually, the maximum number of links on a file is going to be much lower than that on most filesystems), so that method is not fool-proof).
Note that if reading the directory content should be relatively fast, removing files can be painfully slow.
rm -r, when removing files, first lists the directory content, and then calls unlink() for every file. And for every file, the system has to lookup the file in that huge directory, which if it's not hashed can be very expensive.
| How to determine how many files are within a directory without counting? |
1,445,260,167,000 |
After reading a lot about why newer 4096 byte physical block hard drives should be partitioned taking care of alignment (Linux on 4KB-sector disks: Practical advice, What is partition alignment and why would I need it?, Why do I have to “align” the partitions on my new Western Digital hard drive?), I was convinced to make sure my new disk was properly partitioned and formatted with 4096 byte blocks.
So I did partition it using fdisk -b 4096 /dev/sdb and specified a 50GB size for sdb1, 412GB for sdb2 and the remaining space to sdb3 (3.8GB for sdb3), leading to the following partition table:
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 7600 cylinders, total 122096646 sectors
Units = sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x1c7f9c20
Device Boot Start End Blocks Id System
/dev/sdb1 * 256 13107455 52428800 83 Linux
/dev/sdb2 13107456 121110783 432013312 83 Linux
/dev/sdb3 121110784 122096645 3943448 82 Linux swap
Then I formatted both sdb1 and sdb2 with ext4:
# mkfs.ext4 /dev/sdb1
mke2fs 1.42.4 (12-June-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
409600 inodes, 1638400 blocks
81920 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1677721600
50 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
# mkfs.ext4 /dev/sdb2
mke2fs 1.42.4 (12-June-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3375104 inodes, 13500416 blocks
675020 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
412 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
and setup the swap area:
# mkswap /dev/sdb3
Setting up swapspace version 1, size = 492924 KiB
no label, UUID=2d18a027-6c03-4b29-b03e-c0c7f61413c5
The first strange thing I noticed was the reported swap size of just 492924 KiB. Then, after mounting the newly formatted partitions, I found them to appear much smaller than they should be:
Filesystem Size Used Avail Use% Mounted on
/dev/sdb1 6.3G 222M 5.8G 4% /mnt/zip
/dev/sdb2 52G 907M 48G 2% /mnt/memory
Why is that happening? Is there any way to correct this?
EDIT:
After @Alexios's suggestion I tried rebooting, but /proc/partitions didn't change. Both /dev/sda and /dev/sdb are identical 500GB drives, but /dev/sda I formatted unaligned (using the default 512 byte sector size, starting after sector 63) and /dev/sdb aligned (using a 4096 byte sector size, starting after sector 255). As we can see, the system simply considers /dev/sdb to have fewer blocks than /dev/sda, although both were partitioned with the same sizes:
# cat /proc/partitions
major minor #blocks name
11 0 4590208 sr0
8 0 488386584 sda
8 1 48829536 sda1
8 2 437498145 sda2
8 3 2058871 sda3
8 16 488386584 sdb
8 17 6553600 sdb1
8 18 54001664 sdb2
8 19 492931 sdb3
|
What is happening is that the -b switch is nonsense and should not even be there. The sector numbers recorded in the MBR are always interpreted to be in units of the drive's logical sector size ( 512 bytes ). By using the -b switch, you are causing fdisk to divide all of the sectors it records by 8, so the kernel interprets the partitions to be 1/8th the size you intended.
If you use parted instead of fdisk, it will make sure your partitions are properly aligned automatically. With fdisk, just make sure that the starting sectors are a multiple of 8.
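The "multiple of 8" rule can be checked mechanically. A small sketch (the start sector is an example value in 512-byte logical units, like parted's default 1 MiB alignment would give; it is not taken from the question's table, which fdisk printed in 4096-byte units):

```shell
# A partition is 4096-byte aligned when its start sector, counted in
# 512-byte logical sectors, is divisible by 8 (8 * 512 = 4096).
start=2048   # example start sector in 512-byte units
if [ $(( start % 8 )) -eq 0 ]; then
    echo "aligned"
else
    echo "unaligned"
fi
```

The classic misaligned legacy start, sector 63, fails this test (63 % 8 = 7), which is exactly why old DOS-style partitioning is a problem on 4K-sector drives.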
| Why my partitions don't show the right capacity on a 4096 byte physical block hard drive? |
1,445,260,167,000 |
I was monitoring a directory containing downloads from Google Chrome with ls -la and got this in the output:
-????????? ? ? ? ? ? 'Unconfirmed 784300.crdownload'
I've never seen such question marks in the output.
There were other files in the directory with normal metadata output. When I ran ls -la again the output was all normal; the file still had the same name but the metadata was now visible. Later when the download finished the file was renamed to its final name, as expected.
I checked /var/log/syslog and dmesg output and didn't see any kernel messages.
I wonder if I hit some race condition? I wonder if there is a brief moment after the file is first created where the information is not yet available?
ext4 filesystem with seemingly standard mount options (rw,relatime,errors=remount-ro), 5.4.0-59-generic kernel on Ubuntu 20.04.1 LTS
|
That's a (temporary?) file which had disappeared in the time between ls reading its directory entry and trying to get the metadata from its inode.
You can reproduce that by stopping ls just before it calls lstat on a file, removing that file, and then letting it continue:
$ mkdir dir; touch dir/file
$ gdb -q ls
Reading symbols from ls...(no debugging symbols found)...done.
(gdb) br __lxstat
Breakpoint 1 at 0x4200
(gdb) r -l dir
...
Breakpoint 1, __GI___lxstat (vers=1, name=0x7fffffffdfca "dir",
buf=0x55555557c538) at ../sysdeps/unix/sysv/linux/wordsize-64/lxstat.c:34
(gdb) c
...
Breakpoint 1, __GI___lxstat (vers=1, name=0x7fffffffd3f0 "dir/file",
buf=0x55555557c538) at ../sysdeps/unix/sysv/linux/wordsize-64/lxstat.c:34
...
(gdb) shell rm dir/file
(gdb) c
...
/usr/bin/ls: cannot access 'dir/file': No such file or directory
total 0
-????????? ? ? ? ? ? file
wonder if I hit some race condition?
Kind of, but not really. It's simply the fact that ls does not hold a lock on the filesystem while it does its stuff ;-)
In any case, this is not a symptom of filesystem corruption or anything like that.
| Question marks in ls metadata output? |
1,445,260,167,000 |
I'm having a weird problem with my laptop. It works fine, but almost every hour the screen freezes. When I force a shutdown and start it again, I see problems similar to this:
The only solution I found is turning the laptop over for a few seconds before starting it again. This helps me see my Ubuntu work normally without these FS problems.
Update:
This is the smartctl output:
smartctl 6.5 2016-01-24 r4214 [i686-linux-4.15.0-32-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
=== START OF INFORMATION SECTION ===
Model Family: Seagate Laptop SSHD
Device Model: ST500LM000-SSHD-8GB
Serial Number: W761F5WC
LU WWN Device Id: 5 000c50 07c440eb8
Firmware Version: LIV5
User Capacity: 500 107 862 016 bytes [500 GB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Rotation Rate: 5400 rpm
Form Factor: 2.5 inches
Device is: In smartctl database [for details use: -P show]
ATA Version is: ATA8-ACS, ACS-3 T13/2161-D revision 3b
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Fri Aug 17 14:37:51 2018 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
See vendor-specific Attribute list for marginal Attributes.
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: ( 128) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 1) minutes.
Extended self-test routine
recommended polling time: ( 96) minutes.
Conveyance self-test routine
recommended polling time: ( 2) minutes.
SCT capabilities: (0x1081) SCT Status supported.
SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 114 099 034 Pre-fail Always - 81759080
3 Spin_Up_Time 0x0003 098 098 000 Pre-fail Always - 0
4 Start_Stop_Count 0x0032 097 097 020 Old_age Always - 3865
5 Reallocated_Sector_Ct 0x0033 100 100 010 Pre-fail Always - 0
7 Seek_Error_Rate 0x000f 072 060 030 Pre-fail Always - 163745195646
9 Power_On_Hours 0x0032 080 080 000 Old_age Always - 17649 (115 151 0)
10 Spin_Retry_Count 0x0013 100 100 097 Pre-fail Always - 0
12 Power_Cycle_Count 0x0032 096 096 020 Old_age Always - 4175
184 End-to-End_Error 0x0032 096 096 099 Old_age Always FAILING_NOW 4
187 Reported_Uncorrect 0x0032 098 098 000 Old_age Always - 2
188 Command_Timeout 0x0032 100 094 000 Old_age Always - 25770197149
189 High_Fly_Writes 0x003a 100 100 000 Old_age Always - 0
190 Airflow_Temperature_Cel 0x0022 058 036 045 Old_age Always In_the_past 42 (Min/Max 42/46 #389)
191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 0
192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 1213
193 Load_Cycle_Count 0x0032 098 098 000 Old_age Always - 4834
194 Temperature_Celsius 0x0022 042 064 000 Old_age Always - 42 (0 12 0 0 0)
196 Reallocated_Event_Count 0x000f 080 080 030 Pre-fail Always - 17711 (44104 0)
197 Current_Pending_Sector 0x0012 100 100 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0010 100 100 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 0
254 Free_Fall_Sensor 0x0032 100 100 000 Old_age Always - 0
SMART Error Log Version: 1
ATA Error Count: 7 (device log contains only the most recent five errors)
CR = Command Register [HEX]
FR = Features Register [HEX]
SC = Sector Count Register [HEX]
SN = Sector Number Register [HEX]
CL = Cylinder Low Register [HEX]
CH = Cylinder High Register [HEX]
DH = Device/Head Register [HEX]
DC = Device Command Register [HEX]
ER = Error register [HEX]
ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.
Error 7 occurred at disk power-on lifetime: 16948 hours (706 days + 4 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 ff ff ff 0f Error: UNC at LBA = 0x0fffffff = 268435455
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 08 ff ff ff 4f 00 00:00:25.320 READ FPDMA QUEUED
60 00 20 ff ff ff 4f 00 00:00:25.319 READ FPDMA QUEUED
60 00 60 88 da 7f 41 00 00:00:25.309 READ FPDMA QUEUED
60 00 20 ff ff ff 4f 00 00:00:25.288 READ FPDMA QUEUED
60 00 08 ff ff ff 4f 00 00:00:25.284 READ FPDMA QUEUED
Error 6 occurred at disk power-on lifetime: 966 hours (40 days + 6 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 10 c5 a5 00 Error: UNC at LBA = 0x00a5c510 = 10863888
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 08 88 79 42 43 00 00:00:18.239 READ FPDMA QUEUED
60 00 08 80 79 42 43 00 00:00:18.239 READ FPDMA QUEUED
60 00 a8 10 c8 84 40 00 00:00:18.237 READ FPDMA QUEUED
60 00 08 78 79 42 43 00 00:00:18.237 READ FPDMA QUEUED
60 00 00 e0 c6 84 40 00 00:00:18.237 READ FPDMA QUEUED
Error 5 occurred at disk power-on lifetime: 966 hours (40 days + 6 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 10 c5 a5 00 Error: UNC at LBA = 0x00a5c510 = 10863888
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 28 00 48 f8 40 00 00:00:13.615 READ FPDMA QUEUED
60 00 08 08 0e 44 40 00 00:00:13.609 READ FPDMA QUEUED
60 00 18 60 d0 e6 40 00 00:00:13.608 READ FPDMA QUEUED
60 00 08 b8 a8 e6 40 00 00:00:13.608 READ FPDMA QUEUED
60 00 28 10 9b e6 40 00 00:00:13.607 READ FPDMA QUEUED
Error 4 occurred at disk power-on lifetime: 32 hours (1 days + 8 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 76 9d b6 01 Error: UNC at LBA = 0x01b69d76 = 28745078
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 80 60 9d b6 41 00 00:01:10.856 READ FPDMA QUEUED
61 00 08 68 89 59 40 00 00:01:10.747 WRITE FPDMA QUEUED
61 00 08 88 f6 3c 40 00 00:01:10.747 WRITE FPDMA QUEUED
2f 00 01 10 00 00 20 00 00:01:10.494 READ LOG EXT
60 00 40 c8 25 4c 41 00 00:01:10.441 READ FPDMA QUEUED
Error 3 occurred at disk power-on lifetime: 32 hours (1 days + 8 hours)
When the command that caused the error occurred, the device was active or idle.
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
40 51 00 76 9d b6 01 Error: UNC at LBA = 0x01b69d76 = 28745078
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
60 00 80 60 9d b6 41 00 00:00:53.242 READ FPDMA QUEUED
61 00 80 40 ba 44 41 00 00:00:53.241 WRITE FPDMA QUEUED
61 00 10 10 f7 86 40 00 00:00:53.241 WRITE FPDMA QUEUED
60 00 40 98 b7 1e 42 00 00:00:53.216 READ FPDMA QUEUED
60 00 08 60 9d b6 41 00 00:00:53.169 READ FPDMA QUEUED
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Vendor (0x50) Completed without error 00% 1 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
I ran fsck at startup to fix the problem, but it didn't help.
Do you have any solution?
|
Backup Immediately
Go buy an additional external HDD/SSD and make a full CloneZilla Live backup right now! The dead giveaway that your drive is in imminent danger of failing is the following parameter:
184 End-to-End_Error 0x0032 096 096 099 Old_age Always FAILING_NOW 4
Especially as you've been having this issue for a month now: HDDs are known to not die immediately, but give you ample warning like clicking sounds, random errors, ... whereas SSDs die suddenly without warning unless you measure their SMART status regularly.
The rule of thumb for drives is:
HDDs die a slow, painful death like cancer
SSDs die a sudden death like a heart attack
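To spot such failing attributes without reading the whole table by eye, the smartctl attribute output can be filtered with awk. A sketch, run here against a sample line copied from the output above rather than a live device (on real hardware you would pipe smartctl -A /dev/sdX through the same filter):

```shell
# Flag any SMART attribute whose WHEN_FAILED column is set.
# In smartctl -A output, field 9 is WHEN_FAILED; '-' means healthy.
printf '%s\n' \
  '184 End-to-End_Error 0x0032 096 096 099 Old_age Always FAILING_NOW 4' |
awk '$9 != "-" { print "attribute", $1, "(" $2 ")", "reports", $9 }'
```

Anything this prints is a drive telling you, in its own vocabulary, to back up now.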
| Random EXT4 FS errors |
1,445,260,167,000 |
I read somewhere that an operating system which knows nothing about ext3 and ext4 (i.e. antique Linux version) is able to read/write to ext4, and it detects any ext4 file system as ext2.
I am not quite sure whether the same is possible from Tux2 to Tux3, or with FAT12. (FAT64 is exFAT.)
How exactly is that possible?
How exactly can ext4 be treated like ext2? Is there no risk for file or metadata corruption?
|
This depends heavily on how the ext4 filesystem was formatted. Some newer ext4 features (e.g. extents or 64bit) cannot be understood by older ext2 drivers, and the kernel would refuse to mount the filesystem (see, for example this post). In general, any filesystem formatted with a modern mke2fs with the default -t ext4 options will not be mountable by an old ext2 driver, but if the filesystem was originally formatted a long time ago, then upgraded to ext4, it may still be mountable by ext2 if none of the newer ext4-specific features were enabled.
The ext2/3/4 filesystems track which features are in use by compat, rocompat, and incompat feature flags. These features are normally set at mke2fs time, but can sometimes be changed by tune2fs. If an unknown compat feature is found, the kernel will mount it, but e2fsck will refuse to check it because it might do something wrong. If an unknown rocompat feature is found, the kernel can mount the filesystem read-only, and any unknown incompat feature will prevent the filesystem from being mounted at all (a message will be printed to /var/log/messages in this case).
You can use debugfs -c -R features <device> to dump the features enabled on a filesystem, for example:
# debugfs -c -R features /dev/sdb1
debugfs 1.42.13.wc5 (15-Apr-2016)
/dev/sdb1: catastrophic mode - not reading inode or group bitmaps
Filesystem features: has_journal ext_attr resize_inode dir_index filetype
needs_recovery dirdata sparse_super large_file huge_file uninit_bg dir_nlink
Though this doesn't tell you which ones are compat, rocompat, or incompat. If your version of debugfs doesn't understand some newer feature, it will print it like I0400 or similar.
| How exactly is ext2 upwards-compatible? |
1,445,260,167,000 |
I have a very basic idea:
I would like to defragment files in size less than 100MB only in ext4 filesystem.
Since there is no option for that in the defragmentation tool (e4defrag), any ideas how I could achieve that?
I know only how to find those files:
find / -type f -size -100M
Reason for such action:
I had a system with 99.x% of the fs occupied, I freed the space now, leaving many files fragmented.
|
I was a hair away from a solution:
sudo find / -xdev -type f -size -100M -exec e4defrag {} +
Notes:
the -xdev argument, as the man page says:
Don't descend directories on other filesystems.
This means it will not process any other filesystems, like tmpfs (/tmp); see mount -v for everything you have mounted.
| How to defragment files in size less than 100MB only in ext4 filesystem |
1,445,260,167,000 |
I'm using debootstrap to create a rootfs for a device that I want to then write to an image file. To calculate the size needed from my rootfs, I do the following:
local SIZE_NEEDED=$(du -sb $CHROOT_DIR|awk '{print $1}')
SIZE_NEEDED=$(($SIZE_NEEDED / 1048576 + 50)) # in MB + 50 MB space
dd if=/dev/zero of=$ROOTFS_IMAGE bs=1M count=$SIZE_NEEDED
As you can see I'm leaving 50MB of padding beyond what dd calculates I need.
I then create the loopback device, create a partition table and filesystem:
LO_DEVICE=$(losetup --show -f $ROOTFS_IMAGE)
parted $LO_DEVICE mktable msdos mkpart primary ext4 0% 100%
partprobe $LO_DEVICE
local LO_ROOTFS_PARTITION="${LO_DEVICE}p1"
mkfs.ext4 -O ^64bit $LO_ROOTFS_PARTITION
It seems parted attempts to do some sector alignment (?) as the partition doesn't quite take up the whole virtual disk, but close enough.
I then mount the new partition and start writing files. But then I run out of disk space right near the end!
mount $LO_ROOTFS_PARTITION $LO_MOUNT_POINT
cp -rp $CHROOT_DIR/* $LO_MOUNT_POINT
.....
cp: cannot create directory '/root/buildimage/rootfs_mount/var': No space left on device
I suspect this is some block size conversion issue or maybe difference between MiB and MB? Because up to a certain image size, it seems that I have enough headroom with the 50MB of padding. (I want some free space in the image by default, but not a lot.) The image size isn't off by a factor-of-two so there's some creep or overhead that gets magnified as the image size gets larger and I'm not sure where it's coming from.
For context, here's the last one I did that doesn't fit:
# du -sb build/rootfs
489889774 build/rootfs
Ok, 489MB/1024**2 + 50MB = 517MB image size. So dd looked like:
# dd if=/dev/zero of=build/rootfs.img bs=1M count=517
517+0 records in
517+0 records out
542113792 bytes (542 MB, 517 MiB) copied, 2.02757 s, 267 MB/s
Confirmed on disk it looks slightly larger:
# du -sb build/rootfs.img
542113792 build/rootfs.img
The partition looks like:
# parted /dev/loop0 print
Model: Loopback device (loopback)
Disk /dev/loop0: 542MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 542MB 541MB primary ext4
and mounted filesystem:
# df -h /dev/loop0p1
Filesystem Size Used Avail Use% Mounted on
/dev/loop0p1 492M 482M 0 100% /root/buildimage/build/rootfs_mount
So maybe there is overhead in the ext4 filesystem, possibly for superblocks, journal, etc.? How can I account for that in my size calculation?
EDIT:
Looking into ext4 overhead such as this ServerFault question.
Also looking into mkfs.ext4 options such as -m (reserved) and various journaling and inode options. In general if I know there's a 5% overhead coming from the filesystem, I can factor that in easily enough.
EDIT #2:
Thinking that du might be under-reporting actual on-disk size requirements (e.g. a 10-byte file still takes up a 4k block, right?) I tried a few other options:
# du -sb build/rootfs # This is what I was using
489889774 build/rootfs
# du -sm build/rootfs # bigger
527 build/rootfs
# du -sk build/rootfs # bigger-est
539088 build/rootfs
Furthermore, the manpage for -b notes that it's an alias for --apparent-size which can be smaller than "actual disk usage." So that may be (most) of where my math was wrong.
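That difference is easy to demonstrate on any filesystem (the 4 KiB block size mentioned in the comments is an assumption; some filesystems use other sizes):

```shell
# A 10-byte file has an apparent size of 10 bytes, but still occupies
# at least one full filesystem block on disk.
tmp=$(mktemp -d)
printf '0123456789' > "$tmp/small"
du -b "$tmp/small" | cut -f1   # apparent size: 10
du -k "$tmp/small" | cut -f1   # allocated size in KiB: typically 4
rm -r "$tmp"
```

Summed over thousands of small rootfs files, that per-file rounding is exactly the kind of gap between du -sb and the real on-disk requirement seen above.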
|
Possibly the simplest solution is to heavily overprovision the space initially, copy all the files, then use resize2fs -M to reduce the size to the minimum this utility can manage. Here's an example:
dir=/home/meuh/some/dir
rm -f /tmp/image
size=$(du -sb $dir/ | awk '{print $1*2}')
truncate -s $size /tmp/image
mkfs.ext4 -m 0 -O ^64bit /tmp/image
sudo mount /tmp/image /mnt/loop
sudo chown $USER /mnt/loop
rsync -a $dir/ /mnt/loop
sync
df /mnt/loop
sudo umount /mnt/loop
e2fsck -f /tmp/image
resize2fs -M /tmp/image
newsize=$(e2fsck -n /tmp/image | awk -F/ '/blocks$/{print $NF*1024}')
truncate -s $newsize /tmp/image
sudo mount /tmp/image /mnt/loop
df /mnt/loop
diff -r $dir/ /mnt/loop
sudo umount /mnt/loop
Some excerpts from the output for an example directory:
+ size=13354874
Creating filesystem with 13040 1k blocks and 3264 inodes
+ df /mnt/loop
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/loop1 11599 7124 4215 63% /mnt/loop
+ resize2fs -M /tmp/image
Resizing the filesystem on /tmp/image to 8832 (1k) blocks.
+ newsize=9043968
+ truncate -s 9043968 /tmp/image
+ df /mnt/loop
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/loop1 7391 7124 91 99% /mnt/loop
| How to calculate the correct size of a loopback device filesystem image for debootstrap? |
1,445,260,167,000 |
In order to test an analysis tool, I need a file where the depth (eh.eh_depth) is greater than 1.
I've tried a couple of things:
A large file (1GiB)
Creating hundreds of smaller files (1MiB), deleting every other one, and then filling the disk with one file (hoping for massive fragmentation).
In both cases I only got a depth of 1!
I even tried manually modifying the inodes in a hex editor, but I ended up corrupting the file system.
I wondered if it could be done with debugfs, but I can't see how?
PS: I have seen the 'increasing depth of extent tree in ext4' question on stackoverflow, but I don't really want to create a 174GiB file.
|
If you want a file with a lot of extents, just do:
$ perl -we 'for ($i=0;$i<100000;$i++) {seek STDOUT,$i*8192,0; print "."}' > a
$ ll a
-rw-r--r-- 1 stephane stephane 819191809 Dec 15 23:50 a
$ filefrag a
a: 100000 extents found
That's a sparse file where every other block is sparse, so it forces the extents to be 4KiB large.
debugfs: dump_extents a
Level Entries Logical Physical Length Flags
0/ 2 1/ 1 0 - 199998 33413 199999
1/ 2 1/295 0 - 679 33409 680
2/ 2 1/340 0 - 0 34816 - 34816 1
2/ 2 2/340 2 - 2 34818 - 34818 1
[...]
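A shell-only variant of the perl sparse-writer above, at a much smaller scale (100 extents instead of 100000; exact sizes depend on the filesystem, and sparse-file support is assumed):

```shell
# Write one byte at the start of every other 4 KiB block: each written
# block becomes its own extent, and the gaps in between stay sparse.
f=$(mktemp)
for i in $(seq 0 99); do
    printf '.' | dd of="$f" bs=1 seek=$(( i * 8192 )) conv=notrunc status=none
done
du --apparent-size -k "$f"   # apparent size spans the whole range
du -k "$f"                   # far fewer KiB actually allocated
rm -f "$f"
```

Scale the loop count up far enough and, as the answer shows, the extent tree has to grow extra levels to index all those one-block extents.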
| Can I create a file on ext4 with a depth > 1 for testing purposes? |
1,445,260,167,000 |
I have been using ext4 filesystems for a long time, and this is the first time I have seen such weird behavior from an ext4 filesystem.
There is an ext4 filesystem on /dev/dm-2.
An I/O error happened in the underlying device, and the filesystem was remounted read-only. That is fine and expected given the configuration. But for some unknown reason, it is now not possible to completely unmount the filesystem.
The command umount /the/mount/point returned with success. Further runs of that command say "Not mounted".
The mount entry is gone from output of mount command. The filesystem is not mounted anywhere else.
But.
First: I can't see the usual EXT4-fs: unmounting filesystem text in dmesg. In fact, there is nothing in the dmesg.
Second thing (it speaks for itself that something is wrong):
root# cat /proc/meminfo | grep -i dirty
Dirty: 9457728 kB
root# time sync
real 0m0.012s
user 0m0.000s
sys 0m0.002s
root# cat /proc/meminfo | grep -i dirty
Dirty: 9453632 kB
Third thing: the debug directory /sys/fs/ext4/dm-2 still exists.
I tried writing "1" to /sys/fs/ext4/dm-2/simulate_fail in the hope that it would bring the filesystem down. But it does nothing and shows nothing in dmesg.
Finally, the fourth thing, which makes the device unusable:
root# e2fsck -fy /dev/dm-2
e2fsck 1.46.5 (30-Dec-2021)
/dev/dm-2 is in use.
e2fsck: Cannot continue, aborting.
I understand that it is possible to reboot, etc. This question is not about solving some simple newbie problem. I want somebody experienced with the ext4 filesystem to help me understand what can cause this behavior.
The dm-2 device is not mounted anywhere else, not bind-mounted, not in use by anything else.
There was nothing else using the dirty cache at the moment of measuring it with cat /proc/meminfo | grep -i dirty.
The umount call that succeeded was not an MNT_DETACH (no -l flag was used). Despite that, it returned almost immediately, which is weird. The mount point is no longer mounted: but as I described above, it can easily be seen that the filesystem is NOT unmounted.
Update: as A.B pointed out, I checked whether the filesystem was still mounted in a different namespace. I hadn't mounted it in a different namespace, so I didn't expect to see anything. But, surprisingly, it was mounted in this one (username changed):
4026533177 mnt 1 3411291 an-unrelated-nonroot-user xdg-dbus-proxy --args=43
I tried to enter that namespace and unmount it using nsenter -t 3411291 -m -- umount /the/mount/point
It resulted in Segmentation fault (core dumped), and this appeared in dmesg:
[970130.866738] Buffer I/O error on dev dm-2, logical block 0, lost sync page write
[970130.867925] EXT4-fs error (device dm-2): ext4_mb_release_inode_pa:4846: group 9239, free 2048, pa_free 4
[970130.870291] Buffer I/O error on dev dm-2, logical block 0, lost sync page write
[970130.949466] divide error: 0000 [#1] PREEMPT SMP PTI
[970130.950677] CPU: 49 PID: 4118804 Comm: umount Tainted: P W OE 6.1.68-missmika #1
[970130.953056] Hardware name: OEM X79G/X79G, BIOS 4.6.5 08/02/2022
[970130.953121] RIP: 0010:mb_update_avg_fragment_size+0x35/0x120
[970130.953121] Code: 41 54 53 4c 8b a7 98 03 00 00 41 f6 44 24 7c 80 0f 84 9a 00 00 00 8b 46 14 48 89 f3 85 c0 0f 84 8c 00 00 00 99 b9 ff ff ff ff <f7> 7e 18 0f bd c8 41 89 cd 41 83 ed 01 0f 88 ce 00 00 00 0f b6 47
[970130.957139] RSP: 0018:ffffb909e3123a28 EFLAGS: 00010202
[970130.957139] RAX: 000000000000082a RBX: ffff91140ac554d8 RCX: 00000000ffffffff
[970130.957139] RDX: 0000000000000000 RSI: ffff91140ac554d8 RDI: ffff910ead74f800
[970130.957139] RBP: ffffb909e3123a40 R08: 0000000000000000 R09: 0000000000004800
[970130.957139] R10: ffff910ead74f800 R11: ffff9114b7126000 R12: ffff910eb31d2000
[970130.957139] R13: 0000000000000007 R14: ffffb909e3123b80 R15: ffff911d732beffc
[970130.957139] FS: 00007f6d94ab4800(0000) GS:ffff911d7fcc0000(0000) knlGS:0000000000000000
[970130.957139] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[970130.957139] CR2: 00003d140602f000 CR3: 0000000365690002 CR4: 00000000001706e0
[970130.957139] Call Trace:
[970130.957139] <TASK>
[970130.957139] ? show_regs.cold+0x1a/0x1f
[970130.957139] ? __die_body+0x24/0x70
[970130.957139] ? __die+0x2f/0x3b
[970130.957139] ? die+0x34/0x60
[970130.957139] ? do_trap+0xdf/0x100
[970130.957139] ? do_error_trap+0x73/0xa0
[970130.957139] ? mb_update_avg_fragment_size+0x35/0x120
[970130.957139] ? exc_divide_error+0x3f/0x60
[970130.957139] ? mb_update_avg_fragment_size+0x35/0x120
[970130.957139] ? asm_exc_divide_error+0x1f/0x30
[970130.957139] ? mb_update_avg_fragment_size+0x35/0x120
[970130.957139] ? mb_set_largest_free_order+0x11c/0x130
[970130.957139] mb_free_blocks+0x24d/0x5e0
[970130.957139] ? ext4_validate_block_bitmap.part.0+0x29/0x3e0
[970130.957139] ? __getblk_gfp+0x33/0x3b0
[970130.957139] ext4_mb_release_inode_pa.isra.0+0x12e/0x350
[970130.957139] ext4_discard_preallocations+0x22e/0x490
[970130.957139] ext4_clear_inode+0x31/0xb0
[970130.957139] ext4_evict_inode+0xba/0x750
[970130.989137] evict+0xd0/0x180
[970130.989137] dispose_list+0x39/0x60
[970130.989137] evict_inodes+0x18e/0x1a0
[970130.989137] generic_shutdown_super+0x46/0x1b0
[970130.989137] kill_block_super+0x2b/0x60
[970130.989137] deactivate_locked_super+0x39/0x80
[970130.989137] deactivate_super+0x46/0x50
[970130.989137] cleanup_mnt+0x109/0x170
[970130.989137] __cleanup_mnt+0x16/0x20
[970130.989137] task_work_run+0x65/0xa0
[970130.989137] exit_to_user_mode_prepare+0x152/0x170
[970130.989137] syscall_exit_to_user_mode+0x2a/0x50
[970130.989137] ? __x64_sys_umount+0x1a/0x30
[970130.989137] do_syscall_64+0x6d/0x90
[970130.989137] ? syscall_exit_to_user_mode+0x38/0x50
[970130.989137] ? __x64_sys_newfstatat+0x22/0x30
[970130.989137] ? do_syscall_64+0x6d/0x90
[970130.989137] ? exit_to_user_mode_prepare+0x3d/0x170
[970130.989137] ? syscall_exit_to_user_mode+0x38/0x50
[970130.989137] ? __x64_sys_close+0x16/0x50
[970130.989137] ? do_syscall_64+0x6d/0x90
[970130.989137] ? exc_page_fault+0x8b/0x180
[970130.989137] entry_SYSCALL_64_after_hwframe+0x64/0xce
[970130.989137] RIP: 0033:0x7f6d94925a3b
[970130.989137] Code: fb 43 0f 00 f7 d8 64 89 01 48 83 c8 ff c3 90 f3 0f 1e fa 31 f6 e9 05 00 00 00 0f 1f 44 00 00 f3 0f 1e fa b8 a6 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 05 c3 0f 1f 40 00 48 8b 15 c1 43 0f 00 f7 d8
[970130.989137] RSP: 002b:00007ffdd60f7d08 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
[970130.989137] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007f6d94925a3b
[970130.989137] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 000055ca1c6f7d60
[970130.989137] RBP: 000055ca1c6f7b30 R08: 0000000000000000 R09: 00007ffdd60f6a90
[970130.989137] R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
[970130.989137] R13: 000055ca1c6f7d60 R14: 000055ca1c6f7c40 R15: 000055ca1c6f7b30
[970130.989137] </TASK>
[970130.989137] Modules linked in: 88x2bu(OE) erofs dm_zero zram ext2 hfs hfsplus xfs kvdo(OE) dm_bufio mikasecfs(OE) simplefsplus(OE) melon(OE) mikatest(OE) iloveaki(OE) tls vboxnetadp(OE) vboxnetflt(OE) vboxdrv(OE) ip6t_REJECT nf_reject_ipv6 ip6t_rt ipt_REJECT nf_reject_ipv4 xt_recent xt_tcpudp nft_limit xt_limit xt_addrtype xt_pkttype nft_chain_nat xt_MASQUERADE xt_nat nf_nat xt_conntrack nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat nf_tables binfmt_misc nfnetlink nvidia_uvm(POE) nvidia_drm(POE) intel_rapl_msr intel_rapl_common nvidia_modeset(POE) sb_edac nls_iso8859_1 x86_pkg_temp_thermal intel_powerclamp coretemp nvidia(POE) snd_hda_codec_realtek snd_hda_codec_generic ledtrig_audio snd_hda_codec_hdmi cfg80211 joydev snd_hda_intel input_leds snd_intel_dspcfg snd_intel_sdw_acpi snd_hda_codec kvm_intel snd_hda_core snd_hwdep kvm snd_pcm snd_seq_midi rapl snd_seq_midi_event snd_rawmidi intel_cstate serio_raw pcspkr snd_seq video wmi snd_seq_device snd_timer drm_kms_helper fb_sys_fops snd syscopyarea sysfillrect sysimgblt soundcore
[970130.989137] ioatdma dca mac_hid sch_fq_codel dm_multipath scsi_dh_rdac scsi_dh_emc scsi_dh_alua msr parport_pc ppdev lp parport drm efi_pstore ip_tables x_tables autofs4 raid10 raid456 async_raid6_recov async_memcpy async_pq async_xor async_tx raid1 raid0 multipath linear crct10dif_pclmul hid_generic crc32_pclmul ghash_clmulni_intel sha512_ssse3 sha256_ssse3 sha1_ssse3 usbhid cdc_ether aesni_intel usbnet uas hid crypto_simd r8152 cryptd usb_storage mii psmouse ahci i2c_i801 r8169 lpc_ich libahci i2c_smbus realtek [last unloaded: 88x2bu(OE)]
[970131.024615] ---[ end trace 0000000000000000 ]---
[970131.203209] RIP: 0010:mb_update_avg_fragment_size+0x35/0x120
[970131.204344] Code: 41 54 53 4c 8b a7 98 03 00 00 41 f6 44 24 7c 80 0f 84 9a 00 00 00 8b 46 14 48 89 f3 85 c0 0f 84 8c 00 00 00 99 b9 ff ff ff ff <f7> 7e 18 0f bd c8 41 89 cd 41 83 ed 01 0f 88 ce 00 00 00 0f b6 47
[970131.207841] RSP: 0018:ffffb909e3123a28 EFLAGS: 00010202
[970131.209048] RAX: 000000000000082a RBX: ffff91140ac554d8 RCX: 00000000ffffffff
[970131.210284] RDX: 0000000000000000 RSI: ffff91140ac554d8 RDI: ffff910ead74f800
[970131.211512] RBP: ffffb909e3123a40 R08: 0000000000000000 R09: 0000000000004800
[970131.212749] R10: ffff910ead74f800 R11: ffff9114b7126000 R12: ffff910eb31d2000
[970131.213977] R13: 0000000000000007 R14: ffffb909e3123b80 R15: ffff911d732beffc
[970131.215181] FS: 00007f6d94ab4800(0000) GS:ffff911d7fcc0000(0000) knlGS:0000000000000000
[970131.216370] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[970131.217553] CR2: 00003d140602f000 CR3: 0000000365690002 CR4: 00000000001706e0
[970131.218740] note: umount[4118804] exited with preempt_count 1
The machine still works, and it's possible to sync other filesystems:
root# sync -f /
root#
But not a global sync:
root# sync
(goes D state forever)
The dirty cache related to that ghost filesystem is not gone; the filesystem is still "mounted".
What can be the cause of these issues?
|
Disclaimer: I can't and won't explain in this answer why a kernel partial failure was triggered. This looks like a kernel bug, possibly triggered by the I/O error conditions.
TL;DR
Having a filesystem still in use can happen when a new mount namespace inherits a mounted filesystem from the original mount namespace, but the propagation settings between the two didn't make the unmount in the original namespace propagate to the new one. The command findmnt -A -o +PROPAGATION displays the propagation status of every visible mountpoint in its output.
Normally this is not supposed to happen in a systemd environment, because systemd makes / a shared mount very early (rather than the kernel default of private), thus allowing unmounts to propagate within their shared peer group. I would thus expect this to happen more easily in a non-systemd environment, or anyway whenever a tool explicitly uses --make-private on some mounts. --make-private still has its uses, especially for virtual pseudo-filesystems.
One way to prevent this from happening is to change such mountpoints to shared, with mount --make-shared ..., before a new mount namespace is created.
I made an experiment to illustrate what happens with shared versus non-shared mounts. I attempted to make sure the experiment should work the same in a systemd or a non-systemd environment.
Experiment
This can be reproduced like below (some values such as /dev/loop0 have to be adapted):
# truncate -s $((2**20)) /tmp/test.raw
# mkfs.ext4 -Elazy_itable_init=0,lazy_journal_init=0 -L test /tmp/test.raw
mke2fs 1.47.0 (5-Feb-2023)
Filesystem too small for a journal
Discarding device blocks: done
Creating filesystem with 1024 1k blocks and 128 inodes
Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
# losetup -f --show /tmp/test.raw
/dev/loop0
# mkdir -p /mnt/propagation/test
Turning the directory into a mountpoint makes it possible to change its propagation later for the experiment, without having to alter the whole system:
# mount --bind /mnt/propagation /mnt/propagation
Now different experiments can have different outcomes.
unshare(1) tells:
unshare since util-linux version 2.27 automatically sets
propagation to private in a new mount namespace to make sure that
the new namespace is really unshared. It’s possible to disable this
feature with option --propagation unchanged. Note that private is
the kernel default.
Other tools might do otherwise. Here we'll change the underlying /mnt/propagation mountpoint instead and always use --propagation unchanged. This avoids getting different results for this experiment on non-systemd (kernel default: / is private) and systemd (systemd default: / is shared) systems.
with shared
# mount --make-shared /mnt/propagation
# mount /dev/loop0 /mnt/propagation/test
# ls /mnt/propagation/test
lost+found
# cat /proc/self/mountinfo | grep /mnt/propagation/test
862 854 7:0 / /mnt/propagation/test rw,relatime shared:500 - ext4 /dev/loop0 rw
Have a second (root) shell and unshare into a new mount namespace (I'll change the prompt to NMNS# to distinguish it):
# unshare -m --propagation unchanged --
NMNS# cat /proc/self/mountinfo | grep /mnt/propagation/test
1454 1453 7:0 / /mnt/propagation/test rw,relatime shared:500 - ext4 /dev/loop0 rw
NMNS# cd /mnt/propagation/test
The same shared:500 links the mount in the two namespaces: umounting from one will unmount it from the other.
In the original shell (in the original mount namespace) unmount it:
# umount /mnt/propagation/test
umount: /mnt/propagation/test: target is busy.
Free the resource usage:
NMNS# cd /
# umount /mnt/propagation/test
#
This time it worked.
And observe it also disappeared in the new mount namespace.
NMNS# cat /proc/self/mountinfo | grep /mnt/propagation/test
NMNS#
The kernel dmesg will have logged the filesystem is unmounted (everywhere), eg:
EXT4-fs (loop0): unmounting filesystem e74e0353-ace0-4eff-86ae-30e288db853e.
Quit the shell in the new mount namespace to clean up.
with private
# mount --make-private /mnt/propagation
# mount /dev/loop0 /mnt/propagation/test
# cat /proc/self/mountinfo | grep /mnt/propagation/test
857 854 7:0 / /mnt/propagation/test rw,relatime - ext4 /dev/loop0 rw
Not shared anymore.
Elsewhere:
# unshare -m --propagation unchanged --
NMNS# cat /proc/self/mountinfo | grep /mnt/propagation/test
1454 1453 7:0 / /mnt/propagation/test rw,relatime - ext4 /dev/loop0 rw
NMNS# echo $$
232529
# umount /mnt/propagation/test
# e2fsck /dev/loop0
e2fsck 1.47.0 (5-Feb-2023)
/dev/loop0 is in use.
e2fsck: Cannot continue, aborting.
#
The filesystem stayed mounted in the new mount namespace.
To find this rogue namespace(s) from the original, one can run something like this:
# for pid in $(lsns --noheadings -t mnt -o PID); do nsenter -t "$pid" -m -- findmnt /mnt/propagation/test && echo $pid; done
nsenter: failed to execute findmnt: No such file or directory
TARGET SOURCE FSTYPE OPTIONS
/mnt/propagation/test /dev/loop0 ext4 rw,relatime
232529
#
Note: the nsenter: failed to execute findmnt: No such file or directory error happened for a mount namespace belonging to a running LXC container, in which findmnt was not available. The loop did find the PID of the process in the new namespace holding the mountpoint (note: in real cases this could be another PID in the same mount namespace; it doesn't matter). In extreme cases, a dedicated command able to change mount namespace, check mounts and perform (u)mounts all-in-one would be required.
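When findmnt is not available inside a target namespace, one variant of the loop (a sketch; the mountpoint path is this experiment's, and it needs root) is to grep /proc/self/mountinfo instead, which exists in every namespace:

```shell
# For each mount namespace, check from inside it whether the
# mountpoint is present; print the PID that was used to enter it.
for pid in $(lsns --noheadings -t mnt -o PID); do
    nsenter -t "$pid" -m -- \
        grep -q ' /mnt/propagation/test ' /proc/self/mountinfo \
        && echo "still mounted in mount namespace of PID $pid"
done
```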
This mount can be removed either by removing the remaining holding resource (PID 232529), which might be needed if the process actively uses the mounted filesystem (preventing umount from succeeding), or by unmounting it in this namespace:
# nsenter -t 232529 -m -- umount /mnt/propagation/test
# e2fsck /dev/loop0
e2fsck 1.47.0 (5-Feb-2023)
test: clean, 11/128 files, 58/1024 blocks
Useful references:
Mount namespaces and shared subtrees [LWN.net]
Mount namespaces, mount propagation, and unbindable mounts [LWN.net]
| Why a filesystem is unmounted but still in use? |
1,445,260,167,000 |
I have an old ext4 disk partition that I have to investigate without disturbing it. So I copied the complete partition to an image file and mounted that image file while continuing my investigation.
Now, while I do not write to the mounted filesystem, I do have to mount it with read/write access, because one of the programs makes assumptions about what I intend to do and requires write access, even though I do not intend to write to it. You know the kind of 'smart' program.
Now the problem is that, when mounting an ext4 filesystem read/write, the last mount point is written into the filesystem itself, i.e. the mount command changes my image file, including file access time and file modification time. That is annoying for a lot of other reasons. I cannot find an option in mount(8) nor in ext4(5) to avoid this.
Is there another way to mount with read/write access, without the mount command writing to the filesystem?
|
I agree with @UlrichSchwarz: mount it read-only, then use OverlayFS or UnionFS to create a writable filesystem on top. You can make the writable layer (the bit where the modifications go) disposable or persistent. Either way, the changes are not stored on the master filesystem.
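A sketch of the OverlayFS approach (paths are examples, run as root): the image is mounted read-only, and a throwaway tmpfs-backed upper layer absorbs any writes, so the image file is never touched.

```shell
mkdir -p /mnt/ro /mnt/rw /tmp/overlay
mount -o loop,ro /path/to/disk.img /mnt/ro     # image stays read-only
mount -t tmpfs tmpfs /tmp/overlay              # writes vanish on unmount
mkdir -p /tmp/overlay/upper /tmp/overlay/work  # workdir must share a fs with upperdir
mount -t overlay overlay \
    -o lowerdir=/mnt/ro,upperdir=/tmp/overlay/upper,workdir=/tmp/overlay/work \
    /mnt/rw
# point the 'smart' program at /mnt/rw
```

For a persistent writable layer, replace the tmpfs with ordinary directories on real storage.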
| Can I mount ext4 without writing the last mountpoint to the filesystem? |
1,445,260,167,000 |
In Linux, or more particularly on ext4, the initial size of a directory file is 4 kB.
But if a large enough number of files is stored in the directory, the size of the directory file will increase as the internal "file list" grows.
How many files are needed for this to happen? I have been unable to find a resource that answers this question.
|
The format of ext4 directory entries is documented in the kernel. There are two possibilities.
For linear directories, each entry occupies eight bytes, plus the file name (zero-terminated), rounded up to four bytes. So n file entries occupy 8 × n bytes plus the lengths of all the file names individually rounded up to four (including the terminating zero). Directories always include . and .. which occupy twelve bytes each. Each linear directory can also have a twelve-byte checksum. The last entry in a block has its record length extended to cover the remaining room in the current block, so that directory entries never straddle two file system blocks.
For hash tree directories, the first data block in each directory has a 40-byte root entry (which includes file entries for . and ..), and each subsequent data block has an 18-byte node. Nodes occupy eight bytes each, and file entries use the same data structure as in a linear directory, ultimately as a linear array. So the amount of space consumed by a directory is harder to compute: each file occupies eight bytes plus the length of its name, rounded up to four bytes, and the tree structure consumes 40 bytes for the first block plus 18 bytes per extra block, and eight bytes per node.
If you want to quickly see a directory increase in size, fill it with files with lengthy file names — file names can be up to 254 bytes in length, plus the terminating zero byte, occupying 264 bytes in total, so 16 such entries in either type of directory will require more than 4096 bytes.
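To see the arithmetic, here is a small sketch of the entry-size rule described above: an 8-byte header plus the name (including its terminating zero) rounded up to four bytes.

```shell
# Size in bytes of one ext4 linear directory entry for a name of $1 bytes
dirent_size() {
    namelen=$1
    echo $(( 8 + ( (namelen + 1 + 3) / 4 ) * 4 ))
}

dirent_size 1     # "."  -> 12 bytes
dirent_size 254   # longest allowed name -> 264 bytes
```

Sixteen 264-byte entries already exceed a 4096-byte block, which is why the long-name trick grows the directory so quickly.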
To determine whether a directory is linear or hashed, examine its inode, e.g. using debugfs:
debugfs: show_inode_info /path/to/directory
Inode: 7329 Type: directory Mode: 0755 Flags: 0x1000
Generation: 2283115506 Version: 0x00000001
...
The flags will show 0x1000 set if the directory is hashed, unset otherwise.
| How many files in a directory before the size of the directory file increase |
1,445,260,167,000 |
I've got 16 nodes of Elasticsearch (RHEL 7), 18 TB each; every node has an ext4 filesystem. For better efficiency I need to change to the XFS filesystem.
Is there any tool or way that can help me change the filesystem without losing data? Or do I have to do a full backup of each node, which will be difficult because of the large size of the data files?
|
The extended filesystem family (ext) has provided one-way in-place upgrades (ext2 to ext3, and ext3 to ext4), but this is only possible because the filesystem was specifically designed to support it. There may be other filesystem families designed with a similar feature. In the case of a within-family filesystem upgrade the risk of failure is relatively low. In any case, it is wise to back up data before a filesystem upgrade in case something goes wrong.
A tool has been created to convert between some types of filesystems on Linux. In theory fstransform can be made to work with any Linux filesystem that supports sparse files (both ext4 and XFS are supported). It does require some free space (more than 10% free is recommended to transform to XFS) and the filesystem must be taken offline to transform.
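A hedged sketch of the workflow (the device name is an example; take a backup first, and run it with the filesystem unmounted, e.g. from a rescue system):

```shell
umount /dev/sdb1
e2fsck -f /dev/sdb1        # make sure the source ext4 filesystem is clean
fstransform /dev/sdb1 xfs  # in-place conversion; wants >10% free space
```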
| Change filesystem without losing data |
1,445,260,167,000 |
I'm going to put 4 million files into an ext4 partition. I have about 700 files in each directory; the average file size is 38 kB, and the total size is 169 gigabytes.
What are the best options I can choose in terms of block size, inode size and inode ratio?
Is it better to create two or more partitions, considering the time an fsck check could take?
|
Handling 4M files in a single filesystem is no problem for ext4, so long as the filesystem is formatted with enough inodes. It is no problem to even have 4M files in a single directory, if the filenames are not excessively large.
There are Lustre filesystems with 1.5-2B files, and 10-12M files in a single directory (which is about the directory limit until kernel 4.recent when the "large_dir" feature was added). That means you don't need to do anything special with the directory structure to handle the files, unless you might need to store many more files in the future, or if you have a regular turnover of files, where you might want to make "age" based directories and then delete them after some time.
Reasonable formatting options would be:
mke2fs -t ext4 -i 32768 -b 4096
-i 32768 = average file size is 32KB, to ensure enough inodes
-b 4096 = blocksize, to allow large directories
The default inode size is fine, unless you store a lot of xattrs on each file. If yes (use getfattr -d -m- -ehex /path/to/existing/file to see what the average xattr size is), then use -I to increase it. The core inode size is about 180 bytes these days, and the rest is available for fast xattrs.
If you put the filesystem on an LVM/DM device, then you can also resize it online to add more space/inodes if you need more in the future. What you can't easily change is the inode ratio or inode size.
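As a sanity check on the -i 32768 choice for the numbers in the question (a sketch; adjust fs_bytes for your own partition):

```shell
# mke2fs creates roughly one inode per bytes-per-inode of device size
fs_bytes=$(( 169 * 1024 * 1024 * 1024 ))   # 169 GiB partition
bytes_per_inode=32768                      # the -i value
inodes=$(( fs_bytes / bytes_per_inode ))
echo "$inodes inodes"                      # ~5.5M, comfortably above 4M files
```

After formatting, df -i shows the actual inode count and usage on the live filesystem.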
| Best options for 4 million files on ext4 |
1,445,260,167,000 |
I have a 2 TB disk that I use in a notebook. The disk was formatted as ext4 and works fine in the notebook, but when I attach it to a desktop (via a SATA-USB adapter), I am unable to mount it due to the following error:
From desktop:
# mount /dev/sdd1 /mnt
mount: /mnt: wrong fs type, bad option, bad superblock on /dev/sdd1, missing codepage or helper program, or other error.
# dmesg | grep sdd
[ 6978.692452] sd 11:0:0:0: [sdd] 3907029166 512-byte logical blocks: (2.00 TB/1.82 TiB)
[ 6978.692604] sd 11:0:0:0: [sdd] Write Protect is off
[ 6978.692606] sd 11:0:0:0: [sdd] Mode Sense: 03 00 00 00
[ 6978.692799] sd 11:0:0:0: [sdd] No Caching mode page found
[ 6978.692803] sd 11:0:0:0: [sdd] Assuming drive cache: write through
[ 6978.789625] sdd: sdd1
[ 6978.789631] sdd: p1 size 3907027120 extends beyond EOD, enabling native capacity
[ 6978.792344] sdd: sdd1
[ 6978.792346] sdd: p1 size 3907027120 extends beyond EOD, truncated
[ 6978.793299] sd 11:0:0:0: [sdd] Attached SCSI disk
[ 7002.085079] EXT4-fs (sdd1): bad geometry: block count 488378390 exceeds size of device (488378389 blocks)
# fdisk -l /dev/sdd
Disk /dev/sdd: 1.8 TiB, 2000398932992 bytes, 3907029166 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa3bf120c
Device Boot Start End Sectors Size Id Type
/dev/sdd1 2048 3907029167 3907027120 1.8T 83 Linux
From Notebook:
# dmesg | grep sdb
[ 6.747344] sd 1:0:0:0: [sdb] 3907029168 512-byte logical blocks: (2.00 TB/1.82 TiB)
[ 6.747347] sd 1:0:0:0: [sdb] 4096-byte physical blocks
[ 6.747369] sd 1:0:0:0: [sdb] Write Protect is off
[ 6.747372] sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00
[ 6.747407] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 6.769650] sdb: sdb1
[ 6.770587] sd 1:0:0:0: [sdb] Attached SCSI disk
[ 14.128886] EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts: data=ordered
Here I tried remounting it, and it worked fine:
[ 286.189504] EXT4-fs (sdb1): mounted filesystem with ordered data mode. Opts: (null)
# fdisk -l /dev/sdb
Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xa3bf120c
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 3907029167 3907027120 1.8T 83 Linux
My question is:
Why does one computer show a different number of sectors on the disk than the other? I checked for bad blocks; none were found.
|
This happens with faulty USB interface adapters. Possible reasons for faulty adapters:
Adapter too old
Cheap adapter
Bad adapter firmware
These errors became a lot more frequent with the advent of advanced format drives.
Some adapters try to "translate" AF drive interactions so they emulate legacy format drives.
This means you can:
Use the USB adapter to format the drive, and then continue to use the USB adapter on both computers
Get a better USB adapter, so you won't have to format your drive.
Use internal SATA connectors on both computers.
Formatting will destroy all data on the drive.
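To compare what each machine's kernel sees through its respective interface, one diagnostic sketch (the device name is a placeholder; needs root):

```shell
blockdev --getsz /dev/sdX             # size in 512-byte sectors
blockdev --getss --getpbsz /dev/sdX   # logical / physical sector size
```

In the question above, the notebook's SATA path reports 3907029168 sectors and 512/4096-byte sectors, while the adapter reports two sectors fewer and 512/512 — a sign the adapter is mistranslating the drive's geometry.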
| ext4-fs: bad geometry: block count exceeds size of device |
1,445,260,167,000 |
I'm on Ubuntu 18.04 and if I issue:
$ cat /proc/mounts
I don't see barrier=1 next to my main filesystem (which is on LVM). Does this mean barriers are not enabled? I read that while there were issues with this some time ago, barriers are now compatible with LVM.
If they are not enabled, how can I enable them? Maybe by adding the option in /etc/fstab?
|
They are enabled by default
Barriers have been enabled by default on ext4 for many years now. If you want to turn barriers off (and you have some sort of battery backup), you can add barrier=0 to the options field in /etc/fstab. See the ext4 documentation on kernel.org.
Generally speaking, don't add options to fstab unless you have a good reason; the defaults are safe and well thought out.
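For completeness, if you do have battery- or flash-backed write cache and deliberately want barriers off, the fstab entry would look something like this (the UUID is a placeholder):

```
# /etc/fstab — barrier=0 disables write barriers; only safe with
# battery-backed storage
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,barrier=0  0  1
```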
| Can I enable barriers for ext4 under LVM? |
1,445,260,167,000 |
I was wondering if it is possible to convert my boot system from xfs to ext4. If it is possible, how do I do so?
|
You can do this using fstransform, which is a tool to convert a filesystem type into another:
fstransform /dev/sda1 ext4
Currently it supports all main Linux filesystems i.e. ext2, ext3, ext4, jfs, ntfs, reiserfs, xfs.
| Convert boot filesystem from xfs to ext4 |
1,445,260,167,000 |
For the last 2 weeks I have had a problem with my SSD in GNU/Linux. I think it's not a device problem, but I'm not sure.
From time to time (every 1-2 days recently) I lose access to the disk, as if it were disconnected or powered off.
The error:
EXT4-fs error (device: sda2): ext4_find_entry:1465: inode #1308161: comm NetworkManager: reading directory lblock 0
I've typed this error from a photo, so it may not be fully accurate.
Notes:
The device is always "sda2"; I haven't noticed the error on the other (big home) partition. I will try to check this next time.
The inode and process name change, but NetworkManager is quite common. lblock is always 0.
Hardware:
Dell E7270 with SSD disk LITEON CV3-8D512-11 SATA 512GB
Software:
Debian testing, kernel 4.11.
smartctl brief output:
Device Model: LITEON CV3-8D512-11 SATA 512GB
Serial Number: TW0956WWLOH006CU022Z
LU WWN Device Id: 5 002303 100ce15e0
Firmware Version: T89110D
User Capacity: 512,110,190,592 bytes [512 GB]
Sector Size: 512 bytes logical/physical
Rotation Rate: Solid State Device
Form Factor: M.2
Device is: Not in smartctl database [for details use: -P showall]
ATA Version is: ATA8-ACS, ATA/ATAPI-7 T13/1532D revision 4a
SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is: Wed Jul 5 12:32:39 2017 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
...
SMART Attributes Data Structure revision number: 1
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
5 Reallocated_Sector_Ct 0x0003 100 100 000 Pre-fail Always - 0
9 Power_On_Hours 0x0002 100 100 000 Old_age Always - 327
12 Power_Cycle_Count 0x0003 100 100 000 Pre-fail Always - 335
175 Program_Fail_Count_Chip 0x0003 100 100 000 Pre-fail Always - 0
176 Erase_Fail_Count_Chip 0x0003 100 100 000 Pre-fail Always - 0
177 Wear_Leveling_Count 0x0003 100 100 000 Pre-fail Always - 59
178 Used_Rsvd_Blk_Cnt_Chip 0x0003 100 100 000 Pre-fail Always - 0
179 Used_Rsvd_Blk_Cnt_Tot 0x0003 100 100 000 Pre-fail Always - 0
180 Unused_Rsvd_Blk_Cnt_Tot 0x0033 100 100 005 Pre-fail Always - 2688
181 Program_Fail_Cnt_Total 0x0003 100 100 000 Pre-fail Always - 0
182 Erase_Fail_Count_Total 0x0003 100 100 000 Pre-fail Always - 0
187 Reported_Uncorrect 0x0003 100 100 000 Pre-fail Always - 0
194 Temperature_Celsius 0x0003 100 100 000 Pre-fail Always - 76
195 Hardware_ECC_Recovered 0x0003 100 100 000 Pre-fail Always - 0
199 UDMA_CRC_Error_Count 0x0003 100 100 000 Pre-fail Always - 0
238 Unknown_Attribute 0x0003 097 100 000 Pre-fail Always - 3
241 Total_LBAs_Written 0x0003 100 100 000 Pre-fail Always - 4293005286
242 Total_LBAs_Read 0x0003 100 100 000 Pre-fail Always - 3510503294
SMART Error Log Version: 0
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline Completed without error 00% 298 -
# 2 Short offline Completed without error 00% 294 -
# 3 Offline Interrupted (host reset) 80% 294 -
# 4 Offline Interrupted (host reset) 10% 294 -
# 5 Short offline Completed without error 00% 294 -
# 6 Short offline Completed without error 00% 1 -
# 7 Short offline Aborted by host 90% 1 -
Ideas:
run bad block check
check connections
|
I think I've fixed this by removing the SSD, blowing air into the M.2 connector, and reinserting it.
When I booted into a rescue Debian from USB, I noticed more detailed kernel debug information. While searching, I noticed that most solutions were to replace SATA cables, but a laptop M.2 connection has no cables.
I'm posting a screenshot.
Some of the most important log lines:
exception Emask 0x10 SAct ... SErr ... action 0xe frozen
interface fatal error, PHY RDY changed
SError: { PHYRdyChg LinkSeq }
failed command: WRITE FPDMA QUEUED
Emask 0x10 (ATA bus error)
hard resetting link
| random SSD turn off - ext4_find_entry , reading directory lblock0 |
1,439,543,725,000 |
I was trying to resize my LUKS crypt following https://wiki.archlinux.org/index.php/Resizing_LVM-on-LUKS, and I got to the partition resize with parted and seriously screwed things up. I typed 870 as the new size and forgot to put a G on the end, so it shrunk my partition down to 870M. I immediately resized it back to 870G, but by then the damage was done.
Luckily I could still decrypt the LUKS crypt, but I couldn't get my logical volume to even have a device file on the system. LVM recognized the volume as existing and showed the device file it was attached to, but the file didn't exist, and it showed the volume as having no filesystem. I ran vgscan --mknodes and it successfully generated the device file, but testdisk still wouldn't show it.
I recreated the volume and put a new ext4 filesystem on it, and now testdisk will show the drive, but scanning yields nothing. I get a whole bunch of ext4 entries, but all of them say either "Can't open filesystem" or "No files found".
Is there any way for me to recover the filesystem that was on the disk? I don't want to write any data to it until I get what's on it off, unless that's not possible.
EDIT: After poking around, the real thing I need help with is recovering files from a previous ext4 filesystem. My drive had an ext4 filesystem on it, which has since been overwritten with a new one; however, all the data from the old filesystem still exists, as shown by sudo dd if=/dev/Storage/Storage bs=1M | strings -fn 16. The only thing I did after my screw-up was put a new ext4 FS on it and nothing else, so most of my data is probably still intact. I need to recover that data.
pvdisplay shows the following
--- Physical volume ---
PV Name /dev/mapper/Storage
VG Name Storage
PV Size 931.51 GiB / not usable 3.68 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 238466
Free PE 0
Allocated PE 238466
PV UUID CAueGx-Glzx-zCd0-H00m-R8d5-KTRc-9Ff7ay
--- Physical volume ---
PV Name /dev/mapper/sda3_crypt
VG Name mint-vg
PV Size 118.50 GiB / not usable 0
Allocatable yes
PE Size 4.00 MiB
Total PE 30336
Free PE 10
Allocated PE 30326
PV UUID UJJfu8-S2Ac-pEZl-PlPa-uUzJ-axEs-ckbDWG
My backup shows
# Generated by LVM2 version 2.02.98(2) (2012-10-15): Thu Aug 13 20:45:52 2015
contents = "Text Format Volume Group"
version = 1
description = "Created *before* executing '/sbin/lvreduce --config log{command_names=0} -f -l 217600 /dev/Storage/Storage'"
creation_host = "desktop" # Linux desktop 3.19.0-25-generic #26~14.04.1-Ubuntu SMP Fri Jul 24 21:16:20 UTC 2015 x86_64
creation_time = 1439523952 # Thu Aug 13 20:45:52 2015
Storage {
id = "lM3S9T-inH1-mKsq-5doN-H8hT-zO3F-LF9jDx"
seqno = 2
format = "lvm2" # informational
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 256
max_pv = 256
metadata_copies = 0
physical_volumes {
pv0 {
id = "nH1Axo-5nBo-WcyA-Xc4E-KwRt-K0Ib-ScK8Ch"
device = "/dev/mapper/Storage" # Hint only
status = ["ALLOCATABLE"]
flags = []
dev_size = 1953520999 # 931.511 Gigabytes
pe_start = 2048
pe_count = 238466 # 931.508 Gigabytes
}
}
logical_volumes {
Storage {
id = "Qb01kz-y1RG-PVQp-cGjB-sj77-xgnJ-w9kn3n"
status = ["READ", "WRITE", "VISIBLE"]
flags = []
creation_host = "desktop"
creation_time = 1436247513 # 2015-07-06 22:38:33 -0700
segment_count = 1
segment1 {
start_extent = 0
extent_count = 238466 # 931.508 Gigabytes
type = "striped"
stripe_count = 1 # linear
stripes = [
"pv0", 0
]
}
}
}
}
|
You can't fix LVM by growing size back to original size, unless you were very lucky and the LV had no fragmentation whatsoever due to previous resizes. Chances are the new LV will have the first 20G or so of your original filesystem but the remaining 780G (or whatever) are scrambled eggs (wrong data, wrong offset, wrong order).
And that's assuming you're using HDD media. If it was SSD, with issue_discards=1 in your lvm.conf, the data would simply be gone, which is why I never use this option.
You have to check /etc/lvm/{archive,backup}/ for old versions of your metadata. Each file in there says when it was created, for example:
description = "Created *before* executing 'lvremove HDD/mdtest1'"
You're looking for the one that says Created before the lvresize to 870 with the G missing. And then vgcfgrestore the LVM metadata using that backup and hopefully then it will be back in working order.
If you do not have such files in /etc/lvm, either because you did this from a Live CD that lost this data, or the damage happened on your root LV, things get a bit more complicated as you have to hope for the LVM metadata on disk to contain this bit of history in its circular buffer.
Rough method to see what's possibly in there:
dd if=/dev/pvdevice bs=1M count=1 | strings -w -n 16
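For the archive search itself, something like this can help. It is only a rough sketch: it builds a throwaway directory with made-up file names, because in real life the files live in /etc/lvm/archive/ and their names depend on your history.

```shell
# Print each archived metadata file alongside the command that triggered it,
# so the backup taken just before the bad lvresize stands out.
dir=$(mktemp -d)                            # stand-in for /etc/lvm/archive
printf 'description = "Created *before* executing %s"\n' \
    "'lvcreate -n Storage Storage'" > "$dir/Storage_00001.vg"
printf 'description = "Created *before* executing %s"\n' \
    "'lvresize -L 870 /dev/Storage/Storage'" > "$dir/Storage_00002.vg"
grep -H 'Created \*before\*' "$dir"/*.vg    # eyeball this list for the bad resize
match=$(grep -l 'lvresize' "$dir"/*.vg)     # the file to feed to vgcfgrestore -f
echo "${match##*/}"
```

With the real archive, the chosen file would then go to vgcfgrestore -f <file> <vgname>.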
| LVM Filesystem recovery |
1,439,543,725,000 |
Would there be any problems in copying files on my Linux system with ext4 filesystem to an external drive that is formatted in NTFS? I'm reinstalling my OS and intend to copy these files back to my Linux system once the new Linux OS is up and running.
|
No; with ntfs-3g you've got read and write support for NTFS-formatted partitions. Just avoid the following characters in file names: \ : * ? " < > |
You will probably lose the permissions. If this is important to you (which I doubt), create a tar file first and then transfer it to the NTFS drive.
If you are free to choose the file system of the external drive (for further use), I would recommend ext3/4, because it's more error-resistant (full journaling, less fragmentation, file-system checks, ...) than the NTFS filesystem.
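A minimal sketch of the tar approach (paths and names made up): pack the tree into one archive, park that single file on the NTFS drive, and unpack it later with modes intact.

```shell
src=$(mktemp -d); dst=$(mktemp -d)         # dst stands in for the NTFS mount
mkdir "$src/docs"
echo hi > "$src/docs/note.txt"
chmod 600 "$src/docs/note.txt"
tar -C "$src" -cpf "$dst/backup.tar" docs  # one archive file instead of a tree
tar -C "$dst" -xpf "$dst/backup.tar"       # later: restore with permissions intact
mode=$(stat -c '%a' "$dst/docs/note.txt")
echo "$mode"                               # prints: 600
```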
| Backup Files on ext4 to external NTFS drive |
1,439,543,725,000 |
[root@xx]# cat create_extents.sh
#!/bin/bash

if [ $# -ne 2 ]
then
    echo "$0 [filename] [size in kb]"
    exit 1
fi

filename=$1
size=$2
i=0

while [ $i -lt $size ]
do
    i=`expr $i + 7`
    echo -n "$i" | dd of=$1 bs=1024 seek=$i
done
so I ran
sudo ./create_extents.sh /device3/test70 70
Then I used the debugfs stat command to check it:
Inode: 13 Type: regular Mode: 0644 Flags: 0x80000
Generation: 2638566511 Version: 0x00000000:00000001
User: 0 Group: 0 Size: 71682
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 88
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x53292990:042e685c -- Wed Mar 19 01:22:24 2014
atime: 0x5329298f:edd4dc60 -- Wed Mar 19 01:22:23 2014
mtime: 0x53292990:042e685c -- Wed Mar 19 01:22:24 2014
crtime: 0x5329298f:edd4dc60 -- Wed Mar 19 01:22:23 2014
Size of extra inode fields: 28
EXTENTS:
(ETB0):33803, (1):33825, (3):33827, (5):33829, (7-8):33831-33832, (10):33834, (12):33836, (14-15):33838-33839, (17):33841
(END)
Why does it take so many blocks? And why are they placed so scattered?
My block size is 4k.
I know that ext4 tries hard to keep locality for one file.
Thanks.
|
The “Blockcount” value is the i_blocks field of the struct ext2_inode. This is the value that is returned to the stat syscall in the st_blocks field. For historical reasons, the unit of that field is 512-byte blocks — this was the filesystem block size on early Unix filesystems, but now it's just an arbitrary unit. You can see the value being incremented and decremented depending solely on the file size further down in fs/stat.c.
You can see this same value by running stat /device3/test70 (“Blocks: 88”).
The file in fact contains 18 blocks, which is as expected with a 4kB block size (the file is 71682 bytes long, not sparse, and 17 × 4096 < 71682 ≤ 18 × 4096).
It probably comes out as surprising that the number of 512-byte blocks is 88 and not 141 (because 140 × 512 < 71682 ≤ 141 × 512) or 144 (which is 18 × 4096/512). The reason has to do with the calculation in fs/stat.c that I linked to above. Your script creates this file by seeking repeatedly past the end, and for the i_blocks field calculation, the file is sparse: there are whole 512-byte blocks that are never written to and thus not counted in i_blocks. (However, there isn't any storage block that's fully sought past, so the file is not actually sparse.)
If you copy the file, you'll see that the copy has 144 such blocks as expected (note that you need to run cp --sparse=never, because GNU cp tries to be clever and seeks when it sees expanses of zeroes).
As to the number of extents, creating a file the way you do by successive seeks past the end is not a situation that filesystems tend to be optimized for. I think that the heuristics in the filesystem driver first decide that you're creating a small file, so start by reserving space one block at a time; later, when the file grows, the heuristics start reserving multiple blocks at a time. If you create a larger file, you should see increasing large extents. But I don't know ext4 in enough detail to be sure.
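The block arithmetic above can be spelled out in a couple of lines of shell (this just redoes the ceiling division, without touching any filesystem):

```shell
size=71682; fsblk=4096
nblk=$(( (size + fsblk - 1) / fsblk ))   # ceil(71682 / 4096) = 18 filesystem blocks
echo "$nblk $(( nblk * fsblk / 512 ))"   # prints: 18 144 (512-byte units if non-sparse)
```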
| ext4, why 70k file takes 88 blocks |
1,439,543,725,000 |
I have a Debian box with additional ( it is not a system disk ) 1.5Tb sata hdd (wd caviar green). There is only one partition on the whole disk.
Disk is used for backups from remote system (with rsnapshot, backup update runs every 4 hours) and rtorrent for some files. So disk is permanently in use.
Everything was perfect, until some filesystem errors appeared and I couldn't delete a lot of files due to filesystem read/write error.
fsck saved me, within one month errors appered several times. Every time I need to umount partition and run fsck to fix all errors.
During fsck different fs block read/write were fixed, also appeared some messages like:
Inode 61477311 ref count is 3, should be 2. Fix? yes
Block bitmap differences: -(246948483--246948494) -(246987843--246987871) -(246988756--246988758) -(246989103--246989109). Fix? yes
smartctl doesn't show any errors at all.
So, should I backup the whole data and format it or hard is dying or maybe there is another way to fix the issue?
ps. Here is smartctls output:
smartctl 5.40 2010-07-12 r3124 [x86_64-unknown-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net
=== START OF INFORMATION SECTION ===
Model Family: Western Digital Caviar Green (Adv. Format) family
Device Model: WDC WD15EARS-00Z5B1
Serial Number: WD-WMAVU1111103
Firmware Version: 80.00A80
User Capacity: 1,500,301,910,016 bytes
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: Exact ATA specification draft version not indicated
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
General SMART Values:
Offline data collection status: (0x82) Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed
without error or no self-test has ever
been run.
Total time to complete Offline
data collection: (33000) seconds.
Offline data collection
capabilities: (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported.
General Purpose Logging supported.
Short self-test routine
recommended polling time: ( 2) minutes.
Extended self-test routine
recommended polling time: ( 255) minutes.
Conveyance self-test routine
recommended polling time: ( 5) minutes.
SCT capabilities: (0x3031) SCT Status supported.
SCT Feature Control supported.
SCT Data Table supported.
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 197 197 051 Pre-fail Always - 36297
3 Spin_Up_Time 0x0027 206 177 021 Pre-fail Always - 4658
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 267
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 084 084 000 Old_age Always - 12335
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 265
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 57
193 Load_Cycle_Count 0x0032 142 142 000 Old_age Always - 176547
194 Temperature_Celsius 0x0022 120 087 000 Old_age Always - 30
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 3
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 3
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed without error 00% 12335 -
SMART Selective self-test log data structure revision number 1
SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
1 0 0 Not_testing
2 0 0 Not_testing
3 0 0 Not_testing
4 0 0 Not_testing
5 0 0 Not_testing
Selective self-test flags (0x0):
After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
|
Additionally, I discovered some messages in dmesg like:
[2429573.624923] ata6.00: status: { DRDY ERR }
[2429573.624945] ata6.00: error: { UNC }
[2429573.632900] ata6.00: configured for UDMA/133
[2429573.632942] ata6: EH complete
[2429576.564846] ata6.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x0
[2429576.564885] ata6.00: irq_stat 0x40000008
[2429576.564910] ata6.00: failed command: READ FPDMA QUEUED
[2429576.564942] ata6.00: cmd 60/08:00:e8:14:c0/00:00:75:00:00/40 tag 0 ncq 4096 in
[2429576.564946] res 41/40:00:e8:14:c0/00:00:75:00:00/40 Emask 0x409 (media error) <F>
Google suggested to change data cable. So now everything looks ok. Thanks to everyone.
| A lot of errors on ext4, however smart doesn't show any errors |
1,439,543,725,000 |
How do I accomplish setting up project quota for my live root folder being ext4 on Ubuntu 18.04?
Documentation specific to project quota on the ext4 filesystem is basically non-existent and I tried this:
Installed Quota with apt install quota -y
Put prjquota into /etc/fstab for the root / and rebooted; the filesystem got mounted read-only, with no project quota (from here, only with prjquota instead of the user and group quotas)
Also find /lib/modules/`uname -r` -type f -name '*quota_v*.ko*' was run and both kernel modules /lib/modules/4.15.0-96-generic/kernel/fs/quota/quota_v2.ko and /lib/modules/4.15.0-96-generic/kernel/fs/quota/quota_v1.ko were found (from this tutorial)
Put GRUB_CMDLINE_LINUX_DEFAULT="rootflags=prjquota" into /etc/default/grub, ran update-grub and rebooted, machine does not come up anymore.
Putting rootflags=quota into GRUB_CMDLINE_LINUX="... rootflags=quota" running update-grub and restarting did show quota and usrquota being enabled on root, but it does not work with prjquota or pquota or project being set as an rootflag
I need this for the DIR storage backend of LXD to be able to limit container storage size.
What else can I try?
|
I was told that running tune2fs -O project -Q prjquota /dev/sdaX is absolutely essential to enable project quota on a device. So I searched for a solution that does not require switching off or using a live CD, as that takes too much time and does not always work well in my experience with my VPS provider. I also hoped that I could turn the steps into a script, which has not worked out so far.
Thanks to another question I was able to put together a solution that worked for me on Ubuntu 18.04. You need about 4 GB of RAM to do this (and of course a kernel newer than version 4.4).
Sources:
How to shrink root filesystem without booting a livecd
http://www.ivarch.com/blogs/oss/2007/01/resize-a-live-root-fs-a-howto.shtml
1. Make a RAMdisk filesystem
mkdir /tmp/tmproot
mount none /tmp/tmproot -t tmpfs -o rw
mkdir /tmp/tmproot/{proc,oldroot,sys}
cp -a /dev /tmp/tmproot/dev
cp -ax /{bin,etc,opt,run,usr,home,mnt,sbin,lib,lib64,var,root,srv} /tmp/tmproot/
2. Switch root to the new RAMdisk filesystem
cd /tmp/tmproot
unshare -m
pivot_root /tmp/tmproot/ /tmp/tmproot/oldroot
mount none /proc -t proc
mount none /sys -t sysfs
mount none /dev/pts -t devpts
3. Restart SSH on another port than 22 and reconnect with another session
nano /etc/ssh/sshd_config
Change the port to 2211
Restart SSH with /usr/sbin/sshd -D &
Connect again from 2211
4. Kill processes using /oldroot or /dev/sdaX
fuser -km /oldroot
fuser -km /dev/sdaX
5. Unmount /dev/sdaX and apply the project quota feature
umount -l /dev/sdaX
tune2fs -O project -Q prjquota /dev/sdaX
6. Mount with Project Quota
mount /dev/sdaX -o prjquota /oldroot
7. Putting things back
pivot_root /oldroot /oldroot/tmp/tmproot
umount /tmp/tmproot/proc
mount none /proc -t proc
cp -ax /tmp/tmproot/dev/* /dev/
mount /dev/sda1 /boot ### This might be different for you
reboot -f
8. Turn quota on after reboot
apt install quota -y
quotaon -Pv -F vfsv1 /
9. Check if quota is on on root
repquota -Ps /
10. Make it persistent
Put prjquota into the options of root in /etc/fstab
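For reference, the resulting root line in /etc/fstab might look like the following; the UUID is a placeholder and the other options are only an example, so keep whatever your line already has and just append prjquota:

UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  errors=remount-ro,prjquota  0  1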
Enjoy!
| Project Quota on a live root EXT4 filesystem without live-cd |
1,439,543,725,000 |
I have an ext4 img file containing an Android system partition files.
I mount it by using sudo mount -t ext4 -o loop,rw system.img system
When I'm done editing the files and unmount the image, I notice that the resulting image is significantly larger than the content it holds.
I checked with GParted, and it is indeed true: the image file is 2.0 GB, while its partition only has 1.51 GB used (and 506 MB unused).
I can use GParted to resize the partition and shrink the IMG size by 500 MB, but I would like to have it automated, in a script. How could I achieve this?
|
You can use resize2fs -M system.img to shrink your filesystem to the minimum size. Note that this does not shrink the image file directly. You would need to use truncate to shrink the image file to the new filesystem size (carefully, so you don't chop data off the end).
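A hedged sketch of that truncate step: after resize2fs -M, read the new "Block count" and "Block size" that dumpe2fs -h system.img reports, and multiply them to get the byte size to truncate to. The numbers below are made up for illustration; a real script would parse them from dumpe2fs.

```shell
# e.g. dumpe2fs -h reported: Block count: 394241, Block size: 4096
blocks=394241; bs=4096
newsize=$(( blocks * bs ))
echo "$newsize"    # bytes; the image could then be shrunk with:
                   #   truncate -s "$newsize" system.img
```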
| Automatically shrink an ext4 IMG file |
1,439,543,725,000 |
Ubuntu 17.04; ext4 filesystem on 4TB WD green SATA [WDC WD40EZRX-22SPEB0]
Mount (on startup, from fstab) failed with bad superblock. fsck reported / inode damaged, but repaired it. 99% of files restored (the few that are lost are available in backup). Repaired volume mounts and operates normally.
Looking at the SMART data, I think the disk is okay. The "extended" smartctl test passed. The data is already backed up (and it's not mission critical), and I already have a replacement drive. It's tempting to take a "zero tolerance" policy and replace the disk now, but it's a £100 item, and I don't want to be chucking a wobbly and binning every disk that ever writes a bad block once.
Here's the smartctl dump. Is the disk actually dying, or did it just have a one-time mishap?
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 61
3 Spin_Up_Time 0x0027 195 176 021 Pre-fail Always - 7225
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 770
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 200 200 000 Old_age Always - 0
9 Power_On_Hours 0x0032 084 084 000 Old_age Always - 12325
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 730
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 40
193 Load_Cycle_Count 0x0032 194 194 000 Old_age Always - 18613
194 Temperature_Celsius 0x0022 121 106 000 Old_age Always - 31
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 21
SMART Error Log Version: 1
No Errors Logged
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Extended offline Completed without error 00% 12320 -
# 2 Short offline Completed without error 00% 12311 -
|
According to the SMART readings, the disk seems fine at the moment.
The exciting ones for disk sectors are these
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 0
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 0
A reallocated sector is one that failed a write and was remapped elsewhere on the disk. A small number of these is acceptable. Zero is excellent.
The current pending sector value is the number of sectors that are waiting to be reallocated elsewhere. (The read failed but the disk is waiting for a write request, which is the point at which the sector gets remapped.) This may become non-zero for a while, and as the sectors get overwritten this number will decrease and the reallocated sector count will increase.
The count of offline uncorrectable sectors is the number of sectors that failed and could not be remapped. A non-zero value is bad news because it means you are losing data. Your zero value is just fine.
These next group show the duration of use of your disk drive
4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 770
9 Power_On_Hours 0x0032 084 084 000 Old_age Always - 12325
12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 730
You've had the device running for 12325 hours (if that's continuous time it's about 17 months) and during that time it has powered up and down 730 times. If you power it off daily then you've had the disk running for about 16 hours/day over two years.
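As a quick sanity check, the arithmetic behind those estimates:

```shell
hours=12325; cycles=730
echo "$(( hours / 24 )) days total, ~$(( hours / cycles )) h per power cycle"
# prints: 513 days total, ~16 h per power cycle
```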
Finally, it would be worth scheduling a full test every week. You can do this with a command such as smartctl -t long /dev/sda. Errors in the tests can become cause for concern.
# 1 Extended offline Completed without error 00% 12320 -
# 2 Short offline Completed without error 00% 12311 -
If you are using this in a NAS I would recommend a NAS grade disk. Personally I find the WD Red are very good in this respect. The cost is a little higher but the warranty is longer.
| ext4 : bad block fixed, but is this disk dying? |
1,439,543,725,000 |
I had some bad sectors on my ext4 partition, and using hdparm --write-sector I managed to reallocate them. However, I ended up in a state where one directory contains a folder that has no inode assigned.
```
ls -li /path/
? d?????????? ? ? ? ? ? folder
```
I am unable to delete this folder now. I tried simply to rm -fr it, with no success. I wanted to delete it with debugfs, but opening the filesystem that contains this folder gives me Bad magic number in super-block while opening filesystem. I don't know whether ext4 on LVM is supported by debugfs, and I found no info on that.
|
I'd suggest forcing a fsck: sudo touch /forcefsck and then reboot. But before you do that, make sure you have backups — especially now since you can still access the contents of your filesystem.
debugfs does support LVM-backed filesystems, it simply uses whatever block device you give it (or even a file). Presumably one of the blocks you reallocated was in the superblock; you could always try to run it using a backup superblock with the -s option (which also requires the -b option), but it's probably best not to write to the filesystem like that.
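If you do end up needing -b, the candidate block numbers are easy to compute. With the default sparse_super feature, backup superblocks sit at the start of block group 1 and of groups that are powers of 3, 5 and 7; on a 4 KiB-block filesystem each group spans 32768 blocks. (Running mke2fs -n on the device also prints the list without writing anything, which is the safer way to get the real values.)

```shell
bpg=32768                             # blocks per group at 4 KiB block size
for g in 1 3 5 7 9 25 27 49; do
    printf '%s ' $(( g * bpg ))       # candidate values for e2fsck/debugfs -b
done; echo
# prints: 32768 98304 163840 229376 294912 819200 884736 1605632
```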
| Delete file with missing inode number |
1,439,543,725,000 |
I have a pretty basic system running Ubuntu 16.04 (this question is not specific to Ubuntu, but rather ext4 partitions), 1 HDD, running a few partitions:
sda1 - EXT4 - 100G - /
sda2 - EXT4 - 723.5G - /home
sda3 - NTFS - 100G - (windows)
sda5 - SWAP - 8G
Whenever I try to access one of 3-4 files in a specific directory in the /home partition, (the specific folder causing the issues is /home/path/to/broken/folder), the /home partition will error and remount read-only. dmesg shows the following errors:
EXT4-fs error (device sda2): ext4_ext_check_inode:497: inode #1415: comm rm: pblk 0 bad header/extent: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)
Aborting journal on device sda2-8.
EXT4-fs (sda2): Remounting filesystem read-only
EXT4-fs error (device sda2): ext4_ext_check_inode:497: inode #1417: comm rm: pblk 0 bad header/extent: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)
EXT4-fs error (device sda2): ext4_ext_check_inode:497: inode #1416: comm rm: pblk 0 bad header/extent: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)
So I understand what is going on...some bad block is causing an error and is remounting the drive read-only to prevent further corruption. I know it is these specific files because I can undo the error by
Logging in as root
Running sync
Stopping lightdm (and all sub-processes)
Stop all remaining open files on /home by finding them with lsof | grep /home
Unmounting /home
Running fsck /home (fixing the errors)
Remount /home
Everything is fine again, read and write, until I try to access the same files again, then this entire process is repeated to fix it again.
The way I've tried to access the files is by running ls /home/path/to/broken/folder and rm -r /home/path/to/broken/folder, so it seems any kind of HDD operation on that part of the drive errors it and throws it into read-only again.
I honestly don't care about the files, I just want them gone. I am willing to remove the entire /home/path/to/broken/folder folder, but every time I try this, it fails and throws into read-only.
I ran badblocks -v /dev/sda2 on my hard drive, but it came out clean, no bad blocks. Any help would still be greatly appreciated.
Still looking for a solution to this. Some information that might be useful below:
$ debugfs -R 'stat <1415>' /dev/sda2
debugfs 1.42.13 (17-May-2015)
Inode: 1415 Type: regular Mode: 0644 Flags: 0x80000
Generation: 0 Version: 0x00000000
User: 0 Group: 0 Size: 0
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 0
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x5639ad86 -- Wed Nov 4 01:02:30 2015
atime: 0x5639ad86 -- Wed Nov 4 01:02:30 2015
mtime: 0x5639ad86 -- Wed Nov 4 01:02:30 2015
Size of extra inode fields: 0
EXTENTS:
Now I looked at this myself and compared it to what I suspect to be a non-corrupted inode:
$ debugfs -R 'stat <1410>' /dev/sda2
debugfs 1.42.13 (17-May-2015)
Inode: 1410 Type: regular Mode: 0644 Flags: 0x80000
Generation: 0 Version: 0x00000000
User: 0 Group: 0 Size: 996
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 0
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x5639ad31 -- Wed Nov 4 01:01:05 2015
atime: 0x5639ad31 -- Wed Nov 4 01:01:05 2015
mtime: 0x5639ad31 -- Wed Nov 4 01:01:05 2015
Size of extra inode fields: 0
EXTENTS:
(0):46679378
The key differences here are the Size: 0 and the empty EXTENTS list on the corrupted inode. I looked at other non-corrupted inodes and, like 1410, they all display a non-zero size and at least one extent.
Bad header/extent makes sense here: it has no extent. How do I fix this without reformatting my entire /home partition?
I really feel like I've handed this question to someone smarter than me on a silver platter, I just don't know what the meal (answer) is!
|
Finally found the answer from somebody else on another site: just zero the inodes and re-check the filesystem, that was all!
debugfs -w /dev/sda2
:clri <1415>
:clri <1416>
:clri <1417>
:q
fsck -y /dev/sda2
To anybody else with this issue, I found my bad inodes using find on the bad mount, then checked dmesg for errors on the bad inodes.
| Partition Errors and Remounts Read-Only when Accessing Specific File |
1,439,543,725,000 |
I always thought "clean" was a synonym of "does not need journal recovery".
However that does not seem to be the case
$ sudo file -s /dev/sdc4
/dev/sdc4: Linux rev 1.0 ext4 filesystem data, UUID=117ce600-a129-446b-8859-1e20ad8fe823, volume name "platform" (needs journal recovery) (extents) (large files) (huge files)
$ sudo fsck -n /dev/sdc4
fsck from util-linux 2.25.1
e2fsck 1.42.12 (29-Aug-2014)
Warning: skipping journal recovery because doing a read-only filesystem check.
platform: clean, 13031/186800 files, 129254/790272 blocks
$ sudo file -s /dev/sdc4
/dev/sdc4: Linux rev 1.0 ext4 filesystem data, UUID=117ce600-a129-446b-8859-1e20ad8fe823, volume name "platform" (needs journal recovery) (extents) (large files) (huge files)
$ sudo fsck -n /dev/sdc4
fsck from util-linux 2.25.1
e2fsck 1.42.12 (29-Aug-2014)
Warning: skipping journal recovery because doing a read-only filesystem check.
platform: clean, 13031/186800 files, 129254/790272 blocks
Both file and fsck agree that journal recovery is needed. Still fsck says the filesystem is clean. And the -n flag obviously did what I wanted, the filesystem remained unchanged, so the clean cannot refer to having successfully cleaned (applied the journal).
Edit: The filesystem is not mounted.
|
The filesystem was unmounted incorrectly.
The "clean" at the end of the output of fsck -n is misleading. It does not mean recovery is not needed. It just means that if fsck had reached that point without -n (that is, after replaying the journal), the filesystem would have been clean.
How to reproduce:
sh-4.3# dd if=/dev/zero of=/tmp/test.fs bs=1M count=5
5+0 records in
5+0 records out
5242880 bytes (5.2 MB) copied, 0.00333063 s, 1.6 GB/s
sh-4.3# mkfs.ext4 /tmp/test.fs
mke2fs 1.42.12 (29-Aug-2014)
Discarding device blocks: done
Creating filesystem with 5120 1k blocks and 1280 inodes
Allocating group tables: done
Writing inode tables: done
Creating journal (1024 blocks): done
Writing superblocks and filesystem accounting information: done
sh-4.3# mkdir /tmp/t
sh-4.3# mount -o loop /tmp/test.fs /tmp/t
sh-4.3# ls / > /tmp/t/file
sh-4.3# umount /tmp/test.fs
sh-4.3# fsck -n /tmp/test.fs
fsck from util-linux 2.25.2
e2fsck 1.42.12 (29-Aug-2014)
/tmp/test.fs: clean, 12/1280 files, 1224/5120 blocks
sh-4.3# mount -o loop /tmp/test.fs /tmp/t
sh-4.3# ls / > /tmp/file2
sh-4.3# cp /tmp/test.fs /tmp/test-unclean.fs
sh-4.3# fsck.ext4 -n /tmp/test-unclean.fs
e2fsck 1.42.12 (29-Aug-2014)
Warning: skipping journal recovery because doing a read-only filesystem check.
/tmp/test-unclean.fs: clean, 12/1280 files, 1224/5120 blocks
sh-4.3# mkdir /tmp/t2
sh-4.3# mount -o loop /tmp/test-unclean.fs /tmp/t2
sh-4.3# dmesg | tail
...
[66569.074538] EXT4-fs (loop1): recovery complete
[66569.074554] EXT4-fs (loop1): mounted filesystem with ordered data mode. Opts: (null)
sh-4.3# umount /tmp/test-unclean.fs
sh-4.3# fsck -n /tmp/test-unclean.fs
fsck from util-linux 2.25.2
e2fsck 1.42.12 (29-Aug-2014)
/tmp/test-unclean.fs: clean, 12/1280 files, 1224/5120 blocks
| ext4: Can a clean filesystem need journal recovery? |
1,439,543,725,000 |
I'm trying to create a graph of the distribution of file sizes on my ext4 system. I'm trying to write a script to scrape this information from my computer somehow. I don't care where the files are stored in the directory structure, only how much space each takes up. I know file sizes are stored in the inode metadata, and it seems like it might be pretty fast to read through the inode table, if such a thing exists. Does anyone know of a C API for accessing the size of files, or reading directly from the inode table? Does anyone know where the inode table is stored?
|
If you want a C API, you're going to end up with GNU nftw, the GNU file tree walk. DON'T fool yourself into using plain old ftw, you will get inaccurate data. You'll need to write a "per file" function that uses the struct stat that nftw passes into the "per file" function. You can have the "per file" function put file sizes in buckets, or just print out the file size, and then put the numbers in buckets some other way.
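If C isn't a hard requirement, a shell sketch gets the same numbers: GNU find already reads the size out of each inode during its tree walk, and awk can do the bucketing. This is only a toy demonstration on a throwaway directory with two made-up buckets; a real histogram would use more.

```shell
d=$(mktemp -d)
truncate -s 100 "$d/a"; truncate -s 900 "$d/b"; truncate -s 5000 "$d/c"
hist=$(find "$d" -type f -printf '%s\n' |
       awk '{ if ($1 < 1024) small++; else big++ }
            END { printf "<1K:%d >=1K:%d", small, big }')
echo "$hist"          # prints: <1K:2 >=1K:1
rm -rf "$d"
```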
| Fastest way to get list of all file sizes |
1,439,543,725,000 |
For benchmark and testing purposes I need to be able to allocate a file at a specific offset from the start of the partition. When I create a new file normally, its blocks are placed wherever the file system decides, but I want to control that. In other words, I want to manually pick which blocks are assigned to a file.
I've looked at debugfs, but I can't find any way to do what I want. Though I can mark blocks as allocated and modify the inode, this only works for the first 12 blocks. After that I need to be able to create indirect and double indirect blocks as well, which it doesn't look like debugfs has any capability for.
Is there any way to do this? Any tool that could help me? You may assume that the file system is either ext3 or ext4 and that it has been freshly formatted (no other files exist).
Thanks in advance.
|
I have managed to find a way to do this. It uses a Python script which first uses debugfs to find the necessary number of blocks (including indirect blocks) that the file will need. It then manually writes the indirect blocks to the disk, and invokes debugfs again to mark the blocks as used and to update the file's inode.
The only issue is that debugfs apparently doesn't update the free block count of the block group when you use setb. Although I can set that parameter manually, there doesn't appear to be any way to print the current value so I can't calculate the correct value. As far as I can tell it doesn't have any real negative consequences, and fsck.ext3 can be used to correct the values if needed, so for benchmark purposes it'll do.
If there's any other file system consistency issue I've missed, please let me know, but since fsck.ext3 reports nothing besides the incorrect free block count I should be safe.
import os
import struct
import subprocess
import sys
import tempfile

SECTOR_SIZE = 512
BLOCK_SIZE = 4096
DIRECT_BLOCKS = 12
BLOCKS_PER_INDIRECT_BLOCK = BLOCK_SIZE // 4

def write_indirect_block(device, indirect_block, blocks):
    print("writing indirect block", indirect_block)
    dev = open(device, "wb")
    dev.seek(indirect_block * BLOCK_SIZE)
    # Write the block numbers as little-endian 32-bit integers
    for block in blocks:
        dev.write(struct.pack("<I", int(block)))
    # Zero out the rest of the block
    zero = struct.pack("<I", 0)
    for x in range(len(blocks), BLOCKS_PER_INDIRECT_BLOCK):
        dev.write(zero)
    dev.close()

def main(argv):
    if len(argv) < 5:
        print("Usage: ext3allocfile.py [device] [file] [sizeInMB] [offsetInMB]")
        return
    device = argv[1]  # device containing the ext3 file system, e.g. "/dev/sdb1"
    file = argv[2]    # file name relative to the root of the device, e.g. "/myfile"
    size = int(argv[3]) * 1024 * 1024    # Size in MB
    offset = int(argv[4]) * 1024 * 1024  # Offset from the start of the device in MB
    if size > 0xFFFFFFFF:
        # Supporting this requires two things: triple indirect block support,
        # and proper handling of size_high when changing the inode
        print("Unable to allocate files over 4GB.")
        return
    # Because size is specified in MB, it is always exactly divisible by BLOCK_SIZE.
    size_blocks = size // BLOCK_SIZE
    # We need 1 indirect block for each 1024 blocks over 12 blocks.
    ind_blocks = (size_blocks - DIRECT_BLOCKS) // BLOCKS_PER_INDIRECT_BLOCK
    if (size_blocks - DIRECT_BLOCKS) % BLOCKS_PER_INDIRECT_BLOCK != 0:
        ind_blocks += 1
    # We need a double indirect block if we have more than one indirect block
    has_dind_block = ind_blocks > 1
    total_blocks = size_blocks + ind_blocks
    if has_dind_block:
        total_blocks += 1
    # Find free blocks we can use at the offset
    offset_block = offset // BLOCK_SIZE
    print("Finding", total_blocks, "free blocks from block", offset_block)
    process = subprocess.Popen(
        ["debugfs", device, "-R", "ffb %d %d" % (total_blocks, offset_block)],
        stdout=subprocess.PIPE, universal_newlines=True)
    output = process.stdout
    # The first three entries after splitting are "Free", "blocks", "found:", so we skip those.
    blocks = output.readline().split(" ")[3:]
    output.close()
    process.wait()
    # The last entry may contain a line break. Filtering this way to be safe.
    blocks = [b for b in blocks if b.strip(" \n")]
    if len(blocks) != total_blocks:
        print("Not enough free blocks found for the file.")
        return
    # The direct blocks in the inode are blocks 0-11
    # Write the first indirect block, listing the blocks for file blocks 12-1035 (inclusive)
    if ind_blocks > 0:
        write_indirect_block(device, int(blocks[DIRECT_BLOCKS]),
                             blocks[DIRECT_BLOCKS + 1 : DIRECT_BLOCKS + 1 + BLOCKS_PER_INDIRECT_BLOCK])
    if has_dind_block:
        dind_block_index = DIRECT_BLOCKS + 1 + BLOCKS_PER_INDIRECT_BLOCK
        dind_block = blocks[dind_block_index]
        ind_block_indices = [dind_block_index + 1 + (i * (BLOCKS_PER_INDIRECT_BLOCK + 1))
                             for i in range(ind_blocks - 1)]
        # Write the double indirect block, listing the blocks for the remaining indirect blocks
        write_indirect_block(device, int(dind_block), [blocks[i] for i in ind_block_indices])
        # Write the remaining indirect blocks, listing the relevant file blocks
        for i in ind_block_indices:
            write_indirect_block(device, int(blocks[i]), blocks[i + 1 : i + 1 + BLOCKS_PER_INDIRECT_BLOCK])
    # Time to generate a script for debugfs
    script = tempfile.NamedTemporaryFile(mode="w", delete=False)
    # Mark all the blocks as in-use
    for block in blocks:
        script.write("setb %s\n" % (block,))
    # Change direct blocks in the inode
    for i in range(DIRECT_BLOCKS):
        script.write("sif %s block[%d] %s\n" % (file, i, blocks[i]))
    # Change indirect block in the inode
    if size_blocks > DIRECT_BLOCKS:
        script.write("sif %s block[IND] %s\n" % (file, blocks[DIRECT_BLOCKS]))
    # Change double indirect block in the inode
    if has_dind_block:
        script.write("sif %s block[DIND] %s\n" % (file, dind_block))
    # Set total number of blocks in the inode (this value seems to actually be in sectors)
    script.write("sif %s blocks %d\n" % (file, total_blocks * (BLOCK_SIZE // SECTOR_SIZE)))
    # Set file size in the inode
    # TODO: Need support of size_high for large files
    script.write("sif %s size %d\n" % (file, size))
    script.close()
    # Execute the script
    print("Modifying file")
    subprocess.call(["debugfs", "-w", device, "-f", script.name])
    os.unlink(script.name)

if __name__ == "__main__":
    main(sys.argv)
The script can be used as follows to create a 1GB file at offset 200GB (you need to be root):
touch /mount/point/myfile
sync
python ext3allocfile.py /dev/sdb1 /myfile 1024 204800
umount /dev/sdb1
mount /dev/sdb1
The umount/mount combo is necessary to get the system to recognize the change. You can unmount before invoking the script but that makes invoking debugfs slower.
If anyone wants to use this: I don't guarantee it'll work right, I don't take responsibility if you lose any data. In general, don't use it on a file system that contains anything important.
| Allocate file at a specific offset in ext3/4 |
1,439,543,725,000 |
I just read about the "inline data" feature in EXT4, and more specifically about that answer on how to enable it.
What are the reasons why this feature isn't enabled by default in EXT4 ? I guess it's to keep the FS compatible with older kernels, that didn't support this feature yet. Are there other reasons ?
If I know that I'll never use an older kernel, is there any reason NOT to always enable this feature when formatting a partition with EXT4 ?
|
Answering my own question, basically just developing Stéphane Chazelas' comment:
As Theodore Ts'o (maintainer of ext4) explained in an e-mail dating back to 2019:
There are some known issues with the inline_data feature (...).
(...)
But yeah, there is a good reason why it's not a default-enabled
feature. It also generally doesn't buy you much for most file system
workloads, so it hasn't been high on my priority list to fix.
So it's probably best not to enable that feature for now, unless absolutely needed.
| Are there downsides to enabling the "inline data" feature of EXT4? |
1,439,543,725,000 |
I just read this discussion between Linus Torvalds and (among others) Milan Broz, one of dm-crypt's maintainers.
I am intrigued by the following part of the discussion:
Linus Torvalds:
I thought the people who used hidden ("deniable") things didn't actually ever use the outer filesystem at all, exactly so that they can just put the real encrypted thing in there and not worry about it.
Milan Broz: Well, they actually should "use" outer from time to time
so the data looks "recent" and for the whole "hidden OS" they should
be even able to boot to outer decoy OS on request, just to show that
something working is there.
In theory, I agree with Milan's statement: using the decoy data is a good thing to do to increase credibility. But how do you achieve that in practice? E.g., how can you write to the outer volume without risking overwriting the inner volume?
I have been using hidden LUKS volumes for years now, combining detachable headers and data offset. Usually I start by creating a small LUKS-encrypted outer volume (let's say 20 GB), I format it with EXT4, I fill it with decoy data, then I increase this outer volume's size (to for example 500 GB), and I create the inner volume with an offset of 25GB for example.
And after that I do what Linus said: I religiously avoid touching the outer volume's decoy data, out of fear of damaging the inner volume's data.
Is there a way to refresh the outer volume's data without risking damage to the inner volume's data? E.g., is there a tool to write specifically to the first 20 GB of the outer volume, making sure not to touch the following 480 GB?
I am using both HDDs and SSDs, so the question applies to both.
|
There are probably a few ways to do this with reasonable safety, with potentially different approaches if starting with a new outer volume or an existing one.
Probably the best way to do this would be with the debugfs setb command on the unmounted outer filesystem device, to mark the range(s) of blocks that belong to the inner volume before mounting the outer filesystem and updating files there:
debugfs -c -R "setb <inner_start_blk> <inner_count>" /dev/<outer>
setb block [count]
Mark the block number block as allocated. If the optional argument
"count" is present, then "count" blocks starting at block number
"block" will be marked as allocated.
If there are disjoint ranges to the file, then multiple setb commands could be scripted by piping a file with block ranges like:
setb <range1> <count1>
setb <range2> <count2>
:
to debugfs reading the file debugfs -c -f <file> /dev/<outer>.
If you wanted to be a bit more clever than just packing the inner volume at the end of the outer filesystem, the inner volume could initially be created with fallocate -l 32M mydir/inner in the outer filesystem, then the block range could be generated from debugfs:
# debugfs -c -R "stat mydir/inner" /dev/vg_root/lvhome
Inode: 263236 Type: regular Mode: 0664 Flags: 0x80000
Generation: 2399864846 Version: 0x00000000:00000001
User: 1000 Group: 1000 Project: 0 Size: 32499577
File ACL: 0
Links: 1 Blockcount: 63480
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x63c98fc0:62bb0a38 -- Thu Jan 19 11:45:20 2023
atime: 0x63cee835:5e019630 -- Mon Jan 23 13:04:05 2023
mtime: 0x63c98fc0:559e2928 -- Thu Jan 19 11:45:20 2023
crtime: 0x63c98fc0:41974a6c -- Thu Jan 19 11:45:20 2023
Size of extra inode fields: 32
Extended attributes:
security.selinux (37) = "unconfined_u:object_r:user_home_t:s0\000"
EXTENTS:
(0-7934):966656-974590
In this case, the ~32MB (7935x4KiB block) file is in blocks 966656-974590, so this would use setb 966656 7935 to mark those blocks used. The inode should be erased with clri <inum> to prevent the allocated block range from being visible afterward.
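The conversion from the EXTENTS line to setb ranges can be done mechanically; a small sketch (assuming the extent format shown in the stat output above):

```python
import re

def extents_to_setb(extents_line):
    """Parse a debugfs EXTENTS line such as "(0-7934):966656-974590"
    into (first_block, count) pairs suitable for `setb first count`."""
    ranges = []
    for m in re.finditer(r"\((\d+)(?:-(\d+))?\):(\d+)(?:-(\d+))?", extents_line):
        first = int(m.group(3))
        last = int(m.group(4)) if m.group(4) else first
        ranges.append((first, last - first + 1))
    return ranges

for first, count in extents_to_setb("(0-7934):966656-974590"):
    print("setb %d %d" % (first, count))  # setb 966656 7935
```

The printed lines are exactly what would be piped to debugfs -c -f on the outer device.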
The blocks allocated in the outer filesystem by debugfs setb would remain allocated until the next time that e2fsck was run on the outer filesystem. That could potentially expose that those blocks are in use if someone was really paying attention, so they could optionally be cleared again after the outer filesystem was unmounted, using debugfs -c -R "clrb <inner_start> <inner_count>" /dev/<outer>, or kept allocated to keep the inner filesystem from potentially being corrupted.
| How to refresh decoy data on a plausible deniability dm-crypt scheme? |
1,439,543,725,000 |
Is it possible to completely disable sparse file support in an ext4 file system?
The purpose is to avoid disk fragmentation.
Bonus "no-points" if the solution allows the file system to allocate the files quickly, without filling the file with zeroes (or whatever).
What I want is to be able to tell applications to create files of X bytes and, no matter what method they use to pre-allocate the files, a file of X bytes will be created on the disk. I have no use for an "apparent size" of X bytes, nor do I have control over those applications' source code.
In order to save you some time, let me already address some comments that always seem to appear whenever someone mentions sparse files and quick allocations
I don't care how good a file system allocation heuristic is supposed to be.
I am aware of the security implications of not zeroing newly allocated blocks.
|
There’s no option to disable sparse file support in ext4, but ext4 (and the Linux kernel in general) support features which would allow you to implement a workaround without writing a new file system driver (or adapting the ext4 driver).
I think what could work is to implement a shared library shim, which you would load using LD_PRELOAD, and you’d override all the calls which can result in the creation of sparse files to handle them in such a way that they don’t. The fast way to allocate blocks in files is to use posix_fallocate, which ensures that disk space is allocated when it returns, without necessarily writing all zeroes to the blocks (but the file system guarantees that reading from the allocated blocks will return zeroes). Your shim would have to intercept posix_fallocate too, since it can also be used to create sparse files...
There are limitations to LD_PRELOAD shims (in particular with setuid binaries), but they might not apply in your case.
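As a concrete illustration of the posix_fallocate behaviour, Python exposes the same call as os.posix_fallocate (Linux only); a small sketch comparing it with a plain truncate, which is allowed to create a sparse file:

```python
import os
import tempfile

def allocated_bytes(path):
    # st_blocks is always counted in 512-byte units, regardless of fs block size
    return os.stat(path).st_blocks * 512

size = 1 << 20  # 1 MiB
with tempfile.NamedTemporaryFile(delete=False) as sparse, \
     tempfile.NamedTemporaryFile(delete=False) as full:
    sparse.truncate(size)                       # may allocate no blocks at all
    os.posix_fallocate(full.fileno(), 0, size)  # guarantees backing blocks

print("truncate:  ", allocated_bytes(sparse.name), "bytes allocated")
print("fallocate: ", allocated_bytes(full.name), "bytes allocated")
```

On ext4 the fallocated file reports its full size as allocated even though no zeroes were written, while the truncated one typically reports 0; the shim would rewrite truncate-style allocations into fallocate-style ones.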
| Disable sparse file support on ext4 |
1,439,543,725,000 |
I have a 128 GB Micro SD Card that I formatted as ext4 and used in a Chromebook for an Ubuntu Chroot Environment. I used it for quite some time that way. At some point, I either deleted everything off of it or formatted it using the Chromebook's simple formatting system.
After this, I stuck it in a GoPro Hero Session, and found that the GoPro didn't care to format the disk and could immediately write pictures and videos. No problem.
I went on a trip, took lots of photos and video, and then suddenly the GoPro was having trouble reading the disk. It was still able to record video and pictures (I assume) as I could turn on the recording mode and it didn't report any problems. From what I could tell, 128 GB is too much for this GoPro Session.
When I plug this into a computer (Chromebook, Mac OSX, Ubuntu) I either get an error (Chromebook & OSX) or I have the disk mount, but no viewable file structure when I open it with a file explorer. Totally empty.
If I right click, and click Properties (on Ubuntu), I get a report that the disk is formatted ext3/ext4, 128 GB and has 45.1 GB used, 71.9 GB free space. gparted is reporting the same thing.
I was able to successfully recover all 6 GB of photos using photorec. I didn't recover any videos, though.
I've used ddrescue to duplicate the disk to an image that I can work with. When I mount the image file, it behaves exactly the same way as the disk does (expected).
ddrescue output:
rescued: 125829 MB,  errsize: 0 B,  current rate: 12648 kB/s
   ipos: 125829 MB,  errors: 0,     average rate: 19079 kB/s
   opos: 125829 MB,  time since last successful read: 0 s
Finished
I ran a pass on the .IMG file with foremost -v -q -t mp4 -d but it finished with 0 files returned.
At this point, it doesn't actually seem to me that there has been either data loss or corruption. I'm not sure what is actually going on, but I suspect that something has gone awry with the file system, it being ext3/ext4 in a GoPro rather than FAT32 or exFAT.
EDIT: I just used Disk Usage Analyzer and found all of the largest files that photorec recovered. Among them are many large .bz2 files, with files in them with no extension that are timestamped for the time I would have recorded the footage. I can open them and view this information with an archive manager, but am unable to extract them.
EDIT 2: I tried running fsck and checked in /lost+found. All of my Linux files were there, but no videos, and not even the pictures that I had previously recovered with photorec.
I also tried to mount the image as exfat using sudo mount -o loop -t exfat SD_Card.img ~/mountpoint but it fails to mount.
FUSE exfat 1.2.8
ERROR: exFAT file system is not found.
|
Running testdisk on the ddrescue image as per the instructions in this guide, I was able to recover all files.
The initial quickscan did not detect anything useful, but after the quickscan, a deepscan option is available.
Deepscan detected three partition file systems:
ext4, exFAT, exFAT
ext4 was labeled Linux. I did not try to recover anything from that partition. This is the partition that was mountable previously.
The first exFAT was unlabeled, and I was able to browse through it using terminal commands provided by testdisk. Contained in this partition table, which other programs such as gparted were unable to see, were all of the GoPro folders and files, in pristine order. Within the DCIM folder, I found all of my photos and videos with correct file names and time stamps- so recovery was not a matter of restoring corrupted files at all.
The second exFAT looked to be the same as the first, but the files were unreadable.
| SD Card Recovery without data loss or corruption |
1,439,543,725,000 |
Sometimes I get an ext4 error and my disk becomes read-only.
I can fix it with a reboot and fsck /dev/sda2, but it keeps coming back...
Here is some dmesg output:
[ 3160.692730] perf: interrupt took too long (2509 > 2500), lowering kernel.perf_event_max_sample_rate to 79500
[ 3631.408303] perf: interrupt took too long (3144 > 3136), lowering kernel.perf_event_max_sample_rate to 63500
[ 4143.729000] perf: interrupt took too long (3992 > 3930), lowering kernel.perf_event_max_sample_rate to 50000
[ 4770.574303] perf: interrupt took too long (5018 > 4990), lowering kernel.perf_event_max_sample_rate to 39750
[ 5334.077445] perf: interrupt took too long (6289 > 6272), lowering kernel.perf_event_max_sample_rate to 31750
[ 8241.921553] acer_wmi: Unknown function number - 8 - 1
[11370.110956] perf: interrupt took too long (7918 > 7861), lowering kernel.perf_event_max_sample_rate to 25250
[11484.098212] acer_wmi: Unknown function number - 8 - 0
[11875.568601] EXT4-fs error (device sda2): ext4_iget:4862: inode #92441: comm TaskSchedulerFo: bad extra_isize 9489 (inode size 256)
[11875.575273] Aborting journal on device sda2-8.
[11875.575537] EXT4-fs error (device sda2) in ext4_da_write_end:3209: IO failure
[11875.575976] EXT4-fs (sda2): Remounting filesystem read-only
[11875.576792] EXT4-fs error (device sda2): ext4_journal_check_start:61: Detected aborted journal
[11875.577612] EXT4-fs error (device sda2): ext4_iget:4862: inode #92441: comm TaskSchedulerFo: bad extra_isize 9489 (inode size 256)
[11875.583499] EXT4-fs error (device sda2): ext4_iget:4862: inode #92441: comm TaskSchedulerFo: bad extra_isize 9489 (inode size 256)
[11875.832886] EXT4-fs error (device sda2): ext4_iget:4862: inode #92441: comm TaskSchedulerFo: bad extra_isize 9489 (inode size 256)
[11899.686408] systemd-journald[395]: Failed to write entry (21 items, 614 bytes), ignoring: Read-only file system
[11899.686483] systemd-journald[395]: Failed to write entry (21 items, 705 bytes), ignoring: Read-only file system
[11899.686587] systemd-journald[395]: Failed to write entry (21 items, 614 bytes), ignoring: Read-only file system
[11899.686656] systemd-journald[395]: Failed to write entry (21 items, 705 bytes), ignoring: Read-only file system
[11899.686719] systemd-journald[395]: Failed to write entry (21 items, 614 bytes), ignoring: Read-only file system
[11899.686781] systemd-journald[395]: Failed to write entry (21 items, 705 bytes), ignoring: Read-only file system
[11899.686844] systemd-journald[395]: Failed to write entry (21 items, 614 bytes), ignoring: Read-only file system
[11899.686938] systemd-journald[395]: Failed to write entry (21 items, 705 bytes), ignoring: Read-only file system
[11899.686999] systemd-journald[395]: Failed to write entry (21 items, 614 bytes), ignoring: Read-only file system
[11899.687084] systemd-journald[395]: Failed to write entry (21 items, 705 bytes), ignoring: Read-only file system
And my /etc/fstab:
UUID=9c882ba5-b980-4f7d-dd02-cd0a1831ab1a / ext4 errors=remount-ro 0 1
UUID=0E37-D0A2 /boot/efi vfat umask=0077 0 1
/swapfile none swap sw 0 0
Should I remove or change remount-ro in fstab and ignore this error? How can I fix or avoid it?
|
Can you check your disk for bad sectors or bad blocks? You can use the badblocks or smartctl command to check in Linux. I think a failing disk is the only likely reason for your issue.
| Ext4 Error and disk remounted read-only |
1,439,543,725,000 |
The problem that I am having is the extremely long time that fsck is taking. I have thoroughly made searches on Google, but I could not find anything that would resolve the problem.
The command that I am running is sudo fsck.ext4 -vc /dev/sdb1.
I have a 200GB SATA hard drive which has some bad sectors. It supports SMART; however, SMART somehow is not remapping the sectors. The command that I am running is going to check for bad sectors and add them to the bad block list. However, here is the output so far:
e2fsck 1.42 (29-Nov-2011)
Checking for bad blocks (read-only test): 1.95% done, 11:53:24 elapsed. (1657/0/0 errors)
At this rate it will probably take around 1 month.
Now don't tell me "Your hard drive is too old and it's gonna fail soon blah blah blah". I just want to add the bad blocks to the badblocks list. The hard drive is not developing any new bad sectors.
My machine has an i3 quad-core with 8GB of RAM. My CPU usage is under 10%, and about 1.5GB of the RAM is used. Nothing is paged.
The disk which I am checking has a newly created ext4 filesystem with nothing on it.
I just don't understand why it will take 1 month to fsck a disk and list bad blocks. Something is definitely wrong here. Any advice?
|
SMART doesn't remap sectors, it just detects and logs errors. Bad sectors are remapped automatically when written to. You can do this with dd or hdparm --write-sector.
If your drive cannot remap the sector because it has run out of reserve sectors then you should be one step before panic.
Remapping them in the file system does not make much sense.
If hdparm -t /dev/sdb gives you reasonable results, then you may run badblocks on its own (with -s) in order to check whether it's faster when run directly, and run it through strace if it is not, in order to get an impression of where the performance problem comes from.
Maybe there are certain areas on the disk which cause a lot of read retries.
| Extremely long time for an ext4 fsck |
1,439,543,725,000 |
I've several partitions with ext4.
Now, I would like to know whether it makes sense to use tune2fs with the flags -c0 (max-mount-counts) and -i0 (interval-between-checks) on those partitions, since a journaling file system needs fewer checks?
|
Generally speaking... yes, it does make sense.
Though you might want to run
tune2fs -l /dev/sdXY | egrep "Maxim|Check"
to see how those flags are set, as it all depends on the version of e2fsprogs used to create the filesystems and/or distribution-specific patches applied to e2fsprogs. You might already have MAX_MNT_COUNT and CHECKINTERVAL set to -1 and 0 respectively, because, as of v. 1.42, e2fsprogs effectively defaults to -c -1 -i 0; see the changelog:
If the enable_periodic_fsck option is false in /etc/mke2fs.conf (which
is the default), mke2fs will now set the s_max_mnt_count superblock
field to -1, instead of 0. Kernels older than 3.0 will print a
spurious message on each mount when they see an s_max_mnt_count set to
0, which will annoy users.
/etc/mke2fs.conf compared:
v. 1.41.14 released 2010-12-22:
[defaults]
base_features = sparse_super,filetype,resize_inode,dir_index,ext_attr
blocksize = 4096
inode_size = 256
inode_ratio = 16384
v. 1.42 released 2011-11-29:
[defaults]
base_features = sparse_super,filetype,resize_inode,dir_index,ext_attr
default_mntopts = acl,user_xattr
enable_periodic_fsck = 0
blocksize = 4096
inode_size = 256
inode_ratio = 16384
| To use -c0 -i0 in file-systems with journal |
1,439,543,725,000 |
I am trying to understand what I did wrong with the following mount command.
Take the following file from here:
http://elinux.org/CI20_Distros#Debian_8_2016-02-02_Beta
Simply download the img file from here.
Then I verified the md5sum is correct per the upstream page:
$ md5sum nand_2016_06_02.img
3ad5e53c7ee89322ff8132f800dc5ad3 nand_2016_06_02.img
Here is what file has to say:
$ file nand_2016_06_02.img
nand_2016_06_02.img: x86 boot sector; partition 1: ID=0x83, starthead 68, startsector 4096, 3321856 sectors, extended partition table (last)\011, code offset 0x0
So let's check the start of the first partition of this image:
$ /sbin/fdisk -l nand_2016_06_02.img
Disk nand_2016_06_02.img: 1.6 GiB, 1702887424 bytes, 3325952 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0212268d
Device Boot Start End Sectors Size Id Type
nand_2016_06_02.img1 4096 3325951 3321856 1.6G 83 Linux
In my case the unit size is 512 bytes and Start is 4096 sectors, which means the partition begins at byte offset 4096 * 512 = 2097152. In which case, the following should just work, but doesn't:
$ mkdir /tmp/img
$ sudo mount -o loop,offset=2097152 nand_2016_06_02.img /tmp/img/
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
And, dmesg reveals:
$ dmesg | tail
[ 1632.732163] loop: module loaded
[ 1854.815436] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem
[ 1854.815452] EXT4-fs (loop0): bad geometry: block count 967424 exceeds size of device (415232 blocks)
None of the solutions listed here worked for me:
resize2fs or,
sfdisk
What did I miss?
Some other experiments that I tried:
$ dd bs=2097152 skip=1 if=nand_2016_06_02.img of=trunc.img
which leads to:
$ file trunc.img
trunc.img: Linux rev 1.0 ext2 filesystem data (mounted or unclean), UUID=960b67cf-ee8f-4f0d-b6b0-2ffac7b91c1a (large files)
and mounting it tells the same story:
$ sudo mount -o loop trunc.img /tmp/img/
mount: wrong fs type, bad option, bad superblock on /dev/loop2,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
I cannot use resize2fs since I am required to run e2fsck first:
$ /sbin/e2fsck -f trunc.img
e2fsck 1.42.9 (28-Dec-2013)
The filesystem size (according to the superblock) is 967424 blocks
The physical size of the device is 415232 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? yes
|
Once you have extracted the filesystem you are interested in (using dd), simply adapt the file size (967424 blocks * 4096 bytes per block = 3962568704 bytes):
$ truncate -s 3962568704 trunc.img
And then simply:
$ sudo mount -o loop trunc.img /tmp/img/
$ sudo find /tmp/img/
/tmp/img/
/tmp/img/u-boot-spl.bin
/tmp/img/u-boot.img
/tmp/img/root.ubifs.9
/tmp/img/root.ubifs.4
/tmp/img/root.ubifs.5
/tmp/img/root.ubifs.7
/tmp/img/root.ubifs.2
/tmp/img/root.ubifs.6
/tmp/img/lost+found
/tmp/img/root.ubifs.3
/tmp/img/boot.ubifs
/tmp/img/root.ubifs.0
/tmp/img/root.ubifs.1
/tmp/img/root.ubifs.8
Another simpler solution is to truncate directly on the original img file:
$ truncate -s 3964665856 nand_2016_06_02.img
$ sudo mount -o loop,offset=2097152 nand_2016_06_02.img /tmp/img/
Where 3962568704 + 2097152 = 3964665856
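The arithmetic generalizes; a small sketch that derives both truncate sizes from the numbers reported by e2fsck (superblock block count) and fdisk (partition start sector):

```python
FS_BLOCKS = 967424        # block count according to the superblock (e2fsck output)
FS_BLOCK_SIZE = 4096      # filesystem block size
PART_START_SECTOR = 4096  # partition start sector (from fdisk)
SECTOR_SIZE = 512

fs_bytes = FS_BLOCKS * FS_BLOCK_SIZE                    # size for trunc.img
img_bytes = fs_bytes + PART_START_SECTOR * SECTOR_SIZE  # size for the full image

print(fs_bytes)   # 3962568704
print(img_bytes)  # 3964665856
```

Those are exactly the two values passed to truncate -s above.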
| bad geometry: block count 967424 exceeds size of device (415232 blocks) |
1,431,331,661,000 |
I have a single hard drive. I want to use a filesystem that will give me less storage space, but as a tradeoff, give me checksums or any other method to help preserve data integrity.
It is my understanding that something like ext4 or xfs will not do this, and thus you can suffer from silent data corruption, aka bitrot.
zfs looks like an excellent choice, but everything I have read says you need more than one disk to use it. Why is this? I realize having only one disk will not tolerate a single disk failure, but that is what multiple backup schemes are for. What backups won't help with is something like bitrot.
So can I use zfs on a single hard drive for the single purpose of preventing bitrot? If not, what do you recommend?
|
You could use either ZFS or btrfs.
Both of them are copy-on-write filesystems with error detection (and correction too, if there's sufficient redundancy to repair the original data - e.g. mirror drives or RAID-Z), transparent compression, snapshots, etc.
ZFS allows you to set the copies attribute on a dataset to keep more than one copy of a file - e.g. on ZFS you can run zfs set copies=2 pool/dataset to tell ZFS to keep two copies of everything on that particular dataset - see man zfsprops and search for copies=. I think btrfs has a similar feature, but it's been a long time since I used btrfs and can't find it in the docs.
These extra copies do provide redundancy for error correction (in case of bitrot) but won't protect you from disk failure. You'll need at least a mirror vdev (i.e. RAID-1) for that, or make regular backups (but you should be doing that anyway - RAID or RAID-like tech like ZFS or btrfs is NOT a substitute for backups).
Backing up could be as simple as using zfs snapshot and zfs send/zfs receive to send the initial and then incremental backup to a single-drive zfs pool plugged in via USB. Or to a pool on another machine over the network. Even using zfs send to store the backup in files on a non-ZFS filesystem is better than nothing.
If your machine has the physical space and hardware to support a second drive, you should add one. You can do this when you first create a pool, or you can add a mirror drive to any single-drive or mirror vdev at any time with zpool attach pool device new-device.
NOTE: it's important to use zpool attach, not zpool add for this. attach adds a mirror to an existing drive in a vdev, while add adds another vdev to an existing pool. Adding a single-drive vdev to an existing pool will effectively make a RAID-0 with the other vdevs in the pool, putting ALL of the data at risk. This is a fairly common mistake, and (if the pool contains any RAID-Z vdevs), the only fix is to backup the entire pool, destroy it, re-create it from scratch, and restore. If the pool only has mirror or single-drive vdevs (i.e. no RAID-Z vdevs), it is possible to use zpool remove to remove an accidentally added single drive.
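ZFS-style end-to-end checksums cannot be bolted onto a plain filesystem after the fact, but bitrot can at least be detected in userspace with a hash manifest; a minimal sketch (it cannot repair anything, and cannot tell corruption apart from a deliberate edit):

```python
import hashlib
import os

def build_manifest(root):
    """Map each file under `root` (by relative path) to its SHA-256 hash."""
    manifest = {}
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            manifest[os.path.relpath(path, root)] = digest.hexdigest()
    return manifest

def find_corruption(root, old_manifest):
    """Return files present in both scans whose content hash changed."""
    current = build_manifest(root)
    return [path for path, digest in old_manifest.items()
            if path in current and current[path] != digest]
```

This is only meaningful for data that should not change between scans, and it is a detection aid on top of backups, not a replacement for a checksumming filesystem.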
| Filesystem with checksums? |
1,431,331,661,000 |
a@b:~$ sudo growpart -v /dev/xvda 1
update-partition set to true
resizing 1 on /dev/xvda using resize_sfdisk_dos
6291456000 sectors of 512. total size=3221225472000 bytes
WARN: disk is larger than 2TB. additional space will go unused.
## sfdisk --unit=S --dump /dev/xvda
label: dos
label-id: 0x965243d6
device: /dev/xvda
unit: sectors
/dev/xvda1 : start= 2048, size= 4294965247, type=83, bootable
max_end=4294967296 tot=6291456000 pt_end=4294967295 pt_start=2048 pt_size=4294965247
NOCHANGE: partition 1 could only be grown by 1 [fudge=2048]
a@b:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 3T 0 disk
└─xvda1 202:1 0 2T 0 part /
xvde 202:240 0 64G 0 disk
Trying to extend a 2TB partition to 3TB. Is the partition limited to 2TB?
|
Your drive is partitioned with an MBR (DOS) label.
Drives larger than 2 TB need to be partitioned as GPT: MBR stores sector addresses in 32-bit fields, which limits it to about 2 TiB (with 512-byte sectors) regardless of the OS.
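The 2 TB ceiling follows directly from the MBR on-disk format, where a partition's start sector and sector count are each unsigned 32-bit values. A quick check of the numbers (the 4294965247-sector figure is from the growpart dump above):

```python
SECTOR_SIZE = 512
MAX_SECTORS = 2**32 - 1  # start and size are unsigned 32-bit fields in the MBR

max_partition_bytes = MAX_SECTORS * SECTOR_SIZE
print(max_partition_bytes)  # 2199023255040 bytes, just under 2 TiB

# The growpart output shows partition 1 as 4294965247 sectors starting at
# sector 2048: it already ends at sector 2**32 - 1, the last addressable
# sector under MBR, which is why growpart reports NOCHANGE.
print(2048 + 4294965247 == 2**32 - 1)  # True
```

Converting the disk to GPT (e.g. with gdisk or sgdisk) removes this limit.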
| Cannot extend partition beyond 2TB on AWS Ubuntu |
1,431,331,661,000 |
I have heard that if you don't create separate partitions in Linux, it is hard to recover your data, and that if you create a few partitions, recovery becomes easy.
For example, creating /par1 /part2 /part3 is supposedly better for recovery.
But now some of my friends tell me that keeping everything under /home/user/{all data here} makes no difference for recovery compared to creating /par1 /part2 /part3.
Which one is true, and why?
|
To illustrate the question in a simple and efficient manner, consider two scenarios:
You install your favourite linux distribution on entire disk i.e. without any partitions:
Suppose your system crashes because the operating system is unable to access some sectors and cannot boot. You lose some chunks of data to bad sectors, and because of that you might be unable to access other chunks on your hard disk. The bottom line is that a few bad sectors affect your entire data, so recovery here is probably harder than if you had used multiple partitions for different categories of data.
You install your favourite linux distribution by partitioning the hard disk:
If you partition your hard disk, say sda1 for boot, sda2 for root, sda3 for opt, sda4 for usr, sda5 for home and so on, then if a crash or bad-sector problem occurs, there is a greater probability than in the previous scenario that you can save or recover the other partitions. It is also useful in cases like this: say I have crashed my system (consider it an OS problem) and the system does not boot; I could reinstall the system without touching my home partition, so the home partition is isolated and safe. Other benefits are as follows:
less amount of time in file system check.
freedom to choose different file systems.
protection of file systems.
ease in repairing file systems by pin pointing to the problematic file system.
And of course there are benefits of Logical Volume Management(LVM), it starts with a single volume group and subsequently creating multiple logical volumes to hold the necessary file systems. I personally don't use LVM, so for more you can visit Wikipedia and Gentoo
| Why creating partitions in linux is a good solution for easy recovery? |
1,431,331,661,000 |
The drive is currently in NTFS. After running chkdsk for hours, I have found the locations of bad sectors (below). I want to reformat the disk in EXT4. I have heard that EXT4 has some sort of metadata to mark bad sectors, and there is a utility for that, but I do not want to run the test for hours again. Can I just directly tell EXT4 about the bad sector locations below that I have already found by chkdsk?
Stage 4: Looking for bad clusters in user file data ...
Read failure with status 0xc000009c at offset 0x280036f1000 for 0x10000 bytes.
Read failure with status 0xc000009c at offset 0x280036fb000 for 0x1000 bytes.
Read failure with status 0xc000009c at offset 0x280cb987000 for 0x10000 bytes.
Read failure with status 0xc000009c at offset 0x280cb993000 for 0x1000 bytes.
Read failure with status 0xc000009c at offset 0x280dbdc2000 for 0x10000 bytes.
Read failure with status 0xc000009c at offset 0x280dbdc4000 for 0x1000 bytes.
Read failure with status 0xc000009c at offset 0x2835d5bb000 for 0x10000 bytes.
Read failure with status 0xc000009c at offset 0x2835d5c0000 for 0x1000 bytes.
|
You can use the "badblocks" command together with e2fsck to specify a list of bad disk blocks to the filesystem.
As others have commented, that is not great, because it means your disk is likely to develop more failures. Also, because bad sectors are normally remapped at the drive level today, this badblocks code is rarely used these days.
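If you do want to seed the bad-block list from the chkdsk output, its byte offsets can be converted into ext4 block numbers and fed to `e2fsck -l badblocks.txt` or `mkfs.ext4 -l badblocks.txt`. A sketch, assuming the chkdsk offsets are relative to the start of the same partition and a 4096-byte ext4 block size (the file name `badblocks.txt` is just a name chosen here):

```python
# Convert the byte offsets reported by chkdsk into ext4 block numbers
# for a bad-blocks list (one decimal block number per line), usable
# with `e2fsck -l badblocks.txt` or `mkfs.ext4 -l badblocks.txt`.
BLOCK_SIZE = 4096

failures = [  # (offset, length) pairs from the chkdsk log
    (0x280036f1000, 0x10000), (0x280036fb000, 0x1000),
    (0x280cb987000, 0x10000), (0x280cb993000, 0x1000),
]

# Every block touched by a failed read range is marked bad.
bad_blocks = sorted({blk
                     for off, length in failures
                     for blk in range(off // BLOCK_SIZE,
                                      (off + length - 1) // BLOCK_SIZE + 1)})

with open("badblocks.txt", "w") as f:
    f.write("\n".join(str(b) for b in bad_blocks) + "\n")
```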
| Manually telling EXT4 about bad sectors? |
1,431,331,661,000 |
Using dumpe2fs on some ext4 partition, I see in the initial data that the first inode is #11. However, if I ls -i the root of this filesystem, I get that its inode number is #2 (as expected). So... what is this "first inode" reported by dumpe2fs?
|
#11 is the first "non-special" inode, that can be used for the first regularly created file or directory (usually used for lost+found). The number of that inode is saved in the filesystem superblock (s_first_ino), so technically it doesn't need to be #11, but mke2fs always sets it that way.
Most of the inodes from #0 to #10 have special purposes (e.g. #2 is the root directory) but some are reserved or used in non-upstream versions of the ext filesystem family. The usages are documented on kernel.org.
Inode  Purpose
0      n/a
1      List of defective blocks
2      Root directory
3      User quota
4      Group quota
5      Reserved for boot loaders
6      Undelete directory (reserved)
7      "resize inode"
8      Journal
9      "exclude" inode (reserved)
10     Replica inode (reserved)
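Since s_first_ino is just a superblock field (at offset 0x54, per the kernel.org layout documentation), it can be read directly. A sketch that parses it from raw superblock bytes; the buffer below is synthetic for illustration, whereas on a real disk you would read bytes 1024..2047 of the partition (as root):

```python
import struct

def parse_first_ino(superblock: bytes) -> int:
    """Read s_first_ino from a raw ext2/3/4 superblock (the 1024 bytes
    starting at byte offset 1024 of the partition).  The field lives at
    offset 0x54; the magic 0xEF53 lives at offset 0x38."""
    magic, = struct.unpack_from("<H", superblock, 0x38)
    assert magic == 0xEF53, "not an ext superblock"
    first_ino, = struct.unpack_from("<I", superblock, 0x54)
    return first_ino

# Synthetic superblock standing in for real on-disk bytes:
sb = bytearray(1024)
struct.pack_into("<H", sb, 0x38, 0xEF53)  # s_magic
struct.pack_into("<I", sb, 0x54, 11)      # s_first_ino, as set by mke2fs
print(parse_first_ino(bytes(sb)))          # 11
```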
| what is this “first inode” reported by dumpe2fs? |
1,431,331,661,000 |
I'd like to format a 12 TB HDD (not SSD) with EXT4, in order to store large video files (each file being at least 1 GiB in size).
I am working with an x86-64 (a.k.a. x64 or amd64) processor.
There's of course the -T largefile4 option of mkfs.ext4, but are there other optimizations that can be done?
In particular, I wonder:
Should I increase the block size to its max (64K, -b 65536)?
OR should I use block clusters and set the cluster size to its max (256M, -C 268435456)?
OR should I do both?
What would be the best parameters in terms of both disk space and performance optimization?
|
The very document you linked says (emphasis mine):
At the moment, the default size of a block is 4KiB, which is a commonly supported page size on most MMU-capable hardware. This is fortunate, as ext4 code is not prepared to handle the case where the block size exceeds the page size.
Out of well-known processor architectures capable of running Linux, only ARM, Alpha AXP, Itanium or PowerPC have/had the capability of using page sizes beyond the usual 4 KiB.
Although AMD64/x86_64 processors can use hugepages, that is not quite the same thing - the basic system page size is still 4 KiB, hugepages just allow assigning them in larger bundles to improve memory management efficiency in large-memory systems. This does not change the fundamental "ext4 block size <= system memory page size" requirement.
With PowerPC or 64-bit ARM processors, the page size (the basic "block size" of system memory management) can be increased up to 64 KiB, which allows the ext4 filesystem to scale up its internal operations too. On AMD64/x86_64 that option is not available, so block clusters would be the only available way to reduce the space and work required for filesystem metadata.
I've used a system that had an ext4 filesystem extended to >10 TB range, and running a filesystem check on it was not a pleasant experience. Granted, that was an old system whose filesystems had been expanded and re-expanded without any careful tuning, to way outside the limits of the system's original designed capacity. (It was also a video server.)
But based on that, I would say that ext4 definitely needs specific tuning to successfully deal with tens-of-terabytes filesystems. Like Romeo Ninov in the comments, I would urge you to reconsider other filesystem types if possible: although ext4 can be used with much larger than 10 TB filesystems, I think low tens of terabytes is about the current limit of what is generally practical to do with it.
However, if you essentially write the contents of the filesystem once and then keep it read-only, you would almost never have to run a filesystem check on it, which will avoid one significant pain point.
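Since ext4's block size cannot exceed the MMU page size, the practical upper bound for `-b` on a given machine is whatever the kernel reports as the page size. A quick check from Python:

```python
# ext4 requires block size <= the memory page size, so this value is the
# practical upper bound for mkfs.ext4's -b option on this machine.
import mmap

page_size = mmap.PAGESIZE
print(f"page size: {page_size} bytes")
# On x86_64 this prints 4096, which is why -b 65536 is not an option there.
```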
| EXT4 for very large (>1GB) files : increase block size, use block clusters, or both? |
1,431,331,661,000 |
I created an ext4 partition on Ubuntu 18.04.4 LTS in order to transfer a large amount of data to a production server. The server is running CentOS 6.10 with kernel 2.6.32. The Ext4 Howto states that "Ext4 was released as a functionally complete and stable filesystem in Linux 2.6.28" so I assumed I was going to be able to just mount the partition.
However when trying to mount the partition on the server I get the errors:
localhost kernel: EXT4-fs (sdd1): couldn't mount RDWR because of unsupported optional features (400)
localhost kernel: JBD: Unrecognised features on journal
localhost kernel: EXT4-fs (sdd1): error loading journal
I have full root access to the server, but I am unable to upgrade any of the operating system components due to compatibility issues with the running software.
Initial Googling suggested that the issue was due to the metadata checksum feature, so I downloaded and compiled the latest e2fsprogs (1.46-WIP (20-Mar-2020)) and used those to disable the feature:
sudo /home/user/bin/e2fsck -f /dev/sdd1
sudo /home/user/bin/tune2fs -O ^metadata_csum /dev/sdd1
However the partition still fails to mount, although I don't get the "unsupported optional features (400)" message any more:
$ sudo mount /dev/sdd1 /mnt/disk1
mount: wrong fs type, bad option, bad superblock on /dev/sdd1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
$ sudo tail /var/log/messages
Jul 20 08:01:21 localhost kernel: JBD: Unrecognised features on journal
Jul 20 08:01:21 localhost kernel: EXT4-fs (sdd1): error loading journal
Is there some way I can access the data on this partition without rebooting the server or changing any of the system software? There seem to be two options: either I mount the partition as-is (using FUSE, or by compiling my own mount.ext4 binary), or I use tune2fs to remove the remaining incompatible features (how do I find out what they are?)
I should mention that due to COVID-19 lockdown measures, there's a two to three week wait for someone to physically unplug the drive from the server and plug it into a different machine. I need to find a solution which I can implement quicker than that.
|
First try running
sudo e2fsck -f -v -C 0 -t /dev/sdd1
An e2fsck run may be required to complete the removal of the feature.
If it still doesn't help, try removing and recreating the journal:
sudo /home/user/bin/tune2fs -O '^has_journal,^64bit' /dev/sdd1
sudo /home/user/bin/resize2fs -s /dev/sdd1
sudo /home/user/bin/tune2fs -j /dev/sdd1
Lastly, if it's still unmountable, compare the flags being used for sudo dumpe2fs /dev/existing_partition and sudo dumpe2fs /dev/sdd1, and remove the ones which are not present for your already existing partitions.
For future reference, if you format the filesystem on the old system instead of on the new system, it should always be usable by the new kernel. If you need to format on the new system, you could use mke2fs -t ext4 -O '^metadata_csum,^64bit' to avoid some of the newer features (though this may be a moving target), or mke2fs -t ext3 (though this may be somewhat slower than ext4 as a result, but is very safe compatibility wise).
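For reference, the "(400)" in the original mount error is a hex bitmask of read-only-compatible (RO_COMPAT) feature flags the old kernel doesn't know. A small decoder (flag values per the kernel's ext4 documentation) shows it is exactly metadata_csum:

```python
# Decode the RO_COMPAT feature bitmask printed in ext4 mount errors,
# e.g. "unsupported optional features (400)".  Flag values from the
# kernel's ext4 on-disk layout documentation (a partial list).
RO_COMPAT = {
    0x0001: "sparse_super",
    0x0002: "large_file",
    0x0008: "huge_file",
    0x0010: "gdt_csum (uninit_bg)",
    0x0020: "dir_nlink",
    0x0040: "extra_isize",
    0x0100: "quota",
    0x0200: "bigalloc",
    0x0400: "metadata_csum",
    0x2000: "project",
}

def decode(mask):
    return [name for bit, name in sorted(RO_COMPAT.items()) if mask & bit]

print(decode(0x400))   # ['metadata_csum']
```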
| How do I mount an ext4 partition created on a recent system, on a ten-year-old 2010 CentOS system? |
1,431,331,661,000 |
I wonder if there are ways to copy or restore crtime (creation time) for inodes/files/directories in Linux in 2020. I've accidentally deleted a folder while I still have a full disk backup, but neither cp -a, nor rsync can restore/copy files/directories crtimes.
I have found a way to achieve it using debugfs but it's super complicated and I need to automate it (I have hundreds of deleted files/directories).
For the source disk you do this:
# debugfs /dev/sdXX
# stat /path
Inode: 432772 Type: directory Mode: 0700 Flags: 0x80000
Generation: 3810862225 Version: 0x00000000:00000006
User: 1000 Group: 1000 Project: 0 Size: 4096
File ACL: 0
Links: 5 Blockcount: 8
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x5db96479:184bb16c -- Wed Oct 30 15:22:49 2019
atime: 0x5b687c70:ee4dff18 -- Mon Aug 6 21:50:56 2018
mtime: 0x5db96479:184bb16c -- Wed Oct 30 15:22:49 2019
crtime: 0x5b687c70:d35d1348 -- Mon Aug 6 21:50:56 2018
Size of extra inode fields: 32
Extended attributes:
security.selinux (40)
EXTENTS:
(0):1737229
Remember the crtime, these are two fields, crtime_lo (yes, the first) and crtime_hi (the second)
Then for the destination disk you do this:
# debugfs -w /dev/sdYY
# set_inode_field /path crtime_lo 0x${1st_value_from_earlier}
# set_inode_field /path crtime_hi 0x${2nd_value_from_earlier}
Maybe there's something else I'm missing in the debugfs manual which could help me do that, so I'd be glad if people could help.
-f cmd_file surely seems like a nice way to start, but it's still a little too difficult for me.
|
I've actually solved it on my own. You never know what you can do till you try :-)
Note that writing to a filesystem with debugfs -w while it is mounted read-write is risky; ideally unmount the destination filesystem (or mount it read-only) before running the script.
#! /bin/bash
dsk_src=/dev/sdc4 # source disk with original timestamps
mnt_src=/mnt/sdc4 # source disk mounted at this path
dsk_dst=/dev/sda4 # destination disk
directory=user/.thunderbird # the leading slash _must_ be omitted
cd "$mnt_src" || exit 1
find "$directory" -depth | while IFS= read -r name; do
read crtime_lo crtime_hi < <(debugfs -R "stat \"/$name\"" "$dsk_src" 2>/dev/null | awk '/crtime:/{print $2}' | sed 's/0x//;s/:/ /')
echo "File: $name"
echo "crtime_lo: $crtime_lo"
echo "crtime_hi: $crtime_hi"
debugfs -w $dsk_dst -R "set_inode_field \"/$name\" crtime_lo 0x$crtime_lo"
debugfs -w $dsk_dst -R "set_inode_field \"/$name\" crtime_hi 0x$crtime_hi"
done
If people are interested I can adjust the script to allow to use it within one partition as well, e.g. after running cp -a. It's quite easy actually.
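As a side note, the two crtime words can be decoded without debugfs: in ext4's extra-timestamp encoding, the low 2 bits of the "hi" word extend the epoch and the remaining 30 bits hold nanoseconds. A sketch:

```python
# Decode the two crtime fields debugfs prints (e.g. 0x5b687c70:d35d1348).
# In ext4's extra timestamp encoding the low 2 bits of the "hi" word are
# epoch-extension bits and the upper 30 bits are nanoseconds.
from datetime import datetime, timezone

def decode_crtime(lo, hi):
    seconds = lo + ((hi & 0x3) << 32)   # epoch extension bits
    nanoseconds = hi >> 2
    return seconds, nanoseconds

secs, nsecs = decode_crtime(0x5b687c70, 0xd35d1348)
print(datetime.fromtimestamp(secs, tz=timezone.utc), nsecs)
# secs falls on 2018-08-06 UTC, matching the Aug 6 2018 crtime
# shown in the debugfs stat output above.
```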
| Copying or restoring crtime for files/directories on ext4fs filesystem |
1,431,331,661,000 |
I have added user_xattr to my ext4 mount options in /etc/fstab, but when I remount, extended attributes still don't work. I have installed attr & attr_dev.
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/mapper/Anonymous--vg-root / ext4\040remount,user_xattr errors=remount-ro 0 1
|
User extended attributes are supported by default on Ext4, you don’t need to do anything to enable them. To verify this, run
cd
touch xattr-test
setfattr -n user.test -v "hello" xattr-test
getfattr xattr-test
This should show that the extended attribute was successfully stored.
| how to enable xattr support in Debian 9 (Stretch) |
1,431,331,661,000 |
I'm going to move journal to another partition, but I don't know how to correctly caculate the size needed for journal?
I'm running ext4 file system with 15GB capacity.
|
$ man mkfs.ext4
The size of the journal must be at least 1024 filesystem blocks (i.e., 1MB if using 1k blocks, 4MB if using 4k blocks, etc.) and may be no more than 102,400 filesystem blocks.
I think the default size is 128MB, but I'm not sure; that might be dated. Anyway, I don't think moving the journal to another partition on the same HDD will be an improvement. If you move it to another physical disk, that can help.
The best you can do is to try different sizes and compare to current state with your real workload (not some benchmark tool that may or may not simulate operations similar to your real workload).
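The default mke2fs picks can be reproduced; the following is a rough Python rendering of e2fsprogs' ext2fs_default_journal_size() (thresholds as in the e2fsprogs source; sizes in filesystem blocks):

```python
# Rough reimplementation of e2fsprogs' ext2fs_default_journal_size(),
# which determines the journal size mke2fs uses by default.
# All sizes are in filesystem blocks.
def default_journal_blocks(fs_blocks):
    if fs_blocks < 2048:
        return 0            # too small for a journal
    for limit, size in [(32768, 1024), (256 * 1024, 4096),
                        (512 * 1024, 8192), (1024 * 1024, 16384)]:
        if fs_blocks < limit:
            return size
    return 32768

fs_blocks = 15 * 2**30 // 4096          # a 15 GB fs with 4 KiB blocks
j = default_journal_blocks(fs_blocks)
print(j, "blocks =", j * 4096 // 2**20, "MiB")   # 32768 blocks = 128 MiB
```

So for a 15 GB filesystem with 4 KiB blocks, the default works out to the full 128 MiB.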
| Keep ext4 journal on another system, how much space would be necessary? |
1,431,331,661,000 |
TLDR;
In EXT4 terminology, are "block groups" and "extents" the same thing ?
[EDIT]
The suggested What do "extents" feature do in ext4 filesystem in linux? discussion doesn't answer that question. While it explains clearly what "extents" are, it doesn't talk about "block groups", and whether it's the same thing or not (it's not : see answer below).
DETAILS
In this post they discuss the structure of a block group:
According to Wikipedia, an extent is a range of blocks.
Are those two concepts the same thing, just using different names ?
|
The ext4 block groups are how ext4 is managing block allocation. There is a bitmap to manage the allocated/freed blocks for every group.
In archaic terminology this is a "cylinder group", but it has been a very long time since this related to a cylinder on a physical disk. In XFS this is an Allocation Group (AG).
An extent is a unit of block allocation for a single file, which represents a range of physically and logically contiguous blocks allocated to that file.
| In EXT4, are "extent" and "block group" the same thing? |
1,431,331,661,000 |
I am a Unix wanderer. I just noticed that symlinks don't have data blocks allocated to them. I think the inode of the symlink file stores the filename which the symlink refers to; is this actually the case?
$ stat sdb
File: sdb -> /dev/sdb
Size: 8 Blocks: 0 IO Block: 4096 symbolic link
Device: 803h/2051d Inode: 26348139 Links: 1
....
I can only imagine one possibility for now: the inode of the sdb symlink contains, among other things (i.e. owner, permissions, ...), the /dev/sdb path.
|
ext4 stores the target of a symbolic link inside the inode, if the target is less than 60 bytes long. Longer targets will be stored in a data block.
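This is easy to observe: a symlink's st_size is always the length of its target string, and for a short target like /dev/sdb ext4 needs no data blocks at all. A quick check:

```python
# A symlink's st_size is the length of its target path.  On ext4 a
# target under 60 bytes fits inside the inode's i_block area, so such
# "fast" symlinks show 0 in st_blocks (filesystem-dependent, so not
# asserted here).
import os
import tempfile

d = tempfile.mkdtemp()
target = "/dev/sdb"
link = os.path.join(d, "sdb")
os.symlink(target, link)

st = os.lstat(link)
print(st.st_size)        # 8 == len("/dev/sdb")
```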
| Why symbolic links have no data blocks allocated to them in ext4fs? |
1,431,331,661,000 |
I'm trying to use setfattr, but always get Operation not supported
In my home directory, I'm doing the following:
touch delete.me
setfattr -n naomi -v washere delete.me
This returns setfattr: delete.me: Operation not supported.
My home directory is ext4 and delete.me definitely exists. I'm on Fedora 25. Any idea why this is not working?
|
You can't just use any name. You need to select a namespace. For arbitrary attribute name, you'd need to use the user namespace:
setfattr -n user.naomi -v washere delete.me
(see man 5 attr for details).
For ext4, the ext_attr feature must be enabled (on by default). Check with:
sudo debugfs -R stats /dev/block/device | grep -w ext_attr
And to be able to use attributes in the user namespace, the filesystem should be mounted with the user_xattr option enabled (also on by default). Check with:
grep user_xattr /proc/self/mountinfo
If it returns nothing, also check the default mount options in the debugfs output above.
| Cannot set file attribute |
1,431,331,661,000 |
I want to mount my Windows NTFS share C:\ onto my Linux ext4 file system, so I can see its file tree as part of my Linux file system and transfer my files.
PS: I am using RHEL 6.
|
As @Hauke Laging says, it will not become ext4, but you could mount it at /mnt/winshare or some other place, using Samba. A tutorial for RHEL is here. Both directions (Linux to Windows, and vice versa) are described.
BTW: this seems to be a similar question.
| How to mount remote file system |
1,431,331,661,000 |
The filesystem is ext4, the machine hasn't been rebooted in years and we don't want to do that now either.
We used to have a folder with millions of small (2-3 kB) files. This almost broke the system, so we fixed the code that was generating so many files and wrote a cron task that erased all the files within the directory (because rm wasn't working).
At first everything went smoothly: you typed ls and you got the full list of the 4-5 remaining files.
The next day, however, when I typed ls, the system took forever to execute the command (it took minutes), and the system load went over 20, which scared me a lot.
It's been basically like this for months now. The first time each day that I run ls, the system borderline slows to a crawl and eventually returns a list of ... 5 files and no subfolders.
I believe it's some ext4 cache; I've tried running various commands to no avail.
Is there anything else I could do to force ext4 to clear the cache?
The system is running in RAID 1 mode. Running cat /proc/mdstat shows that both drives are fully functional and synchronized. smartctl says the drive is in good health as well. hdparm returns the following
hdparm -tT /dev/sda1
/dev/sda1:
Timing cached reads: 19238 MB in 2.00 seconds = 9629.50 MB/sec
Timing buffered disk reads: 316 MB in 3.01 seconds = 104.92 MB/sec
|
This is a known problem with file systems in the Ext family; see Why directory with large amounts of entries does not shrink in size after entries are removed? for details.
The only way to fix this is to re-create the directory. First, rename the existing directory (this will avoid problems with processes attempting to open files there):
mv brokendir repairdir/
Then, create a new directory (not using the old name, yet):
mkdir newdir
Move all the contents of the broken directory to the new directory:
mv repairdir/* newdir/
mv repairdir/.[!.]* newdir/
mv repairdir/..?* newdir/
(as three separate commands so that you know exactly what’s going on if one of them fails, e.g. if there are no hidden files to move).
You may want to ensure the new directory’s metadata is identical to the original’s, in particular its ownership and permissions; if you’re using GNU coreutils, this can be done (once repairdir is empty) with
cp -aT repairdir newdir
Finally, move everything back into place, and delete the old directory:
mv newdir brokendir/
rmdir repairdir
| Listing directory takes forever on a folder that used to have millions of files [duplicate] |
1,431,331,661,000 |
In the Linux kernel source, the block numbers in an on-disk inode struct are 32-bit. Why? Surely Linux can support more than 2^32 blocks...
|
The interpretation of the array inode.i_block is different in Ext4 compared to previous on-disk filesystem formats. In Ext4, when the inode has the EXT4_EXTENT_FL set in i_flags this array stores the root of the extent tree and up to four extent descriptors (struct ext4_extent or struct ext4_extent_idx). You will notice that in the extent descriptor there are 48 bits for the block address. For older on-disk formats, e.g. Ext3, the maximum number of block does indeed fit in 32 bits.
See Ext4 data structures and algorithms, section 4.2 The Contents of inode.i_block.
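The 48-bit physical address is assembled from two fields of the extent descriptor; a quick sketch of the arithmetic:

```python
# In an ext4 extent descriptor the physical block number is split into
# ee_start_hi (16 bits) and ee_start_lo (32 bits), i.e. 48 bits total.
def extent_start(ee_start_hi, ee_start_lo):
    return (ee_start_hi << 32) | ee_start_lo

max_block = extent_start(0xFFFF, 0xFFFFFFFF)        # 2^48 - 1
# With 4 KiB blocks, 2^48 blocks gives 2^60 bytes = 1 EiB of
# addressable space.
print(hex(max_block), (max_block + 1) * 4096 // 2**60, "EiB")
```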
| 32-bit block addresses in ext4 inode struct |
1,431,331,661,000 |
There is a troublemaking empty file (the md5sums of kernel 4.19.1) left on my Ubuntu system, which has strange owner/group/date/attributes.
How can I fix or work around this defective file?
$ uname -a
Linux olly-ryzen-pc1 4.20.10-042010-generic #201902150516 SMP Fri Feb 15 10:19:07 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
stat
$ stat /var/lib/dpkg/info/linux-image-unsigned-4.19.1-041901-generic.md5sums
  File: /var/lib/dpkg/info/linux-image-unsigned-4.19.1-041901-generic.md5sums
  Size: 0          Blocks: 0          IO Block: 4096   regular empty file   <= empty file
Device: 802h/2050d Inode: 27918873    Links: 1
Access: (5625/-rwS-w-r-t)  Uid: (477987903/ UNKNOWN)   Gid: (3699747887/ UNKNOWN)
Access: 2381-05-02 11:29:39.163881368 +0100
Modify: 2293-06-01 00:54:46.455862499 +0100
Change: 2167-05-10 21:19:01.867729249 +0100
 Birth: -
lsattr
$ lsattr /var/lib/dpkg/info/linux-image-unsigned-4.19.1-041901-generic.md5sums
lsattr: No data available while reading flags on /var/lib/dpkg/info/linux-image-unsigned-4.19.1-041901-generic.md5sums
apt, dpkg
This file can't be changed or deleted (so 4.19.1 cannot be removed/purged), and it breaks apt-get when installing applications.
dpkg with --fix-broken or --reinstall also exits with a 'not allowed' message.
Cannot be deleted.
unable to delete control info file '/var/lib/dpkg/info/linux-image-unsigned-4.19.1-041901-generic.md5sums': Operation not permitted
chmod -st, chown root:root
No changes.
rm -f
No.
live USB
Also tried a boot of ubuntu (install 4.18) from USB-Stick to repair, but:
sudo e2fsck -f /dev/sda2 does not report an error
sudo badblocks -vsn /dev/sda2 reports 0 bad blocks
and rm, chmod, chown: same behavior as above ..
For comparison, here is a neighbouring file:
$ stat /var/lib/dpkg/info/linux-sound-base.md5sums
Datei: /var/lib/dpkg/info/linux-sound-base.md5sums
Größe: 545 Blöcke: 8 EA Block: 4096 Normale Datei
Gerät: 802h/2050d Inode: 27269131 Verknüpfungen: 1
Zugriff: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Zugriff : 2019-02-03 16:56:08.943545720 +0100
Modifiziert: 2015-07-31 05:42:23.000000000 +0200
Geändert : 2018-05-22 01:20:37.178864616 +0200
Geburt : -
$ lsattr /var/lib/dpkg/info/linux-image-unsigned-4.19.1-041901-generic.list
--------------e---
/var/lib/dpkg/info/linux-image-unsigned-4.19.1-041901-generic.list
|
As fsck does not find anything wrong, you may need to use debugfs to clear the inode. Note that I last used debugfs years ago, so take care! Read the manpage first to see what's possible with this tool.
Boot from a rescue medium, and run
debugfs /dev/sda2
You can try to use debugfs's rm command to remove the file:
rm /var/lib/dpkg/info/linux-image-unsigned-4.19.1-041901-generic.md5sums
(assuming that /dev/sda2 is mounted on /, not /var)
If that doesn't work, you might try freeing the inode. You already know the inode number (27918873) from the stat output. You can free the inode with:
freei 27918873
After manipulating the filesystem with debugfs I recommend running fsck again.
| Error when move or delete file on ext4 in dpkg info directory |
1,431,331,661,000 |
I am trying to find the association between block groups and superblocks in an ext4 filesystem
I was unable to find much of documentation online, except this link that hints that
Because of the importance of the superblock and because damage to it (for example, from physical damage to the magnetic recording medium on the disk) could erase crucial data, backup copies are created automatically at intervals on the filesystem (e.g., at the beginning of each block group)
However, the dumpe2fs command seems to indicate that there are far fewer superblock copies than block groups:
$ sudo dumpe2fs /dev/sda5 | grep -i group | wc -l
dumpe2fs 1.44.1 (24-Mar-2018)
2690
$ sudo dumpe2fs /dev/sda5 | grep -i superblock
dumpe2fs 1.44.1 (24-Mar-2018)
Primary superblock at 0, Group descriptors at 1-21
Backup superblock at 32768, Group descriptors at 32769-32789
Backup superblock at 98304, Group descriptors at 98305-98325
Backup superblock at 163840, Group descriptors at 163841-163861
Backup superblock at 229376, Group descriptors at 229377-229397
Backup superblock at 294912, Group descriptors at 294913-294933
Backup superblock at 819200, Group descriptors at 819201-819221
Backup superblock at 884736, Group descriptors at 884737-884757
Backup superblock at 1605632, Group descriptors at 1605633-1605653
Backup superblock at 2654208, Group descriptors at 2654209-2654229
Backup superblock at 4096000, Group descriptors at 4096001-4096021
Backup superblock at 7962624, Group descriptors at 7962625-7962645
Backup superblock at 11239424, Group descriptors at 11239425-11239445
Backup superblock at 20480000, Group descriptors at 20480001-20480021
Backup superblock at 23887872, Group descriptors at 23887873-23887893
Backup superblock at 71663616, Group descriptors at 71663617-71663637
Backup superblock at 78675968, Group descriptors at 78675969-78675989
$ sudo dumpe2fs /dev/sda5 | grep -i superblock | wc -l
dumpe2fs 1.44.1 (24-Mar-2018)
17
How many copies are there actually, and how (and when) are that number and the superblock placement decided?
|
Here's what the official documentation has to say about that:
If the sparse_super feature flag is set, redundant copies of the
superblock and group descriptors are kept only in the groups whose
group number is either 0 or a power of 3, 5, or 7. If the flag is not
set, redundant copies are kept in all groups.
The sparse_super feature (this is one of the filesystem features, you can list them all via tune2fs or dumpe2fs) is documented in the ext2/3/4 manual/info page:
sparse_super
This file system feature is set on all modern ext2, ext3, and ext4 file systems. It indicates that
backup copies of the superblock and block group descriptors are present only in a few block groups, not
all of them.
The same information is available via the old ext2 official documentation:
The first version of ext2 (revision 0) stores a copy at the start of every block group, along with backups of the group descriptor block(s). Because this can consume a considerable amount of space for large filesystems, later revisions can optionally reduce the number of backup copies by only putting backups in specific groups (this is the sparse superblock feature). The groups chosen are 0, 1 and powers of 3, 5 and 7.... IOW superblock groups are 0, 1, 3, 5, 7, 9, 25, 27, 49, 81, 125, 243, 343 etc
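The dumpe2fs output above can be reproduced from that rule; with the default 32768 blocks per group (4 KiB block size), the backup locations for this filesystem (roughly 2400+ block groups) come out as:

```python
# Which block groups hold superblock backups under sparse_super:
# group 0, group 1, and powers of 3, 5, and 7.  Multiplying a group
# number by blocks-per-group (32768 here) gives the block offsets
# dumpe2fs reports, e.g. group 3 -> block 98304.
def backup_groups(group_count):
    groups = {0, 1}
    for base in (3, 5, 7):
        power = base
        while power < group_count:
            groups.add(power)
            power *= base
    return sorted(groups)

blocks_per_group = 32768
for g in backup_groups(2402):           # this fs has ~2402 block groups
    print(g, "-> block", g * blocks_per_group)
```

This yields exactly the 17 superblock locations (0, 1, 3, 5, 7, 9, 25, 27, 49, 81, 125, 243, 343, 625, 729, 2187, 2401) shown by dumpe2fs.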
| Superblock replicas in ext4 |
1,431,331,661,000 |
I am using GParted (0.28.1, Fedora 25) to format a external drive and noticed that the command displayed is:
mkfs.ext4 -F -O ^64bit -L "INSTALL" /dev/sdd1
When making disks in the past from the command line I have just used mkfs.ext4 DEVICE, which seemed to work well on various architectures. However, the above includes the option -O ^64bit, which I guess disables some default 64-bit feature of the filesystem so it works with 32-bit systems. Does it do this, and is it normally necessary to pass it on modern Linux OSs (to enable compatibility with 32-bit etc. systems)? And what cost could it have, other than probably reducing the volume size limit?
|
The default options for mke2fs, including those for ext4, can be found in /etc/mke2fs.conf. They could be different depending on the distro you're using, so I'd take a look at that file on any distro you're curious about to see if the -O ^64bit parameter would be necessary. According to the man page, '^' is indeed the prefix used to disable a feature. The effect of not using 64-bit ext4 is that you'll be limited to ~16 TiB volumes (with 4 KiB blocks), whereas you can have 1 EiB volumes if you use the 64bit flag. HOWEVER, 16T is the recommended maximum volume size for ext4 anyway.
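The ~16T figure is just 2^32 block numbers times the default 4 KiB block size:

```python
# Without the 64bit feature, ext4 block numbers are 32-bit, so with the
# default 4 KiB block size a filesystem tops out at 2^32 blocks.
BLOCK = 4096

limit_32bit = 2**32 * BLOCK
print(limit_32bit // 2**40, "TiB")   # 16 TiB
```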
| What does this mkfs.ext4 operand mean? |
1,431,331,661,000 |
The ext4 file system usually uses 4 KiB blocks. So when you write a small file whose size is less than 4 KiB, you will see the difference in any file manager: there are usually two values, the size of the file and the size on disk. The first one has the exact value, and the other is a multiple of 4 KiB.
In the case of larger files, I had always thought that the sizes can't differ by more than 4 KiB (the last, not fully written block). But for some files on my disk, I can see that the difference is more than 4 KiB, for instance 9425 bytes. So the question is simple: why do the sizes differ by more than 4 KiB? Is it because of fragmentation or something else? Isn't it weird that some blocks in the middle of the file aren't fully written?
|
The list of blocks that make up the file has to be stored somewhere. Typically there's a little space in the inode, but if there are too many blocks to fit in the inode, the filesystem allocates indirect blocks to store the address of the blocks, in addition to the blocks that contain file data. At least for ext2/ext3/ext4 on Linux, and I think for most Unix-like filesystems on most Unix-like operating systems, the indirect blocks are taken into account in the file's disk usage.
Ext4 uses extent trees to store block lists. If a file uses a list of consecutive blocks in order, this takes up a single entry in the tree. Thus a file with little fragmentation doesn't need any indirect blocks, just one entry in the tree that specifies the first block and the number of blocks. A maximally fragmented file needs a lot of indirect blocks to store one tree entry per block. If the file is not fragmented or only very slightly then no indirect block is needed and the file's disk usage is the file size rounded up to a whole number of filesystem blocks. Fragmented files require indirect blocks.
Ext2 and ext3 have a simpler scheme where the block list is not compressed so the number of entries scales slightly more than linearly with the size of the file, requiring indirect blocks if the file uses more than 12 blocks (that's how many blocks can be recorded directly in the inode).
You can explore an ext2/ext3/ext4 filesystem with the debugfs command. In debugfs, blocks /path/to/file lists the blocks used by a file; this shows how fragmented the file is. The command filefrag /path/to/file gives the number of fragments; for ext4 this correlates with the number of indirect blocks and hence with the difference between file size and file disk usage.
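The minimum size-on-disk is simple rounding; anything beyond that is metadata blocks charged to the file. A small helper to make the arithmetic concrete:

```python
# For an unfragmented file the size on disk is simply the file size
# rounded up to whole filesystem blocks; any extra difference comes
# from extent-tree/indirect blocks, which st_blocks counts as well.
import math

BLOCK = 4096

def min_disk_usage(size):
    """Smallest possible on-disk size for a file of `size` bytes."""
    return math.ceil(size / BLOCK) * BLOCK

print(min_disk_usage(5000))           # 8192
print(min_disk_usage(5000) - 5000)    # 3192: at most BLOCK-1 bytes of slack
# A difference larger than BLOCK-1 bytes (such as the 9425 bytes
# observed in the question) implies at least one extra metadata block.
```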
| Why is the difference in file size and it's size on disk bigger than 4 KiB? |
1,431,331,661,000 |
My task is to store a list of JSONs on disk (without using any database) and I have these options:
Store them in a single, large file.
Store them in separate files, keyed by their IDs.
Personally I prefer the second option, since it allows directly addressing any JSON by its ID without ever having to touch any other JSONs. However, there are roughly 0.1 to 1 million JSON entries, and I'm afraid of the possible negative consequences for the underlying filesystem (ext4 in my case):
Will this go over filesystem limits about the number of files (either in a directory or in a whole filesystem)?
Will this cause a slowdown while retrieving a specific ID?
To be more specific, I believe the list of files under a directory is maintained by the directory's inode structure, but I'm not sure what data structure (list or map) it uses to keep the file list. Is there any performance gain in the lookup if I use a hierarchy of directories? For example, putting 0123456789.json into root/01/0123456789.json instead of root/0123456789.json?
|
Having 1 million files in a single directory would slow things down, but so would parsing an aggregate JSON with 1 million entries. Your best bet is indeed to use hashed directories, but you probably want to go two levels deep rather than just one. Namely, put 0123456789.json in root/0/01/0123456789.json, and 987654321.json in root/9/98/987654321.json.
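The two-level layout can be captured in a small helper (`shard_path` is a name invented here for illustration):

```python
# Two-level hashed directory layout: first character, then first two
# characters of the ID, then the file itself.
import os

def shard_path(root, file_id):
    name = str(file_id)
    return os.path.join(root, name[0], name[:2], name + ".json")

print(shard_path("root", "0123456789"))   # root/0/01/0123456789.json
print(shard_path("root", 987654321))      # root/9/98/987654321.json
```

With ~1 million IDs this keeps each leaf directory down to roughly ten thousand entries, which ext4 handles comfortably.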
| What are the consequences of having many files in a directory in an ext4 filesystem? |
1,431,331,661,000 |
In order to store attachments, a /path/to/atts/ directory will have numerous child-directories (product IDs) created (from 1 to ~10,000 or maybe more in the future), and in each of this subdir, 1 to ~10 attachment files will be created.
In /path/to/atts/
1
├── file1.1
├── file1.2
└── file1.3
2
└── file2.1
...
10000
├── file10000.1
├── file10000.2
├── file10000.3
├── file10000.4
└── file10000.5
(actually 1 .. 10000 was chosen for the sake of a simpler explanation - IDs will be int32 numbers)
I'm wondering, on the ext4 file system, what the cd (actually, path resolution) complexity is when reaching /path/to/atts/54321/..., for instance:
Does the path resolution check all inodes / names one by one in the atts dir until 54321 is reached? Meaning on average n/2 entries are checked (O(n)).
Or is there some tree structure within a directory that reduces the search (e.g. a trie, alphabetical ordering...), which would dramatically reduce the number of entries checked, like log(n) instead of n/2?
If it is the former, I'll change the way the products tree structure is implemented.
Just to be clear: the question is not about a find search of a file in a file system tree (that's O(n)). It's actually a path resolution (done by the FS), crossing a directory where thousands of file names reside (the product IDs).
|
You can read about the hash tree index used for directories here.
A linear array of directory entries isn't great for performance, so a new feature was added to ext3 to provide a faster (but peculiar) balanced tree keyed off a hash of the directory entry name.
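A rough, unscientific way to convince yourself that resolving one name in a big directory stays cheap: create a few thousand entries and time a single lookup. On ext3/ext4 with dir_index (the default), the kernel consults the hashed tree rather than scanning entries linearly; a real comparison would of course vary n.

```python
import os
import shutil
import tempfile
import time

d = tempfile.mkdtemp()
n = 2000
for i in range(n):
    open(os.path.join(d, f"{i:010d}.json"), "w").close()

# Resolve one name among the n entries; with the hash tree index this
# does not degrade into an O(n) scan of the directory entries.
target = os.path.join(d, f"{n - 1:010d}.json")
t0 = time.perf_counter()
found = os.path.exists(target)
elapsed = time.perf_counter() - t0
print(found, f"{elapsed * 1e6:.0f} us")
shutil.rmtree(d)
```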
| 'cd' complexity on ext4 |
1,431,331,661,000 |
I have a directory called Pages of 2.2 million HTML files (about 80 GB) on an Ubuntu server. I compressed it with 7-Zip using this command:
7z a -mx=9 Pages.7z Pages
It took around 5-6 hours to compress (seems excessive). Compressed size is about 2.3 GB.
I then downloaded it to my main computer (Ubuntu, Intel® Xeon® CPU E5-1650 v2 @ 3.50GHz). Every time I try to extract, it starts off at a disappointing but acceptable speed, then slows down to a crawl as it gets further along (I ran it overnight, and when I woke up it was doing about 300 files per minute).
However, on my Windows machine (Intel® Xeon® CPU E5-2687W @ 3.10GHz), which is only a slightly better machine, I extracted the entire directory in 15-20 minutes. It also clearly made use of multiple processors, which I can't get 7-Zip to do on Ubuntu.
Obviously I can't have an extraction take several days, nor should I.
My sense is this has to do with something I don't know about Ubuntu (I'm a recovering Windows user) or my file system rather than 7-Zip. Any help would be tremendously appreciated.
My main computer uses ext4 file system, and the version of 7-Zip I have is 9.20:
7-Zip [64] 9.20 p7zip Version 9.20 (locale=en_US.UTF-8,Utf16=on,HugeFiles=on,12 CPUs)
Update:
I should clarify that I actually have one drive on my main Ubuntu
installation that is ext4 (my ssd), though I have another one that is
ntfs (I think I remember this being recommended by Ubuntu during
installation, perhaps b/c I set it up as a raid array). The problem
of slowing down over time was happening regardless of which I was
working from.
Following advice in the comments, I used my Windows machine to unzip
the compressed file, restructure the directory with 4096
subdirectories, and re-zip it (though this time I used the default
compression level rather than maximum, and specified lzma2). I then transferred it to my
Ubuntu machine (the ext4 SSD specifically) and unzipped. It worked
perfectly as I would expect - very fast.
However, as another commenter noted, part of the problem here is
likely just that my drives on the Ubuntu machine are not indexed (they
are on Windows), and I might not have to restructure directories at
all if I do index (which I've been wanting to do anyway). I'm
currently trying to figure out how to do that successfully and
safely...and will report back with any useful results.
I've also tried restructuring a directory already on my Ubuntu machine
using python, which is going unreasonably slow. Perhaps it's a python
issue rather than Linux/ext4/ntfs or perhaps it also has to do with
indexing, or perhaps it is b/c the source directory has 2.2 million files
in one directory...:
import os, shutil

for fileName in series:
    dest = '[...]/Pages2/' + fileName[:3] + '/' + fileName
    if not os.path.exists(dest):
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        shutil.copy('[...]/Pages/' + fileName, dest)
|
I finally figured out the actual answer when I read the wikipedia entry for XZ (https://en.wikipedia.org/wiki/Xz):
One can think of xz as a stripped-down version of the 7-Zip program.
xz has its own file format rather than the .7z format used by 7-Zip
(which lacks support for Unix-like file system metadata[2]).
It is in fact okay to have millions of small files in a single directory, it would seem, on either NTFS or EXT-4 with Ubuntu (perhaps not advisable for other reasons however). There was also nothing wrong with the indexing on my file systems. The reason 7zip slows down when trying to extract a massive directory has everything to do with the writers of 7zip not caring much about Linux/Unix users.
This does half make me wonder whether whoever wrote Nautilus is similarly contemptuous of Linux users...b/c it really doesn't like directories with lots of files either, whereas Windows Explorer has no problems with it.
| 7-Zip slows down over time on Ubuntu but not Windows |
1,431,331,661,000 |
I have a few TB of media files that never changes and that I need to store safely. Since it's a personal business it's overkill to set up a disk server et cetera, so I use the simple solution of storing the files on harddrives at two different locations. Then, at intervals of a few years, I rewrite the drives to refresh them.
They are now ext4, what are the pros and cons of this?
What should I consider when choosing a filesystem for storage disks like these?
|
Short answer: ext4 is the standard file system on most Linux distributions. It works, it is safe, and as @Marco said:
If ext4 works for you, just keep using it
Choosing a file system
It depends on what are your objectives.
For total compatibility across systems, you may choose FAT32 (don't blame me, I think it's a terrible choice).
NTFS works well on mostly all systems, at least in read.
ReiserFS / Reiser4 (mostly on Linux systems) is known to be very fast.
You may read this Wikipedia article to see each FS limits and features.
Here are the main features you can think about:
Journal support (avoid losing data)
Versioning (switch between files version, like an integrated SVN or GIT support)
Scalability (extending the size; multiple filesystems over the network (NFS))
Native encryption support
Drivers (which OSes / hardware can mount (read/write) the file system?)
Design limitations (file name length, maximum file size (e.g. FAT32 caps files at 4 GB))
Native support for data replication (ex: ZFS)
Ext4 pros:
Read / write works on every Linux system
Backward compatible with ext2 and ext3 (mount them as ext4)
Journalized
Mature, supported, open-source
Support SSD trim (in short, increase SSD lifetime)
Ext4 cons:
macOS and Windows don't support ext4 without additional software
Recovering deleted files is difficult (even if a tool exists)
| What should I consider when choosing a filesystem for a personal disk archive / cold storage? |
1,431,331,661,000 |
I reinstalled Windows 7 on its assigned partition and, as usual, it overrode the MBR with its own stuff, so it was no longer possible to boot into my Ubuntu 12.04 partition.
I followed the steps in this tutorial and everything went well.
When booting into my Ubuntu 12.04 system after that, I got an error message telling me that there was an error while mounting /home/. I chose to ignore the error and the boot continued successfully up to the login screen. When I tried to log into my account, nothing happened after I entered my password.
I opened a terminal and saw a dummy /home/ with nothing inside.
Here is what my disk looks like:
# fdisk -l
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical) : 512 bytes / 512 bytes
I/O size (minimum/optimal) : 512 bytes / 512 bytes
Disk identifier: 0x000913aa
Device Boot Start End Blocks Id System
/dev/sda1 * 63 275803919 137901928+ 7 HPFS/NTFS/exFAT
/dev/sda2 275803290 317797829 20996955 83 Linux
/dev/sda3 317797954 1953520064 817861055+ 5 Extended
/dev/sda5 1936716075 1953520064 8401995 82 Linux swap / Solaris
# blkid
/dev/sda1: UUID="4CD32DDF72FB084D" TYPE="ntfs"
/dev/sda2: UUID="dae0bc16-7133-4706-8a40-fdd84e281651" TYPE="ext4"
/dev/sda5: UUID="2daec68e-08b6-452f-8f75-2f59ebf61ba5" TYPE="swap"
Here is what happen when I try to mount it myself
# mount /dev/sda3 /mnt
[ 2680.555298] EXT3-fs (sda3): error: unable to read superblock
[ 2680.564065] EXT4-fs (sda3): unable to read superblock
mount: you must specify the filesystem type
# mount -t ext4 /dev/sda3 /mnt
[ 2863.195328] EXT4-fs (sda3): unable to read superblock
mount: wrong fs type, bad option, bad superblock on /dev/sda3,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
# dmesg | tail
... other stuff related to my sound card ...
[ 21.538194] init: mountall main process (325) terminated with status 2
[ 2680.555298] EXT3-fs (sda3): error: unable to read superblock
[ 2680.564065] EXT4-fs (sda3): unable to read superblock
[ 2863.195328] EXT4-fs (sda3): unable to read superblock
Then, I read somewhere to try
# mke2fs -n /dev/sda3
mke2fs 1.42 (29-Nov-2011)
mke2fs: inode_size (128) * inodes_count (0) too big for a
filesystem with 0 blocks, specify higher inode_ratio (-i)
or lower inode count (-N).
# fsck.ext4 /dev/sda3
e2fsck 1.42 (29-Nov-2011)
fsck.ext4: Attempt to read block from filesystem resulted in short read
while trying to open /dev/sda3
Could this be a zero-length partition?
Basically, it seems that my partion /dev/sda3 is assumed to have a zero-length.
How can I solve this issue?
|
You don't seem to have a separate home partition. /dev/sda3 is an extended partition (hence the "Extended" type in the fdisk -l output); you will not be able to mount it, and it will not contain your /home.
Unless you have a second hard disk, it appears you deleted your /home partition while installing Windows. The only partitions on sda are the Windows one (sda1), what I imagine is your Linux root (sda2), the extended one (sda3) and a swap partition. If you have a second hard drive, please update your question.
| Unable to mount /home/ partition after reinstalling grub after reinstalling windows 7 |
1,431,331,661,000 |
To be able to test out of disk situations I tried to set up a file-based size-limited file system like this:
$ dd if=/dev/zero of=file.fs bs=1MiB count=1
$ mkfs.ext4 file.fs
$ udisksctl loop-setup -f file.fs
Mapped file file.fs as /dev/loop1.
$ udisksctl mount --options rw -b /dev/loop1
Mounted /dev/loop1 at /media/myuser/29877abe-283b-4345-a48d-d172b7252e39
$ ls -l /media/myuser/29877abe-283b-4345-a48d-d172b7252e39/
total 16
drwx------ 2 root root 16384 Dec 2 22:08 lost+found
But as can be seen, it's made writable only for root. How do I make it writable for the user that is running the commands?
I can't chown or chmod it since that also gives "Operation not permitted".
I tried with some options to udisksctl like -o uid=<id> but then I get an error about that mount option not being allowed.
Since this should be able to run for normal users I can't use root or sudo.
I am on Ubuntu 22.04.1.
|
Yeah, that's kind of mean :) But you can work around it:
mkfs.ext4 takes a -d directory/ option with which you can specify a directory containing an initial content for the file system; if you already know which directories you'll later want to populate, that would be a good place to start.
mkfs.xfs supports -p protofile; that probably does exactly what you want to do. A file myprotofile containing naught but:
thislinejustforbackwardscompatibility/samefornextline
1337 42
d--777 1234 5678
where the first line is just a single string for backwards compatibility, which will be ignored; the second line must contain two numbers, which will also be ignored. (See man mkfs.xfs for more details than I remember off the top of my head.)
The third line contains a filemode uid gid tuple, describing the root directory. Replace 1234 with your user id of choice, and 5678 with the group id of your choice.
A subsequent
mkfs.xfs -p myprotofile -f file.fs
should do (but your image file needs to be at least 16 MB in size for a default-configured mkfs.xfs), so
dd if=/dev/zero of=file.fs bs=1MiB count=16
mkfs.xfs -p myprotofile -f file.fs
udisksctl loop-setup -f file.fs
works and automounts the filesystem rw on my system (but that's not necessarily the case on your system – your mount thing should work; but --options rw seems a bit superfluous).
| Create writable file system using udisksctl |
1,431,331,661,000 |
Reading the man page of chattr, I came across the I flag for Indexed Directory. Upon investigation it turned out that this refers to HTree indexed directories, as described by this paper. It says that hashed tree provided a similar performance to BTrees, but are way simpler to implement.
After running lsattr in my home directory on my Ubuntu machine, I noticed that the Downloads directory has the Indexed Directory flag set, but nothing else does. I also noticed that stat --format "%s" Downloads tells me its size is 12 KiB instead of the 4 KiB I get for all other directories in my home. I searched the Internet for further information on this topic, but only found this paper from 2001.
The system is an Ubuntu 19.10 with kernel 5.3.0-26 on an ext4 root.
My questions are:
What's the practical difference between HTree and non-HTree directories?
How can I create them? How was it created?
Why isn't every directory an HTree one?
|
In any modern ext3/ext4 filesystem, all directories larger than a single filesystem block (typically 4KB) will be indexed. This happens automatically when the directory grows beyond the first block.
There isn't any particular way to "create" an htree directory beyond adding more entries than fit into one block (depending on filename length, maybe 60-100 files). Once a directory has grown in size, it will never be shrunk by ext4, though there are some patches floating around that may implement this one day.
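You can watch a directory outgrow its first block (the point at which, on ext3/ext4, it becomes htree-indexed) by polling the directory's own st_size while adding entries; the exact numbers depend on the filesystem and the filename lengths, so treat this as a sketch:

```python
import os
import shutil
import tempfile

d = tempfile.mkdtemp()
sizes = []
# Keep adding entries; on ext3/ext4 the directory's own st_size jumps
# from one block (typically 4096 bytes) to several once the entries no
# longer fit, and at that point the htree index is created automatically.
for i in range(500):
    open(os.path.join(d, f"file-{i:04d}"), "w").close()
    sizes.append(os.stat(d).st_size)

print("directory size went from", sizes[0], "to", sizes[-1], "bytes")
shutil.rmtree(d)
```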
| HTree Indexed Directory |
1,431,331,661,000 |
I want to backup my home directory to an external SSD drive using rsync.
I'm on Arch Linux. My home is ext4 (251G), the SSD is NTFS-3G mounted as fuseblk (512G).
The exact rsync invocation is:
rsync -aSh --info=progress2 --delete --exclude=/me/.cache /home/me /run/media/me/Samsung_T5/
Eventually, it fails with this being its last words:
218.76G 99% 25.08MB/s 2:18:36 (xfr#2093188, ir-chk=1368/2286507)
rsync: write failed on "/run/media/me/Samsung_T5/me/a_file": No space left on device (28)
So, rsync allegedly copied around 218G of data and couldn't go further because my SSD was full.
When I ask du how much data is there on my SSD rsync destination, it says 466G.
$ du -hs /run/media/me/Samsung_T5/me
466G /run/media/me/Samsung_T5/me
This is weird. rsync set out to copy ~280G, reported 218G transferred, and failed because it had actually written 466G to the destination.
What am I getting wrong here?
I do know that NTFS and ext4 are different. But are they different enough to make my files more than 2x larger?
Am I copying more than I actually have in my home?
What would be the correct rsync procedure to back up my ~280G home to my SSD as something comparable in size with my home?
UPDATE [Thanks to the comments below]:
I have a large number of small files in my source directory and a certain amount of sparse files.
For example, there is a file that is 4K in the source and 128K in the destination. There is also a sparse file that is 12K in the source and 128K in the destination.
Also, I have 244 hard links to various executables and shared libraries. Some of those hard links point to relatively large files. For example, a version of the binutils linker (ld) is around 7M and I have 4 hard links to it.
|
You might look at duplicity and its gui deja-dup. It does incremental backups using tar files, optionally encrypted, optionally to a remote server.
It uses librsync and its rolling-checksum algorithm so that each incremental archive holds only the changed parts of files.
The home page says it handles Unix permissions, symbolic links, fifos, and
device files, but does not preserve hard links. If you have many large hard-linked
files it may be sub-optimal in the archive, but more importantly, you may also want to note separately which files are interlinked so that if you need to restore them you can put back the link. If possible, converting to symbolic links would solve this problem.
You can look for hard links with something like
find /home/me -links +1 -type f -printf '%n %i %D %p\n' | sort -n
where the format string shows %n the number of links, %i the inode number, %D the device the file is on, and %p the pathname. Lines with the same inode number and device are hard links. The device is only useful if you have mount points within the directory tree (as the same inode on a different device is not the same file). Of course, hard links to files outside the tree cannot be handled, even by rsync.
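The grouping that the find pipeline produces can also be sketched in a short Python function (the function name is made up), keying regular files by (device, inode); any key with more than one path is a set of hard links to the same file:

```python
import os
import stat
from collections import defaultdict

def hard_link_groups(root):
    """Group regular files under `root` by (device, inode); any group
    with more than one path is a set of hard links to the same file."""
    groups = defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.lstat(path)
            if stat.S_ISREG(st.st_mode):
                groups[(st.st_dev, st.st_ino)].append(path)
    return {key: paths for key, paths in groups.items() if len(paths) > 1}
```

As with the find version, hard links whose other paths lie outside the scanned tree will not show up as a group here, even though their st_nlink is above 1.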
| rsync doubles the size when copied from ext4 to NTFS-3G |
1,431,331,661,000 |
I have an ext4 volume that is 3.6T. According to df, the USED is less than SIZE with about 100GB free. However AVAIL shows 0.
If I run gparted it shows the real amount of free space (100GB).
If I try to write any files, I get the error message:
No space left on device
The only thing I can think of is that I use rsnapshot so there are lots of hardlinks to the same inode on the drive.
What is going on?
|
What you're looking at is the reserved space and the file system overhead for ext4.
Reserved space is 5% by default on any ext4 FS, and it is reserved for the root user only!
FS overhead consists of:
the inode table at format time
the journal ( usually 128 MB )
resize inodes.
So basically: the OS is still running, users cannot write to that FS any more: add more disks!
Please don't try to reduce the reserved space because that 5% also helps in keeping fragmentation to a minimum and why we never need to defragment ext2/3/4 partitions!
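The gap between "free" and "available" is visible programmatically via statvfs(3): f_bfree counts every free block, while f_bavail excludes the reserved ones and is what df prints under AVAIL. A small sketch (querying / here purely as an example):

```python
import os

st = os.statvfs("/")
free = st.f_bfree * st.f_frsize    # all free space, including the root reserve
avail = st.f_bavail * st.f_frsize  # what unprivileged users may actually use
reserved = free - avail            # the gap df shows between free and AVAIL

print(f"free: {free}  available: {avail}  reserved for root: {reserved}")
```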
| Showing 0 (zero) disk space available even though there is free space |
1,431,331,661,000 |
I have 2 physically identical disks. Each with 1 partition:
| Disk | FS | Size | Comment |
|----------+------+----------+----------------------------------|
| /dev/sdb | NTFS | 468.8 GB | Partition created long |
| | | | ago with Partition magic, Win XP |
|----------+------+----------+----------------------------------|
| /dev/sdc | ext4 | 458.5 GB | Partition created last |
| | | | week with Linux fdisk v. 2.21.2 |
|----------+------+----------+----------------------------------|
Here is fdisk info for each of them:
sdb
Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x3765c6b7
Device Boot Start End Blocks Id System
/dev/sdb1 * 63 976768064 488384001 7 HPFS/NTFS/exFAT
sdc
Disk /dev/sdc: 500.1 GB, 500107862016 bytes
81 heads, 63 sectors/track, 191411 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe84f8200
Device Boot Start End Blocks Id System
/dev/sdc1 2048 976773167 488385560 83 Linux
Despite the disks being identical, fdisk gives different info:
| Disk | heads | cylinders |
|------+-------+-----------|
| sdb | 255 | 60801 |
| sdc | 63 | 191411 |
Questions
Why does this difference exist?
Why does sdb1 end at block #976768064, while sdc1 ends at #976773167?
Space at the end: 976773167 - 976768064 = 5103 blocks
Space at the beginning: 63 - 2048 = -1985 blocks
Total: 5103 - 1985 = 3118 blocks
So sdc1 should have 3118 more blocks, while in reality the partition is smaller. Why?
I have heard that it is better to start a partition at block #2048 rather than #63, so sdc's filesystem should work faster than sdb's. Is that true?
|
Cylinder/head/sector addressing is horrendously obsolete, but some old disk tools still use it by default, and Linux fdisk supports it in emulation. The CHS values it's giving do not refer to any physical reality of the disk, but are guesses based on (I'd guess) the current partition table. They can probably be safely ignored.
sdc1 runs right up to the end of the disk, as per the fdisk default; sdb1 stops somewhat short, leaving free space at the end, for some probably-inscrutable purpose (maybe Windows uses this?). Meanwhile, sdc1 starts at sector 2048, meaning it's 1 MB aligned; this ensures decent performance on modern disks, and also leaves plenty of space before the first partition for bootloaders, GPT if desired, and so on. sdb1 uses an older convention for the first partition's starting sector, which will still work on modern hardware, but may cause alignment issues and boot-loading problems.
The reason why sdb1 scans as bigger than sdc1, even though the latter is more sectors long? If you got those numbers out of a filesystem checker, I'd guess it's due to the differing FSs, and code not treating them identically. (It's possible, for instance, that the NTFS FS code is reporting something more like the raw partition size, while ext* is subtracting filesystem overhead.) Without more details here, it's hard to say.
Whether the partition start sector matters depends somewhat sensitively on your setup. Old disks were fine so long as everything was 512-byte aligned; newer ones switched to 4k physical sectors, and so they want things aligned to that granularity (and enacted a hefty performance penalty if they weren't). Starting from 1M allows easier use of some various (mostly Linux/Unix) device-mapper techniques, which add disk overhead at the beginning of a device, while still ensuring 4k alignment. It's hard to say whether this will matter in your case, but 1M alignment is probably good form.
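The alignment point is plain arithmetic: with 512-byte logical sectors, a partition starting at sector 2048 begins exactly 1 MiB into the disk, a multiple of the common 4 KiB physical sector size, while the old sector-63 start is not:

```python
SECTOR = 512  # bytes, the logical sector size from the fdisk output above

for start in (63, 2048):
    offset = start * SECTOR
    print(f"partition starting at sector {start}: "
          f"offset {offset} bytes, 4 KiB aligned: {offset % 4096 == 0}")
```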
| Comparing 2 partitions on identical disks |
1,431,331,661,000 |
I'm curious, what is the smallest size a file can really be on Linux? (Assuming Ext3 fs, so why not ext4 fs as well).
Sure, you can write a file that contains only one byte, or even zero bytes; but surely that will still allocate some minimum, convenient amount of space.
So what is the minimum allocation / block size that can be allocated on ext3, and or ext4?
|
The smallest possible allocation for a file in ext4 is 0 (none at all) thanks to the inline_data feature: files smaller than 60 bytes can be stored completely inside the inode itself.
Of course, every file still occupies an inode, whether it's a regular file, a symlink, a directory (which can contain data), or a character device, block device or named pipe (none of which possess the concept of "contents"). You can read about the size of the inode itself.
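A quick way to see what a tiny file actually occupies is again st_blocks; on a default ext4 (4 KiB blocks, inline_data off) you would expect one full block, i.e. st_blocks of 8, though delayed allocation and other filesystems can show different numbers:

```python
import os
import tempfile

# Write a one-byte file and see how much space it occupies. On a default
# ext4 (4 KiB blocks, inline_data off) expect one full block, i.e.
# st_blocks == 8; other filesystems or mount options will differ.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"x")
    path = f.name

st = os.stat(path)
print(f"st_size={st.st_size}, allocated={st.st_blocks * 512} bytes")
os.unlink(path)
```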
| Smallest file block size (ext 3, 4) |
1,431,331,661,000 |
After a breaker trip, a Raspberry Pi of mine started to halt boot with a kernel panic (same message as here). This is a Raspberry Pi running Raspbian, so it runs from an SD card, from a main ext4 partition, which I've tried repairing on my PC with:
sudo e2fsck -f -y -v /dev/sdx2
However, this eventually fails with some weird output:
Error writing block 137439060017 (Invalid argument) while getting next inode from scan. Ignore error? yes
Error reading block 183472412950529 (Invalid argument). Ignore error? yes
Force rewrite? yes
Error writing block 183472412950529 (Invalid argument) while getting next inode from scan. Ignore error? yes
Inode 13329, i_size is 4096, should be 549755817984. Fix? yes
Inode 13607, i_size is 69632, should be 137439023104. Fix? yes
Error reading block 36983963385857 (Invalid argument). Ignore error? yes
Force rewrite? yes
Error writing block 36983963385857 (Invalid argument) while getting next inode from scan. Ignore error? yes
Error reading block 179632729097217 (Invalid argument). Ignore error? yes
Force rewrite? yes
Error writing block 179632729097217 (Invalid argument) while getting next inode from scan. Ignore error? yes
Error reading block 17592186080054 (Invalid argument) while reading directory block. Ignore error? yes
Force rewrite? yes
Error writing block 17592186080054 (Invalid argument) while getting next inode from scan. Ignore error? yes
Error storing directory block information (inode=17449, block=0, num=134507168): Memory allocation failed
/dev/sdx2: ***** FILE SYSTEM WAS MODIFIED *****
e2fsck: aborted
/dev/sdx2: ***** FILE SYSTEM WAS MODIFIED *****
There are two things that are worrying here:
the inode sizes and block sizes, which seem ridiculously high (we're talking about a 16 GB SD card),
e2fsck ends with Memory allocation failed, on a PC with 32 GB of RAM, most of which is free. It does actually use up the free RAM before it fails.
I've tried configuring a scratch file directory with the same result (e2fsck does write some files there, and the target directory is on a mount with +250GB free space - it takes up the available RAM, and fails).
It looks like there's some corruption in the fundamental file system parameters on the affected partition. How to diagnose and eliminate it?
|
I had a quick glance through the e2fsck source, and it seems to me there are places where the "Memory allocation failed" error can occur for reasons that might not really be memory allocation errors.
The error string is defined in [src]/lib/ext2fs/ext2_err.et.in in relation to the constant EXT2_ET_NO_MEMORY. This can be returned from various places in the code in [src]/e2fsck/. Here's an example from ea_refcount.c:
errcode_t ea_refcount_increment(ext2_refcount_t refcount, blk_t blk, int *ret)
{
struct ea_refcount_el *el;
el = get_refcount_el(refcount, blk, 1);
if (!el)
return EXT2_ET_NO_MEMORY;
get_refcount_el() is in the same file:
static struct ea_refcount_el *get_refcount_el(ext2_refcount_t refcount,
blk_t blk, int create)
{
int low, high, mid;
if (!refcount || !refcount->list)
return 0;
That's not the only reason it will return null, nor the only reason that looks like it is not directly related to a failed allocation.
To really prove that I'd have to do more digging, but it does fit with your assertion that it did not really exhaust the system memory.
This being the case, perhaps the problem is related to an obscure and unpredictable potential of deranged/damaged SD card controllers, but it still amounts to a bug in e2fsck in so far as some sort of sanity checking or something should be done to catch this, even if it's just to say, "Sorry, your device is screwed" (probably true) vs. "Out of memory" (probably not true). You may want to report this ("In case of bugs in these programs, please contact Ted Ts'o at [email protected] or [email protected]" -- I believe T.T. is a linux kernel dev), and you can reference this Q&A.
Beyond that, IMO you might as well forget whatever is on that card and do a destructive read-write test on it:
badblocks -v -w -b 1048576 -c 16 /dev/sdx
Remember, that's a DESTRUCTIVE test -- you'll be losing all your data. badblocks is not useful for creating an actual badblocks list for an SD card (cards do not report actual physical addresses because of wear levelling), but if the card is borked, it will probably let you know. Testing a 16 GB card this way takes less than an hour.
| After crash, e2fsck fails with weirdly high block numbers/sizes |
1,348,735,162,000 |
I just read this article about the virtually non-existent disk fragmentation on *nix filesystems.
It was mentioned that due to the way ext handles writing data to disk, fragmentation may only begin to manifest on hard drives that are at least 80% full, where the free space between files starts to run out.
On how to deal with this fragmentation, the final paragraph reads:
If you actually need to defragment a file system, the simplest way is probably the most reliable: Copy all the files off the partition, erase the files from the partition, then copy the files back onto the partition. The file system will intelligently allocate the files as you copy them back onto the disk.
That sounds illogical to me, because as far as I understand, when copying all the files back to the erased drive, a similar process should take place: files get written with gradually decreasing amounts of free space between them, to the point where fragmentation manifests again.
Am I right on this one?
|
What you have read is true. File systems become fragmented over time: as you write more of your epic screenplay, add to your music collection, or upload more photos, free space runs low and the system has to split files up to fit them on the disk. In the process described in the excerpt you posted, the final stage, copying the files back onto the recently cleaned disk, is done sequentially; files are written to the file system one after another, allowing the system to allocate disk space in a manner that avoids the conditions that led to fragmentation in the first place.
On some UNIX file systems, fragmentation is actually a good thing - it helps to save space, by allocating data from two files to a single disk block, rather than using up two blocks that would each be less than half filled with the data.
UNIX file systems don't start to suffer from fragmentation until nearly full, when the system no longer has sufficient free space to use as it attempts to shuffle files around to keep them occupying contiguous blocks. Similarly, the Windows defragmenter needs around 15% of the disk to be unused to be able to effectively perform its duty.
| How to fix a fragmented ext disk - myth or truth? |
1,348,735,162,000 |
I tried to install Arch Linux to a USB key. Things are kind of wobbly (it assumes the system has exactly one drive, for example) but everything installed just fine. Until I rebooted.
Booting 'Arch Linux'
root (hd1,1)
Filesystem type unknown, partition type 0x7
kernel /boot/vmlinuz26 root=/dev/dsb2 ro
Error 17: Cannot mount selected partition
(/dev/dsb1, i.e. (hd1,0), is a small FAT partition for data-storing purposes -- for those locked-down lab computers.)
Here's the 'Arch Linux' command sequence:
root (hd1,1)
kernel /boot/vmlinuz26 root=/dev/dsb2 ro
initrd /boot/kernel26.img
At the grub console:
grub> root (hd1,1)
Filesystem type unknown, partition type 0x7
grub> cat /etc/passwd
Error 17: Cannot mount selected partition
grub> root (hd0,1) # my ubuntu partition
Filesystem type is ext2fs, partition type 0x83
grub> cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
<snip/>
I could successfully reboot into my host OS, Ubuntu, and used the Disk Tool to confirm partitioning was successful. Also, running sudo kvm /dev/sdb resulted in a successful Arch boot once I edited the commands to boot from root (hd0,1) (that is, until Arch tried to mount /dev/sdb2).
What did I do wrong?
Edit 1
From Ubuntu, fdisk -l /dev/sdb gives:
Disk /dev/sdb: 1998 MB, 1998585856 bytes
62 heads, 62 sectors/track, 1015 cylinders
Units = cylinders of 3844 * 512 = 1968128 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdb1 1 65 124899 b W95 FAT32
/dev/sdb2 * 66 1015 1825900 83 Linux
I have now read the related wiki section but nothing seems to apply: I didn't use GParted or logical drives; everything is on the same partition.
|
The first thing to check in this situation is if the disk you're trying to boot from is the right one. The ordering of disks can depend on many factors:
In Grub1, you only get access to two hard disks. This is a limitation of the BIOS interface. Which two hard disks you actually get depends on your BIOS settings (look for something like “boot order”) and what disks and other hard-disk-like bootable media (e.g. USB flash drives) you actually have available.
Under Linux, the ordering of sda, sdb, etc., depends on the order in which drives are detected, which at boot time often depends on the order in which the drivers are loaded. Also, whether some disks appear as sd? or hd? depends on kernel configuration options and udev settings.
Here Grub is reporting a partition with type 7. While Linux and Grub don't care about partition types (except for “container” partitions such as extended partitions), it is unusual to have a Linux filesystem on a partition with type 7 (which fdisk describes as HPFS/NTFS). So my guess is that whichever drive your BIOS is offering as the second boot drive (Grub's hd1) is not the disk you want to boot, but some other disk with a Windows partition. Check if hd0 is the drive you want to boot from; if it's not, you'll have to change your BIOS settings.
If Grub recognizes the filesystem in a partition, you can type something like cat (hd1,1)/ and press Tab to see what files are there. This is the usual way of figuring out what filesystems you have where when you're feeling lost at a Grub prompt.
The second thing to check would be whether the partition you're trying to access is the right one — Grub1 counts from 0, Linux and Grub2 count from 1, and unusual situations (such as having a BSD installation) can cause further complications. Adding or removing logical partitions can cause existing partitions to be renumbered in a sometimes non-intuitive way.
If you had the right partition on the right disk, then Filesystem type unknown would indicate that the partition doesn't contain a filesystem that your version of Grub supports. Grub1 supports the filesystems commonly used by Linux (ext2 and later versions, reiserfs, xfs, jfs) but not btrfs (unless you have a recent patch). Grub1 also doesn't support LVM or RAID (except RAID-1, i.e. mirroring, since it looks like an ordinary volume when just reading).
| Arch Linux fails to boot from a USB key (cannot mount selected partition) |
1,348,735,162,000 |
I'm trying to understand e4crypt and fscrypt, and also how they differ. But it is hard to find documentation on e4crypt other than the command line tool man page and some old tutorials.
Is there any documentation on how the kernel side of things work?
I'm mainly interested in the higher level stuff: what is stored where?
The applicable policy and crypto options/algorithm need to be stored somewhere. (In the inode? Extended attributes? Of every file or just the root encrypted directory?)
Also the fscrypt documentation says fscrypt is a kernel-level library that filesystems can use to implement encryption. Does that mean that e4crypt encryption has a separate implementation or do they use the same implementation for the low level encryption stuff?
|
Native filesystem encryption has been supported since Linux 4.1.
The kernel level is nowadays implemented in the fs/crypto directory of the kernel source tree and commonly referred as fscrypt.
e4crypt, part of the e2fsprogs package, is the initial userspace tool that relies on native ext4 filesystem encryption. It was (since it is no longer actively developed) a basic low-level tool.
It is indeed poorly documented (even the code is); only some small howtos are available.
fscrypt is also the name of a high-level tool for the management of Linux native filesystem encryption.
It was designed by Google with the intent to "supersede e4crypt".
To answer your questions in short:
e4crypt, as a userspace tool, has no "kernel side" other than the kernel's native filesystem encryption implementation, which it accesses via syscalls (look at lines 92 & 101)
e4crypt relies on the same native implementation of Linux filesystem encryption as the fscrypt utility from Google does.
| Is there any e4crypt kernel side documentation? |
1,348,735,162,000 |
I used the following alias to back up my root directory on Ubuntu 22.04 LTS to an external flash drive
alias backup='sudo rsync -aAXHS --info=progress2 --delete --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*", "/USBDrive", "/lost+found"} / /USBDrive && notify-send -t 10000 "Backup complete"'
The file system on my usb drive is ext4.
I realize this probably isn't the best way but I was running into some issues and needed to reinstall to see if I could correct the issues.
I came across the following articles and will probably try them in the future as they appear much easier
Copy the entire root directory (/) for transferring OS to new computer?
Backup and restore of root file system (Ubuntu 20.04)
So my question is, what is recommended with the backup I have currently? From what I've read, it looks like a lot of hassle to restore the root directory. Unless someone knows of a fairly easy way to do so, I think I'll just restore the home directory. I'm looking for a way to backup and restore the home directory on any linux distribution (preferably without having to install anything else: this was the motivation for using the rsync command). Is it really just as easy as reversing the source and destination in the rsync command to restore after reinstalling the same OS on the same computer and what if I'm transferring to a new distribution that doesn't have the same parent, e.g. Debian vs Arch Linux? I also have git repositories that I cloned to my home directory. I'm not sure if this makes any difference and if this is normal practice with repositories. I'm also unsure if I will retain the packages installed with apt package manager from my backup if restoring on Ubuntu 22.04 LTS.
Update:
roaima's answer worked for me using a live disk after mounting the target disk. This didn't retain the packages installed with the apt package manager but I expected this. Therefore, you will have to rebuild programs from git repositories. If moving to another OS with a different parent, ex. Arch Linux, the root directory tree is different so I'm not sure if you're restoring more than the home directory.
|
The backup command was missing --numeric-ids but since you're only copying locally it probably doesn't matter.
To restore the backup you swap the source and destination arguments. Remember the --dry-run option while testing.
You can choose to restore just your home directory. Ensure, once the copy is complete, that the owner and group are correct (you installed a new OS, so we shouldn't assume the target user's uid and gid are the same as in the backup):
find /home/whoever -user 12345 -exec chown newuser {} +
find /home/whoever -group 12345 -exec chgrp newgroup {} +
| Restore Rsync Backup Of System On Any Linux Distribution |
1,348,735,162,000 |
I've got an Ubuntu 20 virtual machine running in qemu that uses a qcow2 disk file with another qcow2 disk file as backing store. The VM was built from a recent Canonical-distributed cloud image with cloud-init.
As soon as I start it up, its disk file starts getting bigger and bigger at a rate of about a gigabyte every five minutes.
It's an ext4 file system and no swap is configured. The thin provisioned disk image is configured for 1 TB, with only 4.2 GB actually in use; the disk image itself is 4.4 GB. All the virtual machine is doing is booting and starting its GUI.
"iotop" shows "ext4lazyinit" running, so I think it's initializing inode tables.
When I shut it down and check its disk image with debugfs's "dump_unused", it shows all kinds of random data in its unused disk blocks.
What I don't understand is why it's doing this if the disk is zero'ed, and why it writes all this random clutter.
Are empty inode tables all zero on ext4? Does it perhaps write a small portion of a disk block as an inode table, then the rest of the disk block is filled with junk instead of being all zeros? Any way to get it to write zeros instead of garbage?
|
This is the ext4 lazy_itable_init thread, which zeroes out the inode tables after mount instead of doing it as part of mke2fs. In e2fsprogs commit v1.46.4-25-gbd2e72c5c552 a patch landed (mke2fs: Add extended option for prezeroed storage devices) adding the -E assume_storage_prezeroed option, so that you can tell mke2fs that the device is already zeroed and it should not explicitly overwrite the inode table blocks. That should avoid increasing the size of the disk image, and also avoid extraneous disk IO at initial mount time.
While it appears that this commit would be in the 1.46.5 release, it looks like it is only on the master branch and will likely only be packaged in a 1.47-based release unless you build e2fsprogs from source (either the master branch, or cherry-pick this patch to maint).
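Until a release carrying that option reaches your distribution, a stopgap with stock e2fsprogs is to disable lazy initialization at mkfs time, which stops the post-mount background writes at the cost of writing the inode tables during mkfs instead. A sketch on a scratch image (path and size are assumptions):

```shell
export PATH="$PATH:/sbin:/usr/sbin"   # mkfs.ext4 often lives in sbin

# Format with lazy init disabled, so the ext4lazyinit thread has nothing
# to do after the first mount. Unlike assume_storage_prezeroed (1.47+),
# this still writes the inode tables -- just at mkfs time, not at mount.
truncate -s 64M /tmp/scratch.img
mkfs.ext4 -q -F -E lazy_itable_init=0,lazy_journal_init=0 /tmp/scratch.img
```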
| How can I stop my Linux virtual machine from writing data to empty disk blocks? |
1,348,735,162,000 |
I have a folder /stuff that is owned by root:stuff with setgid set so all new folders' have group set to stuff.
I want it so:
New files have rw-rw----:
User: read and write
Group: read and write
Other: none
New folders have rwxrwx---:
User: read, write, and execute
Group: read, write, and execute
Other: none
If I set default ACLs with setfacl then it seems to apply to both files and folders. For me, this is fine for Other since both files and folders get no permissions:
setfacl -d -m o::--- /stuff
But what do I do for User and Group? If I do something like above then it will be set on all files and folders.
And I can't use umask.
I have a shared drive. I am trying to make it so folks in stuff can read/write/execute but nobody else (Other) can. And I want to make sure that by default files do not get the execute bit set, regardless of what the account's umask is.
|
There is no way to differentiate between files and directories using setfacl alone.
Instead, you can work around the issue by using inotify-tools to detect newly created files/dirs, then apply the correct ACLs to each one (the watch itself is recursive):
1- You have to install inotify-tools package first.
2- Remove all existing ACL entries from /stuff, restoring its default permissions:
sudo setfacl -bn /stuff
3- SetGID
sudo chmod g+s /stuff
4- Execute the following script in the background for testing purposes; for a permanent solution, wrap it in a service.
#!/bin/bash
sudo inotifywait -m -r -e create --format '%w%f' /stuff | while read NEW
do
# when a new dir created
if [ -d "$NEW" ]; then
sudo setfacl -m u::rwx "$NEW"
sudo setfacl -m g::rwx "$NEW"
# when a new file created
elif [ -f "$NEW" ]; then
sudo setfacl -m u::rw "$NEW"
sudo setfacl -m g::rw "$NEW"
fi
# setting no permissions for others
sudo setfacl -m o:--- "$NEW"
done
| How do I set different default permissions for files vs folders using setfacl? |
1,348,735,162,000 |
I was experimenting with encryption on an ext4 filesystem and I encrypted a file (using fscrypt) which was set to be immutable (via chattr +i). I have now lost the encryption key and uninstalled fscrypt.
I would like to delete the file, but when I try to delete it, I get the following error:
# rm foo
rm: cannot remove 'foo': Operation not permitted
and when I try to make it mutable:
# chattr -i foo
chattr: Required key not available while reading flags on foo
Therefore, I believe I cannot delete the file as it is immutable and I cannot change its attributes due to encryption. Any suggestions?
Edit:
I have tried the following and they do not work:
Deleting/modifying the files from a Live USB. The same errors occur.
Trying after removing the encrypt feature, as Ángel suggested. fsck also doesn't throw any errors for some reason.
Output of findmnt (testdir contains foo) and filesystem properties:
$ findmnt --target testdir
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda4 ext4 rw,relatime
# tune2fs -l /dev/sda4 | grep "Filesystem features"
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
|
With the filesystem unmounted, you should be able to use debugfs -w -R "rm path_to_file" /dev/sda4 (the device holding the file) to delete the file.
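As a safe way to try this out, the same command works on a throwaway image file; on the real system you would point debugfs at the unmounted partition instead. debugfs edits the filesystem directly, so the kernel's immutable-flag and encryption-key checks never come into play. One caveat: without the key, an encrypted file may only show up under its encoded (ciphertext) name in debugfs's ls, so check the listing first. The image and file names below are assumptions:

```shell
export PATH="$PATH:/sbin:/usr/sbin"   # debugfs and mkfs.ext4 often live in sbin

# Build a scratch ext4 image and put one file in it via debugfs.
truncate -s 16M /tmp/scratch-fs.img
mkfs.ext4 -q -F /tmp/scratch-fs.img
echo "hello" > /tmp/payload
debugfs -w -R "write /tmp/payload foo" /tmp/scratch-fs.img

# Delete it the same way you would delete the stuck file.
debugfs -w -R "rm foo" /tmp/scratch-fs.img
debugfs -R "ls /" /tmp/scratch-fs.img   # "foo" is no longer listed
```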
| How do I delete an immutable encrypted file? |
1,348,735,162,000 |
Today I bought a new Toshiba 1TB Canvio Ready USB 3.0 Portable External Hard Drive (Black). The specifications page of the portable hard drive says it has been formatted with the NTFS file system and can be re-formatted to the HFS+ file system for full Mac compatibility.
File system
NTFS (MS Windows)
* The drive can be re-formatted to HFS+ file system for full Mac compatibility.
However, I am a GNU/Linux user and I am wishing to re-format the portable external hard drive to ext4 file system. Is it okay to do so?
|
Sure. NTFS and HFS+ were the only filesystems mentioned simply because that's what the vast majority of people purchasing the product are going to use.
This isn't OS-specific, but I would strongly recommend that you always make sure to properly unmount the drive before you disconnect the USB cable. USB drives aren't always as fast as internal drives, and if you disconnect the cable before the drive has completed writing you'll potentially lose data!
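Formatting with mkfs.ext4 might look like the following sketch; it targets a scratch image file so it is safe to run, whereas on the real drive you would target its partition (e.g. /dev/sdb1, after double-checking the device name with lsblk):

```shell
export PATH="$PATH:/sbin:/usr/sbin"   # mkfs.ext4 often lives in sbin

# Stand-in for the drive's partition; -L sets a volume label (the label
# name is an assumption). On the real device: mkfs.ext4 -L canvio /dev/sdb1
truncate -s 64M /tmp/usb-part.img
mkfs.ext4 -q -F -L canvio /tmp/usb-part.img
```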
| Is it okay to format my Toshiba Canvio Ready Portable Hard Drive to "ext4"? |
1,348,735,162,000 |
We have a BeagleBone Black based custom board with 256MB RAM and 4GB eMMC.
We have a script to flash software on the board.
The script erases the GPT partition table using the following commands:
#Delete primary gpt (first 17KiB)
dd if=/dev/zero of=/dev/mmcblk0 bs=1024 count=17
#Delete secondary gpt (last 17KiB)
dd if=/dev/zero of=/dev/mmcblk0 seek=3735535 bs=1024 count=17
The partitions do get deleted; however, the script then re-partitions the eMMC into the same number of partitions.
After that it tries to format each partition using mkfs.ext4 (e2fsprogs version 1.42.13).
Now, while formatting a partition, mkfs.ext4 complains that the partition has a filesystem on it which was last mounted at a particular date in the past, and asks whether it should proceed:
/dev/mmcblk0p15 contains a ext4 file system labelled 'rootfs'
last mounted on /mnt/rfs_src on Fri Feb 16 13:52:18 2018
Proceed anyway? (y,n)
This was not happening in the past, i.e. with e2fsprogs version 1.42.8 the same script used to work.
From the release notes of e2fsprogs 1.42.13 I see that the "last mounted" information was added to some structure.
Now the question is: how can we remove this "last mounted" information from the partition?
I tried wipefs -a but it has the same behavior.
One way is to zero the whole eMMC; however, that will take a lot of time.
Any suggestions/pointers?
|
Thanks to @frostschutz, whose suggestion worked for me.
Just for completeness I am adding it as an answer.
Using the following commands did the trick for me:
wipefs -a /dev/mmcblk0p[0-9]*
wipefs -a /dev/mmcblk0
The first command deletes the filesystem information from each partition;
the second command deletes the partition table.
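The same pair of commands can be tried safely on a scratch image file first, since wipefs also accepts regular files (the image path and label below are assumptions):

```shell
export PATH="$PATH:/sbin:/usr/sbin"   # mkfs.ext4 and wipefs may live in sbin

# Create a scratch "partition" carrying an ext4 signature (same label as
# in the question), then erase the signature the same way.
truncate -s 16M /tmp/fake-part.img
mkfs.ext4 -q -F -L rootfs /tmp/fake-part.img
wipefs /tmp/fake-part.img      # lists the ext4 signature
wipefs -a /tmp/fake-part.img   # erases it; mkfs.ext4 will no longer complain
```

This also explains why the dd commands alone were not enough: they zero the partition table at the start and end of the device, but the ext4 superblock (with its "last mounted" field) lives inside each partition.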
| How to erase gpt partition table and how to make old partition forget mount information |
1,348,735,162,000 |
Fragmentation seems to create a lot of unnecessary seeks when traversing a directory tree on a HDD:
# stat -c %F 00 01 02
directory
directory
directory
# filefrag -v 00 01 02
Filesystem type is: ef53
File size of 00 is 12288 (3 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 0: 428351942.. 428351942: 1:
1: 1.. 2: 428352760.. 428352761: 2: 428351943: last,eof
00: 2 extents found
File size of 01 is 12288 (3 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 0: 428351771.. 428351771: 1:
1: 1.. 2: 428891667.. 428891668: 2: 428351772: last,eof
01: 2 extents found
File size of 02 is 12288 (3 blocks of 4096 bytes)
ext: logical_offset: physical_offset: length: expected: flags:
0: 0.. 0: 428351795.. 428351795: 1:
1: 1.. 2: 428352705.. 428352706: 2: 428351796: last,eof
02: 2 extents found
e4defrag isn't able to defrag them
# e4defrag -v 00
ext4 defragmentation for directory(00)
[1/116] "00"
File is not regular file [ NG ]
So how do I defragment a directory? Not its contents, but the directory itself. The directories are in use, so it should be done atomically, just like defragmenting regular files does not interfere with their use.
|
Since there does not seem to be any online defragmentation tool for directory indices, and even the offline defragmenters don't seem to help, I had to resort to rebuilding the directory tree recursively.
I've written a small tool (defrag-dirs) for that purpose. Alas, that approach requires the application using the directory tree to be taken down during defragmentation, which can take a considerable amount of time when dealing with millions of files.
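The rebuild itself is conceptually a copy-and-swap; here is a minimal sketch (paths are assumptions, and as noted it is not atomic, so the application must be stopped first):

```shell
# Copy the directory to a fresh one, whose index is freshly and compactly
# allocated, then swap it into place and drop the fragmented original.
mkdir -p /tmp/tree/data
echo "x" > /tmp/tree/data/file.txt

cp -a /tmp/tree/data /tmp/tree/data.new
mv /tmp/tree/data /tmp/tree/data.old
mv /tmp/tree/data.new /tmp/tree/data
rm -rf /tmp/tree/data.old
```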
| How to atomically defragment ext4 directories |
1,348,735,162,000 |
I would like to convert my raid6 mdadm into encrypted LUKS. Right now raid6 consists of "/dev/sdX1" which are raid partitions. /dev/md0 doesn't have a partition - it is pure ext4 FS.
Is it safe to reencrypt (cryptsetup-reencrypt /dev/md0)? Will LUKS add some specific header which could cause data loss/FS corrupt? Or is it safe only when you have partition on top of mdadm (i.e. /dev/md0p1)?
|
A volume is called a "LUKS volume" because it has a LUKS header. Thus if you convert a non-LUKS volume into a LUKS volume then you do get an additional header and do lose data space.
The LUKS header can be on a different device (--header) but I do not know whether cryptsetup-reencrypt supports that. But most probably you want to have the LUKS header within the RAID anyway.
Thus you have to
reduce the file system size by at least 4MiB
run cryptsetup-reencrypt with --new and --reduce-device-size
I suggest that you decrease the file system size by a bit more than the value for --reduce-device-size (which I guess must be 4MiB or more).
You may want to overwrite the gap between the encrypted LUKS data and the end of the device with random data afterwards. But be really careful with that. You should first make a backup (to a different volume, of course) of the blocks you are going to overwrite.
| Is it safe to reencrypt unencrypted mdadm array with LUKS? |
1,348,735,162,000 |
I have a script that mounts /dev/sdc1 to /home. I do not know the state of /dev/sdc1 in advance; it could possibly be "dirty". Do I need to run fsck.ext4 before mounting the filesystem, or does mount perform some checks that will prevent a "dirty" filesystem from being mounted and possibly corrupting data?
Or even better, is there some way to tell mount to check the filesystem when mounting?
|
There used to be an option to check ext2 filesystems at mount time, but that is no longer supported. Nowadays boot scripts check filesystems before mounting them, and your scripts should do so too. Mounting a filesystem does still check things to make sure it's safe to mount the filesystem; but it won't fix anything (beyond replaying the journal on ext3 or ext4 filesystems).
You should use fsck -p to perform these checks; the -p option tells e2fsck to fix anything that can be fixed safely without human intervention. If an error occurs requiring human intervention, e2fsck will exit with an appropriate exit code, and your script needs to take those into account as well.
See the mount(8), fsck(8) and fsck.ext4(8) manpages for more details. You might find the source code of ext4_fill_super() interesting; that's the code which mounts an ext4 filesystem.
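The check-then-mount flow might look like the following sketch; it is exercised here on a scratch image so it can run unprivileged, while a real script would target /dev/sdc1 and then mount it (the image path is an assumption):

```shell
export PATH="$PATH:/sbin:/usr/sbin"   # mkfs.ext4 and fsck.ext4 often live in sbin

# Stand-in for /dev/sdc1: e2fsck happily checks image files too.
truncate -s 16M /tmp/sdc1.img
mkfs.ext4 -q -F /tmp/sdc1.img

fsck.ext4 -p /tmp/sdc1.img
status=$?
# Exit 0 = clean, 1 = errors corrected automatically; anything higher
# means human intervention is needed, so refuse to mount.
if [ "$status" -gt 1 ]; then
    echo "fsck exit $status: not mounting" >&2
    exit 1
fi
echo "safe to mount"   # here your script would run: mount /dev/sdc1 /home
```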
| do I need fsck.ext4 before mounting a filesystem? |