date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,594,968,503,000 |
I am seeking a method of getting the crtime of a file in hexadecimal or decimal in unix epoch seconds instead of in a date and clock format and with no additional output.
This command adds additional text at the top of the output (such as "debugfs 1.42.12 (29-Aug-2014)") that's impossible to remove with grep, sed, etc.
debugfs -R 'stat <7473635>' /dev/sda7 | grep ctime
This command gives the modification time in unix epoch seconds.
date -r default.txt +%s
Additionally, all the other posts I've looked at that do get the crtime of files print it as a date and time as opposed to unix epoch time. In conclusion, how do I get only the creation time of a file on an ext4 fs in unix epoch seconds?
|
You can use grep with PCRE (-P) to extract the desired portion and use it as input for date:
date --date="$(sudo debugfs .... |& grep -Po '^crtime.*-\s\K.*$')" '+%s'
Or
date --date="$(sudo debugfs .... 2>/dev/null | grep -Po '^crtime.*-\s\K.*$')" '+%s'
For example:
$ date --date="$(sudo debugfs -R 'stat <677051>' /dev/sda3 |& grep -Po '^crtime.*-\s\K.*$')" '+%s'
1442488264
You can also use sed:
date --date="$(sudo debugfs .... |& sed -n 's/^crtime.*- \(.*\)$/\1/p')" '+%s'
Or
date --date="$(sudo debugfs .... 2>/dev/null | sed -n 's/^crtime.*- \(.*\)$/\1/p')" '+%s'
For example:
$ date --date="$(sudo debugfs -R 'stat <677051>' /dev/sda3 |& sed -n 's/^crtime.*- \(.*\)$/\1/p')" '+%s'
1442488264
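If you want to sanity-check the extraction without root access, the grep part can be exercised on a sample line first. The crtime line below is made up for illustration (real input comes from debugfs), and TZ is pinned to UTC only to make the result reproducible:

```shell
# Hypothetical crtime line, in the format debugfs prints:
line='crtime: 0x55fa8448:9c40ba48 -- Thu Sep 17 13:11:04 2015'
# Keep everything after the '-- ' separator, then convert to epoch seconds.
epoch=$(TZ=UTC date --date="$(printf '%s\n' "$line" | grep -Po '^crtime.*-\s\K.*$')" '+%s')
echo "$epoch"   # 1442495464
```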
| how to get crtime of a file in an ext4 partition as a single number or string |
1,594,968,503,000 |
I'm trying to set up a custom Linux installation on an Intel Atom (Baytrail) based Android tablet, using Qt 5.5 for device creation. The build system is based on the Yocto project and builds an embedded Linux image. In order to run this image on the tablet (which is originally an Android tablet), I'm replacing the boot partition with an image containing the kernel, initramfs and initial boot script, and replacing the system partition with the full image, then flashing these to the device with the Intel Manufacturing Tool.
So far, I have the device booting into my new kernel with the initramfs, and running the init script. The issue comes when trying to mount the main partition on the embedded flash. The command to mount the system partition fails with "Invalid Argument".
A cat /proc/filesystems shows that ext4 is supported, and parted -l shows the partitions on the internal MMC are all ext4, with the exception of the first, which is the EFI boot partition. I can't mount any of the ext4 partitions, but I can mount the EFI partition, so I think that means the whole MMC should be accessible.
Running fdisk -l only shows the first partition (the EFI boot partition), but I think that is because fdisk doesn't support GPT.
Does anyone know why I wouldn't be able to mount the ext4 partitions? They are all listed in /dev as:
mmcblk0
mmcblk0p1
mmcblk0p2
mmcblk0p3
mmcblk0p4
mmcblk0p5
mmcblk0p6
mmcblk0p7
mmcblk0p1 is mountable, and is the EFI boot partition.
Sorry, I can't post any of the actual output; the battery on the device died just as I started writing this, so it's all from memory. I should be able to get some actual output from the commands, if needed, once it's charged again.
Update
So I recompiled Busybox, enabling GPT support in fdisk, and fdisk lists the partitions. I also installed TestDisk on the device, and can browse the filesystem using TestDisk. Trying to mount the partitions listed under /dev/mmcblk0p(2 - 7) still doesn't work, but I can successfully mount a partition by getting the start sector from fdisk -l, then setting up a loop device via losetup -o (Start Sector * Sector Size) /dev/loop0 /dev/mmcblk0, then finally mounting /dev/loop0. Why do I have to go through this method instead of being able to just mount /dev/mmcblk0p2 etc.?
|
OK, so it turned out the issue was that not all the partitions were being listed under /dev. The eMMC has 15 partitions on it, but only 1 - 7 were listed. I thought that 1 - 7 were just the ext4 partitions, and that the other partitions (which aren't formatted to ext4) just wouldn't show up there. So when I thought I was mounting the ext4 partitions, it was trying to mount these others, which it couldn't, hence the error. The problem stemmed from the kernel config, particularly CONFIG_MMC_BLOCK_MINORS, which defaults to 8 I think, so only the first few partitions were showing up. I recompiled the kernel with the value set to 20, and the rest of the partitions show up under /dev/mmcblk0p8, 9, 10, etc., and I can mount them just fine.
| Trying to mount ext4 partition from eMMC gives "Invalid Argument" error |
1,594,968,503,000 |
Recently had an LVM'd CentOS 6.5 install get accidentally cold-shutdown. On bootup, it says that the home partition needs fsck:
/dev/mapper/vg_myserver-lv_home: Block bitmap for group 3072 is not in group. (block 3335668205)
/dev/mapper/vg_myserver-lv_home: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
...but I guess the root partition is OK, since it gives me a shell there. So we run e2fsck -b 32768 /dev/mapper/vg_myserver-lv_home and after saying Yes to various fixes, on Pass 5 it just prints endless numbers to the screen, very fast. Once in a while it will print them in neat columns, and if these are block numbers, after a couple of hours we are still nowhere near 2% done of our 1.2 TB LV.
I read that you can use clear_mmp with tune2fs, but upon trying that, it doesn't accept clear_mmp nor list it among valid options.
My question is, how does everyone deal with a corrupt ext4 fs without weeks of downtime? Does everyone face this dilemma of weeks of downtime vs rebuilding your server / lost data? If so, why does anyone use or recommend ext4? Is there some trick I'm missing that would let me target the specific block/group it's complaining about, so we can get on with it and mount the home fs again?
|
Run e2fsck -y /dev/mapper/vg_myserver-lv_home to say yes to all questions automatically instead of having to manually say yes a million times.
| targeting a specific block with e2fsck to shorten wait |
1,383,063,470,000 |
I had a disk with a full size NTFS partition. I just deleted it and created an EXT4 one.
When it was NTFS, if it wasn't in use (mounted but not in use) it was quiet. However, now that it is EXT4, there is constant reading and I don't know why.
EXT3 is also fine.
Any idea?
|
This would have been the "lazy initialization" feature of Ext4 which zeroes out the inode tables on the first mount after creating the file system.
This allows the file system to be created faster, but runs a kernel thread called "ext4lazyinit" in the background. You can confirm if this is happening by running "ps aux | grep ext4lazyinit".
The process may take several hours to complete, depending on the size of the file system.
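A hedged example of the check (the brackets in the grep pattern are a common trick so grep does not match its own command line; the mkfs options in the comment are the documented way to skip lazy initialization at creation time):

```shell
# Report whether the lazy-init kernel thread is currently active.
status=$(ps aux | grep '[e]xt4lazyinit' || echo "ext4lazyinit not running")
echo "$status"
# To avoid the background zeroing entirely, at the cost of a slower mkfs:
#   mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/sdXN
```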
| Constant reading using ext4 |
1,383,063,470,000 |
The other day I had a script error which wrote 4 million small text files to my home directory:
I've accidentally written 4 million small text files to a folder, how best to get rid of them?
I deleted those files, but since then whenever I hit tab to complete a filename or path there's a half second delay before anything happens.
Although the files are now deleted, I assume there's some lasting damage to the gpt or similar? Are there any useful tools I can use to clean this up?
The filesystem is ext4 (two 3TB drives in RAID 1) and I'm running CentOS 7.
% ls -ld "$HOME"
drwx------. 8 myname myname 363606016 Nov 18 09:21 /home/myname
Thank you
|
As mentioned in the comments, your home directory itself is huge, and won’t shrink again. Scanning your home directory’s contents will involve reading a lot of data, every single time (from cache or disk).
To fix this, you need to re-create your home directory:
log out, log in as root, and make sure no running process refers to your home directory:
lsof /home/myname
copy your home directory:
cd /home
cp -al myname myname.new
rename your home directory out of the way:
mv myname myname.old
rename your new home directory:
mv myname.new myname
You can log back in now. Your shiny, new home directory will only occupy the space it really needs, and file operations should be as fast as you expect. cp -al ensures that all files are available under the new directory, but it uses hard links so that no additional space is taken (apart from the directory structure). Because of the hard links, any changes made to files in one of the directories are reflected in the other directory, but you can safely remove myname.old.
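If you want to see the cp -al behaviour before touching your real home directory, it can be tried in a scratch location (all paths below are throwaway):

```shell
demo=$(mktemp -d)            # throwaway playground, not a real home directory
mkdir "$demo/myname"
echo data > "$demo/myname/file"
cd "$demo"
cp -al myname myname.new     # hard-link copy: same inodes, no extra data blocks
mv myname myname.old
mv myname.new myname
stat -c '%h' myname/file     # link count is 2: one entry in each directory
```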
A similar approach can be used for any directory which used to contain a large number of files, although in most other cases you won’t need to log out first.
| Bash tab completion slow after accidentally writing (but then deleting) millions of files to a directory |
1,383,063,470,000 |
I have a USB stick encrypted with LUKS + Ext4. I have forgotten the password...
However, I know which words will be included in the password and have a list of all permutations of those words. About 10,000 permutations.
Instead of me trying each and every permutation 1 by 1 manually (which will be a long, slow, and painfully tedious process), is it possible to automate this process? I know this sounds like some sort of malicious brute force attack, but it's not. If I wanted something like that, I could have easily downloaded some dodgy software from the internet.
Instead, I want to use something which is safe on my computer, a script (or any safe solution) which is custom built for me specifically.
Is this possible?
|
Well, in the most naive case you can roughly do something like
for a in 'fo' 'foo' 'fooo'
do
for b in 'ba' 'bar' 'baar'
do
for c in 'bz' 'baz' 'bazz'
do
echo -n "$a$b$c" | cryptsetup open /dev/luks luks \
&& echo "'$a$b$c' is the winner!" \
&& break 3
done
done
done
and it goes through all the puzzle pieces ... foobarbz foobarbaz foobarbazz ... etc. in order. (If you have optional pieces, add '' empty string. If your pieces are in random order, well, think about it yourself).
To optimize performance, you can:
patch cryptsetup to keep reading passphrases from stdin (lukscrackplus on github for one such example but it's dated)
generate the complete list of words, split it into separate files, and run multiple such loops (one per core, perhaps even across multiple machines)
compile cryptsetup with a different/faster crypto backend (e.g. nettle instead of gcrypt), difference was huge last time I benchmarked it
find a different implementation meant to bruteforce LUKS
But it's probably pointless to optimize if you have either too few possibilities (you can go through them in a day without optimizing) or way too many (no amount of optimizing will be successful).
At the same time, check:
are you using the wrong keyboard layout?
is the LUKS header intact?
(with LUKS1 there is no way to know for sure, but if you hexdump -C it and there is no random data where it should be, no need to waste time then)
There's also a similar question here: https://security.stackexchange.com/q/128539
But if you're really able to narrow it down by a lot, the naive approach works too.
| Automate multiple password entries to decrypt LUKS + Ext4 USB stick |
1,383,063,470,000 |
Is it possible to find out when a filesystem was created on a disk (date and time)?
We tried the following (on the sdb disk):
tune2fs -l /dev/sdb | grep time
Last mount time: Mon Aug 1 19:17:48 2022
Last write time: Mon Aug 1 19:17:48 2022
but we get only the last mount and last write times
what we need is when the filesystem was created by the mkfs command
from lsblk -f we get:
lsblk -f | grep sdb
sdb ext4 cc0f5da9-6bbc-42ff-8f5a-847497fd993e /data/sdb
so what we actually need is when mkfs was run (date & time)
|
Typically a device /dev/sdb contains a partition table, not a filesystem. It's each individual partition that would contain a filesystem. However, since your example uses /dev/sdb itself I'll also use that here.
Using your own tune2fs command and looking at the output:
tune2fs -l /dev/sdb
it's possible to see by inspection that there is a creation date. For example,
Filesystem created: Fri Jul 1 13:11:44 2016
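To extract just the timestamp, a sed filter works; it is shown here against a sample line (in practice you would pipe tune2fs -l /dev/sdb through it):

```shell
# Sample line, in the format tune2fs -l prints:
line='Filesystem created:       Fri Jul  1 13:11:44 2016'
created=$(printf '%s\n' "$line" | sed -n 's/^Filesystem created:[[:space:]]*//p')
echo "$created"   # Fri Jul  1 13:11:44 2016
```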
| linux + is it possible to understand when filesystem was created on disk |
1,383,063,470,000 |
I have an ext4 formatted disk with thousands of files that are generated automatically and are needed. A few thousand of them are only one byte long, some two bytes. All files in both groups of tiny files are identical.
How much space can I save by locating these, say 1000, files of 1 byte in length, removing each and hard-linking to a single representative file?
Like this:
# ls -l
-rw-r----- 1 john john 1 Feb 25 10:29 a
-rw-r----- 1 john john 1 Feb 25 10:29 b
-rw-r----- 1 john john 1 Feb 25 10:29 c
# du -kcs ?
4 a
4 b
4 c
12 total
Try to consolidate:
# rm b c
# ln a b
# ln a c
# ls -l
total 12
-rw-r----- 3 john john 1 Feb 25 10:29 a
-rw-r----- 3 john john 1 Feb 25 10:29 b
-rw-r----- 3 john john 1 Feb 25 10:29 c
# du -kcs ?
4 a
4 total
(Please note that du does not even list b and c which I find curious).
Question: Is it really that easy and one can save 999*4 KiB in my 1000 file scenario if an allocation block is 4 KiB in size?
Or, does ext4 have the ability to transparently "merge tails", or store tiny files in the "directory inode" (I vaguely remember some filesystems can do that)?
(I know file allocation blocks can vary and a command like tune2fs -l /dev/sda1 can tell me.)
|
There are three parts to storing files: the blocks used to store the file contents, the inode used to store the file’s metadata, and the directory entry (or entries) pointing to the inode.
When you create multiple separate files, in the most general case you pay this cost as many times as there are files.
With inline data (if your file system was created with the appropriate options), you save the blocks used to store the file contents if the file is small enough, but you still need one inode per file and at least one directory entry per file.
With hard links, you save the blocks used to store the file contents and the inodes: there’s only one inode, one instance of the file data (whether inline in the inode or separate), and as many directory entries as links.
Given that you need to store the directory entries anyway, hard links are effectively free. Anything else will involve more storage; exactly how much depends on your file system’s specific settings.
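Assuming, as stated in the question, that all the one-byte files are identical, the consolidation itself can be sketched with find and ln. The demo below sets up three identical files in a scratch directory; in real use you would run only the two find commands, in the affected directory, after taking a backup:

```shell
cd "$(mktemp -d)"                  # scratch directory for the demo
printf x > a; printf x > b; printf x > c
# Pick one representative 1-byte file, then re-link all the others to it.
# Assumes every 1-byte file in this directory has identical contents!
rep=$(find . -maxdepth 1 -type f -size 1c | head -n 1)
find . -maxdepth 1 -type f -size 1c ! -samefile "$rep" \
    -exec ln -f "$rep" {} \;
stat -c '%h' a                     # 3: a, b and c now share one inode
```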
| How much space can I save on ext4 by replacing 1000 identical 1-byte files with 999 hard-links and 1 file? |
1,383,063,470,000 |
At the time of Linux installation, I specified only one filesystem (/dev/sda1 -> ext4 -> /). But for dev, run, proc and sys, Linux creates additional filesystems, which is inferable from mount.
$ mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,nosuid,noexec,relatime,size=12138104k,nr_inodes=3034526,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=2433824k,mode=755)
/dev/sda4 on / type ext4 (rw,relatime,errors=remount-ro)
...
I am aware that /proc is a virtual FS and is in memory and not on any HDD/SSD. Could someone explain what the case is with /dev, /run and /sys? Do they exist on the HDD (and if so, what is their location, if it can be meaningfully traced)?
Related, already-asked question: why are the inode numbers of /dev and /run the same as those of /?
|
The mount output lists the file system types:
/dev is a devtmpfs (a virtual file system exporting device nodes)
/run is a tmpfs (a virtual memory file system)
/sys is a sysfs (a virtual file system exporting kernel objects)
All of these live in memory, not on your drives. man 5 proc, man 5 tmpfs and man 5 sysfs will show you the documentation for these, or you can follow the links above.
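You can confirm for yourself that these are not disk-backed by asking for the filesystem type, e.g. with GNU stat (shown for /proc; the same works for /dev, /run and /sys):

```shell
# %T prints the filesystem type name; for /proc this is the virtual procfs.
fstype=$(stat -f -c '%T' /proc)
echo "$fstype"   # proc
```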
| Do /dev, /run and /sys exist on the HDD (and if so, what is their location, if it can be meaningfully traced)? |
1,383,063,470,000 |
I have Debian 10 with the latest Proxmox installed.
2 SSD with:
sda1: EFI
sda2: raid1 (/dev/md0)
sda3: swap
sdb1: EFI (clone of sda1)
sdb2: raid1 (/dev/md0)
sdb3: swap
After an update, I wanted to clone /dev/sda1 to /dev/sdb1 with dd, but I made an error and typed dd if=/dev/sda of=/dev/sdb. I cancelled it with Ctrl+C, did the right command dd if=/dev/sda1 of=/dev/sdb1 and rebooted.
The system seems to work, but I have these errors in my log:
Feb 02 20:50:01 Yggdrasil kernel: EXT4-fs error (device loop0): ext4_lookup:1704: inode #174282: comm php: deleted inode referenced: 174769
Feb 02 20:50:01 Yggdrasil kernel: EXT4-fs error (device loop0): ext4_lookup:1704: inode #174282: comm php: deleted inode referenced: 174769
Feb 02 20:50:01 Yggdrasil kernel: EXT4-fs error (device loop0): ext4_lookup:1704: inode #174282: comm php: deleted inode referenced: 174769
Feb 02 20:50:01 Yggdrasil kernel: EXT4-fs error (device loop0): ext4_lookup:1704: inode #174282: comm php: deleted inode referenced: 174769
Feb 02 20:50:01 Yggdrasil kernel: EXT4-fs error (device loop0): ext4_lookup:1704: inode #174282: comm php: deleted inode referenced: 174769
Feb 02 20:50:01 Yggdrasil kernel: EXT4-fs error (device loop0): ext4_lookup:1704: inode #174282: comm php: deleted inode referenced: 174769
Feb 02 20:50:01 Yggdrasil kernel: EXT4-fs error (device loop0): ext4_lookup:1704: inode #174282: comm php: deleted inode referenced: 174769
Feb 02 20:50:01 Yggdrasil kernel: EXT4-fs error (device loop0): ext4_lookup:1704: inode #174282: comm php: deleted inode referenced: 174769
Feb 02 20:50:01 Yggdrasil kernel: EXT4-fs error (device loop0): ext4_lookup:1704: inode #174282: comm php: deleted inode referenced: 174769
Feb 02 20:50:01 Yggdrasil kernel: EXT4-fs error (device loop0): ext4_lookup:1704: inode #174282: comm php: deleted inode referenced: 174769
I ran fsck in rescue mode but it didn't seem to find these errors, as they continue to pop up in the logs.
Can anyone help me?
|
Explaining The Error:
That error does not indicate a problem with a physical disk; loop0 is a loopback device, which is a block storage device that uses a file on disk as a backing store. A disk within a disk, you might say. These loopback devices have their own filesystems and sometimes their own partition tables, so running fsck on the physical disks which hold them will not have any effect.
Solution
Find the file which backs the loop device with losetup -a | grep loop0, and run fsck on that file.
| EXT4-fs error on loop0 |
1,383,063,470,000 |
I have a RHEL 9 with an ext4 fs
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 1258291199 1258289152 600G 83 Linux
It has been requested that it be ext4 with inodes for 100,000,000+ files.
I thought I could just run mkfs.ext4 -N 2000000000 /dev/sdb1
to get more than enough inodes. However, when I mount the partition
mount /dev/sdb1 /fileshare
the /fileshare shows up as only 92 GB.
When I run mkfs.ext4 /dev/sdb1 and then mount it, /fileshare shows as 560 GB, which is what I want. How can I get the inode count that I need to accommodate that large file count (100 million to 500 million) but still have the 560 GB disk size?
|
Why -N 2000000000? Your question says your target is around the 100 million mark, not the 2000 million mark - you're asking for 20x more inodes than you need. Fix that and you might get a reasonable result.
However, there is a bigger issue here with regard to file size. Although at a minimum of 1kB/file you're going to need only around 125GB storage to hold that many files (including an approximation for the inodes, at 4x per kB), if we go with your example's 2x10^9 inodes then that's going to be 2.25TB. At least.
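A less error-prone way to size this is mkfs.ext4's -i bytes-per-inode ratio instead of an absolute -N count. The arithmetic for the 600 GB partition (the mkfs command in the comment is illustrative, not something this snippet runs):

```shell
# One inode per 4096 bytes on a 600 GB partition:
part_bytes=$((600 * 1024 * 1024 * 1024))
inodes=$((part_bytes / 4096))
echo "$inodes"   # 157286400 -- comfortably above the 100 million target
# The corresponding format command would be:
#   mkfs.ext4 -i 4096 /dev/sdb1
```

Note that the inode tables themselves then consume around 37.5 GiB (at the default 256 bytes per inode), which is part of why the -N 2000000000 attempt left so little usable space.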
| Setting up ext4 filesystem to accomodate 100,000,000 files iNode issues |
1,383,063,470,000 |
Today I learned that there had been a faulty Debian kernel version which caused ext4 data corruption (bug 1057843) in December 2023.
Searching through the /var/log/aptitude and /var/log/apt logs, I noticed that the faulty kernel version was installed for one full day by /usr/bin/unattended-upgrade.
The chronology:
09 Sep 2023 17:53 Rebooted system by hand
07 Oct 2023 20:02 Upgrade via "aptitude" by hand: linux-image-amd64:amd64 6.1.52-1 -> 6.1.55-1
10 Dec 2023 06:41 Unattended "apt" upgrade: linux-image-amd64:amd64 6.1.55-1 => 6.1.64-1 (installed faulty version)
11 Dec 2023 07:00 Unattended "apt" upgrade: linux-image-amd64:amd64 6.1.64-1 => 6.1.66-1 (installed fixed version)
16 Dec 2023 12:23 Upgrade via "aptitude" by hand: linux-image-amd64:amd64 6.1.66-1 -> 6.1.67-1
18 Dec 2023 18:39 Rebooted system by hand
Although the faulty kernel version was installed at December, 10th, the system was not rebooted.
Can I assume that I am not affected by the data corruption bug, since the faulty kernel was never booted?
I am not 100% sure if the ext4 filesystem code is fully embedded in the kernel, or if changes to the ext4 module can apply on a running system.
|
As far as I understand, you are safe from this bug.
The only way to have the ext4 module changes apply to the currently running non-buggy kernel would have been to first unmount all ext4 filesystems, then unload the old ext4 module and force-load the module from the buggy kernel version (overriding the kernel's preference to load the older version of the module intended for that particular kernel version), then remount all filesystems. If your root filesystem is ext4, it would be even more complicated.
No distribution I know has ever done anything like this, as it would cause a similar interruption to applications as a reboot would, so there would be no benefit.
While any ext4 filesystems are mounted, the current version of the ext4.ko kernel module is in use and cannot be unloaded.
The Debian 12 kernel seems to include the CONFIG_LIVEPATCH option, which would allow the patching of running kernel/module code, but it would require having specific livepatch modules provided for the specific kernel version that is going to be patched. As far as I know, Debian has not actually used this feature.
Anyway, if you have any livepatches applied, you should see them listed as extra kernel modules (presumably named like livepatch-<something>.ko), and also in /sys/kernel/livepatch/.
| Debian file corruption bug possible without reboot after unattended kernel update? |
1,383,063,470,000 |
From man chattr
When a file with the 'A' attribute set is accessed, its atime record
is not modified. This avoids a certain amount of disk I/O for laptop
systems.
However when I am remounting a filesystem with the noatime mount option:
[root@localhost ~]# mount -o remount,noatime /dev/sdb1 /newfs/
creating a file in it
[root@localhost ~]# cd /newfs/
[root@localhost newfs]# touch myfile
and getting its file attributes:
[root@localhost newfs]# lsattr myfile
-------------e-- myfile
the A file attribute is not set despite the fact.
Is this the expected behavior?
|
Yes, this is expected: the two behaviours are orthogonal. Setting the A attribute on a file ensures that its access time is never updated, irrespective of mount options. Mounting a file system with noatime ensures that no access time is updated on it, irrespective of file attributes.
Mounting a file system with a given set of options doesn’t affect any related attributes on files created while the options are active; thus files created with noatime active don’t have the A attribute automatically set, just like it’s possible to create device nodes on a file system mounted with nodev, or executables on a file system mounted with noexec.
| Setting noatime via mount options vs no atime updates (A) file attribute |
1,383,063,470,000 |
I've been Googling about, and it seems the answer is 'no' from anecdotal reports for gparted. However does this apply to parted as well?
I'm not talking about risk factors here involved by inputting the wrong partition, fat fingering a button, power cuts etc - I mean direct effects only.
How does parted know how much 'space' is available? I just see the following output:
GNU Parted 3.2
Using /dev/sdd
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) p
Model: ATA OCZ-VERTEX3 (scsi)
Disk /dev/sdd: 60.0GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 316MB 315MB primary ext4
2 316MB 60.0GB 59.7GB primary ext4
While on gparted I see the following (It's showing up with this drive as /dev/sde after a reboot):
It has some functionality to prevent me from 'resizing' the partition too small (and hence prevent data loss, I assume).
|
gparted and parted may have similar names but they do (very) different things. gparted is standalone software with a distinct set of features and explicitly not (just) a GUI frontend to parted, even though it's labelled as such in many places.
How does parted know how much 'space' is available?
parted does not know nor care (anymore) in the least about your filesystems. It might display the filesystem type for convenience and orientation of the user only, but there is no filesystem related functionality.
When you grow a partition in parted, the filesystem does not grow along with it. You have to do this yourself. (e.g. resize2fs after growing the partition).
When you shrink a partition in parted, you have to make sure beforehand that the filesystem will not take offense. (e.g. resize2fs before shrinking the partition).
If you want to move a partition to a different start sector, then parted does nothing to help you with the relocation logic whatsoever. (if you REALLY know what you're doing, you could do this manually, but you probably shouldn't).
Is it possible to lose data with any of these? Yes, of course.
You should always have a backup.
| Does parted have the same functionality as gparted for shrinking an ext4 partition? |
1,383,063,470,000 |
Suppose there's a hard drive /dev/sda, and both that:
/dev/sda1 is a single ext4 partition taking up the whole disk, and it's mostly empty of data.
dumpe2fs -b /dev/sda1 outputs the badblocks list, which in this case contains a single high number b representing a bad block near the end of /dev/sda; b is fortunately not part of any file.
Now a swap partition needs to be added to the beginning of /dev/sda1, and gparted (v0.30.0-3ubuntu1) is used to:
Resize (shrink) sda1, so that it starts several gigabytes later, but ends at the same place.
Add a swap partition in the gap left by shrinking sda1.
So gparted finishes the job and we run dumpe2fs -b /dev/sda1 again. What happens? Does it...?
Output nothing, meaning the resize forgot the bad block.
Output b unchanged.
Output b + o where o is an offset corresponding to where the just shrunk /dev/sda1 now begins.
NOTE: To simplify the question, suppose that the hard disk in question has no S.M.A.R.T. firmware. (Comments about firmware are off-topic.)
|
GParted doesn’t take any ext2/3/4 badblocks list into account; I checked this by creating an ext4 file system with a forced bad block, then moving it using GParted. Running dumpe2fs -b on the moved partition shows the bad block at the same offset.
The result is option 2 above: the bad block ignored by the file system no longer corresponds to the real bad block on the medium. This means that the file system ignores a block it could safely use, and is liable to use the bad block it should avoid.
This does make sense, at some level. When GParted (or any other tool) moves a partition, it doesn’t use a file system-specific tool, it moves the container. This works in general because file system data is relative to its container; usually the file system data structures don’t need to be updated as a result of a move. However bad block lists describe features which don’t move with their container... Making GParted handle this would be quite complex: not only would it have to update the bad blocks list itself, it would also have to move data out of the way so that the bad block’s new position in the moved file system is unused.
| Does gparted make good use of badblocks lists? |
1,383,063,470,000 |
Let's say I run rm -Rf on a very large folder with many files and folders of different size, user permissions etc.
I would like to know, does the rm command first accumulate the list of files to delete and only after it scans the whole folder for these files it starts to delete?
or does it actually delete each file as soon as it hits during the command duration?
For example, imagine you run the rm -Rf / command and after 5 seconds you terminate it, will it delete anything meanwhile?
The fs on that particular mounted folder is ext4.
|
If you run rm -Rf /, rm will output an error message and stop, as specified by POSIX:
if an operand resolves to the root directory, rm shall write a diagnostic message to standard error and do nothing more with such operands.
In other cases, or if you force rm to process / (assuming your version can be forced, e.g. GNU rm with the --no-preserve-root option), rm deletes files and directories as soon as it can. It processes directories in depth-first order, so that it can delete directories as they are emptied. So in your five seconds, it is likely to delete files and directories.
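You can watch this incremental, depth-first behaviour with GNU rm's -v flag (a GNU extension, not POSIX) in a scratch directory:

```shell
d=$(mktemp -d)
mkdir -p "$d/a/b"
touch "$d/a/b/f"
# Each entry is logged as it is removed: the file first, then its parent
# directories as they become empty.
out=$(rm -Rfv "$d/a")
printf '%s\n' "$out"
```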
This is specified by POSIX (see the above link):
For each file the following steps shall be taken:
If the file does not exist:
a. If the -f option is not specified, rm shall write a diagnostic message to standard error.
b. Go on to any remaining files.
If file is of type directory, the following steps shall be taken:
a. If neither the -R option nor the -r option is specified, rm shall write a diagnostic message to standard error, do nothing more with file, and go on to any remaining files.
b. If file is an empty directory, rm may skip to step 2d. If the -f option is not specified, and either the permissions of file do not permit writing and the standard input is a terminal or the -i option is specified, rm shall write a prompt to standard error and read a line from the standard input. If the response is not affirmative, rm shall do nothing more with the current file and go on to any remaining files.
c. For each entry contained in file, other than dot or dot-dot, the four steps listed here (1 to 4) shall be taken with the entry as if it were a file operand. The rm utility shall not traverse directories by following symbolic links into other parts of the hierarchy, but shall remove the links themselves.
d. If the -i option is specified, rm shall write a prompt to standard error and read a line from the standard input. If the response is not affirmative, rm shall do nothing more with the current file, and go on to any remaining files.
If file is not of type directory, the -f option is not specified, and either the permissions of file do not permit writing and the standard input is a terminal or the -i option is specified, rm shall write a prompt to the standard error and read a line from the standard input. If the response is not affirmative, rm shall do nothing more with the current file and go on to any remaining files.
If the current file is a directory, rm shall perform actions equivalent to the rmdir() function defined in the System Interfaces volume of POSIX.1-2017 called with a pathname of the current file used as the path argument. If the current file is not a directory, rm shall perform actions equivalent to the unlink() function defined in the System Interfaces volume of POSIX.1-2017 called with a pathname of the current file used as the path argument.
If this fails for any reason, rm shall write a diagnostic message to standard error, do nothing more with the current file, and go on to any remaining files.
The rm utility shall be able to descend to arbitrary depths in a file hierarchy, and shall not fail due to path length limitations (unless an operand specified by the user exceeds system limitations).
| What is the actual sequence of steps during rm -Rf on a very large folder? |
1,383,063,470,000 |
I recently had a couple of accidents with my disks formatted as ext4. To be honest, I believe the failure was on my side, because one of them was due to incorrect [manual] unmounting of a flash card, and the other was related to the power being switched off. The net effect is that I physically lost a 128 GB flash card [with money-related data] and the information on a 2 TB HDD [with time-related data]. My main concern is that such damage NEVER happened to disks partitioned as NTFS, whatsoever.
My questions are:
Is ext4 safe in general? I mean, is it me, or are there other people who have experienced loss of disks/information on disks formatted as ext4?
In the Linux world, what could be a safer alternative to ext4 that can survive an unexpected power cut or unexpected unmounting?
|
My main concern is that such damage NEVER happened to disks partitioned to NTFS, whatsoever.
It may have never happened to you, but it has happened. The only filesystems that can claim things like this never happening are those that have never been exposed to such conditions. Even BTRFS and ZFS, which are both designed to be resilient against stuff like this, can have such issues.
To your actual questions though:
Is ext4 safe in general? I mean, is it me, or are there other people who have experienced loss of disks/information on disks formatted as ext4?
It depends on what you mean by 'safe'. I've personally lost data on disks formatted with ext4, but every time it's happened to me it's been due to bad hardware, and, more importantly, it would have happened eventually with pretty much any other filesystem. Despite this, I do still use it for numerous things on a regular basis because, barring user error or hardware issues (which includes unexpected power loss), it just works. So, I consider it 'safe' by most people's definitions, but you may or may not.
In the Linux world, what could be a safer alternative to ext4 that can survive an unexpected power cut or unexpected unmounting?
No, not unless you want to deal with other limitations or issues. In particular:
XFS is a bit more resilient against unexpected power loss and doesn't need long checks on reboot like ext4 does, but has a number of practical limitations that make it questionable for small-scale use (can't shrink filesystems, performance isn't quite as good as ext4 on a new volume, can't do data journaling).
NILFS2 is almost impossible to kill with a power failure, but you might lose 30 or so seconds of changes, it requires a userspace component when mounting, and it is missing a handful of features that are generally considered standard by most Linux filesystems.
BTRFS will save you from failing hardware and reasonably reliably, plus it provides nice support for online replacement of failing disks, but again you may lose some of the most recent changes on an unexpected power loss, and you need to do a lot more to keep the volume healthy than for most other filesystems.
ZFS has all the benefits that BTRFS does with none of its issues (except the management ones), but it requires you to build a third-party kernel module, and you won't get any upstream support for any issues you have if you're not running on enterprise-grade hardware.
You can, however, do a number of things to make ext4 safer:
Change the behavior when errors are encountered. By default, if an error is encountered in filesystem metadata, ext4 will just mark the volume as needing to be checked, and then act like nothing happened. It's the only filesystem on Linux that does this, everything else will remount the volume read-only, thus preventing any writes to the filesystem from making things worse. You can get this behavior on ext4 by adding errors=remount-ro to the mount options, or running tune2fs -e remount-ro on the block device containing the filesystem.
Make sure you're not using writeback mode for the journal. You can ensure this by double-checking the mount options for the volume and making sure that data=writeback is not in the list. Journal writeback mode can significantly improve the performance of certain workloads on ext4 filesystems, but it makes it much more likely that you lose data if you unexpectedly lose power.
If you want to be really paranoid about data safety, you can enable journaled data mode. Normally, the journal on an ext4 filesystem only tracks changes to metadata (renames, file deletion or creation, timestamp updates, etc). In journaled data mode, all changes go through the journal. This slows things down significantly, but provides a functionally 100% guarantee that the file system will remain internally consistent. You can enable this by passing data=journal in the mount options.
Make sure the auto_da_alloc mount option is in effect (it is the default on modern kernels; noauto_da_alloc disables it). Essentially, this detects applications that replace files via rename or truncate without calling fsync() when they should, and forces the data out to disk before committing the change, at a small performance cost.
On newer kernels, you can enable journal checksumming. This won't actually 'save' your data, but it will help ensure that you're not getting bogus data back when there was an error. This can be enabled by adding journal_checksum to the mount options.
If you've got a new enough kernel and version of e2fsprogs, you can enable metadata checksumming. Similar to the journal checksumming, this won't save your data, but it will help prevent you from seeing bogus data if there's an error. This has to be enabled at filesystem creation time, by passing -O metadata_csum,metadata_csum_seed to mkfs.ext4. If you do this, you (probably) don't need to also enable journal checksumming, as the journal is part of what gets covered by the metadata checksumming.
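Putting a few of these together, a hypothetical /etc/fstab entry for a data volume that favors safety over write performance might look like this (the UUID and mount point are placeholders):

```
# /etc/fstab (hypothetical entry; adjust UUID and mount point)
UUID=0000-example  /data  ext4  errors=remount-ro,data=journal  0  2
```

Keep in mind that data=journal writes all data twice (once to the journal, once to its final location), and that tune2fs -e remount-ro achieves the same error behavior persistently without editing fstab.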
| Is ext4 filesystem safe? [closed] |
1,383,063,470,000 |
uname -a gives:
Linux devuan 4.9.0-6-amd64 #1 SMP Debian 4.9.88-1 (2018-04-29) x86_64 GNU/Linux
All filesystems on all disks in this box are ext3 (~15T worth over six disks)
ps -A gives:
...
14684 ? 00:00:00 jbd2/sdc1-8
14685 ? 00:00:00 ext4-rsv-conver
14688 ? 00:00:00 jbd2/sdc2-8
14689 ? 00:00:00 ext4-rsv-conver
14692 ? 00:00:00 jbd2/sdc3-8
14693 ? 00:00:00 ext4-rsv-conver
14696 ? 00:00:00 jbd2/sdd1-8
14697 ? 00:00:00 ext4-rsv-conver
14700 ? 00:00:00 jbd2/sdd2-8
14701 ? 00:00:00 ext4-rsv-conver
14704 ? 00:00:00 jbd2/sdd3-8
14705 ? 00:00:00 ext4-rsv-conver
14708 ? 00:00:00 jbd2/sdd4-8
14709 ? 00:00:00 ext4-rsv-conver
14712 ? 00:00:00 jbd2/sdf1-8
14713 ? 00:00:00 ext4-rsv-conver
...
Googling doesn't turn up any explanation for why "ext4-rsv-conver" exists, especially since all I use is ext3.
Why does this exist here, is it really needed & can I get rid of it?
|
Since version 4.3 of the kernel, Ext3 file systems are handled by the Ext4 driver. That driver uses workqueues named ext4-rsv-conversion (ps truncates this to ext4-rsv-conver, since kernel task names are limited to 15 characters), one per file system; there is no way to get rid of them.
| Can I get rid of "ext4-rsv-conversion" process? |
1,383,063,470,000 |
Is there any way to recover a few specific files from a deleted EXT4 partition. I deleted all partitions on my 480GB SSD. Afterwards, I created a 200 GB NTFS partition (which is mostly empty) and I have about 280 GB still unpartitioned.
I didn't do a "wipe" (or whatever it's called), so it was a quick deletion process.
I'm currently only running Windows (on a completely different SSD), but I would be happy to boot up a Linux Live CD if needed.
I was going to throw Linux on that unpartitioned 280GB but now I don't want to touch it until I figure out if I can recover that data.
Thank you
|
You can use SystemRescue (formerly System Rescue CD) as a live CD:
https://www.system-rescue-cd.org/SystemRescueCd_Homepage where you'll find the PhotoRec software to recover lost data.
You can also use the software directly from Windows:
www.cgsecurity.org/wiki/PhotoRec
Here are some details on how to use PhotoRec:
http://www.cgsecurity.org/wiki/PhotoRec_Step_By_Step
PS: Don't be too focused on the name of the software; it recovers far more than only photos.
| Recover specific files from deleted EXT4 partition |
1,383,063,470,000 |
I have an ext4 formatted partition, namely /dev/sdc1. I did not format it but somehow parted reports this partition as an unknown file system. Is there a way to mark this partition as ext4 again without formatting, so that I can try to rescue remaining files as much as possible?
|
For Linux, the partition type identifiers are almost entirely cosmetic: in particular, the filesystem repair tools certainly won't require the partition type to be correctly specified in the partition table.
If you point an ext4 filesystem recovery tool at a partition, it will do its best to find and fix an ext4 filesystem on it, if at all possible.
| How to recover / restore an “ext4” partition? |
1,383,063,470,000 |
I'm studying the Ext4 filesystem and am confused by the 128 byte inode size, because it appears to conflict with the last metadata value it stores, which is supposed to be at byte offset 156.
In this documentation it states that inodes are 128 bytes in length. I called dumpe2fs on an unmounted /dev/sdb1. The dumpe2fs result corroborates the inode size is 128.
But I'm confused because this documentation delineates the metadata stored in the inode. For each entry of metadata there is a corresponding physical offset. The last entry is the project id. Its offset is 0x9c (which is 156 as an integer).
It appears the metadata offsets exceed the allocated size of the inode. What am I misunderstanding here?
|
it states that inodes are 128 bytes in length
No. It states that [emphasis mine]:
[…] each inode had a disk record size of 128 bytes. Starting with ext4, it is possible to allocate a larger on-disk inode at format time for all inodes in the filesystem to provide space beyond the end of the original ext2 inode. The on-disk inode record size is recorded in the superblock as s_inode_size. The number of bytes actually used by struct ext4_inode beyond the original 128-byte ext2 inode is recorded in the i_extra_isize field for each inode […] By default, ext4 inode records are 256 bytes, and (as of August 2019) the inode structure is 160 bytes (i_extra_isize = 32).
Your doubt:
The last entry is the project id. Its offset is 0x9c (which is 156 as an integer). It appears the metadata offsets exceed the allocated size of the inode.
The last entry starts at 156 and takes 4 bytes (__le32). It's within the default 160 bytes.
If dumpe2fs says the inode size is 128 for your filesystem, this means the filesystem uses the original 128-byte ext2 inode. There is no i_extra_isize (it would be at the offset 0x80, decimal 128) or anything specified beyond.
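The offsets are easy to check in the shell: the last field starts at 0x9c and occupies 4 bytes, ending exactly at the 160-byte mark mentioned above:

```shell
# i_projid starts at 0x9c and is a __le32 (4 bytes)
printf 'start=%d end=%d\n' $((0x9c)) $((0x9c + 4))   # -> start=156 end=160
```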
| Why do inode offset values appear to exceed inode size? |
1,383,063,470,000 |
How/why did fstrim trim more space than I have free?
$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/drystone_2019-debian 93G 84G 4.6G 95% /
$ sudo fstrim -v /
/: 8.8 GiB (9395548160 bytes) trimmed
$ uname -r
4.19.0-6-amd64
$ head -n1 /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
|
Q: df -h - "Used" space + "Avail" Free space is less than the total "Size" of /home
A: By default, ext2/3/4 filesystems reserve 5% of the space to be useable only by root.
(and the reserved space is not shown in "Available").
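The numbers in the question line up with this: trimming covers the space df shows as available plus the root reserve, and 5% of a 93G filesystem is about 4.65G (rough figures, since df -h rounds its output):

```shell
# available (4.6G) + default 5% root reserve of 93G
awk 'BEGIN { printf "%.2f GiB\n", 4.6 + 93 * 0.05 }'   # -> 9.25 GiB, close to the 8.8 GiB trimmed
```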
| How/why did fstrim trim more space than I have free? |
1,383,063,470,000 |
Using debugfs -R 'stat <inode_nr>' /dev/sda1 returns a result with a field crtime, which I believe represents the creation date of the file pointed to by the inode numbered inode_nr. I use this on an ext4 fs.
I know that the inode stores access_time, modification_time and change_time, but not the birth time of a file.
So my question is: where is the creation time stored, and how does the debugfs command retrieve it?
|
If the filesystem records file creation time (not all do), it's stored in the inode along with the rest of the file metadata like modification and change times. It can be retrieved with the fairly recently added statx(2) system call, in the stx_btime field of the struct statx that it populates. Note that glibc only gained a wrapper for it in version 2.28; on older systems you have to make the syscall directly.
debugfs probably examines the inode structures directly, though.
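On systems with a recent kernel and coreutils, you don't even need debugfs: GNU stat uses statx() under the hood and exposes the birth time via the %w format (human-readable) and %W (seconds since the epoch, printing 0 if the filesystem doesn't record it):

```shell
tmpfile=$(mktemp)
stat -c %W "$tmpfile"   # birth time in epoch seconds, or 0 if not recorded
rm -f "$tmpfile"
```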
| Where is the file creation time (birth) stored in linux? |
1,383,063,470,000 |
I can't understand why there is such a long time delay between these two lines in my dmesg log.
[ 2.089039] hid-generic 0003:1EA7:2001.0003: input,hiddev0,hidraw2: USB HID v1.10 Mouse [WFDZ Gaming Keyboard] on usb-0000:00:14.0-14/input1
[ 2.752704] clocksource: Switched to clocksource tsc
[ 33.501004] EXT4-fs (sda5): mounted filesystem with ordered data mode. Opts: (null)
[ 34.350611] systemd[1]: RTC configured in localtime, applying delta of 120 minutes to system time.
The whole log is here: dmesg.log.txt
My system: Debian GNU/Linux 9, 4.9.0-3-amd64
Can you help me understand it or solve it? Thank you.
|
Debian initramfs-tools version 0.129 (and later) added a 30-second wait for the resume device (used for suspend-to-disk, a.k.a. hibernate) to appear. Previously, it would check once and, if it didn't find the device, continue booting. Now it keeps trying for up to 30 seconds. That is in general a good thing; it makes resume from suspend-to-disk much more reliable, especially on systems that take time to probe disks (e.g., USB).
However, if, when building the initramfs, initramfs-tools (mistakenly) detects a resume device that will never appear, boot gets delayed by 30 seconds. I've seen that on one of my systems with encrypted swap.
To fix, override the autodetected resume device by putting RESUME=«something» in either /etc/initramfs-tools/conf.d/resume or /etc/initramfs-tools/initramfs.conf. That «something» can be one of: auto (the default, autodetect); none (disable entirely, i.e. do not attempt to resume from suspend to disk); UUID=«uuid» (specify explicitly by UUID); or /dev/whatever (specify explicitly by device node).
If your system doesn't support suspend to disk (or you don't use it), set to none.
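A minimal example of that override, assuming the machine never uses hibernation (run update-initramfs -u afterwards to rebuild the initramfs):

```
# /etc/initramfs-tools/conf.d/resume
RESUME=none
```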
| Long system startup |
1,383,063,470,000 |
Trying to get behind the internals and secrets of ext4, I was reading the ext4 wiki. The author(s) did their best to show the structures used (such as the layout/struct of an ext4_inode), yet sometimes it seems they ran out of ideas.
Looking up what l_i_version is used for I found this:
l_i_version Version (High 32-bits of the i_generation field?)
Later in the same struct ext4_inode appears also a field:
__le32 i_version_hi; /* high 32 bits for 64-bit version */
which then seems to add another set of high 32 bits on top of the supposedly already-high 32 bits.
Can anybody shed some light on this?
|
don_crissti found the original patch submission for the extension of inode versions to 64 bits, which explains the use of these fields:
inode->i_version = le32_to_cpu(raw_inode->i_disk_version);
if (EXT4_INODE_SIZE(inode->i_sb) > EXT4_GOOD_OLD_INODE_SIZE) {
if (EXT4_FITS_IN_INODE(raw_inode, ei, i_version_hi))
inode->i_version |=
(__u64)(le32_to_cpu(raw_inode->i_version_hi)) << 32;
}
i_disk_version is a macro for l_i_version (on Linux); this provides the low 32 bits of the inode version. If the inode size is larger, i_version_hi provides the high 32 bits.
i_version is the inode version, which is incremented every time the inode is modified (see mount(8)).
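The kernel snippet above is just reassembling one 64-bit value from two 32-bit halves; with made-up values for the two on-disk fields:

```shell
lo=2   # pretend le32_to_cpu(raw_inode->i_disk_version) returned 2
hi=1   # pretend le32_to_cpu(raw_inode->i_version_hi) returned 1
echo $(( (hi << 32) | lo ))   # -> 4294967298, i.e. 0x100000002
```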
| What does l_i_version in an ext4 inode actually do? |
1,383,063,470,000 |
I made a partition of 5 GB -> /dev/sdd2
then made a filesystem sudo mke2fs -N 700 -t ext4 -L test2 /dev/sdd2
and set root reserved space to 0 sudo tune2fs -r 0 /dev/sdd2
sudo dumpe2fs -h /dev/sdd2 shows:
Filesystem volume name: test2
Last mounted on: <not available>
Filesystem UUID: 64f07e45-910b-4e65-92ba-3ce7fdf1242f
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 1216
Block count: 1220864
Reserved block count: 0
Overhead clusters: 21320
Free blocks: 1199538
Free inodes: 1205
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 596
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 32
Inode blocks per group: 2
Flex block group size: 16
Filesystem created: Sun Jun 16 11:26:49 2024
Last mount time: n/a
Last write time: Sun Jun 16 11:30:12 2024
Mount count: 0
Maximum mount count: -1
Last checked: Sun Jun 16 11:26:49 2024
Check interval: 0 (<none>)
Lifetime writes: 2417 kB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 9ce4737d-6beb-437d-a2fe-adaaa6142d11
Journal backup: inode blocks
Checksum type: crc32c
Checksum: 0x7b46d4d5
Journal features: (none)
Total journal size: 64M
Total journal blocks: 16384
Max transaction length: 16384
Fast commit length: 0
Journal sequence: 0x00000001
Journal start: 0
df /dev/sdd2 shows:
/dev/sdd2 4798176 24 4781768 1% /media/lithium/test2
4798176−4781768=16408
Where are the 16 MB?
How does df calculate the available space - formula?
Why are the inodes 1216 when I defined 700?
Can somebody help me?
Special thanx to Marcus Müller.
Is that right? I tried writing a partition stat script for better insight into the filesystem; save it as dfe. It requires sudo because of the dumpe2fs info.
Usage e.g.: dfe --help or sudo dfe /dev/sdd2
#!/usr/bin/bash
declare -a df_ARR
declare dumpFS df_output FSvolumeName FSuuid FSfeatures FStype FSosType partition mounted unit=M\
journalSize inodeTableSize partitionSize overheadISize overheadIISize rootReservedSize usedSize freeSize availableSize GDTreservedSize\
DP usedPercent useablePercent availablePercent fullReservedPercent\
C1=$'\e[38;2;0;200;0m' C2=$'\e[38;2;200;200;0m' C3=$'\e[38;2;120;180;220m' C4=$'\e[38;2;220;110;120m'\
declare -i inodeCount inodeSize inodesFree partitionBlocks blockSize freeBlocks usedBlocks journalBlocks rootReservedBlocks GDTreservedBlocks\
inodeTableBlocks overheadIBlocks overheadIIBlocks df_1kBlocks availableBlocks nUserUseableBlocks\
max_blockNumber_L max_SizeNumber_L\
blocksPerGroup inodesPerGroup groupDescriptorSize
DP=$(locale decimal_point)
divide_INTtoFLOAT ()
{
local -n LOC_var="$1"; local LOC_c; local -i LOC_a=$2 LOC_b=$3 LOC_d=$4+1 LOC_e=-$4
(( LOC_c=LOC_a*10**LOC_d/LOC_b))
LOC_a=${LOC_c: -1}; LOC_c=${LOC_c:0:-1}
((LOC_a>4 ? LOC_c++ : 0))
LOC_b=${#LOC_c}; for ((LOC_a=LOC_d;LOC_a>LOC_b;LOC_a--)); do LOC_c="0$LOC_c"; done
LOC_var="${LOC_c:0: LOC_e}$DP${LOC_c: LOC_e}"
}
calculate_SizeInUnits ()
{
local -n var=$1; local value unitSTR; local -i i l lN;
case $unit in
"K") value=$2;unitSTR="KB";;
"M") value=$(($2/1000));unitSTR="MB";;
"G") value=$(($2/1000**2));unitSTR="GB";;
"k") value=$(($2*1000/1024));unitSTR="kiB";;
"m") value=$(($2*1000/1024**2));unitSTR="MiB";;
"g") value=$(($2*1000/1024**3));unitSTR="GiB";;
esac
lN=${value: -1}; value=${value:0:-1}
if ((lN>4)); then ((value++)); fi
l=${#value}; for ((i=3;i>l;i--)); do value="0$value"; done
var="${value:0:-2},${value: -2} $unitSTR"
}
assign_Values ()
{
local -n var=$1; local value;
[[ $dumpFS =~ "$2"[^$'\n']* ]]; value=${BASH_REMATCH[0]}; value=${value#*:}
var=${value##*( )}
}
if [[ $1 == "--help" ]]; then
echo -e "\e[38;2;123;183;51m\e[1;4mUsage:\e[39m\e[22;24;4:0m dfe [\e[38;2;240;240;0m\e[3mOPTION...\e[39m\e[23m] [\e[38;2;240;240;0m\e[3mDEVICE\e[39m\e[23m]"
echo -e "prints disk usage and partition info!\n"
echo -e "\e[38;2;123;183;51m\e[1;4mDepends On:\e[39m\e[22;24;4:0m commands - df, dumpe2fs\n"
echo -e "\e[38;2;123;183;51m\e[1;4mOptions:\e[39m\e[22;24;4:0m"
echo -e "\t-u \e[38;2;240;240;0m\e[3munit \e[39m\e[23m \e[38;2;103;134;250mCHAR\e[39m ... K, M, G for KB, MB, GB - k, m, g for KiB, MiB, GiB - Standard: M"
echo -e "\t-v \e[38;2;240;240;0m\e[3mversion\e[39m\e[23m ... output version information and exit."
exit
fi
while getopts "u:v" "option"; do
case $option in
"u") unit=$OPTARG;;
"v") echo "df(e)xtended - version: 1.00 - 2024"; exit;;
"?") exit 2;;
esac
done
shift $((OPTIND-1))
shopt -s extglob
dumpFS=$(dumpe2fs -h "$1" 2> /dev/null)
if [[ $dumpFS == *"Couldn't find valid filesystem superblock"* ]]; then echo "Couldn't find valid filesystem superblock!" 1>&2; exit 5; fi
assign_Values "FSvolumeName" "volume name" ; assign_Values "FSuuid" "UUID"; assign_Values "FSfeatures" "features"; assign_Values "FSosType" "OS type";
assign_Values "inodeCount" "Inode count"; assign_Values "inodeSize" "Inode size"; assign_Values "inodesFree" "Free inodes"
assign_Values "partitionBlocks" "Block count"; assign_Values "blockSize" "Block size"; assign_Values "freeBlocks" "Free blocks"; #assign_Values "gdtReservedBlocks" "GDT blocks"
assign_Values "journalBlocks" "Total journal blocks"; assign_Values "GDTreservedBlocks" "GDT"
assign_Values "rootReservedBlocks" "Reserved block count"
assign_Values "blocksPerGroup" "Blocks per group"; assign_Values "inodesPerGroup" "Inodes per group"; assign_Values "groupDescriptorSize" "descriptor size"
df_output=$(df -T "$1"); df_output=${df_output#*$'\n'}; df_ARR=($df_output)
partition=${df_ARR[0]}; FStype=${df_ARR[1]}; df_1kBlocks=${df_ARR[2]}; usedBlocks=${df_ARR[3]}; availableBlocks=${df_ARR[4]}; mounted=${df_ARR[6]}
((usedBlocks=usedBlocks/4,\
availableBlocks=availableBlocks/4,\
inodeTableBlocks=inodeSize*inodeCount/4096,\
df_1kBlocks=df_1kBlocks/4,\
overheadIBlocks=partitionBlocks-(journalBlocks+inodeTableBlocks+df_1kBlocks+GDTreservedBlocks),\
overheadIIBlocks=df_1kBlocks-(rootReservedBlocks+usedBlocks+availableBlocks),\
nUserUseableBlocks=availableBlocks+usedBlocks ))
calculate_SizeInUnits "partitionSize" "$((partitionBlocks*4096))"
calculate_SizeInUnits "journalSize" "$((journalBlocks*4096))"
calculate_SizeInUnits "inodeTableSize" "$((inodeTableBlocks*4096))"
calculate_SizeInUnits "GDTreservedSize" "$((GDTreservedBlocks*4096))"
calculate_SizeInUnits "overheadISize" "$((overheadIBlocks*4096))"
calculate_SizeInUnits "overheadIISize" "$((overheadIIBlocks*4096))"
calculate_SizeInUnits "rootReservedSize" "$((rootReservedBlocks*4096))"
calculate_SizeInUnits "usedSize" "$((usedBlocks*4096))"
calculate_SizeInUnits "freeSize" "$((freeBlocks*4096))"
calculate_SizeInUnits "availableSize" "$((availableBlocks*4096))"
calculate_SizeInUnits "nUserUseableSize" "$((nUserUseableBlocks*4096))"
max_blockNumber_L=${#partitionBlocks}; max_SizeNumber_L=${#partitionSize}
echo -n "$C1" >/dev/tty; echo -n "Partition:"; echo -n $'\e[39m '>/dev/tty; echo $'\t'"$partition - $FSosType file system $FStype"
echo -n "$C1" >/dev/tty; echo -n "Volume Name:";echo -n $'\e[39m '>/dev/tty; echo $'\t'"$FSvolumeName"
echo -n "$C1" >/dev/tty; echo -n "UUID: "; echo -n $'\e[39m '>/dev/tty; echo $'\t'"$FSuuid"
echo -n "$C1" >/dev/tty; echo -n "Features:"; echo -n $'\e[39m '>/dev/tty; echo $'\t'"$FSfeatures"
echo -n "$C1" >/dev/tty; echo -n "Mounted on:"; echo -n $'\e[39m '>/dev/tty; echo $'\t'"$mounted"
echo -n "$C1" >/dev/tty; echo -n "Groups:"; echo -n $'\e[39m '>/dev/tty; printf "\t%-${max_blockNumber_L}s" "$(((partitionBlocks+blocksPerGroup-1)/blocksPerGroup))"; echo " - Group descriptor size: $groupDescriptorSize bytes - Inodes per group: $inodesPerGroup - Blocks per group: $blocksPerGroup"
divide_INTtoFLOAT "usedPercent" "$(((inodeCount-inodesFree)*100))" "$inodeCount" "3"
echo -n "$C1" >/dev/tty; echo -n "Inodes:"; echo -n $'\e[39m '>/dev/tty; printf "\t%-${max_blockNumber_L}s" "$inodeCount"; echo " - Free inodes: $inodesFree (used: ${usedPercent}%) - Inode size: $inodeSize bytes - Inode ratio: 1 inode per $(((partitionBlocks+inodeCount-1)/inodeCount)) blocks"
echo -n "$C1" >/dev/tty; echo -n "Blocks:"; echo -n $'\e[39m '>/dev/tty; printf "\t%-${max_blockNumber_L}s" "$partitionBlocks"; echo " - Free blocks: $freeBlocks total - Block size: $blockSize bytes"
echo -n " Journal : "; printf "%${max_blockNumber_L}s" "$journalBlocks"; echo -n " blocks "; printf "%${max_SizeNumber_L}s\n" "$journalSize"
echo -n " Inode table ~: "; printf "%${max_blockNumber_L}s" "$inodeTableBlocks"; echo -n " blocks "; printf "%${max_SizeNumber_L}s\n" "$inodeTableSize"
echo -n " Other FS overhead 1 : "; printf "%${max_blockNumber_L}s" "$overheadIBlocks"; echo -n " blocks "; printf "%${max_SizeNumber_L}s\n" "$overheadISize"
echo -n " GD table reserved : "; printf "%${max_blockNumber_L}s" "$GDTreservedBlocks"; echo -n " blocks "; printf "%${max_SizeNumber_L}s\n" "$GDTreservedSize"
echo -n "$C3" >/dev/tty
echo -n " Other FS overhead 2 : "; printf "%${max_blockNumber_L}s" "$overheadIIBlocks"; echo -n " blocks "; printf "%${max_SizeNumber_L}s" "$overheadIISize"; echo $' \u2500\e[17b\u252C\u2500 df blocks: '"$df_1kBlocks - $((df_1kBlocks*4)) 1k blocks"
echo -n " Root reserved : "; printf "%${max_blockNumber_L}s" "$rootReservedBlocks";echo -n " blocks "; printf "%${max_SizeNumber_L}s" "$rootReservedSize";echo $' \e[18b\u2502'
echo -n "$C2" >/dev/tty
echo -n " Used blocks : "; printf "%${max_blockNumber_L}s" "$usedBlocks"; echo -n " blocks "; printf "%${max_SizeNumber_L}s" "$usedSize"; echo $' \u2500\u252C\u2500 nUser useable \u2502'
echo -n " Available blocks : "; printf "%${max_blockNumber_L}s" "$availableBlocks"; echo -n " blocks "; printf "%${max_SizeNumber_L}s" "$availableSize"; echo $' \u2500\u2534\u2500\e[15b\u2518'
echo -n $'\e[39m' >/dev/tty
echo " "$'\u2500\e['"$((max_blockNumber_L+max_SizeNumber_L+9))b"
divide_INTtoFLOAT "availablePercent" "$((availableBlocks*100))" "$partitionBlocks" "3"
divide_INTtoFLOAT "fullReservedPercent" "$(((partitionBlocks-availableBlocks)*100))" "$partitionBlocks" "3"
echo -n "$C4" >/dev/tty
echo -n " Partition blocks : "; printf "%${max_blockNumber_L}s" "$partitionBlocks"; echo -n " blocks "; printf "%${max_SizeNumber_L}s\n" "$partitionSize (available: ${availablePercent}% - full & reserved: ${fullReservedPercent}%)"
divide_INTtoFLOAT "useablePercent" "$((nUserUseableBlocks*100))" "$partitionBlocks" "3"
echo -n "$C2" >/dev/tty
echo -n " nUser useable : "; printf "%${max_blockNumber_L}s" "$nUserUseableBlocks";echo -n " blocks "; printf "%${max_SizeNumber_L}s\n" "$nUserUseableSize (useable: ${useablePercent}%)"
echo -n $'\e[39m' >/dev/tty
|
So, the sad part up front: I can't tell you in every detail why the filesystem is structured the way it is; multiple decades of experience, features and bug fixes flowed into it, and the ext2/3/4 source tree in the kernel is not exactly small.
I will still try to address the explicit questions you ask:
Where are the 16 MBs?
File system overhead. I know this sounds silly, but think about how block groups are supposed to be independent enough to reduce contention – some metadata structures will have to be duplicated among these.
Also, superblock backups.
How does df calculate the available space - formula?
not at all. It asks the kernel, via the statvfs function (wrapped in gnulib; don't read gnulib code if you can avoid it, it's very #ifdef-heavy). What these values actually mean is for all practical purposes basically undefined. So, the answer to your question "how is the total size in blocks calculated?" is honestly "you'll have to look this up in the Linux kernel ext4 driver source code".
Why are the inodes 1216, i defined 700?
No, you defined 700 inodes to be reserved. Obviously, reserving more in particular means reserving at least 700; this is compatible with what you wanted!
Probably, it's the smallest sensible size for the 38 block groups you seem to have (divide the number of blocks by the blocks per group, round up). And 1216 actually happens to be 38 · 32; my guess is that you can't allocate inode tables of arbitrary size: they always need to contain a power of 2, or satisfy some other relatively sensible restriction on a file system. You can verify that yourself: specify -N 7000; you should be getting 7296 (= 38 · 192 = 38 · 64 · 3).
To be completely honest, trying to build a file system with 700 inodes seems – wrong. At the very least, you'd have to reduce the number of block groups (I'm not sure how far ext4 will allow you to do that); at that point, why use ext4? Seems like the wrong filesystem to whatever job you're solving here!
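You can redo that rounding with the numbers from the dumpe2fs output in the question (1220864 blocks, 32768 blocks per group, 32 inodes per group):

```shell
blocks=1220864; per_group=32768; inodes_per_group=32
groups=$(( (blocks + per_group - 1) / per_group ))   # ceiling division -> 38
echo $(( groups * inodes_per_group ))                # 38 * 32 = 1216
```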
| calculate df available space [duplicate] |
1,383,063,470,000 |
Due to an accident specifying a block device, the first 32GB of a 4TB ext4 filesystem on a SATA disk was overwritten by the dd command with the contents of a USB flash drive.
fdisk -l /dev/sda reports the following:
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xeaad24fe
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 8388607 8386560 4G 6 FAT16
/dev/sda2 8388608 73924607 65536000 31.3G 83 Linux
parted shows the following:
GNU Parted 3.5
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: ATA ST4000NM0165 (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 4295MB 4294MB primary ext4
2 4295MB 37.8GB 33.6GB primary
Side Note: I'm not sure why parted thinks the file system is ext4 while fdisk shows it as FAT16, but it could possibly be related to the fsck that I tried to run before understanding what had happened. I attempted to run "e2fsck /dev/sda1", and answered yes to the following question:
Superblock has an invalid journal (inode 8).
Clear<y>? yes
It then came back with the statement that the partition size didn't match the physical size, and it was at that point I stopped without proceeding further. (I apologize I don't have the full text of my aborted attempt with fsck. I retyped the above from memory, and I only answered yes once.)
This is what used to be on the disk:
This disk was originally auto-partitioned+formatted by the installer of Ubuntu 18.04. It was an ext4 filesystem, with a single partition, sda1, that took the entire drive. There is a separate, NVME drive that is the system partition, and this disk was configured as a secondary data disk. The parameters will be whatever the Ubuntu 18.04 installer would have selected as the defaults in this instance.
I understand any data in the first 32GB of this disk is irretrievably lost. But the data on this disk is critically important. Is there any way to recover what was on the remaining 99% of the drive?
Can someone recommend steps that would allow me to recreate the original filesystem?
Edit: gdisk -l /dev/sda shows the following:
GPT fdisk (gdisk) version 1.0.5
Caution: invalid main GPT header, but valid backup; regenerating main header
from backup!
Warning: Invalid CRC on main header data; loaded backup partition table.
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.
Warning! Main partition table CRC mismatch! Loaded backup partition table
instead of main partition table!
Warning! One or more CRCs don't match. You should repair the disk!
Main header: ERROR
Backup header: OK
Main partition table: ERROR
Backup partition table: OK
Partition table scan:
MBR: MBR only
BSD: not present
APM: not present
GPT: damaged
Found valid MBR and corrupt GPT. Which do you want to use? (Using the
GPT MAY permit recovery of GPT data.)
1 - MBR
2 - GPT
3 - Create blank GPT
|
You should make a full "dd" copy of the partition to another device, just for safekeeping in case something goes wrong.
In general, e2fsck should be able to recover from such an issue, subject to loss of the overwritten metadata. The superblock, root directory, journal, and other metadata would be lost. However, the superblock and other critical metadata have multiple backups later in the partition, so the majority of the data should be intact.
You might need to specify a backup superblock location, like e2fsck -fy -B 4096 -b 11239424 /dev/sda2. The backups are stored in block groups numbered 3^n, 5^n and 7^n, with a 128 MiB group size, so if you clobbered the first 32 GiB, that is 256 groups; the smallest such group number above that is 7×7×7 = 343, so a backup superblock is at block 343 × 32768 = 11239424.
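Those numbers are easy to sanity-check in the shell (4 KiB blocks, so each 32768-block group is 128 MiB):

```shell
echo $(( 32 * 1024 / 128 ))   # 32 GiB / 128 MiB = 256 groups were clobbered
echo $(( 7 * 7 * 7 ))         # smallest 3^n/5^n/7^n group number above 256: 343
echo $(( 343 * 32768 ))       # block number of the backup superblock: 11239424
```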
It will put everything into the lost+found directory, so you will have to identify files/directories by their content, age, etc.
| First 32GB of a 4TB ext4 filesystem overwritten. How to recover? |
1,383,063,470,000 |
I'm formatting an external hard drive with gparted.
The original NTFS read 232.28 MB used.
Now, with Ext4, it reads 1.92 GB used.
Questions:
Why?
Is there a better file system I should use for external drives?
(This drive will only be used with linux computers. )
Thank you!
|
I assume those "used" values are both for a freshly-created, empty filesystem of the same size, and that's where your confusion comes from. Those "used" values suggest that the actual size of the filesystem on the external drive is probably quite large (say, more than 1 TB?).
On a freshly-created empty filesystem, the non-zero "used" value indicates the disk space allocated for filesystem metadata.
NTFS seems to store most of its metadata as special hidden files. That probably allows the space allocated to metadata to easily grow as needed, and so the filesystem does not need to allocate all of it at filesystem creation time.
On the other hand, ext4 is fundamentally based on a fairly classic filesystem design, where e.g. all the inodes are pre-allocated and the ratio of inodes-per-megabyte is set at filesystem creation time and cannot easily be changed afterwards. As a result, all the metadata space the filesystem will ever need (while at its current size) will be allocated as part of the filesystem creation process.
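As an illustrative back-of-the-envelope calculation (not from the original answer): with the usual mke2fs.conf defaults of 16384 bytes-per-inode and 256-byte inodes, which are assumptions here rather than values measured from the asker's drive, the pre-allocated inode tables alone reserve about 1.5% of the filesystem:

```python
# Rough estimate of the space reserved up front for ext4 inode tables.
# The defaults (16384 bytes-per-inode, 256-byte inodes) are the common
# mke2fs.conf values, assumed for illustration; the journal and the
# block/inode bitmaps add further metadata on top of this.

def inode_table_overhead(fs_bytes, bytes_per_inode=16384, inode_size=256):
    inode_count = fs_bytes // bytes_per_inode   # fixed at mkfs time
    return inode_count * inode_size             # bytes reserved up front

print(inode_table_overhead(10**12))  # 15624999936, roughly 1.56% of 1 TB
```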
Using NTFS on an external drive that will only be used with Linux systems does not really make sense: on older Linux versions you might be forced to use NTFS-3g, which is a FUSE (Filesystem in User-Space) driver, and ... not exactly a performance-optimized solution. On very old or hardened systems, there is no guarantee that a NTFS filesystem driver would be available at all.
On the other hand, ext4 is very well supported and has excellent compatibility features. If you know you'll need to work with very old Linux versions, you could disable some of the newer filesystem options or even create the filesystem as ext3 or ext2 to allow even extremely old Linux systems to fully access it.
And since the ext2/3/4 has always been a kernel-based driver, it can achieve good performance (with certain caveats though: if you disable the dir_index filesystem option to achieve compatibility with old systems, directories with very large numbers of files will be slow).
However, you should note that using any Unix-style filesystem (like ext4 or XFS), the UID and GID numbers used on the system that writes files to the disk will be preserved in the file metadata. If you need to ensure all the files on the external drive will be easily accessible on any Linux system the external drive might be plugged into, see this answer I wrote in 2018.
| Formatting the same drive with gparted: original NTFS=232.28MB used | Ext4= 1.92 GB used Why? |
1,383,063,470,000 |
As the title states, grub is unable to recognise my ext4 partition:
GNU GRUB version 2.06-3~deb11u5
Minimal BASH-like line editing is supported. For the first word, TAB
lists possible command completions. Anywhere else TAB lists possible
device or file completions.
grub> ls (hd0
Possible partitions are:
Device hd0: No known filesystem detected - Sector size 512B - Total size
2097152KiB
Partition hd0,gpt1: No known filesystem detected - Partition start at
131072KiB - Total size 1966063.5KiB
...
The disk is using GPT partitioning scheme and the bootloader is the default EFI GRUB2 (grub-efi-amd64-signed) shipped with Debian 11. The partition contains a Linux installation cloned from another disk with rsync -ahPHAXx (as suggested here) (however GRUB doesn't recognise it even when the partition is empty).
On another Linux installation, I am able to mount and browse the above mentioned filesystem and no errors are reported by e2fsck either: /dev/sdb1: clean, 25991/122880 files, 176823/491515 blocks
This ext4 partition has been formatted using the following command:
sudo mkfs.ext4 -v -o 'Linux' -O '^has_journal,resize_inode,^filetype,^64bit,sparse_super2,^huge_file,extra_isize,inline_data' -E 'resize=8388608,root_owner=0:0' -M '/' /dev/sdXY
This issue first occurred on a virtual machine. However, I tried to replicate the same setup on a physical machine by creating a partition of the same size on an existing GPT disk and formatting it with the same options, and trying to ls the disk with different versions of EFI GRUB2 shipped with different distros (CentOS, openSUSE etc.) but always got the same issue (No known filesystem detected).
Can someone point out which of the options passed to mkfs is causing the partition not to be recognised by GRUB, while causing no issues when mounting and using it on a booted Linux?
|
GRUB2 doesn't currently support the inline_data ext4 feature.
I can't say for sure whether you can disable it at runtime using tune2fs (on an unmounted partition) but you could try.
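To confirm that inline_data is indeed the flag set on the partition, here is a small sketch (not from the original answer) that reads s_feature_incompat straight from the superblock. The offsets and the 0x8000 flag value are taken from the ext4 on-disk layout documentation; run it against the partition device (usually needs root) or against a dump of its first 2 KiB:

```python
# Sketch: check an ext4 image/device for the inline_data incompat feature.
# The superblock starts 1024 bytes into the partition; s_feature_incompat
# is a little-endian u32 at offset 0x60 within it, and
# EXT4_FEATURE_INCOMPAT_INLINE_DATA is 0x8000 per the ext4 layout docs.
import struct

INCOMPAT_INLINE_DATA = 0x8000

def has_inline_data(image_path):
    with open(image_path, "rb") as f:
        f.seek(1024 + 0x60)
        (incompat,) = struct.unpack("<I", f.read(4))
    return bool(incompat & INCOMPAT_INLINE_DATA)
```

You can also simply check the "Filesystem features" line of dumpe2fs -h output, of course.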
| grub does not recognise specially-formatted ext4 partition |
1,383,063,470,000 |
According to the mount man page,
Access time is only updated if the previous access time was earlier than the current modify or change time.
However if I do this (ext4 with relatime option(*)):
> date +%T.%N ; dd if=/dev/random of=random.dat bs=1 count=4096 ; date +%T.%N ; stat random.dat
18:52:00.616084761
4096+0 records in
4096+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0319383 s, 128 kB/s
18:52:00.651183318
File: random.dat
Size: 4096 Blocks: 8 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28313073 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ me) Gid: ( 1000/ me)
Access: 2022-09-26 18:52:00.616297607 +0200
Modify: 2022-09-26 18:52:00.648297639 +0200
Change: 2022-09-26 18:52:00.648297639 +0200
Birth: -
The access time seems to be stuck to the creation time, and if I rerun it (so now random.dat exists, and it is the same inode that is updated) I get:
> date +%T.%N ; dd if=/dev/random of=random.dat bs=1 count=4096 ; date +%T.%N ; stat random.dat
18:52:43.014712313
4096+0 records in
4096+0 records out
4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0633748 s, 64.6 kB/s
18:52:43.081174320
File: random.dat
Size: 4096 Blocks: 8 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 28313073 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ me) Gid: ( 1000/ me)
Access: 2022-09-26 18:52:00.616297607 +0200
Modify: 2022-09-26 18:52:43.076338407 +0200
Change: 2022-09-26 18:52:43.076338407 +0200
Birth: -
... where the access time hasn't changed at all despite a complete rewrite of the file contents.
What am I missing/misunderstanding? Shouldn't the access time be updated together with the modify and change ones?
(*) /dev/mapper/vgkubuntu-root on / type ext4 (rw,relatime,errors=remount-ro)
(**) Use of dd if=/dev/random for demo purposes (slow output)
|
Since you are not reading the data blocks (only writing to them), atime is not updated. If you read random.dat then the atime will get updated (as long as the relatime criterion is met).
You can see this by looking for calls to file_accessed() in the kernel:
https://github.com/torvalds/linux/blob/master/fs/ext4/file.c
file_accessed() calls the routines to update atime in the inode, and it is only called from the read functions (and mmap).
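The relatime decision itself can be modelled in a few lines. This is a toy approximation of relatime_need_update() in fs/inode.c, not the kernel code: on a read, atime is only brought forward if it has fallen behind mtime/ctime or is more than a day old, while writes go through a separate path that touches only mtime and ctime, which is why the dd runs above never change atime.

```python
# Toy model of the relatime rule applied on *reads* (writes never go
# through this path). Timestamps are plain numbers (seconds) here.
DAY = 24 * 60 * 60

def relatime_updates_atime(atime, mtime, ctime, now):
    return mtime >= atime or ctime >= atime or now - atime > DAY

# With the timestamps from the second dd run above (seconds within the
# minute), a subsequent read *would* update atime, since atime < mtime:
print(relatime_updates_atime(0.616, 43.076, 43.076, 50.0))  # True
```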
| Access time strangeness |
1,383,063,470,000 |
I have a 2tb hard drive containing gpt and a single 2tb partition with ext4 file system. The partition has one 1.5tb file inside it. I want to change the type of file system of this partition from ext4 to exfat without deleting the 1.5tb file. Can I do that without writing a custom program?
|
There is a tool which some people have successfully used to convert Ext4 partitions to exFAT in place, fstransform. Note that the tool doesn’t officially support conversions to exFAT, and I haven’t tried it — but there are apparently reports of it working (with the --force-untested-file-systems flag).
In any case you should have a backup of your file before attempting this, in which case you might as well reformat and restore your file from backup.
| Change the file system of a partition without deleting its content |
1,383,063,470,000 |
I'm looking on my Debian 11 Server for the easiest way to allocate 100GB of extra space after the /dev/sda1 device in command line.
The sda1 partition is almost full and needs to be resized with the unallocated space.
Here is the structure of my hard drive:
Disk: /dev/sda
Size: 200 GiB, 214748364800 bytes, 419430400 sectors
Label: dos, identifier: 0xea1313af
Device Boot Start End Sectors Size Id Type
>> /dev/sda1 * 2048 192940031 192937984 92G 83 Linux
/dev/sda2 192942078 209713151 16771074 8G 5 Extended
└─/dev/sda5 192942080 209713151 16771072 8G 82 Linux swap / Solaris
Free space 209713152 419430399 209717248 100G
Partition type: Linux (83) │
│ Attributes: 80 │
│Filesystem UUID: b4804667-c4f3-4915-a95d-d3b83fac302c │
│ Filesystem: ext4 │
│ Mountpoint: / (mounted)
Could you help me to easily achieve this in command line? Thanks!
Best regards
|
The free space is not directly after the sda1 partition, so you can't use it; you need to remove (or move, but removing is easier) the swap partition sda5.
Stop the swap using swapoff /dev/sda5
Remove the sda5 partition and the sda2 extended partition.
Resize the sda1 partition. Don't forget to resize the filesystem too using resize2fs. You can check this question for more details about resizing partitions using fdisk.
Create a new swap partition (optionally a logical one inside a new extended partition if you want setup similar to your current one).
Update your /etc/fstab swap record with the new partition number or UUID.
| Extend 100GB of unallocated space on /dev/sda1 device in command line |
1,383,063,470,000 |
how can I add the rest of the 19.5GB from sda2 to vg00-lv01? I tried lvextend but this tells me Insufficient free space: 512 extents needed, but only 0 available.
I'm using Ubuntu 20.4.
NAME FSTYPE SIZE MOUNTPOINT LABEL
sda 20G
├─sda1 ext4 487M /boot
└─sda2 LVM2_member 19.5G
├─vg00-lv00 swap 1.9G [SWAP]
└─vg00-lv01 ext4 7.6G /
sr0 1024M
Output of sudo fdisk -l /dev/sda
Disk /dev/sda: 20 GiB, 21474836480 bytes, 41943040 sectors
Disk model: Virtual disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x520f1760
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 999423 997376 487M 83 Linux
/dev/sda2 999424 41943039 40943616 19.5G 8e Linux LVM
Output of pvs -a
PV VG Fmt Attr PSize PFree
/dev/sda1 --- 0 0
/dev/sda2 vg00 lvm2 a-- <9.52g 0
Thanks in advance!
|
You need to resize both the physical volume and the logical volume within.
pvresize /dev/sda2                     # grow the PV to fill the 19.5G partition
lvextend -l +100%FREE /dev/vg00/lv01   # grow the LV by the freed extents
fsadm resize /dev/vg00/lv01            # grow the ext4 FS within the LV
| Extend LVM on Ubuntu 20.4 |
1,625,998,088,000 |
I have a directory which is write-intensive (/home/user/project/.comp, used by the compilation tools). Is there a way to buffer the writes only for this directory? (e.g. flush every hour or at shutdown)
I use ArchLinux with ext4 on a SSD.
|
Depending on the size of the directory and memory available, you might be able to create a ramdisk of suitable size, then mount it in "project/.comp".
A cron job and a shutdown task could then rsync it with the real "project/.comp-real".
You might also want to experiment with different file systems (XFS, for example) on the ramdisk.
| Mount a directory in buffered write mode |
1,625,998,088,000 |
Many sources, such as https://www.commandlinux.com/man-page/man8/mkfs.ext4.8.html read:
...block-size is heuristically determined by the filesystem size...
What is this heuristic?
In which source file can the calculation be found?
Do all modern HDDs/SSDs (i.e. over 100 GB) cause 4KiB blocks by default with this heuristic?
|
The calculation seems to be quite simple: if the block size is not selected by the user, it defaults to the page size (so 4096), or to the logical sector size of the device if that is bigger than the page size (there are a few more exceptions, but this should cover most of the cases).
The e2fsprogs source is available here and the code that sets blocksize in mke2fs is here.
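As a rough sketch of that default-selection logic (a simplification of what misc/mke2fs.c does, ignoring the config-file and small/floppy usage types):

```python
# Simplified model: the default block size is the page size, bumped up to
# the device's logical sector size if that happens to be larger.

def default_block_size(page_size=4096, logical_sector_size=512):
    return max(page_size, logical_sector_size)

print(default_block_size())            # 4096 on typical hardware
print(default_block_size(4096, 8192))  # 8192 on an exotic 8K-sector device
```

So on common hardware (4 KiB pages, 512- or 4096-byte sectors) the answer to the question's last point is yes: the default comes out as 4 KiB regardless of drive capacity. Note also that Linux historically could not mount an ext4 filesystem whose block size exceeds the page size.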
| Do almost all ext4 filesystems have 4KiB blocks? |
1,625,998,088,000 |
I have performed a rsync between two folders:
rsync -avzh /mnt/folder1 /mnt/folder2
(folder1 was /dev/sdb and folder2 was /dev/sdc, both ext4 partitions)
Then I have unmounted folder1 and I made a mistake and I mounted /dev/sdc directly over /mnt with:
mount /dev/sdc /mnt
When in fact I wanted to mount /dev/sdc over /mnt/folder1
Now I am not able to umount /mnt:
umount /mnt/
umount: /mnt/: target is busy
(In some cases useful info about processes that
use the device is found by lsof(8) or fuser(1).)
How can I fix this?
|
This is to prevent data loss!
Run the following command to see which process prevents unmounting. I am assuming that you skipped the partition number on the device to type less:
lsof | grep '/dev/sdc'
Close your work or end the given processes, then unmount again. You can also see what files are still open with:
fuser -u /mnt/
Whereas the following command will kill all the processes itself and will probably result in data loss:
fuser -km /mnt
Or detach it immediately (lazy unmount) and let it unmount automatically when the processes have finished:
umount -l /mnt
| Mount directly over /mnt by mistake |
1,625,998,088,000 |
So I'm making some kind of research on EXT4 checksums.
I found this page and tried to calculate the checksum by myself. I started with the superblock since it sounds pretty simple: "The entire superblock up to the checksum field".
But it does not work: I can't get the same result as the superblock.
For this task I wrote superblock checksum calculator on Python. You may look at my program on GitHub. I tried a lot of things.
First of all, I tried to read the whole superblock up to the checksum (1020 bytes) and put it through CRC32C (the algorithm is an independent library from pip). Although this is what the wiki describes, it does not work.
Then I simply reversed the whole superblock. That doesn't make much sense, I think, and I failed again.
After this, I tried a more complicated way: I tried to reverse all fields of the superblock separately. It gives another result, as you can see:
Raw data: 1F DC 5E 4A
2-byte fields reversed: DC 1F 4A 5E
Full data reverse: 4A 5E DC 1F
And once again, I failed. Here all interpretations of phrase "The entire superblock up to the checksum field" ended.
I tried adding a zero-filled checksum field in all the algorithms, and tried reversing only the little-endian fields (which seemed like a good idea, actually), without reversing the char and u8 fields.
But there is no chance to get the same checksum as the original superblock.
My script makes output like this for superblock:
00c0390000cae600198a0b00c6aca40039a835000000000002000000020000000080000000800000002000001ee68c5c17e68c5c2000ffff53ef01000100000055936d5c000000000000000001000000000000000b000000000100003c000000c60200006b040000d6eb1a5613a44a8a91b66dbfe7cbbca9000000000000000000000000000000002f0061726765740000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000004000000000000000000000000000000000800000000000000f00c3000ca7d5363a49944fd9db16c0f95cfab15010140000c0000000000000055936d5c0af3020004000000000000000000000000800000008070000080000000800000000071000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000200020000100000000000000000000000000000000000000040100000d63df0f0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
0000000000000000000000000000000000000000fe3731ed
ORIGINAL CHECKSUM (not calculated!): 0xfe3731ed
CALCULATED WHOLE SUPERBLOCK: 0xffffffffL - this always happens, some overflow error maybe.
CALCULATED SUPERBLOCK WITHOUT CHECKSUM: 0x12cec801L
CALCULATED FULLY-REVERSED SUPERBLOCK WITHOUT CHECKSUM: 0x7fe225e5L
CALCULATED FIELDS-REVERSED SUPERBLOCK: 0x8cce5045L
I can't find any documentation, and the ext4 source code files are poorly commented (and really complicated); I can't make any sense of them.
|
OK, I've got the answer from Reddit (nightbladeofmalice), who noticed that the CRC32C of the raw superblock without the checksum field (0x12cec801) gives the byte-reversed (big-endian) original checksum if you subtract it from 0xFFFFFFFF:
ORIGINAL SUPERBLOCK:
00c0390000cae600198a0b008f99a400e8a53500000000000200000002000000008000000080000000200000082d8e5c012d8e5c2100ffff53ef01000100000055936d5c000000000000000001000000000000000b000000000100003c000000c60200006b040000d6eb1a5613a44a8a91b66dbfe7cbbca9000000000000000000000000000000002f00617267657400000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000040000000000000000000000000000000008000000000000008e013000ca7d5363a49944fd9db16c0f95cfab15010140000c0000000000000055936d5c0af302000400000000000000000000000080000000807000008000000080000000007100000000000000000000000000000000000000000000000000000000000000001000000000000000000000000020002000010000000000000000000000000000000000000004010000df1b5b100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000015de7cf3
ORIGINAL CHECKSUM (not calculated, big endian!): 0xf37cde15
RAW SUPERBLOCK IN CRC32C WITHOUT CHECKSUM FIELDS (1020 bytes): 0xc8321eaL
INVERTED CHECKSUM (0xFFFFFFFF-previous field): 0xf37cde15L
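For reference, here is a self-contained bit-by-bit CRC32C (Castagnoli, reflected polynomial 0x82F63B78) that reproduces the relationship. Note that subtracting a 32-bit value from 0xFFFFFFFF is the same as taking its bitwise complement, i.e. the standard final-XOR step of a CRC, which is presumably where the pip library and the on-disk value disagree:

```python
# Minimal CRC32C implementation (matching the common convention:
# init 0xFFFFFFFF, reflected, final XOR 0xFFFFFFFF).

def crc32c(data, crc=0):
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

assert crc32c(b"123456789") == 0xE3069283        # standard check value

# The numbers from the superblock above: complementing the computed CRC
# yields the stored checksum.
assert 0xFFFFFFFF - 0x0C8321EA == 0xF37CDE15
assert (~0x0C8321EA) & 0xFFFFFFFF == 0xF37CDE15
```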
| EXT4 CRC32C checksum algorithms are badly documented |
1,625,998,088,000 |
What I understand about it is that ZFOD stands for "zero-fill on demand" extents. That means the filesystem doesn't actually write the data to the file; it just gives us a ZFOD extent, and when an application tries to read/write the data it fills it out with zeros and then performs the read/write [source].
My questions are:
Is my above understanding correct?
If it just doesn't allocate the data, does that mean is it a HOLE?
|
You must consider two things:
whether or not there is space allocated for the data, and
whether or not there is data actually written.
If a file has no data and no space allocated for it, you get an end-of-file indication if you attempt to read it. If you write to it, the filesystem must allocate space at the time of write: in other words, the write operation may fail with an ENOSPC error if there is no free disk space to allocate.
A hole in a sparse file has no space allocated for it, and if you attempt to read that part of the file, you get back data which is all zeroes. If you write to that part of the file, the filesystem must again allocate space for it at the time of the write operation, so the write operation may fail with an ENOSPC error if the disk is full.
A ZFOD extent is nominally allocated to a file, but there is no data written to it yet. If you read it, you get back all-zeroes; if you write to it, the space is already allocated so there is no risk for an ENOSPC error condition.
And finally, there is a normal data extent: if you read it, you get back the actual data, and if you write to it, you replace the existing data with new data.
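To see the hole case concretely, here is a short demonstration with a sparse file; a ZFOD extent reads back the same way, except that its space is already reserved. It assumes the underlying filesystem supports sparse files (ext4 does):

```python
import os, tempfile

fd, path = tempfile.mkstemp()
try:
    os.lseek(fd, 10 * 1024 * 1024 - 1, os.SEEK_SET)  # seek past a 10 MiB hole
    os.write(fd, b"x")                               # one real byte at the end
    st = os.stat(path)
    print(st.st_size)          # 10485760: logical size includes the hole
    print(st.st_blocks * 512)  # typically far smaller: only the tail is allocated
    os.lseek(fd, 0, os.SEEK_SET)
    print(os.read(fd, 4) == b"\0\0\0\0")  # True: the hole reads as zeroes
finally:
    os.close(fd)
    os.unlink(path)
```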
In other words, the ZFOD extent is an optimization for situations where an application may allocate a large file, won't use all of it immediately, and still needs a guarantee that the space will be available when needed. For SSD storage, erasing existing data from a block is the slowest operation, so ZFOD extents allow the system to quickly create a large file by allocating it initially as ZFOD extents: then the filesystem can do the actual erasing & filling with zeroes on-demand for each block that is actually used.
If a SSD storage is used for storing something like disk images for virtual machines, ZFOD extents can help in minimizing the number of times the actual disk blocks need to be erased, and so improve the usable lifetime of the SSD.
| What are ZFOD extents? |
1,625,998,088,000 |
We have a BeagleBone Black based custom board, with a busybox shell including coreutils.
The busybox version is BusyBox v1.20.2 (2017-10-16 16:39:36 EDT).
Now we want to check the inode usage on each partition, so when I run df -i I get the following output:
# df -i
Filesystem Inodes Used Available Use% Mounted on
rootfs 125 9 116 7% /
/dev/root 125 9 116 7% /
tmpfs 62 0 62 0% /tmp
tmpfs 62 0 62 0% /dev/shm
tmpfs 62 0 62 0% /var/run
tmpfs 62 0 62 0% /var/spool/cron
tmpfs 62 0 62 0% /var/sftp
/dev/mmcblk0p18 15 0 15 0% /var/db
/dev/mmcblk0p19 64 0 64 0% /var/firmware
Now when I run tune2fs to get the inode count, I get the following output:
# tune2fs -l /dev/mmcblk0p18 | grep -i inode
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse
Inode count: 15360
Free inodes: 15346
Inodes per group: 1920
Inode blocks per group: 240
First inode: 11
Inode size: 128
Journal inode: 8
Journal backup: inode blocks
# tune2fs -l /dev/mmcblk0p19 | grep -i inode
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse
Inode count: 65536
Free inodes: 65525
Inodes per group: 8192
Inode blocks per group: 512
First inode: 11
Inode size: 256
Journal inode: 8
Journal backup: inode blocks
I don't understand why it is different.
The inode count reported by busybox for a partition, e.g. mmcblk0p18, is 15,
while the same value reported by tune2fs is 15*1024 = 15360.
The same applies to partition mmcblk0p19.
I don't understand why busybox reports it that way, because the inode size is also different in the two partitions (128 and 256 for partitions 18 and 19 respectively).
Can someone help or give any pointers?
|
I looked at the busybox bug list but did not find any reference to my error.
Given that busybox df works as expected on my Ubuntu machine, I looked at the busybox configuration.
Initially I had enabled only the two config options below:
CONFIG_DF=y
CONFIG_FEATURE_DF_FANCY=y
And with that I was not able to get the expected output. However, once I enabled the configuration below, df -i started working as expected:
#
# Common options for df, du, ls
#
CONFIG_FEATURE_HUMAN_READABLE=y
So the culprit was this common option in the busybox config.
| df from busybox shows different number of inodes than that of tune2fs |
1,625,998,088,000 |
I had an external hard drive that I mounted internally. It came formatted with NTFS, and I wanted to move to ext4. So I copied everything I wanted to keep onto other drives, created a brand new partition table (GPT) with a single ext4 partition, and now I'm trying to copy everything back. I'm using rsync -a --info=progress2 for most of the copy operations.
My problem is that after 100 GB or so, I tend to get weird errors:
rsync: write failed on "somepath": Read-only file system (30)
rsync error: error in file IO (code 11) at receiver.c(389) [receiver=3.1.0]
If I try to list the directory that rsync was working on when it failed, I see weird results:
drwx------ 3 pdaddy pdaddy 4096 Aug 28 2011 subdirectory1
drwx------ 3 pdaddy pdaddy 4096 Mar 12 2014 subdirectory2
d????????? ? ? ? ? ? subdirectory3
d????????? ? ? ? ? ? subdirectory4
Trying to list the directories with question marks in their listings, and even some of them without, gives me:
ls: reading directory subdirectory3: Input/output error
total 0
Even fdisk has errors:
~ % fdisk /dev/sde
fdisk: unable to read /dev/sde: Input/output error
If I try to unmount the drive, the umount command hangs. I ran htop and saw that umount was using 100% of one CPU core. I assumed it was committing journals or some such, so I let it go all night once, but it was in the same state in the morning. Issuing sudo reboot or sudo init 6 while umount is hung results in yet another hung terminal. I have to hold the power button. Just now I tried rebooting without explicitly unmounting, and it hung with a black screen (the monitor went to sleep), and no response via ssh or the keyboard.
After a hard power cycle, I unmounted the disk and did sudo fsck.ext4 -f /dev/sde1, and there were no errors. I checked the files, and they seemed to all be there and a sample of them were correct.
I assumed the errors had something to do with the journal being too large (maybe it's limited to a maximum size?), so I remounted with -o data=writeback. I figured it's a good idea anyway to mount this way temporarily while restoring terabytes worth of files.
This helped to marginally speed the copy, but did not help with the errors. Twice more, I've gotten into the same state. A hard power cycle is the only thing I can do, and afterward, a disk check shows no errors, the files seem okay, and I can copy another 100 GB or so.
What's going on? I think the disk itself is healthy. I had no problems with it before reformatting. Should I do a sector scan on the disk? It's 5 TB, so I'm hesitant to do that.
I've restored some more files, watching the kernel logs, as suggested by Stephen Kitt. Before rsync failed, I started seeing some funky errors:
[ 8807.572286] ata4.00: exception Emask 0x0 SAct 0x7fffffff SErr 0x0 action 0x6 frozen
[ 8807.572290] ata4.00: failed command: WRITE FPDMA QUEUED
[ 8807.572293] ata4.00: cmd 61/40:00:c0:57:b6/05:00:b7:00:00/40 tag 0 ncq 688128 out
[ 8807.572293] res 40/00:00:00:4f:c2/00:00:00:00:00/40 Emask 0x4 (timeout)
[ 8807.572295] ata4.00: status: { DRDY }
The last three messages repeat many times, then I get:
[ 8807.572412] ata4: hard resetting link
[ 8808.060464] ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
[ 8808.062462] ata4.00: configured for UDMA/133
[ 8808.076459] ata4.00: device reported invalid CHS sector 0
The last message repeats 20 times or so, and then I get:
[ 8808.076526] ata4: EH complete
47 seconds later, the sequence repeats itself. And again 81 seconds after that, and 120 seconds after that, except this time, it starts with:
[ 9160.779935] ata4.00: NCQ disabled due to excessive errors
The next time, it's different. It starts the same, but then I see:
[ 9235.819291] ata4: hard resetting link
[ 9241.181501] ata4: link is slow to respond, please be patient (ready=0)
[ 9245.839449] ata4: COMRESET failed (errno=-16)
This repeats a couple of times, and then:
[ 9290.922301] ata4: limiting SATA link speed to 1.5 Gbps
[ 9290.922303] ata4: hard resetting link
[ 9295.948393] ata4: COMRESET failed (errno=-16)
[ 9295.948400] ata4: reset failed, giving up
[ 9295.948401] ata4.00: disabled
There are some new errors:
[ 9295.948522] sd 3:0:0:0: [sdf] FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 9295.948524] sd 3:0:0:0: [sdf] CDB:
[ 9295.948525] Write(16): 8a 00 00 00 00 00 b9 0c fd 00 00 00 40 00 00 00
[ 9295.948538] blk_update_request: I/O error, dev sdf, sector 3104636160
[ 9295.948542] EXT4-fs warning (device sdf1): ext4_end_bio:317: I/O error -5 writing to inode 49807774 (offset 155189248 size 4194304 starting block 388079688)
[ 9295.948543] Buffer I/O error on device sdf1, logical block 388079264
(Note that I've shuffled some drives since I started this post, and this drive is now sdf instead of sde.)
This last error repeats several times with different logical blocks, and then I get this an equal number of times:
[ 9295.948585] EXT4-fs warning (device sdf1): ext4_end_bio:317: I/O error -5 writing to inode 49807774 (offset 155189248 size 4194304 starting block 388079856)
There's more of the same, and all the while the copy is still going on without complaining. Finally I get:
[ 9295.950321] Aborting journal on device sdf1-8.
[ 9295.950345] Buffer I/O error on dev sdf1, logical block 610304000, lost sync page write
[ 9295.950361] EXT4-fs (sdf1): Delayed block allocation failed for inode 49807775 at logical offset 0 with max blocks 1024 with error 30
[ 9295.950362] Buffer I/O error on dev sdf1, logical block 0, lost sync page write
[ 9295.950365] EXT4-fs (sdf1): This should not happen!! Data will be lost
[ 9295.950365]
[ 9295.950366] EXT4-fs error (device sdf1) in ext4_writepages:2421: Journal has aborted
[ 9295.950368] EXT4-fs error (device sdf1): ext4_journal_check_start:56: Detected aborted journal
[ 9295.950370] JBD2: Error -5 detected when updating journal superblock for sdf1-8.
[ 9295.950371] EXT4-fs (sdf1): Remounting filesystem read-only
[ 9295.950372] EXT4-fs (sdf1): previous I/O error to superblock detected
[ 9295.950379] Buffer I/O error on dev sdf1, logical block 0, lost sync page write
[ 9295.950394] Buffer I/O error on dev sdf1, logical block 0, lost sync page write
[ 9326.009002] scsi_io_completion: 10 callbacks suppressed
[ 9326.009007] sd 3:0:0:0: [sdf] FAILED Result: hostbyte=DID_BAD_TARGET driverbyte=DRIVER_OK
[ 9326.009009] sd 3:0:0:0: [sdf] CDB:
[ 9326.009011] Write(16): 8a 00 00 00 00 00 00 00 0f b8 00 00 00 08 00 00
[ 9326.009018] blk_update_request: 10 callbacks suppressed
[ 9326.009020] blk_update_request: I/O error, dev sdf, sector 4024
[ 9326.009023] Buffer I/O error on dev sdf1, logical block 247, lost async page write
(Note that this time I did not unmount and remount with data=writeback, so it was doing its default journaling.)
After this, the rsync failed, presumably because the file system was remounted read-only.
I'm sorry for the log dump. I've tried to pare it down to the essentials, but I'm afraid I'm not familiar enough with what's going on here to pare it down any further.
|
This looks like a hardware issue, rather than a kernel bug. You can try the following:
re-seat the SATA cable
use another SATA cable
run SMART diagnostics (the self-tests, see smartmontools)
run a destructive badblocks scan
If you have a spare drive or computer you could also try switching (use another drive in the same computer, use the troublesome drive in another computer) to check whether the motherboard's at fault. Since the drive seems to have issues under load a simple dd if=/dev/zero of=... with appropriate size parameters might be enough to reproduce the errors.
I'm not sure if your drive's warranty would apply since it was originally an external drive...
| Filesystem errors when restoring many files |
1,625,998,088,000 |
I continuously get these messages upon boot:
[ 17.806441] EXT4-fs (sda1): re-mounted. Opts: (null)
[  157.196550] postgres (1297): /proc/1297/oom_adj is deprecated, please use /proc/1297/oom_score_adj instead.
As you can see from the time differences, this is a massive delay! How would I fix this? This happens on every single version of my builds (across 30-40 hard drives), so I do not believe it's a hard drive issue, though they are all direct copies of one master.
Is this the boot delay? How do i fix it? Any insight would be helpful.
My superior believes there is not enough proof to say this is the reason the boot-up is taking so long. If it's not this (the dmesg printout), then what could it be?
Notes:
Version = Linaro 13.08 (GNU/Linux 3.15.0+ armv7l)
|
The issue was found by using the application bootchart. A graph was formed of all the start-up processes, and within it was a large 2-minute sleep process!
This 2-minute sleep was found within /etc/init/failsafe.conf <- this delay is meant to echo out to the terminal, though it did not. By modifying the script I managed to get my system booted in 23 seconds. Other issues come with this, but nothing I cannot fix easily / hack together.
The other issue was that the DHCP server would not come up when broadcasting the SSID; it would just fail.
I threw together a script which I put with @reboot into crontab -e, which looped through checking if the service was running; if it wasn't running it would start it and keep trying until it started.
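A minimal version of such a watchdog might look like the sketch below. The service name isc-dhcp-server is an assumption (substitute whatever your DHCP daemon is called), and writing it to /tmp is just for illustration:

```shell
# Write the watchdog to a file so it can be referenced from crontab with
# an @reboot entry. Service name "isc-dhcp-server" is a placeholder.
cat > /tmp/dhcp-watchdog.sh <<'EOF'
#!/bin/sh
# Keep trying until the service reports as running.
until service isc-dhcp-server status >/dev/null 2>&1; do
    service isc-dhcp-server start
    sleep 5
done
EOF
chmod +x /tmp/dhcp-watchdog.sh
sh -n /tmp/dhcp-watchdog.sh && echo "syntax OK"   # sanity-check only
```

The crontab entry (added via crontab -e) would then be `@reboot /tmp/dhcp-watchdog.sh`.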
| Boot delay errors? |
1,625,998,088,000 |
I previously had a grub bootloader with crunchbang and win7. Since then I reinstalled win7 which now boots it automatically. I stuck in a Debian install CD that brings me to the stage in the above image. The highlighted logical ext4 partition is my existing Debian installation. If I change the settings to make it bootable it warns me that's generally only for primary partitions.
The existing NTFS primary partition is just a storage disk; the bootable Windows 7 partition is on the 64 GB Kingston drive.
Am I safe to switch the ext4 to bootable? Do I need to make it primary too?
|
If I'm not mistaken, you don't have to set the partition to bootable. I run only dual-boot machines, and have never done this.
Do you have a particular reason for doing so?
| Reinstalling Debian: ext4 partition is not primary |
1,625,998,088,000 |
I have a question which is slightly programming related, but it mostly relates to how ext4 works.
I have a program which writes 128MB to a file with changing random aligned offsets. I write 256KB every write call. Now the speed results are significantly different between the two devices.
I have /dev/sda and /dev/sdb both ext4, while sda is 8GB, sdb is 512MB.
For example, a write task on /dev/sda took 0.7 seconds to complete, while the same write took 0.05 seconds to complete on /dev/sdb. Both partitions are on the same hard disk, which is not an SSD.
EDIT: Sorry, I forgot to add that this is running on a virtual machine with VirtualBox, with the host being a Windows system. It's definitely only one physical drive because my laptop only has one.
EDIT2: I've found the issue: I was running the program on what I thought to be '/dev/sda', but it was a shared folder that I mounted from the host system. I didn't realize the filesystem would be different.
I'm interested to know what behind-the-scenes factors could cause such a dramatic change in performance. Thanks!
|
Since this is a little bit too long for a comment here it goes...
There are two things that got me intrigued:
First of all, /dev/sda and /dev/sdb are two different physical drives, otherwise we would be talking about /dev/sda1 and /dev/sda2. So if we are talking about different physical drives, their performance may vary.
Second, in case this info is wrong and we have two partitions on the same drive (/dev/sda1 and /dev/sda2), what is the physical drive size? On drives greater than 2 TB, partitions must be properly aligned, otherwise you will have performance issues. Could it be that one of your partitions is aligned while the other one is not? Have you tried testing the partitions' speed by other means apart from your program? Check this to test drive speeds.
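One quick way to compare raw sequential write speed per partition is dd with a synced write; this is a rough probe, not a rigorous benchmark. The path below is a placeholder (point it at a file on a mount of each partition in turn):

```shell
# Write 32 MiB of zeroes and flush to disk before dd reports its timing;
# /tmp is a placeholder target -- use a file on the partition under test.
target=/tmp/speedtest.bin
dd if=/dev/zero of="$target" bs=1M count=32 conv=fdatasync 2>&1 | tail -n1
size=$(stat -c %s "$target")
rm -f "$target"
```

Repeating this on each filesystem gives directly comparable MB/s figures from dd's summary line.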
| writing to ext4, 8GB partition vs 512MB partition |
1,625,998,088,000 |
This question is following Unable to mount /home/ partition after reinstalling grub after reinstalling windows 7 where the diagnostic was that installing windows 7 deleted my /home partition, lovingly called /dev/sda3.
Since almost nothing have been done with this computer since the incident, we can expect that the content of the partition is still intact and that it is only unusable for the moment.
The mission is to try to rescue the files that were inside this partition by restoring it to its original ext4 format.
Does anyone know how to proceed?
|
Right off the bat make a dd disk image of the drive, and work with that instead of the drive itself. That lets you experiment.
dd if=/dev/sda3 bs=1M > sda3.img
Beyond that I'm not sure. I'd hit Google. Might look at it later.
Edit: http://www.cgsecurity.org/wiki/TestDisk looks promising.
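As a toy illustration of the imaging step (using an ordinary file in place of /dev/sda3, since the principle is identical): dd produces a byte-for-byte copy you can then experiment on safely.

```shell
# Stand-in "partition": any file works for demonstrating the copy.
printf 'pretend partition contents' > /tmp/fake-sda3
dd if=/tmp/fake-sda3 of=/tmp/sda3.img bs=1M 2>/dev/null
cmp -s /tmp/fake-sda3 /tmp/sda3.img && echo "image is byte-identical"
```

Tools like TestDisk can then be pointed at sda3.img instead of the real device.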
| how to restore a logical partition to its original ext4 format |
1,625,998,088,000 |
Let's assume I was very unlucky and ran out of inodes in my ext4 filesystem, but am left with enough free space.
Inode usage is at 100%, but 50% of the disk space is free.
How can I resolve it?
|
One option is to recreate your filesystem specifying bytes-to-inode ratio with -i option.
Backup all of your data to another disk.
List your filesystems and find the one you want to modify:
$ df -h
assuming that filesystem is /dev/sdX and is mounted on /mnt/mountpoint.
Unmount that filesystem:
$ umount /mnt/mountpoint
Create that filesystem using mkfs.ext4 command specifying -i byte-to-inode ratio:
$ mkfs.ext4 -i 4096 /dev/sdX
This command will create ext4 filesystem with 4 KB per inode ratio (which will create four times more inodes than the default value - 16 KB per inode).
Mount that filesystem:
$ mount /dev/sdX /mnt/mountpoint
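The mismatch described in the question can be confirmed before rebuilding anything, since df reports block usage and inode usage separately ("/" below is a placeholder for the affected mount point):

```shell
# Block usage (Use%) vs. inode usage (IUse%) for the same filesystem.
df -h /
df -i /
```

If IUse% is at 100% while Use% is well below it, recreating with a smaller bytes-per-inode ratio as above is the right fix.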
| Inode limit reached with free space available |
1,625,998,088,000 |
I am looking for a method to read/write an ext4 partition from Windows 10. Both partitions are on the same physical hard drive on a dual-boot system, which means that NFS wouldn't be sufficient. Are there open tools to achieve that?
Cheers, Jens
|
I don't know of any FOSS drivers that work with recent versions of Windows 10/11.
There are claims that ext2fsd still works, but it stopped working reliably for me several years ago and I wouldn't want to trust it with my data.
There are commercial offerings such as the one from Paragon. I've not used this at all.
Finally, some people say that you can boot WSL and share a mounted filesystem back to Windows. I've not tried this, but it seems hopeful.
| Need full access to ext4 file system from Windows 10 |
1,625,998,088,000 |
I have an external drive with an ext4 partition /dev/sda1 I use for my local borg backups.
It is simply plugged in via usb port, and, mounted with an fstab generated systemd automount entry. I ran a backup yesterday in the evening without any errors, and this morning, I plugged it in, it was not recognized anymore. The drive would show up with lsblk, but no partition under it.
I ran sudo fsck -R -C -V -t ext4 /dev/sda1 and got the following output:
fsck from util-linux 2.39.2
[/usr/bin/fsck.ext4 (1) -- /dev/sda1] fsck.ext4 -C0 /dev/sda1
e2fsck 1.47.0 (5-Feb-2023)
fsck.ext4: Attempt to read block from filesystem resulted in short read while trying to open /dev/sda1
Could this be a zero-length partition?
/dev/sda1: status 8, rss 3232, real 0.002321, user 0.001784, sys 0.000000
I have no idea how to interpret that. I only can see the exit code status 8 the man page describes as an 'operational error'.
BEGIN EDIT
Output for sudo parted /dev/sda print
Error: Invalid partition table on /dev/sda -- wrong signature 0.
Ignore/Cancel? I
Model: SABRENT (scsi)
Disk /dev/sda: 1000GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 8225kB 1000GB 1000GB extended lba
Output of sudo dmesg right after plugging the drive in
[16265.871467] usb 2-6.4: new SuperSpeed USB device number 15 using xhci_hcd
[16265.889474] usb 2-6.4: New USB device found, idVendor=152d, idProduct=1561, bcdDevice= 2.04
[16265.889486] usb 2-6.4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[16265.889491] usb 2-6.4: Product: SABRENT
[16265.889495] usb 2-6.4: Manufacturer: SABRENT
[16265.889499] usb 2-6.4: SerialNumber: DB9876543214E
[16265.899660] scsi host4: uas
[16265.900160] scsi 4:0:0:0: Direct-Access SABRENT 0204 PQ: 0 ANSI: 6
[16268.706521] sd 4:0:0:0: [sda] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
[16268.706530] sd 4:0:0:0: [sda] 4096-byte physical blocks
[16268.706759] sd 4:0:0:0: [sda] Write Protect is off
[16268.706768] sd 4:0:0:0: [sda] Mode Sense: 53 00 00 08
[16268.707113] sd 4:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[16268.707265] sd 4:0:0:0: [sda] Preferred minimum I/O size 4096 bytes
[16268.707270] sd 4:0:0:0: [sda] Optimal transfer size 33553920 bytes not a multiple of preferred minimum block size (4096 bytes)
[16268.724287] sda: sda1 < >
[16268.724396] sd 4:0:0:0: [sda] Attached SCSI disk
[16296.811964] usb 2-6.3: reset SuperSpeed USB device number 14 using xhci_hcd
[16340.865861] sda: sda1 < >
Following telecoM's advice, I ran sudo losetup --sector-size 4096 -P -f /dev/sdx. I now have a loop1p1 device/partition.
❯ sudo parted /dev/loop1p1 print
Error: /dev/loop1p1: unrecognised disk label
Model: Unknown (unknown)
Disk /dev/loop1p1: 4096B
Sector size (logical/physical): 4096B/4096B
Partition Table: unknown
Disk Flags:
❯ sudo fsck.ext4 -f /dev/loop1p1
e2fsck 1.47.0 (5-Feb-2023)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/loop1p1
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
END EDIT
Should I give up on trying to recover this partition? (I have redundant backups, so it is not a catastrophe for me; I just want to learn in anticipation of the day it might be one.)
Thanks in advance for your help.
|
My first assumption would be a faulty case, particularly as you've got pluggable USB involved. Check that it's plugged in properly (both ends of the cable) and there's sufficient power.
I would also check the partition table, which you added to your question. Unfortunately it too shows a faulty read from the device, which is why I suspect the hardware.
Sadly, there are lots of "it doesn't work" posts with the particular vendor id (0x152d) and the product id (0x1561) that you showed. As an example I searched Google for "linux sabrent 152d 1561 usb". You might be better with a different case (I've found no issues with my RSHTECH 3.5in SATA case, but as they're all built to low budgets my experience suggests it's often a matter of potluck.)
| Recovering an ext4 partition |
1,625,998,088,000 |
Preface (my 1st attempt ended badly): Fstab adding data=journal crashed my Linux' ext4 upon boot, how to fix?
I can't find any reliable step-by-step instructions on how to enable data=journal ext4 filesystem mode. (It is my root file system.)
Can anyone help? Thank you!
OS: Linux Mint 21.1 Cinnamon
Here is the tune2fs dump:
$ sudo tune2fs -l /dev/nvme0n1p2
[sudo] password for vlastimil:
tune2fs 1.46.5 (30-Dec-2021)
Filesystem volume name: <none>
Last mounted on: /
Filesystem UUID: f1fc7345-be7a-4c6b-9559-fc6e2d445bfa
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 122093568
Block count: 488354304
Reserved block count: 20068825
Free blocks: 387437462
Free inodes: 121112327
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Reserved GDT blocks: 817
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Sat Jun 16 11:26:24 2018
Last mount time: Sun Jul 2 17:28:19 2023
Last write time: Sun Jul 2 17:28:11 2023
Mount count: 1
Maximum mount count: 1
Last checked: Sun Jul 2 17:28:11 2023
Check interval: 1 (0:00:01)
Next check after: Sun Jul 2 17:28:12 2023
Lifetime writes: 39 TB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
First orphan inode: 132249
Default directory hash: half_md4
Directory Hash Seed: 48360d76-0cfb-4aed-892e-a8f3a30dd794
Journal backup: inode blocks
Checksum type: crc32c
Checksum: 0xe1a6cb12
|
Since this is your root filesystem, adding the mount option in /etc/fstab would pose a bit of a chicken-vs-egg problem: the system would need to know the mount option before starting to mount the root filesystem, but the /etc/fstab file cannot be read until the root filesystem is already mounted.
That's why there is a separate way for specifying mount options for your root filesystem: the rootflags= kernel boot option.
Within GRUB boot menu, you can press E to edit the selected boot entry (non-persistently, for the current boot only!), find the line that starts with the linux or linuxefi keyword, and add rootflags=data=journal to the end of that line. Then follow the on-screen instructions to boot with the modified entry.
If this results in a successful boot, you can add the boot option to /etc/default/grub file (to the GRUB_CMDLINE_LINUX variable) and then run sudo update-grub to make it persistent.
If the initial boot attempt with the rootflags=data=journal fails, you can simply boot again to return to previous state, as the changes made in GRUB boot menu will not be stored on disk.
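For reference, the persistent change might look like this hypothetical excerpt of /etc/default/grub (preserve whatever options are already in the variable on your system):

```
GRUB_CMDLINE_LINUX="rootflags=data=journal"
```

followed by sudo update-grub so the regenerated boot entries pick it up.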
| How to enable data=journal ext4 fs mode? |
1,625,998,088,000 |
I have a log file on a CentOS system that is taking up 700MB of space (seen using ls), but when I run the df -h command, it shows that only 200MB of space is being used on the file system (ext4).
What could be causing this discrepancy?
Is it possible for a file to take up more space than is being reported by df, and if so, how can I tell which files are not using space?
edit:
I clicked too fast; the other post doesn't answer my question.
Here is the problem in a simplified form:
# ls -lh /mnt
total 29M
-rw-r--r-- 1 apache apache 678M Jan 6 10:01 Somelog.log
-rw-r--r-- 1 apache apache 1.1M Jan 1 03:20 Somelog.log-20230101.gz
-rw-r--r-- 1 apache apache 1.1M Jan 2 03:23 Somelog.log-20230102.gz
....etc....
# du -sh /mnt
29M /mnt
I want some information about the file space that's not counted in the total. (What's the term used when a file is still in memory, if that's the case?)
|
ls -l shows the apparent size of the file, i.e. how much data can be read from the file. du shows the amount of space the file actually occupies on disk.
In your case, the log file is sparse: it contains close to 27MiB of actual data, and around 650MiB of blocks which are all zeroes. The way the file was written results in the latter blocks taking up no room on disk, so they aren’t counted by du. The way this can happen is as follows:
a process writes to the log file, with 650MiB of real data;
the log file is rotated and cleared;
the initial process continues writing to the same log file, at the same offset where it finished writing before the log file was rotated.
The last step causes the file to be extended to the appropriate size, with no data, before the new data is appended.
The fix for this is to force the writing process to close and re-open the log file after it’s rotated, either by restarting the daemon, or by signalling it to re-open its log files (if it supports such a mechanism).
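The effect is easy to reproduce: seeking past the end of a file without writing data creates a sparse file whose apparent size far exceeds its allocated size, assuming the filesystem supports sparse files (ext4 does).

```shell
# Create a sparse file with a 1 MiB apparent size and no written data.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1 count=0 seek=1M 2>/dev/null
apparent=$(stat -c %s "$f")                 # what ls -l shows
allocated=$(( $(stat -c %b "$f") * 512 ))   # what du counts
echo "apparent=$apparent allocated=$allocated"
rm -f "$f"
```

This mirrors the log-rotation scenario above: the writer's file offset plays the role of the seek.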
| Discrepancy between size of log file and space reported by df on CentOS 6.6 |
1,625,998,088,000 |
I'm using RHEL 8.7.
I've added a new hard drive, nvme0n2, to my Linux system and created partitions successfully.
the output of lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
nvme0n2
├─nvme0n2p1
│ xfs 5d966f3d-7aca-4f06-bf74-aa32d97aba76
└─nvme0n2p2
ext4 56f6e1d8-58f3-47c7-840b-c1eebc24c3f7
But when I try to mount that hard disk sudo mount /dev/nvme0n2 /mnt/newHardDrive/ , it says
mount: /mnt/newHardDrive: wrong fs type, bad option, bad superblock on /dev/nvme0n2, missing codepage or helper program, or other error.
when i tried checking in /var/log/messages, it shows:
kernel: XFS (nvme0n2): Invalid superblock magic number
I also tried replacing the superblock using the backup superblocks with the command sudo fsck -b 32768 /dev/nvme0n2
But then I get this error:
fsck from util-linux 2.32.1
e2fsck 1.45.6 (20-Mar-2020)
fsck.ext2: Bad magic number in super-block while trying to open /dev/nvme0n2
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
Found a gpt partition table in /dev/nvme0n2
Please Help.
|
You can’t mount the device as a whole, you need to mount individual partitions:
sudo mount /dev/nvme0n2p1 /mnt/newHardDrive/
| Bad magic number while trying to mount a new hard disk |
1,625,998,088,000 |
I work on an ext4 filesystem. I have doubts about the accuracy of a directory entry regarding the description of a file that I have copied from an NTFS filesystem and that might have spanned some bad sectors (but I am not sure). I now believe the file might have been truncated when copied from its source but that the directory entry in the ext4 filesystem does not reflect the now-truncated file size but rather kept the information of the file size from the NTFS filesystem table. I don't know if such a situation is possible, but I want to be sure the file is not truncated (unfortunately the file is in a proprietary format and I cannot just open the file to check it).
I ran a dd command on the file toward /dev/null and it seemed to have "copied" as much as the original file size. However, I am now wondering whether the dd command used the metadata about the file size from the inode table, which would defeat the goal.
Can it be that after copying a file the inode table doesn't reflect the real size of what was actually copied? (I think I just did a click-and-drag in a file explorer)?
Is using the dd command a good option?
Are there metadata in the ext4 filesystem that could be used to independently check the size of the file and thus accuracy of the information in the inode table (I think about data integrity fields)?
|
So first of all, the untrustworthy filesystem involved here seems to be the originating, not the target file system. If the origin read nonsense, that nonsense gets "correctly" written to the target file, and there's nothing you could do about afterwards – for all that there is in information, that's how the original file was.
So, no: from the perspective of your use case, we can rule out that data corruption of any form happened on the writing end. If anything, the data was read in already-corrupted form.
Then:
kept the information of the file size from the NTFS filesystem table.
That's not how copying between file systems works. You make the target file, you write over the contents of the source file (either through reading the source and writing to the target file, or through the copy_file_range system call), but the metadata is kept and structured by the target file system itself, which knows nothing about the original file. So, no, that does not happen.
Can it be that after copying a file the inode table doesn't reflect the real size of what was actually copied? (I think I just did a click-and-drag in a file explorer)?
No, or at least it's very, very unlikely. In ext4, the metadata (so, everything but the actual data) is actually journaled. In other words, metadata is either changed completely, or not at all.
For ext4, the default mode of how data is journaled relative to metadata is data=ordered (see man ext4); which means that data is written to the file system completely before the metadata gets updated. So, the one thing that could happen if your system, for example, loses power, is that the data write has happened, but the metadata has not. That would have the effect of the file size being less than what was copied, not more.
So, because the data is written before the metadata, and the metadata is correct, we can be sure that the data is correct. (again, garbage in, garbage out would still apply if the data on the source was corrupted)
Is using the dd command a good option?
no, because any program will get exactly the amount of data that the file system gives it. If the file system was corrupted and incorrectly thinking the file was longer, it would give out some data that it incorrectly attributed, so that your dd wouldn't be any wiser. Not because it had any metadata access, but because the very error you want to catch with it would mean it would look like the error was not there, for all that dd cares about.
Are there metadata in the ext4 filesystem that could be used to independently check the size of the file and thus accuracy of the information in the inode table (I think about data integrity fields)?
No, unless you keep a second copy of the filesystem metadata, which is exactly the data whose correctness you are questioning.
Ext4 is, as mentioned above, a journaling file system, which guarantees that metadata writes are complete or did not happen.
| Is there a way to be sure a file is not truncated compared to the file size stored in the inode table? (Does a dd command do?) |
1,625,998,088,000 |
We have a cluster of flash drives (8TB) mounted on /data as per:
/dev/mapper/vg.data-lv.data on /data type ext4 (rw,relatime)
There are a couple of directories inside /data with one being tmp:
ls -lst /data/
total 1036468
...
1036360 drw-r--r-- 1 secadmin sudo 1061183488 Nov 8 13:10 tmp
...
For some reason this folder /data/tmp seems to be malfunctioning: any attempt to modify it, delete it, list its contents, etc. leads to the respective command idling/blocking forever, never returning any results:
# ls /data/tmp
^C
#
# rm -rf /data/tmp
^C
#
I already ran fsck.ext4 -fvy to check the underlying file system but everything seems to be fine.
How can I get control back over this directory and what is the underlying issue that I lost control over it?
|
As @meuh wrote, this looks like a very large directory, and may have thousands or millions of files and/or subdirectories in it. The GNU ls and rm tools are not very useful for dealing with such large directories, because they try to sort or otherwise process the full output.
You could try "find /data/tmp -print" to generate a list of filenames, or "find /data/tmp -ls" to get a long listing. This should immediately begin printing files, unless the directory has a very large number of empty blocks. In that case, you could also try "e2fsck -fD <dev>" to have it shrink the directory, but that may also take a long time.
It would also be useful to determine what was creating so many files in /data/tmp so that these files are either cleaned up when no longer needed, and/or put into a proper directory hierarchy instead of a single large directory.
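The difference in behaviour is easy to see on a directory with many entries: find streams names as it reads them, while ls stalls until it has read (and sorted) everything. A small-scale sketch with a temporary directory:

```shell
# Build a directory with 1000 entries, then stream and count its contents.
d=$(mktemp -d)
i=0
while [ "$i" -lt 1000 ]; do : > "$d/f$i"; i=$((i + 1)); done
n=$(find "$d" -type f | wc -l)
echo "$n entries"
rm -rf "$d"
```

On a directory with millions of entries, the find pipeline keeps producing output while plain ls appears to hang.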
| RHEL8: Cannot read / modify / delete directory |
1,625,998,088,000 |
Please consider the prior discussion as background to this new question.
I have modified my script and applied the same filesystem options to my USB drive's ext4 partitions using tune2fs, and mount options specified in the fstab.
Those options are all the same as for the previous discussion. I have applied those changes and performed a reboot, but the mount command is not reporting what I would have expected, namely that it would show mount options similar to those reported for the internal hard drive partitions. What is being reported is the following:
/dev/sdc3 on /site/DB005_F1 type ext4 (rw,relatime)
/dev/sdc4 on /site/DB005_F2 type ext4 (rw,relatime)
/dev/sdc5 on /site/DB005_F3 type ext4 (rw,relatime)
/dev/sdc6 on /site/DB005_F4 type ext4 (rw,relatime)
/dev/sdc7 on /site/DB005_F5 type ext4 (rw,relatime)
/dev/sdc8 on /site/DB005_F6 type ext4 (rw,relatime)
/dev/sdc9 on /site/DB005_F7 type ext4 (rw,relatime)
/dev/sdc10 on /site/DB005_F8 type ext4 (rw,relatime)
/dev/sdc11 on /site/DB006_F1 type ext4 (rw,relatime)
/dev/sdc12 on /site/DB006_F2 type ext4 (rw,relatime)
/dev/sdc13 on /site/DB006_F3 type ext4 (rw,relatime)
/dev/sdc14 on /site/DB006_F4 type ext4 (rw,relatime)
/dev/sdc15 on /site/DB006_F5 type ext4 (rw,relatime)
/dev/sdc16 on /site/DB006_F6 type ext4 (rw,relatime)
/dev/sdc17 on /site/DB006_F7 type ext4 (rw,relatime)
/dev/sdc18 on /site/DB006_F8 type ext4 (rw,relatime)
These are all reporting the same, but only reporting "rw,relatime", when I expected much more.
The full dumpe2fs report for the first USB partition (same as for all others) is as follows:
root@OasisMega1:/DB001_F2/Oasis/bin# more tuneFS.previous.DB005_F1.20220907-210437.dumpe2fs
dumpe2fs 1.45.5 (07-Jan-2020)
Filesystem volume name: DB005_F1
Last mounted on: <not available>
Filesystem UUID: 11c8fbcc-c1e1-424d-9ffe-ad0ccf480128
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_fi
le dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: journal_data user_xattr acl block_validity nodelalloc
Filesystem state: clean
Errors behavior: Remount read-only
Filesystem OS type: Linux
Inode count: 6553600
Block count: 26214400
Reserved block count: 1310720
Free blocks: 25656747
Free inodes: 6553589
First block: 0
Block size: 4096
Fragment size: 4096
Reserved GDT blocks: 1017
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
Filesystem created: Sat Nov 7 09:57:44 2020
Last mount time: Wed Sep 7 18:18:32 2022
Last write time: Wed Sep 7 20:55:33 2022
Mount count: 211
Maximum mount count: 10
Last checked: Sun Nov 22 13:50:57 2020
Check interval: 1209600 (2 weeks)
Next check after: Sun Dec 6 13:50:57 2020
Lifetime writes: 1607 MB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: 802d4ef6-daf4-4f68-b889-435a5ce467c3
Journal backup: inode blocks
Checksum type: crc32c
Checksum: 0x21a24a19
Journal features: journal_checksum_v3
Journal size: 512M
Journal length: 131072
Journal sequence: 0x000000bd
Journal start: 0
Journal checksum type: crc32c
Journal checksum: 0xf0a385eb
Does anyone know why this is happening?
Can something be done to have both internal and USB hard disk report same options?
In my /etc/default/grub file, I currently use the following definition involving a quirk:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash scsi_mod.use_blk_mq=1 usb-storage.quirks=1058:25ee:u ipv6.disable=1"
Do I need to specify another quirk for the journalling and mount options to take effect as desired? Or is this again an "everything is OK" situation, the same as for the other post?
Modified script:
#!/bin/sh
####################################################################################
###
### $Id: tuneFS.sh,v 1.3 2022/09/08 03:31:12 root Exp $
###
### Script to set consistent (local/site) preferences for filesystem treatment at boot-time or mounting
###
####################################################################################
TIMESTAMP=`date '+%Y%m%d-%H%M%S' `
BASE=`basename "$0" ".sh" `
###
### These variables will document hard-coded 'mount' preferences for filesystems
###
count=1
BOOT_MAX_INTERVAL="-c 20" ### max number of boots before fsck [20 boots]
TIME_MAX_INTERVAL="-i 2w" ### max calendar time between boots before fsck [2 weeks]
ERROR_ACTION="-e remount-ro" ### what to do if error encountered
#-m reserved-blocks-percentage
###
### This OPTIONS string should be updated manually to document
### the preferred and expected settings to be applied to ext4 filesystems
###
OPTIONS="-o journal_data,block_validity,nodelalloc"
ASSIGN=0
REPORT=0
VERB=0
SINGLE=0
USB=0
while [ $# -gt 0 ]
do
case ${1} in
--default ) REPORT=0 ; ASSIGN=0 ; shift ;;
--report ) REPORT=1 ; ASSIGN=0 ; shift ;;
--force ) REPORT=0 ; ASSIGN=1 ; shift ;;
--verbose ) VERB=1 ; shift ;;
--single ) SINGLE=1 ; shift ;;
--usb ) USB=1 ; shift ;;
* ) echo "\n\t Invalid parameter used on the command line. Valid options: [ --default | --report | --force | --single | --usb | --verbose ] \n Bye!\n" ; exit 1 ;;
esac
done
workHorse()
{
reference=`ls -t1 "${PREF}."*".dumpe2fs" 2>/dev/null | tail -1 `
if [ -n "${reference}" -a -s "${reference}" ]
then
if [ ! -f "${PREF}.dumpe2fs.REFERENCE" ]
then
mv -v ${reference} ${PREF}.dumpe2fs.REFERENCE
fi
fi
reference=`ls -t1 "${PREF}."*".verify" 2>/dev/null | tail -1 `
if [ -n "${reference}" -a -s "${reference}" ]
then
if [ ! -f "${PREF}.verify.REFERENCE" ]
then
mv -v ${reference} ${PREF}.verify.REFERENCE
fi
fi
BACKUP="${BASE}.previous.${PARTITION}.${TIMESTAMP}"
rm -f ${PREF}.*.tune2fs
rm -f ${PREF}.*.dumpe2fs
### reporting by 'tune2fs -l' is a subset of that from 'dumpe2fs -h'
if [ ${REPORT} -eq 1 ]
then
### No need to generate report from tune2fs for this mode.
( dumpe2fs -h ${DEVICE} 2>&1 ) | awk '{
if( NR == 1 ){ print $0 } ;
if( index($0,"revision") != 0 ){ print $0 } ;
if( index($0,"mount options") != 0 ){ print $0 } ;
if( index($0,"features") != 0 ){ print $0 } ;
if( index($0,"Filesystem flags") != 0 ){ print $0 } ;
if( index($0,"directory hash") != 0 ){ print $0 } ;
}'>${BACKUP}.dumpe2fs
echo "\n dumpe2fs REPORT [$PARTITION]:"
cat ${BACKUP}.dumpe2fs
else
### Generate report from tune2fs for this mode but only as sanity check.
tune2fs -l ${DEVICE} >${BACKUP}.tune2fs 2>&1
( dumpe2fs -h ${DEVICE} 2>&1 ) >${BACKUP}.dumpe2fs
if [ ${VERB} -eq 1 ] ; then
echo "\n tune2fs REPORT:"
cat ${BACKUP}.tune2fs
echo "\n dumpe2fs REPORT:"
cat ${BACKUP}.dumpe2fs
fi
if [ ${ASSIGN} -eq 1 ]
then
echo " COMMAND: tune2fs ${COUNTER_SET} ${BOOT_MAX_INTERVAL} ${TIME_MAX_INTERVAL} ${ERROR_ACTION} ${OPTIONS} ${DEVICE} ..."
tune2fs ${COUNTER_SET} ${BOOT_MAX_INTERVAL} ${TIME_MAX_INTERVAL} ${ERROR_ACTION} ${OPTIONS} ${DEVICE}
rm -f ${PREF}.*.verify
( dumpe2fs -h ${DEVICE} 2>&1 ) >${BACKUP}.verify
if [ ${VERB} -eq 1 ] ; then
echo "\n Changes:"
diff ${BACKUP}.dumpe2fs ${BACKUP}.verify
fi
else
if [ ${VERB} -eq 1 ] ; then
echo "\n Differences:"
diff ${BACKUP}.tune2fs ${BACKUP}.dumpe2fs
fi
rm -f ${BACKUP}.verify
fi
fi
}
workPartitions()
{
case ${PARTITION} in
1 ) case ${DISK_ID} in
1 ) DEVICE="/dev/sda3" ; OPTIONS="" ;;
5 ) DEVICE="/dev/sdc3" ;;
6 ) DEVICE="/dev/sdc11" ;;
esac ;;
2 ) case ${DISK_ID} in
1 ) DEVICE="/dev/sda7" ;;
5 ) DEVICE="/dev/sdc4" ;;
6 ) DEVICE="/dev/sdc12" ;;
esac ;;
3 ) case ${DISK_ID} in
1 ) DEVICE="/dev/sda8" ;;
5 ) DEVICE="/dev/sdc5" ;;
6 ) DEVICE="/dev/sdc13" ;;
esac ;;
4 ) case ${DISK_ID} in
1 ) DEVICE="/dev/sda9" ;;
5 ) DEVICE="/dev/sdc6" ;;
6 ) DEVICE="/dev/sdc14" ;;
esac ;;
5 ) case ${DISK_ID} in
1 ) DEVICE="/dev/sda12" ;;
5 ) DEVICE="/dev/sdc7" ;;
6 ) DEVICE="/dev/sdc15" ;;
esac ;;
6 ) case ${DISK_ID} in
1 ) DEVICE="/dev/sda13" ;;
5 ) DEVICE="/dev/sdc8" ;;
6 ) DEVICE="/dev/sdc16" ;;
esac ;;
7 ) case ${DISK_ID} in
1 ) DEVICE="/dev/sda14" ;;
5 ) DEVICE="/dev/sdc9" ;;
6 ) DEVICE="/dev/sdc17" ;;
esac ;;
8 ) case ${DISK_ID} in
1 ) DEVICE="/dev/sda4" ;;
5 ) DEVICE="/dev/sdc10" ;;
6 ) DEVICE="/dev/sdc18" ;;
esac ;;
esac
PARTITION="DB00${DISK_ID}_F${PARTITION}"
PREF="${BASE}.previous.${PARTITION}"
echo "\n\t\t PARTITION = ${PARTITION}"
echo "\t\t DEVICE = ${DEVICE}"
count=`expr ${count} + 1 `
COUNTER_SET="-C ${count}"
workHorse
}
workPartitionGroups()
{
if [ ${SINGLE} -eq 1 ]
then
for PARTITION in `echo ${ID_SET} `
do
echo "\n\t Actions only for DB00${DISK_ID}_F${PARTITION} ? [y|N] => \c" ; read sel
if [ -z "${sel}" ] ; then sel="N" ; fi
case ${sel} in
y* | Y* ) DOIT=1 ; break ;;
* ) DOIT=0 ;;
esac
done
if [ ${DOIT} -eq 1 ]
then
#echo "\t\t PARTITION ID == ${PARTITION} ..."
workPartitions
exit
fi
else
for PARTITION in `echo ${ID_SET} `
do
#echo "\t\t PARTITION ID == ${PARTITION} ..."
workPartitions
done
fi
}
if [ ${USB} -eq 1 ]
then
for DISK_ID in 5 6
do
echo "\n\n DISK ID == ${DISK_ID} ..."
ID_SET="1 2 3 4 5 6 7 8"
workPartitionGroups
done
else
DISK_ID="1"
echo "\n\n DISK ID == ${DISK_ID} ..."
ID_SET="2 3 4 5 6 7 8"
workPartitionGroups
fi
exit 0
|
Some ext4 filesystem options may not take effect if specified in /etc/fstab as they require changes to filesystem structures. Some of those can be simply applied with tune2fs while the filesystem is unmounted, but there are some options that may require running a full filesystem check after tune2fs to take effect properly.
As far as I know, there is no mechanism that would affect filesystem options based on whether the disk is connected by USB or not.
| EXT4 on USB - how to specify journalling behaviour to be same as for root disk partitions |
1,625,998,088,000 |
Yesterday, a message popped up in Debian, saying that my root partition was full (0 MB free) after I copied new software under /opt. So I moved the folder back to another partition to temporarily fix the issue.
I freed some space from /dev/nvme0n1p9 using a Debian installation USB, and now try to extend the root partition using this freed space.
The bios of my HP laptop does not have a "legacy" boot option, so I cannot use a bootable GParted USB stick to increase the size of the root partition.
I searched a bit and it appears that extending the root partition is tricky.
I would like to confirm a few things:
Does extending the root partition mean pushing partitions located after this one further on the disk, or can I use the unallocated space at the end of the disk and have a root partition split in two?
Can I just move these partitions around without consequences?
In my case, how would you sort this out, if it's even possible?
If it's not, am I doomed to reinstall the OS?
Can I bypass this limit by installing new applications outside of this root partition?
OS: GNU/Linux Debian 11 (bullseye)
Thank you.
Edit - Details of root partition usage
Following a comment from @oldfred, here are the biggest folders of the root partition.
The biggest usage is for texlive but I don't want to uninstall it, if possible.
|
Does extending the root partition mean pushing partitions located after this one further on the disk, or can I use the unallocated space at the end of the disk and have a root partition split in two?
Yes, it means exactly that. A partition must always be contiguous from the beginning to the end. A LVM logical volume could use multiple discontinuous pieces of disk, but converting an existing system to LVM is not exactly trivial.
Can I just move these partitions around without consequences?
Yes — provided your /etc/fstab is written to use partition UUIDs instead of device names, or gparted doesn't rearrange the entries in the partition table to match their new ordering on the disk.
In my case, how would you sort this out, if it's even possible?
(Exactly as you ended up doing, as your own answer appeared while I was writing this one.)
First, move all the partitions that are located "to the right" of the partition you wish to extend as far towards the right as you can.
After that, boot to the installed OS(s) to verify everything still works.
Then boot back to the external media to extend the root partition.
If it's not, am I doomed to reinstall the OS?
Not doomed at all. It just takes a bit of slow and careful work.
Can I bypass this limit by installing new applications outside of this root partition?
That's certainly one way to bypass it, but it might be difficult to achieve for programs installed through the OS's package manager. For third-party software, it might actually be easy.
Another possible way would be to locate some branch of the directory tree on the root filesystem that occupies a fairly large amount of space but is not essential for early boot processes, and move it to another filesystem, then create a symbolic link so that it will still be reachable using original pathnames. For example, you could easily move /usr/share/doc to a different filesystem:
mv /usr/share/doc /new/filesystem/mountpoint/
ln -s /new/filesystem/mountpoint/doc /usr/share/doc
But the more filesystems you have, the more you run the risk of not having free space in the filesystem where you need it. That's why it can be worthwhile to extend partitions if they are clearly too small for your requirements.
| How do I resize root partition with UEFI |
1,625,998,088,000 |
The commands I invoke are the following
Create image file
dd if=/dev/zero of=benj.luks bs=1k count=666000
Set up LUKS container
cryptsetup luksFormat benj.luks
Set up loop device and open the LUKS container
cryptsetup luksOpen benj.luks benjImage
Check that the loop device has been set up and mapped
lsblk
Output
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 650.4M 0 loop
└─benjImage 254:1 0 634.4M 0 crypt
Create file system ext4 on benjImage
sudo mkfs.ext4 -b 2048 -F -F /dev/mapper/benjImage
Command fails
mke2fs 1.46.5 (30-Dec-2021)
mkfs.ext4: Invalid argument while setting blocksize; too small for device
|
cat /sys/block/loop0/queue/physical_block_size
cat /sys/block/loop0/queue/logical_block_size
revealed that the loop device was set up with a 4096-byte sector size, on which no file system with a 2048-byte block size can be created.
Hence the solution is to set up the loop device manually and set the sector size to 2048 with the -b option, as in
sudo losetup -b 2048 -f benj.luks
before step 2, and then to apply the subsequent commands to /dev/loop0 (or whichever loop device is assigned) instead of the image file, i.e.
cryptsetup luksFormat /dev/loop0
cryptsetup luksOpen /dev/loop0 benjImage
sudo mkfs.ext4 -b 2048 /dev/mapper/benjImage
voila
| Why can mkfs.ext4 not create a 2048 block size file system on 650 MB image file? |
1,625,998,088,000 |
Can someone explain what is defined as the "order of the request" in buddy block allocation in the ext4 file system? I could not find a clear and definite answer. Is there detailed documentation (a paper or technical report) about this stuff? I read the comments in the commits but they are too short and technical. Thanks a lot :-)
|
Order, as in a power-of-two order of magnitude, refers to the size of the allocation: an order-n request asks the buddy allocator for 2^n contiguous blocks.
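To make that concrete, here is a quick sketch of how order maps to allocation size, assuming ext4's default 4 KiB block size:

```shell
# "Order" n in a buddy allocator means 2^n contiguous blocks.
# With 4 KiB blocks (the ext4 default), the sizes double with each order:
blocksize=4096
for order in 0 1 2 3 4; do
    echo "order $order = $(( 1 << order )) blocks = $(( (1 << order) * blocksize )) bytes"
done
```

So an order-0 request is a single 4 KiB block, order 3 is 32 KiB, and so on; the allocator satisfies a request by splitting a larger "buddy" in half repeatedly until it reaches the requested order.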
| Block allocation in ext4 file system |
1,625,998,088,000 |
This is something I imagine I might have to submit a patch or feature request for, but I'd like to know: is it possible to create a hard link to a file such that, when the hard link that was not the original file is edited, the file would be copied first before it was actually edited?
Which major filesystem would this apply to?
Thanks.
|
After you create a hard link to a file, there are just two links to one file. While you may remember which link was first and which was second, the filesystem doesn't.
So the most an editor can do is determine whether there is more than one link to a file. An editor may or may not preserve the link when it saves the new file.
What you may want is a filesystem that supports cp --reflink. That way you get a space efficient copy, but when you change the copy, your original file is not modified.
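You can see the first point for yourself: after ln, the two names are indistinguishable peers pointing at the same inode (a small demo in a temporary directory):

```shell
tmp=$(mktemp -d)
echo data > "$tmp/a"
ln "$tmp/a" "$tmp/b"          # hard link: a second name for the same inode
stat -c '%i %h' "$tmp/a"       # inode number and link count
stat -c '%i %h' "$tmp/b"       # identical output: nothing marks "a" as the original
rm -r "$tmp"
```

Both stat calls print the same inode number and a link count of 2 — the filesystem records only that the inode has two names, not which one came first.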
| How can I have it so, that when hardlinks which are not the original, are editted, that they would first be copied then editted? |
1,625,998,088,000 |
I am running
$ uname -a
Linux myhostname 4.14.15-041415-generic #201801231530 SMP Tue Jan 23 20:33:21 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
$ lsb_release -a
No LSB modules are available.
Distributor ID: Nitrux
Description: Nitrux 1.1.4
Release: 1.1.4
Codename: nxos
It has a single hard disk with a system ext4 partition and a swap partition. The hard disk can't complete neither the Smart short test, nor the long one.
SMART Self-test log structure revision number 1
Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error
# 1 Short offline Completed: read failure 90% 32232 11202419
# 2 Extended offline Completed: read failure 90% 32229 11202419
Maybe the disk should be replaced.
In the meantime, is it possible to simply instruct the filesystem to avoid the block corresponding to that LBA, so that no further read/write errors are generated from there? In fact, it seems to be an isolated error and the hard disk (except, of course, for that area) is still able to work.
The SMART parameters are weird, because there are pending sectors to be re-allocated, but there are also 0 reallocated sectors. Note that this hard disk is about 10 years old.
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 19
3 Spin_Up_Time 0x0027 140 139 021 Pre-fail Always - 3966
4 Start_Stop_Count 0x0032 098 098 000 Old_age Always - 2058
5 Reallocated_Sector_Ct 0x0033 200 200 140 Pre-fail Always - 0
7 Seek_Error_Rate 0x002e 100 253 000 Old_age Always - 0
9 Power_On_Hours 0x0032 056 056 000 Old_age Always - 32232
10 Spin_Retry_Count 0x0032 100 100 000 Old_age Always - 0
11 Calibration_Retry_Count 0x0032 100 100 000 Old_age Always - 0
12 Power_Cycle_Count 0x0032 098 098 000 Old_age Always - 2001
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age Always - 206
193 Load_Cycle_Count 0x0032 200 200 000 Old_age Always - 1851
194 Temperature_Celsius 0x0022 103 086 000 Old_age Always - 40
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0032 200 200 000 Old_age Always - 78
198 Offline_Uncorrectable 0x0030 200 200 000 Old_age Offline - 70
199 UDMA_CRC_Error_Count 0x0032 200 200 000 Old_age Always - 0
200 Multi_Zone_Error_Rate 0x0008 200 200 000 Old_age Offline - 89
In the linked page there is no chosen answer. I must keep the system up and I would like to avoid dd (and there is no clear example about how to use it in this case). Can I run fsck.ext2 -c on a mounted filesystem?
|
From the e2fsck man page (e2fsck is also installed under the names fsck.ext2, fsck.ext3 and fsck.ext4):
Note that in general it is not safe to run e2fsck on mounted filesystems. The only exception is if the -n option is specified, and -c, -l, or -L options are not specified. However, even if it is safe to do so, the results printed by e2fsck are not valid if the filesystem is mounted. If e2fsck asks whether or not you should check a filesystem which is mounted, the only correct answer is "no". Only experts who really know what they are doing should consider answering this question in any other way.
So the answer is "no, you cannot run fsck on a mounted ext2/3/4 filesystem in any mode that would make any changes to the filesystem at all".
At boot time, the root filesystem may be checked while it's mounted in read-only mode or the system is still running on initramfs. But in this situation, the system should be rebooted immediately afterwards if the fsck indicates it had to make any changes.
If a disk block has totally failed so that even repeated retries won't result in the disk being confident that the data has been read correctly, the disk cannot automatically reallocate that block until its contents are overwritten by the OS - because doing the reallocation without having the correct data is equivalent to silently corrupting the data (by replacing a block of data with zeroes). That is worse than a file that simply produces a read error, because the corrupt data may be used in further processing and silently cause other results to be corrupted until it is finally noticed.
A file that produces read errors is usually pretty straightforward to restore from backups, unless it is a critical system file and the system crashes or is unable to run the restore tool if that file is missing.
The fact that SMART indicates there are sectors pending to be re-allocated but no actual re-allocations might mean just that the failed sectors are occupied by system files that are normally only ever read and practically never written. If you can figure out which package those files belong to, you can instruct the package management system to reinstall that package; Nitrux seems to use .deb packages, so apt-get reinstall <package name> would be the command to run. This would cause the file to be rewritten, allowing the disk to complete the re-allocation.
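If you want to identify which file sits on the failing sector first, you can translate the SMART LBA into a filesystem block number and then ask debugfs who owns it. The partition start sector and block size below are hypothetical — read the real values from fdisk -l and tune2fs -l:

```shell
lba=11202419        # LBA_of_first_error from the SMART self-test log (512-byte sectors)
part_start=2048     # hypothetical start sector of the affected partition (see fdisk -l)
fs_blocksize=4096   # hypothetical ext4 block size (see tune2fs -l)
fs_block=$(( (lba - part_start) * 512 / fs_blocksize ))
echo "filesystem block: $fs_block"
# Then, read-only and as root (not run here):
#   debugfs -R "icheck $fs_block" /dev/sdXN   # block number -> inode
#   debugfs -R "ncheck <inode>" /dev/sdXN     # inode -> path
#   dpkg -S /the/path                          # path -> owning package
```

With the owning package known, apt-get reinstall rewrites the file and lets the disk complete the pending re-allocation.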
Unfortunately, some disk manufacturers have created disks with incomplete SMART implementations, so you can only really trust SMART if it's telling bad news; if it says things are OK but the operating system is reporting read/write errors, then something is bad regardless of what SMART says - and since HDDs are a wear item, in most cases it's the disk that is faulty.
I've worked in various roles in server administration for a living for more than 20 years now. Through all that time, our team's reaction on seeing a 10+ years old disk still in use would have been - and still is:
"Holy ****! If that disk spins down for any reason, there is practically no guarantee at all that it will ever restart again. Can we even get spare parts for old hardware like that with any reasonable price and response time? At the very least, we need to make a very realistic plan on what to do when (not if) that thing fails, preferably get the ball rolling right now on either replacement or virtualization of that old thing ASAP."
Granted, we deal with servers that are almost always running 24 hours a day, every day of every year, through their whole lifetime - and that might not be the case with your system.
But a 10 year old disk, if used anywhere near the "typical" way for the market segment it's designed for, is definitely well into the rising edge of the bathtub curve: its design lifetime has been exceeded and it's wearing out.
| Avoid damaged block in ext4 |
1,625,998,088,000 |
When copying and pasting commands with a trailing newline, the shell runs the command immediately without requiring the user to press Enter.
This is why I overwrote a large text file with a cp ./newfile ./oldfile command.
How can I restore the oldfile after I aborted the command?
The oldfile is on a hard drive encrypted with VeraCrypt which is mounted (an ext4 partition).
The file is not still in use.
I already tried sudo grep -i -a -B100 -A100 'text in oldfile' /dev/sdx1 > ./restored (replace sdx1 with what's displayed with lsblk -f) but it doesn't find anything. Should it work with this command? Is it possible at all?
|
The disk is encrypted so there's absolutely no point in looking on the disk for a plain text string. At best you need to search the mounted filesystem, as this is the decrypted layer, but any writes to it whatsoever are likely to overwrite your deleted data.
Look at the output of this command to identify the filesystem device to search (for example /dev/mapper/myhome):
df -h /path/to/oldfile | awk 'NR>1 {print $1}'
You can then attempt to retrieve what remains of the file data with instructions at Recovering accidentally deleted files
| How to recover an overwritten file from a mounted VeraCrypt encrypted disk? |
1,625,998,088,000 |
There seem to be many scenarios, I read quite a few, but I caould not find a match for my problem.
I have this gparted view on my system:
I had the sda6 swap partition sitting right behind sda5; I moved that swap space to another disk, then ran swapoff and deleted sda6, then extended sda5 to consume the free 8 GB... all good so far.
These two (sda5 and sda6) were in the same "extended" partition.
However, allocating the unallocated 256 GB does not work, as I cannot extend the sda3 partition.
My understanding is that the 0.25 version of gparted allows for online disk extending.
What options do I have?
Can this be done on a live system?
|
gparted and other partitioning software will be funny about extending logical partitions (those within the extended partition sda3) because the underlying extended partition would need to be extended first with the others still inside.
I suggest you clone your disk to be safe, boot a live image and try gparted from there. If it still won't work, there are options:
Possibly use a lower level partitioning tool (fdisk, sfdisk, parted) which may give you more control
annotate the exact start and end locations (and types) of each logical partition (sda5 in this case)
delete sda3 and sda5
recreate the extended partition to take up as much space as you like, making sure it starts exactly where sda3 started (the new extended partition will also be called sda3)
recreate sda5 making sure it starts exactly where it did before
make sure all partitions are of the same type as before
And all your data will be where you expect it to be. You might need to extend the file system under sda5 but that's the easy bit.
These primary and extended and logical partition constructs are part of the old DOS disk label that originally supported only 4 partitions. Modern GPT disk labels don't have such limitations and there's no primary or extended partitions or partitions within partitions.
| Extending an extended partition with following unallocated disk space |
1,590,335,475,000 |
I have an external backup hard drive that is encrypted using LUKS. As I was re-organising my backups, I copied the data to another encrypted drive and did a kind of "quick wipe" on the original drive by replacing the key in the key slot with random data.
The goal was to use the drive afterwards as a second backup, but at that moment I failed to find the time to properly clean up the drive and do the second copy.
Unfortunately, meanwhile, my second backup drive and computer were stolen. I'm left with the original backup drive, which theoretically contains the data, but behind a key that I don't know. However, I still know the original key, the one that was previously used in the keyslot, before replacement.
Is there a chance to get back this old keyslot? The drive is a standard magnetic 2.5" USB3 drive, not an SSD. So I don't know if it uses some kind of copy-on-write for such metadata or if some tools could find the data buried underneath the new keyslot?
Internal FS is EXT4 for what is worth.
|
The problem is the content of the key slot. In order to access the data you need the master key. The people having one of the slot keys may not be supposed to know the master key (because then you could not lock somebody out without reencrypting all the data).
Thus a new key is given (password or file) and run through a key-derivation function, and the resulting key is used to encrypt the master key; that encrypted copy is what the key slot stores. In other words: even when you know the password, you still need the slot data to recover the master key. Without the slot data, the password is completely useless.
Sorry.
If you had a dump of the LUKS header, then you could restore that and use the old password.
| LUKS: find a deleted key(slot) |
1,590,335,475,000 |
I want to change a partition to Ext3 or Ext4 and used fdisk to print the available partition types:
19 Linux swap 0657FD6D-A4AB-43C4-84E5-0933C84B4F4F
20 Linux filesystem 0FC63DAF-8483-4772-8E79-3D69D8477DE4
21 Linux server data 3B8F8425-20E0-4F3B-907F-1A25A76F98E8
22 Linux root (x86) 44479540-F297-41B2-9AF7-D131D5F0458A
23 Linux root (ARM) 69DAD710-2CE4-4E3C-B16C-21A1D49ABED3
24 Linux root (x86-64) 4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709
25 Linux root (ARM-64) B921B045-1DF0-41C3-AF44-4C6F280D3FAE
26 Linux root (IA-64) 993D8D3D-F80E-4225-855A-9DAF8ED7EA97
27 Linux reserved 8DA63339-0007-60C0-C436-083AC8230908
28 Linux home 933AC7E1-2EB4-4F13-B844-0E14E2AEF915
29 Linux RAID A19D880F-05FC-4D3B-A006-743F0F84911E
30 Linux extended boot BC13C2FF-59E6-4262-A352-B275FD6F7172
31 Linux LVM E6D6D379-F507-44C2-A23C-238F2A3DF928
So, I guess i will go with 20 or maybe 28...
EDIT: Full list of available partition types I can choose from:
1 EFI System C12A7328-F81F-11D2-BA4B-00A0C93EC93B
2 MBR partition scheme 024DEE41-33E7-11D3-9D69-0008C781F39F
3 Intel Fast Flash D3BFE2DE-3DAF-11DF-BA40-E3A556D89593
4 BIOS boot 21686148-6449-6E6F-744E-656564454649
5 Sony boot partition F4019732-066E-4E12-8273-346C5641494F
6 Lenovo boot partition BFBFAFE7-A34F-448A-9A5B-6213EB736C22
7 PowerPC PReP boot 9E1A2D38-C612-4316-AA26-8B49521E5A8B
8 ONIE boot 7412F7D5-A156-4B13-81DC-867174929325
9 ONIE config D4E6E2CD-4469-46F3-B5CB-1BFF57AFC149
10 Microsoft reserved E3C9E316-0B5C-4DB8-817D-F92DF00215AE
11 Microsoft basic data EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
12 Microsoft LDM metadata 5808C8AA-7E8F-42E0-85D2-E1E90434CFB3
13 Microsoft LDM data AF9B60A0-1431-4F62-BC68-3311714A69AD
14 Windows recovery environment DE94BBA4-06D1-4D40-A16A-BFD50179D6AC
15 IBM General Parallel Fs 37AFFC90-EF7D-4E96-91C3-2D7AE055B174
16 Microsoft Storage Spaces E75CAF8F-F680-4CEE-AFA3-B001E56EFC2D
17 HP-UX data 75894C1E-3AEB-11D3-B7C1-7B03A0000000
18 HP-UX service E2A1E728-32E3-11D6-A682-7B03A0000000
19 Linux swap 0657FD6D-A4AB-43C4-84E5-0933C84B4F4F
20 Linux filesystem 0FC63DAF-8483-4772-8E79-3D69D8477DE4
21 Linux server data 3B8F8425-20E0-4F3B-907F-1A25A76F98E8
22 Linux root (x86) 44479540-F297-41B2-9AF7-D131D5F0458A
23 Linux root (ARM) 69DAD710-2CE4-4E3C-B16C-21A1D49ABED3
24 Linux root (x86-64) 4F68BCE3-E8CD-4DB1-96E7-FBCAF984B709
25 Linux root (ARM-64) B921B045-1DF0-41C3-AF44-4C6F280D3FAE
26 Linux root (IA-64) 993D8D3D-F80E-4225-855A-9DAF8ED7EA97
27 Linux reserved 8DA63339-0007-60C0-C436-083AC8230908
28 Linux home 933AC7E1-2EB4-4F13-B844-0E14E2AEF915
29 Linux RAID A19D880F-05FC-4D3B-A006-743F0F84911E
30 Linux extended boot BC13C2FF-59E6-4262-A352-B275FD6F7172
31 Linux LVM E6D6D379-F507-44C2-A23C-238F2A3DF928
32 FreeBSD data 516E7CB4-6ECF-11D6-8FF8-00022D09712B
33 FreeBSD boot 83BD6B9D-7F41-11DC-BE0B-001560B84F0F
34 FreeBSD swap 516E7CB5-6ECF-11D6-8FF8-00022D09712B
35 FreeBSD UFS 516E7CB6-6ECF-11D6-8FF8-00022D09712B
36 FreeBSD ZFS 516E7CBA-6ECF-11D6-8FF8-00022D09712B
37 FreeBSD Vinum 516E7CB8-6ECF-11D6-8FF8-00022D09712B
38 Apple HFS/HFS+ 48465300-0000-11AA-AA11-00306543ECAC
39 Apple UFS 55465300-0000-11AA-AA11-00306543ECAC
40 Apple RAID 52414944-0000-11AA-AA11-00306543ECAC
41 Apple RAID offline 52414944-5F4F-11AA-AA11-00306543ECAC
42 Apple boot 426F6F74-0000-11AA-AA11-00306543ECAC
43 Apple label 4C616265-6C00-11AA-AA11-00306543ECAC
44 Apple TV recovery 5265636F-7665-11AA-AA11-00306543ECAC
45 Apple Core storage 53746F72-6167-11AA-AA11-00306543ECAC
46 Solaris boot 6A82CB45-1DD2-11B2-99A6-080020736631
47 Solaris root 6A85CF4D-1DD2-11B2-99A6-080020736631
48 Solaris /usr & Apple ZFS 6A898CC3-1DD2-11B2-99A6-080020736631
49 Solaris swap 6A87C46F-1DD2-11B2-99A6-080020736631
50 Solaris backup 6A8B642B-1DD2-11B2-99A6-080020736631
51 Solaris /var 6A8EF2E9-1DD2-11B2-99A6-080020736631
52 Solaris /home 6A90BA39-1DD2-11B2-99A6-080020736631
53 Solaris alternate sector 6A9283A5-1DD2-11B2-99A6-080020736631
54 Solaris reserved 1 6A945A3B-1DD2-11B2-99A6-080020736631
55 Solaris reserved 2 6A9630D1-1DD2-11B2-99A6-080020736631
56 Solaris reserved 3 6A980767-1DD2-11B2-99A6-080020736631
57 Solaris reserved 4 6A96237F-1DD2-11B2-99A6-080020736631
58 Solaris reserved 5 6A8D2AC7-1DD2-11B2-99A6-080020736631
59 NetBSD swap 49F48D32-B10E-11DC-B99B-0019D1879648
60 NetBSD FFS 49F48D5A-B10E-11DC-B99B-0019D1879648
61 NetBSD LFS 49F48D82-B10E-11DC-B99B-0019D1879648
62 NetBSD concatenated 2DB519C4-B10E-11DC-B99B-0019D1879648
63 NetBSD encrypted 2DB519EC-B10E-11DC-B99B-0019D1879648
64 NetBSD RAID 49F48DAA-B10E-11DC-B99B-0019D1879648
65 ChromeOS kernel FE3A2A5D-4F32-41A7-B725-ACCC3285A309
66 ChromeOS root fs 3CB8E202-3B7E-47DD-8A3C-7FF2A13CFCEC
67 ChromeOS reserved 2E0A753D-9E48-43B0-8337-B15192CB1B5E
68 MidnightBSD data 85D5E45A-237C-11E1-B4B3-E89A8F7FC3A7
69 MidnightBSD boot 85D5E45E-237C-11E1-B4B3-E89A8F7FC3A7
70 MidnightBSD swap 85D5E45B-237C-11E1-B4B3-E89A8F7FC3A7
71 MidnightBSD UFS 0394EF8B-237E-11E1-B4B3-E89A8F7FC3A7
72 MidnightBSD ZFS 85D5E45D-237C-11E1-B4B3-E89A8F7FC3A7
73 MidnightBSD Vinum 85D5E45C-237C-11E1-B4B3-E89A8F7FC3A7
74 Ceph Journal 45B0969E-9B03-4F30-B4C6-B4B80CEFF106
75 Ceph Encrypted Journal 45B0969E-9B03-4F30-B4C6-5EC00CEFF106
76 Ceph OSD 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D
77 Ceph crypt OSD 4FBD7E29-9D25-41B8-AFD0-5EC00CEFF05D
78 Ceph disk in creation 89C57F98-2FE5-4DC0-89C1-F3AD0CEFF2BE
79 Ceph crypt disk in creation 89C57F98-2FE5-4DC0-89C1-5EC00CEFF2BE
80 VMware VMFS AA31E02A-400F-11DB-9590-000C2911D1B8
81 VMware Diagnostic 9D275380-40AD-11DB-BF97-000C2911D1B8
82 VMware Virtual SAN 381CFCCC-7288-11E0-92EE-000C2911D0B2
83 VMware Virsto 77719A0C-A4A0-11E3-A47E-000C29745A24
84 VMware Reserved 9198EFFC-31C0-11DB-8F78-000C2911D1B8
85 OpenBSD data 824CC7A0-36A8-11E3-890A-952519AD3F61
86 QNX6 file system CEF5A9AD-73BC-4601-89F3-CDEEEEE321A1
87 Plan 9 partition C91818F9-8025-47AF-89D2-F030D7000C2C
|
Text mode
Let fdisk do its job on the external drive (if you need to create one or more partitions). Use the default partition type (don't worry about it).
Then use mkfs.ext4 and create an ext4 file system.
Graphical mode
Use gparted and let it create partition(s) and file system(s).
| Which Linux partition type to chose for external USB HDD with EXT3 or EXT4? |
1,590,335,475,000 |
I cannot figure out why it's not allowing me to put a filesystem on this logical volume. Does anyone have a solution or troubleshooting steps for this?
root@Home-Pi:~# vgs
VG #PV #LV #SN Attr VSize VFree
VG_Remote_Storage 2 1 0 wz--n- 18.19t 0
root@Home-Pi:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
LV_Remote_Storage VG_Remote_Storage -wi-a----- 18.19t
root@Home-Pi:~# wipefs -a /dev/mapper/VG_Remote_Storage-LV_Remote_Storage
root@Home-Pi:~# mkfs.ext4 /dev/mapper/VG_Remote_Storage-LV_Remote_Storage
mke2fs 1.44.5 (15-Dec-2018)
Creating filesystem with 4883200000 4k blocks and 305201152 inodes
Filesystem UUID: bbe76c30-9d69-4528-8c20-711801aca7de
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848, 512000000, 550731776, 644972544, 1934917632,
2560000000, 3855122432
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): mkfs.ext4: Attempt to read block from filesystem resulted in short read
while trying to create journal
root@Home-Pi:~# fsck.ext4 -F /dev/VG_Remote_Storage/LV_Remote_Storage
e2fsck 1.44.5 (15-Dec-2018)
ext2fs_open2: Bad magic number in super-block
fsck.ext4: Superblock invalid, trying backup blocks...
fsck.ext4: Bad magic number in super-block while trying to open /dev/VG_Remote_Storage/LV_Remote_Storage
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem. If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
e2fsck -b 8193 <device>
or
e2fsck -b 32768 <device>
root@Home-Pi:~#
|
Your raspberry pi is running a 32-bit version of linux, so mkfs.ext4 is formatting the filesystem with 2^32 blocks, which (with a 4k block size) limits the filesystem to a maximum size of 16 TiB. XFS on 32-bit linux is also limited to 16 TiB.
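The arithmetic behind that limit is straightforward (this snippet assumes a 64-bit shell to do the multiplication):

```shell
# 32-bit block numbers with 4 KiB blocks: at most 2^32 blocks of 4096 bytes each.
max_bytes=$(( (1 << 32) * 4096 ))
echo "$max_bytes bytes = $(( max_bytes >> 40 )) TiB"   # 2^44 bytes = 16 TiB
```

Your 18.19 TiB volume overflows that block count, which is why mkfs.ext4 fails partway through.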
Interestingly, the Raspberry Pi 4 Model B has a Broadcom BCM2711 which is a 64-bit ARM v8 quad-core CPU. The default OS for all models of raspberry pis is 32-bit, and IIRC even raspbian is 32-bit. Probably so they only have to maintain one version of it rather than a 32-bit and a 64-bit version. 64-bit distros are available for rpis - but I don't know anywhere near enough about them to recommend one. Google is your friend here, or perhaps try https://raspberrypi.stackexchange.com/
On 32 bit, your only real option is to reduce the partition size down to 16 TiB. The remainder can be used as a 2nd partition of about 3 TiB.
In comments, I suggested using ZFS - unfortunately, zfsonlinux requires a 64-bit linux kernel as it is unstable on 32-bit. I also suggested btrfs, but it also has limitations on 32-bit and is not recommended.
My final suggestion was to acquire a PC with an amd64 CPU and use that to build your file server.
These can be picked up cheap, or even free, and even a 10+ year old machine will make a far better file server than a raspberry pi - it will have multiple SATA3 ports (use one for an SSD for the boot + OS drive, or two in mdadm RAID-1; and 2 or more ports for your 19TiB storage), at least 4GB RAM (and room for expansion - the more memory a file server has, the better it performs), and it can run a 64-bit Linux so can format a 64bit ext4 or XFS, or use ZFS or btrfs without problem.
Your drives would be on SATA ports and your network interface(s) would be on PCI-e - both of which are faster, far superior (and far more reliable) for that purpose than USB.
(BTW, you can use a partition on the SSD(s) to cache the hard disks. ZFS calls this Layer 2 ARC or L2ARC, and for other filesystems bcache is part of the kernel)
The only downside is that a PC would take more space, and use more power than a raspberry pi.
| mkfs.ext4 not working on 19.1TB logical volume |
1,590,335,475,000 |
I installed Fedora 28. But I resized the partition that contained Fedora (/dev/sda7) wrongly using GParted and now I can't boot my system. (Note the partition format is ext4)
|
Resizing a partition does not implicitly resize any filesystem it might contain. You should have shrunk the filesystem and then shrunk the partition. (I'm surprised gparted didn't warn you.)
However, to try and fix the damage, resize the partition back to whatever it was before. If you're not sure of the value then make sure it's at least the size it was before.
If you have already used the space then all bets are off, unfortunately. You might get some of your files back with a filesystem rescue tool. Or you might not.
| ext4 partition is broken during resizing with gparted |
1,590,335,475,000 |
We have a Raspberry Pi located at a location where it may experience frequent power loss. I'm trying to make it scan, and repair (if necessary) a filesystem every time it boots up, in case the power loss causes FS corruption. The filesystem in question is ext4, but it is NOT the root filesystem.
It seems that I can do what I want by using tune2fs -c 1 /dev/sdX#, and setting /etc/fstab's Filesystem Check Order to 2 for that partition. What I'm not sure about is what it does when it detects problems. Does this automatically fix them? Will it stop booting, and wait for someone to confirm that it should fix things?
The Pi is headless - there's no one to confirm anything.
|
You don't need to set "-c 1" on the filesystem. That means "force a full e2fsck run each mount", which would both be annoying (slow boot time), and unnecessary for ext4 with a journal. Even without a journal you don't strictly need to run a full e2fsck if the filesystem has been cleanly unmounted (it will record this into the superblock itself).
By default, if there is a check phase in /etc/fstab then e2fsck will repair the filesystem automatically. Per the e2fsck.8 man page, the default is to run with "-p", though "-y" is more aggressive in fixing problems automatically.
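For reference, a hypothetical /etc/fstab entry for such a non-root partition — the sixth field (fs_passno) of 2 is what makes fsck check it at boot, after the root filesystem:

```
# <device>                                   <mountpoint>  <type>  <options>  <dump>  <pass>
UUID=0a1b2c3d-0000-4000-8000-000000000000    /data         ext4    defaults   0       2
```

The UUID here is a placeholder; take the real one from blkid or lsblk -f.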
| Does a filesystem check initiated from /etc/fstab auto-repair? |
1,590,335,475,000 |
I need to backup / copy the files of my Linux installation to an external drive, so that I can restore them onto the new, larger disk.
The destination disk for the restoration is twice as large, and will have larger partitions, ext4 and linux-swap. Imaging the entire disk or its first partition is not really a good option, because both require later re-partitioning I'd like to avoid.
I am backing up to an exFAT-formatted drive, there are some issues with copying an ext4 Linux installation to exFAT though
may destroy important hard links and fast* symbolic links from the ext4 file system (will break Linux)
won't preserve file ownership / permissions and setuid bits (will break Linux)
won't preserve capabilities (will break Linux)
won't preserve files extended attributes (xattrs) as well, as I believe many files have important information there (I don't care about Unix ACLs as I don't think I have any files using them)
If I copied the files directly to NTFS, FAT32, exFAT, etc, much of this metadata would be destroyed.
I don't care about compression since the original disk is smaller than my backup drive. (GNU) tar seems to preserve permissions/ownership (-p on create, --same-owner on extract), links and xattrs, but file-capability support is also needed to back up a modern Linux.
It seems the other main options are a CloneZilla Live system, and cpio which seems to create tar archives.
So the main options are
CloneZilla or just imaging the parition
tar itself, which may break things
cpio, which may be limited by the tar archive format?
*80,000 of the 83,000 symlinks are fast symlinks, and I'd like to preserve their fast-ness if possible
|
Per @cat's comment, posting my comment as an answer -
Have you considered making a sparse file the size of your old installation, formatting it as an ext4 file system, mounting it on loopback, and then copying to that? That would solve all the permission-loss issues, etc. exFAT's file-size limit is 16 EiB, surely large enough.
And per @cat's comment back to me, apparently a single file big enough won't be an issue ...
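A minimal sketch of that loopback-image approach (all paths here are hypothetical — in practice the image would live on the exFAT drive and be sized to the old installation; a small /tmp image is used just for illustration):

```shell
# Create a sparse image file and format it as ext4.
# truncate and mkfs.ext4 on a regular file need no root.
truncate -s 1G /tmp/linux-backup.img     # sparse: occupies almost no space until written
mkfs.ext4 -F -q /tmp/linux-backup.img    # -F: proceed although target is not a block device

# Then, as root, loop-mount it and copy with full metadata:
#   mount -o loop /tmp/linux-backup.img /mnt/backupfs
#   rsync -aHAX --numeric-ids /source/ /mnt/backupfs/
#   umount /mnt/backupfs
```

rsync's -H keeps hard links, -A keeps ACLs and -X keeps xattrs; since capabilities are stored in the security.capability xattr, this covers the metadata concerns listed in the question.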
| Backing up Linux to a Windows file system for later restoration |
1,590,335,475,000 |
I like to use dynamically allocated images in VirtualBox.
It is preferred way if you like to distribute you image (remember Vagrant?).
What Linux FS can reclaim unused blocks to VirtualBox when dynamically allocated image is used?
I saw that users run:
sudo dd if=/dev/zero of=/EMPTY bs=1M || : ; rm -f /EMPTY
to shrink VDI images.
Also we all know about TRIM ATA command for SSD drives (discard option for mount)...
|
Official docs state: https://www.virtualbox.org/manual/ch08.html#vboxmanage-storageattach
VBoxManage storageattach <UUID> --nonrotational:
This switch allows to enable the non-rotational flag for virtual hard disks. Some guests (i.e. Windows 7+) treat such disks like SSDs and don't perform disk fragmentation on such media.
VBoxManage storageattach <UUID> --discard:
This switch enables the auto-discard feature for the virtual hard disks. This specifies that a VDI image will be shrunk in response to the trim command from the guest OS. The following requirements must be met:
The disk format must be VDI.
The size of the cleared area must be at least 1MB.
VirtualBox will only trim whole 1MB blocks. The VDIs themselves are organized into 1MB blocks, so this will only work if the space being TRIM-ed is at least a 1MB contiguous block at a 1MB boundary. On Windows, occasional defrag (with "defrag.exe /D"), or under Linux running "btrfs filesystem defrag" as a background cron job may be beneficial.
Notes: the Guest OS must be configured to issue trim command, and typically this means that the guest OS is made to 'see' the disk as an SSD. Ext4 supports -o discard mount flag; OSX probably requires additional settings. Windows ought to automatically detect and support SSDs - at least in versions 7, 8 and 10. Linux exFAT driver (courtesy of Samsung) supports the trim command.
It is unclear whether Microsoft's implementation of exFAT supports this feature, even though that file system was originally designed for flash.
Alternatively, there are ad hoc methods to issue trim, e.g. Linux fstrim command, part of util-linux package. Earlier solutions required a user to zero out unused areas, e.g. using zerofree, and explicitly compact the disk - only possible when the VM is offline.
So storage defined as:
<AttachedDevice discard="true" nonrotational="true" type="HardDisk">
with FS like Ext4 / Btrfs / JFS / XFS / F2FS / VFAT mounted with -o discard should work...
UPDATE TRIM support in VirtualBox is still unstable: https://www.virtualbox.org/ticket/16795
See also https://superuser.com/questions/646559/virtualbox-and-ssds-trim-command-support
| What FS can reclaim unused blocks to VirtualBox when dynamically allocated image is used? |
1,590,335,475,000 |
Basically, the EXT4 filesystem has chunks of data 128 MiB in size (by default); each chunk is called a "group" (or "group of blocks"), and these groups are laid out one after another on the disk.
The bigger the disk, the more groups you have one after the other.
When you use the dumpe2fs tool, it can simply tell you how many groups you have and return also some info about them, for instance:
Group 690: (Blocks 22609920-22642687) csum 0x7443 [ITABLE_ZEROED]
Block bitmap at 22544386 (bg #688 + 2), csum 0xab2a9072
Inode bitmap at 22544402 (bg #688 + 18), csum 0x1ef9c14a
Inode table at 22545440-22545951 (bg #688 + 1056)
0 free blocks, 8182 free inodes, 10 directories, 8182 unused inodes
Free blocks:
Free inodes: 5652491-5660672
And my question is: what would happen if you zeroed out the whole group of blocks using the dd tool?
If you had a file (files) small enough to fit entirely in that group, the file (files) would be permanently lost. But there are other cases, and I'm not sure how system reacts in such situations.
Files can be fragmented and described by many groups of blocks, which (in this case) means that only some part of a file was zeroed out. What would happen to that file after fsck is run? Will the entire file be placed in lost&found dir? Or maybe would it be visible in the filesystem's structure, but you won't be able to open/access it?
What would happen to the files in the directories if the group above was zeroed out? There's only info that 0 free blocks, 8182 free inodes, 10 directories, 8182 unused inodes, which means that 10 dirs and 0 files are described by this group, and all blocks are used by data. So we have 10 dirs and probably some big file which is fragmented. At least I understand the info in this way. But the dirs can contain files, and what would happen to the files in the dirs when you zero out this group? Will they be accessible after fsck?
|
Basically it boils down to whether the meta data and/or the data is affected.
Generally speaking, when the meta data is corrupted the affected files/directories become inaccessible (or partially inaccessible). Tools such as fsck may be able to fix the corruption, but it really depends on precisely what has been damaged.
However, when the data is corrupted unless the filesystem implements data checksums, which EXT4 does not, the filesystem will act as if the data is fine. It is up to the application to validate and perform repairs, if possible.
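You can actually watch what happens on a throwaway image rather than a real disk. A sketch (no root needed, since mkfs, debugfs and e2fsck all operate directly on image files; the file name and the 16 MiB offset are arbitrary):

```shell
truncate -s 64M /tmp/expt.img
mkfs.ext4 -F -q /tmp/expt.img
# drop a file into the unmounted image (debugfs -w writes without mounting)
debugfs -w -R "write /etc/hostname victim" /tmp/expt.img 2>/dev/null
# zero out 1 MiB in the middle of the image, simulating a wiped region
dd if=/dev/zero of=/tmp/expt.img bs=1M seek=16 count=1 conv=notrunc status=none
# let fsck repair what it can; a non-zero exit just means it fixed things
e2fsck -fy /tmp/expt.img || true
```

Depending on whether the wiped megabyte held data blocks, inode tables or directory blocks, e2fsck will rebuild bitmaps, move orphaned inodes into lost+found, or let silently corrupted data pass — exactly the spread of outcomes asked about. debugfs -R "ls" /tmp/expt.img shows what survived.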
| What would happen if you zeroed out a certain group of blocks in the EXT4 filesystem? |
1,590,335,475,000 |
I am trying to recover whatever data I can from a bad partition on a hard drive. Unfortunately fsck failed to resolve the issue. The output from mount is as follows:
$ sudo mount /dev/sda3 /mnt/
mount: wrong fs type, bad option, bad superblock on /dev/sda3,
dmesg output is as follows:
[77027.447708] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[77027.447714] ata1.00: irq_stat 0x40000001
[77027.447719] ata1.00: failed command: READ DMA
[77027.447726] ata1.00: cmd c8/00:08:00:28:c3/00:00:00:00:00/e8 tag 25 dma 4096 in
res 51/01:00:00:28:c3/00:00:08:00:00/e8 Emask 0x9 (media error)
[77027.447730] ata1.00: status: { DRDY ERR }
[77027.447733] ata1.00: error: { AMNF }
[77027.448901] ata1.00: configured for UDMA/100
[77027.448915] sd 0:0:0:0: [sda] tag#25 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
[77027.448919] sd 0:0:0:0: [sda] tag#25 Sense Key : 0x3 [current]
[77027.448922] sd 0:0:0:0: [sda] tag#25 ASC=0x13 ASCQ=0x0
[77027.448926] sd 0:0:0:0: [sda] tag#25 CDB: opcode=0x28 28 00 08 c3 28 00 00 00 08 00
[77027.448929] blk_update_request: I/O error, dev sda, sector 147007488
[77027.448934] Buffer I/O error on dev sda3, logical block 0, async page read
[77027.448967] ata1: EH complete
Smartctl output is as follows:
SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
1 Raw_Read_Error_Rate 0x000f 200 200 051 Pre-fail Always - 108
3 Spin_Up_Time 0x0003 242 185 021 Pre-fail Always - 2891
4 Start_Stop_Count 0x0032 081 081 000 Old_age Always - 19060
5 Reallocated_Sector_Ct 0x0033 199 199 140 Pre-fail Always - 1
7 Seek_Error_Rate 0x000f 200 200 051 Pre-fail Always - 0
9 Power_On_Hours 0x0032 076 076 000 Old_age Always - 17595
10 Spin_Retry_Count 0x0013 100 100 051 Pre-fail Always - 0
11 Calibration_Retry_Count 0x0012 100 100 051 Old_age Always - 0
12 Power_Cycle_Count 0x0032 084 084 000 Old_age Always - 16934
190 Airflow_Temperature_Cel 0x0022 056 028 045 Old_age Always In_the_past 44
194 Temperature_Celsius 0x0022 106 078 000 Old_age Always - 44
196 Reallocated_Event_Count 0x0032 200 200 000 Old_age Always - 0
197 Current_Pending_Sector 0x0012 182 182 000 Old_age Always - 761
198 Offline_Uncorrectable 0x0010 182 182 000 Old_age Offline - 763
199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 1
200 Multi_Zone_Error_Rate 0x0009 161 127 051 Pre-fail Offline - 1311
.. removed some excessive output...
After command completion occurred, registers were:
ER ST SC SN CL CH DH
-- -- -- -- -- -- --
01 51 00 00 28 c3 e8 Error: AMNF at LBA = 0x08c32800 = 147007488
Commands leading to the command that caused the error were:
CR FR SC SN CL CH DH DC Powered_Up_Time Command/Feature_Name
-- -- -- -- -- -- -- -- ---------------- --------------------
c8 00 08 00 28 c3 08 00 21:23:05.524 READ DMA
ca 00 08 20 31 bf 02 00 21:23:05.524 WRITE DMA
ca 00 08 20 30 bf 02 00 21:23:05.523 WRITE DMA
ca 00 08 60 2f bf 02 00 21:23:05.523 WRITE DMA
ca 00 08 08 2f bf 02 00 21:23:05.523 WRITE DMA
The sector affected is the first sector of the partition.
dumpe2fs, tune2fs, and debugfs all fail to read the drive, even if I use alternative superblocks (found using mke2fs -n /dev/sda2). Curiously if I do a dd on the affected sector I get no errors.
sudo dd if=/dev/sda of=/dev/null bs=512 skip=147007488 count=1
1+0 records in
1+0 records out
512 bytes copied, 0.000177462 s, 2.9 MB/s
I'm not too hopeful about recovering my data, but I am interested in the process that would be required to attempt this. Should I write zeros over the affected sectors, or is there a better approach?
Thanks
|
Use ddrescue instead of dd (it will continue past read errors), then you can run fsck on the dumped image with various -b parameters.
man fsck:
-b superblock
Instead of using the normal superblock, use an alternative
superblock specified by superblock. This option is normally
used when the primary superblock has been corrupted. The loca‐
tion of the backup superblock is dependent on the filesystem's
blocksize. For filesystems with 1k blocksizes, a backup
superblock can be found at block 8193; for filesystems with 2k
blocksizes, at block 16384; and for 4k blocksizes, at block
32768.
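Once you have a ddrescue image, the -b candidates follow a fixed pattern: with the sparse_super feature, backup superblocks live in block group 1 and in groups that are powers of 3, 5 and 7. For the common 4k-block, 32768-blocks-per-group geometry you can compute them yourself (mkfs.ext4 -n on the image is a dry run that prints the exact list for your geometry):

```shell
blocks_per_group=32768          # mkfs default: one group per 128 MiB at 4k blocks
for g in 1 3 5 7 9 25 27 49; do
    printf '%s ' $(( g * blocks_per_group ))
done; echo
# prints: 32768 98304 163840 229376 294912 819200 884736 1605632
```

With a 4k-block filesystem you also need to tell e2fsck the block size, e.g. e2fsck -B 4096 -b 32768 rescued.img.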
| Recovering from an AMNF Hard Drive error (ext4) |
1,590,335,475,000 |
What does it mean "Structure needs cleaning" ?
I've never seen such error code before - and man cp is not that helpful.
It has happened to me on ext4.
I was trying to copy directory using:
cp -arv dirname dirname.bak
|
Some googling told me that there was a patch to ext4 last year that mentions returning EUCLEAN upon an out of space error.
E.g. https://patchwork.ozlabs.org/patch/452275/
The fix is either to run e2fsck -E bmap2extent, or to chattr +e the file.
| cp: cannot create directory 'ABC.DEF/G/H': Structure needs cleaning |
1,590,335,475,000 |
I'm using Linux (Ubuntu) and I was told that I can use the method described below to clone the system's hard drive to another one - to plug into a new machine. (Without booting from a Live CD)
It assumes that the system's disk is /dev/sda, the partition mounted as root is /dev/sda1, and an empty disk to clone it to is /dev/sdb.
echo u > /proc/sysrq-trigger
Remounts all filesystems including the one mounted as root read-only.
e2fsck -fy /dev/sda1
Corrects the filesystem errors caused by forcing the R/O remount.
dd if=/dev/sda of=/dev/sdb clones the disk to the empty one.
e2fsck -fy /dev/sdb1 fixes the newly cloned filesystem. At this step it usually tells about fixed block checksums.
reboot -f
Reboots the system. At this step I disconnect the newly cloned disk, and plug it into a new PC.
I've used this method two times, and all machines are working fine, but I'm afraid that doing that could cause some dangerous filesystem issues? If yes, why? And should I avoid using this method to clone hard disks in the future?
|
The best way to do this would be to create an LVM snapshot of the filesystem and then use the snapshot as the source for making the copy. That has two benefits:
it doesn't require rebooting the system when you are done
it flushes and syncs the filesystem so you don't have inconsistent/corrupt metadata
Failing that, if you are already willing to reboot your system to make a copy, then you could shut down cleanly and boot into single-user mode (add single on the kernel command line from grub, ro should already be there) and then use dd to copy the source disk to the target. In single-user mode the root partition is already mounted read-only so no problems with inconsistent on-disk (meta)data.
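A sketch of the snapshot route — the VG/LV names (vg0/root), the target device and the 5G copy-on-write reserve are all hypothetical, so it is written out to a file for review rather than executed; every step needs root:

```shell
cat > /tmp/snapshot-clone.sh <<'EOF'
#!/bin/sh -e
# Freeze a point-in-time view of the root LV; 5G must cover writes made during the copy.
lvcreate --snapshot --name rootsnap --size 5G vg0/root
# Copy the consistent snapshot, not the live volume.
dd if=/dev/vg0/rootsnap of=/dev/sdb1 bs=4M conv=fsync status=progress
lvremove -y vg0/rootsnap
EOF
chmod +x /tmp/snapshot-clone.sh
```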
| Is using the SysRQ Emergency Remount an acceptable way to clone hard disk? |
1,590,335,475,000 |
According to Debian
Ext2/3/4 filesystems are upgradeable to Btrfs; however, upstream recommends backing up the data, creating a pristine btrfs filesystem with wipefs -a and mkfs.btrfs, and restoring from backup -- or replicating the existing data (eg: using tar, cpio, rsync etc).
Because Debian doesn't support Btrfs through installation, I'm comparing the wipefs + mkfs.btrfs after-install approach with the in-place upgrade. Do you lose anything with the in-place btrfs-convert upgrade approach?
|
Other than the liability that something goes wrong you need only
Remove the original filesystem metadata with btrfs subvolume delete
Defrag to make file extents more contiguous with btrfs filesystem defrag
Run btrfs balance
Summary of btrfs convert
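A sketch of those three steps as a script (the device name /dev/sdXN is a placeholder; written to a file for review, since it must run as root on the converted filesystem):

```shell
cat > /tmp/post-convert.sh <<'EOF'
#!/bin/sh -e
mount /dev/sdXN /mnt
btrfs subvolume delete /mnt/ext2_saved   # drop the saved ext4 image; rollback no longer possible
btrfs filesystem defragment -r /mnt      # re-pack the extents laid out by the converter
btrfs balance start /mnt                 # rewrite block groups to reclaim the freed space
EOF
chmod +x /tmp/post-convert.sh
```

btrfs-convert keeps the original metadata in a subvolume named ext2_saved; until it is deleted you can still roll back, and afterwards the conversion is final.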
| Upgrading from ext3/4 to Btrfs vs fresh install? |
1,590,335,475,000 |
I changed my /etc/fstab from:
UUID=f1fc7345-be7a-4c6b-9559-fc6e2d445bfa / ext4 errors=remount-ro 0 1
UUID=4966-E925 /boot/efi vfat umask=0077 0 1
to this:
UUID=f1fc7345-be7a-4c6b-9559-fc6e2d445bfa / ext4 data=journal,errors=remount-ro 0 1
UUID=4966-E925 /boot/efi vfat umask=0077 0 1
Effectively adding data=journal, before errors=remount-ro option. The reasoning was this computer is running a fragile application 24/7, problems are the power cuts longer than my UPS can hold.
Upon the next boot the TTY greeted me, suppose I will be able to log in, is there a way to fix this?
|
TTYs have an insanely fast keyboard repeat rate set by default; I tried to log in about 30 times before finally succeeding.
If you have numbers in your password or login name, you may want to turn the numlock on.
Issue this command, but make sure you use your drive and partition number:
sudo mount -o data=ordered,remount,rw /dev/nvme0n1p2 /
Edit your /etc/fstab not to contain the data=journal part, save it.
Reboot.
Optional, but recommended step: You may want to check your root filesystem upon boot now, if so, please refer to answers here, just a summary:
To force a fsck on every boot on Linux Mint 18.x, use either tune2fs, or fsck.mode=force, with optional fsck.repair=preen / fsck.repair=yes, the kernel command line switches.
| Fstab adding data=journal crashed my Linux' ext4 upon boot, how to fix? |
1,677,841,150,000 |
I am trying to root cause a customer case where 2 Identical drives, formatted with the same command, led to a difference of ~55GB in total disk space due to additional Inode overhead.
I want to understand
The math on how 2xInodes per group translates to 2xInode count
How does Inodes per group get set when lazy_itable_init flag is used
Environment:
The 2 drives are on 2 identical hardware servers, running on the same exact OS.
Here are the details of the 2 drives (Sensitive info redacted):
Drive A:
=== START OF INFORMATION SECTION ===
Vendor: HPE
Product: <strip>
Revision: HPD4
Compliance: SPC-5
User Capacity: 7,681,501,126,656 bytes [7.68 TB]
Logical block size: 512 bytes
Physical block size: 4096 bytes
LU is resource provisioned, LBPRZ=1
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
Logical Unit id: <strip>
Serial number: <strip>
Device type: disk
Transport protocol: SAS (SPL-3)
Local Time is: Mon Apr 25 07:39:27 2022 GMT
SMART support is: Available - device has SMART capability.
Drive B:
=== START OF INFORMATION SECTION ===
Vendor: HPE
Product: <strip>
Revision: HPD4
Compliance: SPC-5
User Capacity: 7,681,501,126,656 bytes [7.68 TB]
Logical block size: 512 bytes
Physical block size: 4096 bytes
LU is resource provisioned, LBPRZ=1
Rotation Rate: Solid State Device
Form Factor: 2.5 inches
Logical Unit id: <strip>
Serial number: <strip>
Device type: disk
Transport protocol: SAS (SPL-3)
Local Time is: Mon Apr 25 07:39:23 2022 GMT
SMART support is: Available - device has SMART capability.
The command run to format the drive is:
sudo mke2fs -F -m 1 -t ext4 -E lazy_itable_init,nodiscard /dev/sdc1
The issue:
The df -h output for Drives A and B respectively shows DriveA with size 6.9T vs Drive B with size 7.0T:
/dev/sdc1 6.9T 89M 6.9T 1% /home/<strip>/data/<serial>
...
/dev/sdc1 7.0T 3.0G 6.9T 1% /home/<strip>/data/<serial>
Observations:
fdisk output on both drives show they both have identical partitions.
DriveA:
Disk /dev/sdc: 7681.5 GB, 7681501126656 bytes, 15002931888 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disk label type: gpt
Disk identifier: 70627C8E-9F97-468E-8EE6-54E960492318
# Start End Size Type Name
1 2048 15002929151 7T Microsoft basic primary
DriveB:
Disk /dev/sdc: 7681.5 GB, 7681501126656 bytes, 15002931888 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disk label type: gpt
Disk identifier: 702A42FA-9A20-4CE4-B938-83D3AB3DCC49
# Start End Size Type Name
1 2048 15002929151 7T Microsoft basic primary
/etc/mke2fs.conf contents are identical on both systems, so no funny business here:
================== DriveA =================
[defaults]
base_features = sparse_super,filetype,resize_inode,dir_index,ext_attr
enable_periodic_fsck = 1
blocksize = 4096
inode_size = 256
inode_ratio = 16384
[fs_types]
ext3 = {
features = has_journal
}
ext4 = {
features = has_journal,extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize,64bit
inode_size = 256
}
...
================== DriveB =================
[defaults]
base_features = sparse_super,filetype,resize_inode,dir_index,ext_attr
enable_periodic_fsck = 1
blocksize = 4096
inode_size = 256
inode_ratio = 16384
[fs_types]
ext3 = {
features = has_journal
}
ext4 = {
features = has_journal,extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize,64bit
inode_size = 256
}
If we take a diff between the tune2fs -l output for both drives, we see Inodes per group on DriveA are 2x DriveB
We also see Inode count on DriveA is 2xDriveB (Full diff HERE)
DriveA:
Inode count: 468844544
Block count: 1875365888
Reserved block count: 18753658
Free blocks: 1845578463
Free inodes: 468843793
...
Fragments per group: 32768
Inodes per group: 8192
Inode blocks per group: 512
Flex block group size: 16
DriveB:
Inode count: 234422272 <----- Half of A
Block count: 1875365888
Reserved block count: 18753658
Free blocks: 1860525018
Free inodes: 234422261
...
Fragments per group: 32768
Inodes per group: 4096 <---------- Half of A
Inode blocks per group: 256 <---------- Half of A
Flex block group size: 16
From How to calculate the "Inode blocks per group" on ext2 file system? I understand Inode blocks per group is a result of Inodes per group
From the mke2fs code (Source), the Inodes per group value seems to be used in the write_inode_tables function only when lazy_itable_init is provided:
write_inode_tables(fs, lazy_itable_init, itable_zeroed);
...
static void write_inode_tables(ext2_filsys fs, int lazy_flag, int itable_zeroed)
...
if (lazy_flag)
num = ext2fs_div_ceil((fs->super->s_inodes_per_group - <--------- here
ext2fs_bg_itable_unused(fs, i)) *
EXT2_INODE_SIZE(fs->super),
EXT2_BLOCK_SIZE(fs->super));
If we take the difference in inode count and multiply it by the constant inode size (256) we get (468844544-234422272)*256 = 60012101632 bytes ~55GiB of extra inode overhead.
Can anyone help me the math on how Inode count increased to 2x when Inodes per group increased to 2x?
Does lazy_itable_init have an impact at runtime that decides the value of Inodes per group, if so how can we understand what value will it set?
(This flag was the only reference to s_inodes_per_group in the code)
|
I found the difference in these 2 cases was a difference in the e2fsprogs version - 1.42.9 & 1.45.4. I didn't think of checking that and only relied on mke2fs.conf file. Apologies for this obvious miss and thanks @lustreone for suggesting.
I am still curious to know the math relating to Inodes per group and Inode count.
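On the math that is still open: Inode count is simply Inodes per group multiplied by the number of block groups, and the group count follows from Block count divided by the fixed 32768 blocks per group, rounded up. Plugging in the tune2fs numbers above reproduces both drives exactly, so the 2x in Inodes per group fully explains the 2x in Inode count (the per-version difference presumably being the inode ratio each mke2fs picked for a ~7 TB filesystem):

```shell
block_count=1875365888
blocks_per_group=32768
groups=$(( (block_count + blocks_per_group - 1) / blocks_per_group ))
echo "groups:    $groups"                       # 57232
echo "Drive A:   $(( groups * 8192 ))"          # 468844544 inodes
echo "Drive B:   $(( groups * 4096 ))"          # 234422272 inodes
echo "overhead:  $(( groups * 4096 * 256 ))"    # 60012101632 bytes, the ~55 GiB difference
```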
| How does "Inodes per group" and "lazy_itable_init" flag relate to the "Inode count" value in an ext4 filesystem? |
1,677,841,150,000 |
I was wondering how to log flag changes in a file, e.g. chattr +a somefile.
I realized that the timestamps shown by stat somefile are not useful for auditing flag changes: appending to the file updates the ctime, overwriting the time a flag was last changed.
|
auditd was created exactly for that. inotify/fsnotify require a ton of code to be useful and they are generally not used for this purpose.
There are plenty of manuals on the net, e.g.
https://www.thegeekdiary.com/how-to-audit-file-access-on-linux/
https://www.xmodulo.com/how-to-monitor-file-access-on-linux.html
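A sketch of such a watch — the path and key name are made up, and note one caveat: -p a covers attribute-change syscalls (chmod/chown/setxattr and friends), while chattr itself works through an ioctl, so depending on your kernel/auditd you may want an explicit ioctl rule as well. The rule file normally goes under /etc/audit/rules.d/; it is written to /tmp here just for review:

```shell
cat > /tmp/flag-watch.rules <<'EOF'
# watch attribute changes on one file
-w /srv/data/somefile -p a -k flag-change
# belt and braces: catch the ioctl that chattr uses
-a always,exit -S ioctl -F path=/srv/data/somefile -k flag-change
EOF
# Load and query as root:
#   auditctl -R /tmp/flag-watch.rules
#   ausearch -k flag-change
```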
| How to log flag changes to files on ext4 and xfs filesystems? |
1,677,841,150,000 |
I have a VPS with CentoS 7 that is robbing me of 68GB of space. My server has 160GB of storage. It says it is using 120GB. But my server should only be using about 50GB - 65Gb.
When I ran du -h --max-depth=1 in the root, it showed a 68GB entry with the file name ".", which I took to be the total in use; but when I run du -cksh * the total is actually 60GB.
Can there be a hidden file with no name for 68GB?
[root@srv ~]# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda 154G 122G 31G 80% /
[root@srv ~]#
[root@srv /]# df -H
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.1G 0 4.1G 0% /dev
tmpfs 4.2G 0 4.2G 0% /dev/shm
tmpfs 4.2G 12M 4.1G 1% /run
tmpfs 4.2G 0 4.2G 0% /sys/fs/cgroup
/dev/sda 165G 131G 33G 80% /
/dev/sdc 85G 40G 41G 49% /mnt/DRIVE1
tmpfs 821M 0 821M 0% /run/user/0
[root@srv /]#
[root@srv ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 156.3G 0 disk /
sdb 8:16 0 3.8G 0 disk [SWAP]
sdc 8:32 0 80G 0 disk /mnt/DRIVE1
[root@srv ~]#
[root@srv ~]# du -h --max-depth=1
16K ./.local
156M ./.cache
7.0M ./.acme.sh
4.0K ./.spamassassin
8.0K ./.ssh
9.3M ./.npm
56K ./.razor
48K ./.subversion
20K ./.config
35M ./.composer
4.0K ./.conf
11M ./node_modules
129M ./jail
8.0K ./.pki
8.0K ./.cwp_sessions
346M .
[root@srv ~]# cd /
[root@srv /]# du -h --max-depth=1
318M ./boot
4.3M ./tmp
6.6G ./usr
0 ./sys
20G ./home
4.0K ./.trash
0 ./dev
1.7G ./opt
2.8G ./var
16K ./lost+found
36K ./.channels
11M ./run
4.0K ./srv
37G ./mnt
265M ./root
4.0K ./media
du: cannot access ‘./proc/24037/task/24037/fd/4’: No such file or directory
du: cannot access ‘./proc/24037/task/24037/fdinfo/4’: No such file or directory
du: cannot access ‘./proc/24037/fd/3’: No such file or directory
du: cannot access ‘./proc/24037/fdinfo/3’: No such file or directory
0 ./proc
43M ./etc
68G .
[root@srv /]#
[root@srv /]# du -cksh *
16K aquota.group
16K aquota.user
0 bin
318M boot
0 dev
44M etc
12G home
0 lib
0 lib64
16K lost+found
4.0K media
37G mnt
1.7G opt
du: cannot access ‘proc/24756/task/24756/fd/4’: No such file or directory
du: cannot access ‘proc/24756/task/24756/fdinfo/4’: No such file or directory
du: cannot access ‘proc/24756/fd/4’: No such file or directory
du: cannot access ‘proc/24756/fdinfo/4’: No such file or directory
0 proc
4.0K razor-agent.log
311M root
11M run
0 sbin
0 scripts
4.0K srv
0 sys
4.3M tmp
6.5G usr
2.8G var
60G total
[root@srv /]#
|
So, the good news is you don't have a big file called '.'. That's just the summary for the current directory.
I'd throw -a into your du flags to see filenames too.
If that doesn't show you anything new you might want to check the output of lsof | grep deleted to see if maybe there's a process running that tried to delete a file but still has it open.
Another thing I'd check is to make sure /mnt was written to without anything mounted in there.
| Unknown usage of HDD Space |
1,677,841,150,000 |
gparted reports 74GB used and 9.02TiB available (seems reasonable).
df reports 40MB used, but only shows 8.6TiB available (suddenly 425 GiB missing)
Disk Info in the file manager reports similar to df, showing 0 bytes used but only 8.6TiB available
Am I actually losing over 5% of my disk to overhead?
|
I found the solution to this over on serverfault - the reserved blocks for root-owned processes by default take 5% of your drive. I lowered this to 0% using tune2fs -m 0 /dev/sdb1 and now I am showing all my free space, as expected.
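The numbers line up as a back-of-the-envelope check (9.02 TiB ≈ 9236 GiB; df's "available" additionally excludes inode tables and the journal, which is why the observed gap isn't exactly the raw 5%):

```shell
fs_gib=9236                              # 9.02 TiB expressed in GiB
reserved_gib=$(( fs_gib * 5 / 100 ))     # default ext4 root reserve
echo "~${reserved_gib} GiB held back"    # ~461 GiB
# inspect / change it (as root):
#   tune2fs -l /dev/sdb1 | grep -i reserved
#   tune2fs -m 0 /dev/sdb1
```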
| Created a new volume on a 10TB (9.1TiB) hard drive, getting conflicting information regarding free space! |
1,677,841,150,000 |
I'm trying to create an update image from an ext4 filesystem which should only consists of the changed files. So basically I have a Debian distro with an ext4 root file system. I want to create a base image A from that and install updates afterwards (e.g. apt upgrade). After the update I want to create another image B.
Is there someway to create a "diff image" by comparing what changed between image A and image B so I don't have to copy the full image B all the time?
The reason for this is that the final image should be mounted on another device and all the changes should just be copied over (think of it as updating the second device with the changes from the update of the first device).
|
I don't think that the "diff image" will work at the filesystem level like you want it to. There are too many variables in the placement of files/directories, metadata checksums (on newer systems), journal blocks, etc. that make it impossible to take a block-level diff from one filesystem and apply it to another filesystem even if the two filesystems were originally identical. If there have been any modifications to the second filesystem during normal use, then applying a block level diff would corrupt the second filesystem. In some cases the corruption might be minor (e.g. inconsistent free block/inode counts in the superblocks or group descriptors), in other cases there would be significant corruption (e.g. incorrect bitmaps, journal blocks, etc).
What would be more practical is to use a file level diff, which is essentially "rsync" (or a parallel version thereof), and would not be ext4 specific. You may be able to speed up the rsync process by having an fsnotify watcher on the filesystem during the upgrade to generate a list of all modified files so that a full filesystem scan is not needed.
| Create ext4 img from diff |
1,611,512,903,000 |
I have a 35G mount as my root file system, and until now, it was reporting 1% usage.
I was using an SD card for my storage and today I got a new one. I'm mounting my /swap partition on that, so I decided to partition the new one with a swap and a "normal" one.
First I created an NTFS partition in case I want to use the card in Windows. I had problems, so I tried FAT, and ultimately I went to ext4.
In the process, I was modifying my /etc/fstab file and restarting, but when I got the card working, now my root file system reports being used at 100%!!! without me changing anything there.
Output of df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p5 35G 33G 0 100% /
...
And df -i:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/nvme0n1p5 2.2M 20K 2.2M 1% /
...
I thought maybe this is because fsck, and I did
touch /forcefsck && reboot
But that didn't solve the issue. I didn't change anything in my root file system.
Line in fstab for that:
UUID=<uuid> / ext4 defaults,noatime 0 1
...
Can someone please help me troubleshoot this?
|
Turns out during my mounting/umounting/rebooting etc., something happened.
I was mounting the SD card on /media, and I'm not sure why, but when I finished with the SD card setup, apparently /media had been created as a directory and not a mount point, so it was that directory taking up the space.
I didn't notice that it was not actually my mount but a directory, so I deleted the directory and mounted the SD card on /mount, and the issue is solved.
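The general trick for spotting this class of problem (data hiding underneath a mount point) is to bind-mount / somewhere else and run du there — the bind mount shows the root filesystem without anything mounted on top of it. A sketch, written to a script for review since bind-mounting needs root (paths are illustrative):

```shell
cat > /tmp/check-under-mounts.sh <<'EOF'
#!/bin/sh -e
mkdir -p /tmp/rootview
mount --bind / /tmp/rootview     # / as it looks with no other mounts on top
du -sh /tmp/rootview/media /tmp/rootview/mnt
umount /tmp/rootview
EOF
chmod +x /tmp/check-under-mounts.sh   # then: sudo /tmp/check-under-mounts.sh
```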
| Root file system reported as being 100% without adding any file |
1,611,512,903,000 |
I have a situation where I observe that BackupPC stalls for a particular host. This host runs Debian 10 (and has e.g. Docker installed).
During that situation, two rsync-related processes are running on that host (parent sudo /usr/bin/rsync --server ... and child /usr/bin/rsync --server ...). When I try to find out which file rsync is currently handling (i.e. where it stalls) by issuing lsof -p $child_pid, this also stalls (i.e. it apparently never returns but can be stopped e.g. with Ctrl-C). ls /proc/$child_pid/fd works fine at the same time (and returns only 4 fds).
So perhaps this is close to the root cause why rsync is stalling. How can it be the case that lsof -p is stalling esp. when ls /proc/$child_pid/fd is not? Should it not always come back with an (almost) immediate answer? And how can I further diagnose the situation (as well as then resolve it)?
UPDATE I am now checking for fragmentation in ext4 file systems on that host. This also takes a long time ...
time e4defrag -v -c $(df -t ext4 | tail -n +2 | awk '{print $1}')
UPDATE By now it looks as if e4defrag -v -c is stuck; its last output reads "/media/cdrom0" File is not regular file. The host is in fact a Proxmox VM, so could the issue perhaps be related to its virtual CD-ROM? This seems unlikely though, because df /media/cdrom0 indicates it is mounted on /, and if I am not mistaken e4defrag is already past this file system and now into /var. Perhaps /var (size 23G) is so heavily fragmented that a long duration is normal or perhaps e4defrag hits some limitations.
|
In the end, this looked related to "orphaned" files apparently stemming from NFS mounts into containers at a time when an NFS server could not be reached. Once I identified and removed those, (e4defrag and) lsof were no longer stalling, i.e. behaved as expected again.
| How can lsof -p $pid be stalling when ls /proc/$pid/fd is not? |
1,611,512,903,000 |
I have an SSD with LVM with one LV dedicated to a Win7 VM .vdi file of 80Gb.
The underlying fs is ext4.
After installing a new SSD and setting up the new LV's in migrating, the copy from the old SSD failed on copy with
Input Output Err No.5
Failed on cp, rsync, dd
And a quick look at
dmesg
[ 5829.294651] sd 2:0:0:0: [sdb] tag#14 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE cmd_age=0s
[ 5829.294653] sd 2:0:0:0: [sdb] tag#14 Sense Key : Medium Error [current]
[ 5829.294654] sd 2:0:0:0: [sdb] tag#14 Add. Sense: Unrecovered read error - auto reallocate failed
[ 5829.294656] sd 2:0:0:0: [sdb] tag#14 CDB: Read(10) 28 00 51 50 f9 47 00 00 08 00
[ 5829.294658] blk_update_request: I/O error, dev sdb, sector 1364261191 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
A self-test with smartctl reports a failure at that LBA.
badblocks shows me 6 bad blocks, and a further check with debugfs confirms that all the bad blocks belong to the inode of the .vdi file.
There is nothing on that LV bar the VM which currently still boots fine in virtualbox (which also won't copy the VM).
So the assumption is that the bad blocks are in some rarely used part of the VM filesystem and it doesn't care (yet) but the day will come.
Now I can't blame my Linux box for disliking the Win7 VM but I would like to save the old girl if only for sentiment.
Is there a way to recover the .vdi, perhaps by defaulting to a zero filled block on a bad read and skipping to the next block?
Just found
https://serverfault.com/questions/489696/recovering-a-file-with-bad-blocks-in-the-middle
Soon as I typed ....giving it a go
|
I didn't find an answer on U&L, but Server Fault provided one.
So I will leave their solution here for anyone else. Let me know if there is a dupe on U&L, in which case I will take this down.
The solution was as simple as
dd if=Win7.vdi of=~/mnt/Win7.vdi bs=4k conv=noerror,sync
After having checked that the block size was correct.
All good now.
| Recover large file which contains a few bad blocks |
1,611,512,903,000 |
I'd like to set up one external hard drive that would serve as the backup drive for two different laptops, both running Linux. I understand this is problematic b/c each machine will have its own set of user IDs, which can cause permission conflicts/general chaos.
I'm just wondering if there are any solutions I haven't considered. I would use ext4, but for the permissions issue. I thought about using NFS and sharing the drive over the network, but that's not really the use case I want -- I want each laptop to be able to plug into the drive and use it. I would also like the file system to be encrypted.
So is there a graceful way to do this, or is it just not in the cards? Is there another file system designed for this use case? Should I just use NTFS or HFS+?
UPDATE:
As requested below, updating to add: there is no trust issue, as the two laptops are just mine and my wife's. And there is no specific problem that I foresee -- rather, it just feels sketchy, since I don't think ext4 was designed to be used this way.
With that said, I think I'll just stick with a single ext4 encrypted partition, keeping each backup in separate directories, and not worry about it. :) Thanks!
|
No, it is not a problem. Use a normal Linux filesystem.
I would recommend that you use one directory for one computer's backup and another for the second computer. Create the directory as root, and give the expected owner/group to the directory (or just keep root, if you want to back up the whole system). So mkdir and chown. Just do it on every system, so that the directory which needs to be accessed from one system has the correct permissions for that system.
A problem could arise if you try to access the other computer's data from one computer. Just do it as root (and set the user/group when copying files to the "other" computer).
Note: If you are root on a computer (any computer), you may read all files on a disk that you can physically attach to that computer. The only exception are encrypted files/filesystems. So careful with external disks (especially on moving them out of house).
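A minimal sketch of that per-machine layout, using a temporary directory in place of the real mount point and hypothetical machine/user names:

```shell
BACKUP=$(mktemp -d)   # stand-in for the drive's mount point, e.g. /media/backup
mkdir -p "$BACKUP/laptop-alice" "$BACKUP/laptop-bob"
# On the real drive, run as root on each laptop so each directory is owned
# by that machine's user, e.g.:
#   chown alice:alice /media/backup/laptop-alice
ls "$BACKUP"
```

Each laptop then backs up only into its own directory, and the UID mismatch between the two machines never matters in day-to-day use.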
| Shared Hard Drive? |
1,611,512,903,000 |
I have a project which requires a lot of contact with the ext file system, but the majority of tools are Windows-based. So, is there any software or Explorer plugin that allows access to ext filesystems on Windows?
|
Yes, one method could be via third-party software installed on your Windows computer, such as outlined in this post.
However, I cannot vouch for the authenticity of any Windows programs or whether they will compromise the integrity of your ext file system.
Another method, if your Windows System and Linux System are installed on separate systems as either Virtual Machines or on different hardware, would be to make a file share server using something like Samba or NFS to share access to the ext file system via the network.
| Is there a practical way to view ext file system on Windows? |
1,611,512,903,000 |
I am using an Ubuntu 16.04 LTS. I ran into an unusual problem with my disk usage. Some of my applications were aborted with the message on the terminal stating "not enough disk space available".
The following is the output of
df -hT
Filesystem Type Size Used Avail Use% Mounted on
udev devtmpfs 5.7G 0 5.7G 0% /dev
tmpfs tmpfs 1.2G 9.6M 1.2G 1% /run
/dev/nvme0n1p7 ext4 69G 66G 40M 100% /
tmpfs tmpfs 5.8G 102M 5.7G 2% /dev/shm
tmpfs tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
/dev/nvme0n1p1 vfat 256M 32M 225M 13% /boot/efi
tmpfs tmpfs 1.2G 84K 1.2G 1% /run/user/1000
My ext4 partition seems to be used up 100% and I find that it is mounted on '/'. I don't know if this is unusual. Before typing the df -hT command, I checked gparted and found that ext4 was mounted on /var/lib/docker/aufs. So hastily I uninstalled docker (since I wasn't using it anyways) and now it shows as '/'.
Also, while trying to find out what is consuming the space, I found that /tmp consumes 15G. But I am not sure how to free that. Any help regarding this is appreciated. Thanks.
|
It is not only normal for a filesystem to be mounted as /,
it is mandatory.
It is common for the root filesystem to be ext4.
To free the space used in /tmp:
cd /tmp.
ls -la.
Look at the files and see whether any of them are important
(they shouldn’t be),
and try to figure out if they are being used by running processes.
rm -r *, or rm everything except the files you want to keep.
You may need to use sudo to get all the files,
but, if so, try to figure out why.
Are there files there that are owned by other people?
If possible, you might want to reboot before doing the above.
This might just clear out /tmp all by itself.
And, even if it doesn’t,
it should clear out any processes that might be using files in /tmp.
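Before deleting anything, it may also help to see which entries under /tmp are actually eating the space; a quick sketch:

```shell
# Largest entries directly under /tmp, biggest first
du -sh /tmp/* 2>/dev/null | sort -rh | head -n 10
```

The same pipeline works on any directory, which is handy for tracking down what filled up / in the first place.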
| ext4 mounted on / and tmp consuming disk space |
1,611,512,903,000 |
I am getting many of these messages on one of my systems:
[ 348.515157] EXT4-fs (vda9): VFS: Can't find ext4 filesystem
[ 348.517587] EXT4-fs (vda9): VFS: Can't find ext4 filesystem
[ 348.519944] EXT4-fs (vda9): VFS: Can't find ext4 filesystem
[ 348.522487] squashfs: SQUASHFS error: Can't find a SQUASHFS superblock on vda9
[ 348.524974] FAT-fs (vda9): bogus number of reserved sectors
[ 348.525946] FAT-fs (vda9): Can't find a valid FAT filesystem
[ 348.533493] XFS (vda9): Invalid superblock magic number
[ 348.536738] FAT-fs (vda9): bogus number of reserved sectors
[ 348.537781] FAT-fs (vda9): Can't find a valid FAT filesystem
[ 348.543638] VFS: Can't find a Minix filesystem V1 | V2 | V3 on device vda9.
[ 348.546068] hfsplus: unable to find HFS+ superblock
[ 348.547531] qnx4: no qnx4 filesystem (no root dir).
[ 348.549902] ufs: You didn't specify the type of your ufs filesystem
mount -t ufs -o ufstype=sun|sunx86|44bsd|ufs2|5xbsd|old|hp|nextstep|nextstep-cd|openstep ...
>>>WARNING<<< Wrong ufstype may corrupt your filesystem, default is ufstype=old
[ 348.557643] ufs: ufs_fill_super(): bad magic number
[ 348.561613] hfs: can't find a HFS filesystem on dev vda9
The disk looks like this:
Model: Virtio Block Device (virtblk)
Disk /dev/vda: 6001GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 6001GB 6001GB zfs zfs-5514fd574fd36816
9 6001GB 6001GB 8389kB
It is an Ubuntu VM with a VirtIO disk on a KVM/Qemu system (Proxmox). It works fine, but I do not understand what's happening here.
How can I fix this? Thank you!
|
Those messages are normally generated by mount attempting to guess the filesystem type when mounting. The -t option to specify a filesystem type is optional and if it's not specified, mount will attempt to determine the type automatically.
In your case, something is mounting or attempting to mount vda9 without specifying a filesystem type. As you're using ZFS and looking at the size and position of the partition I'm guessing vda9 is the spare partition that ZFS automatically creates when given a whole disk. I'm not sure why something is trying to mount it, but the messages you've pasted above are purely informational and can safely be ignored - I'd only be concerned if you were getting hundreds of them a day and they were clogging up your log files.
| "VFS: Can't find ext4 filesystem" on ZFS disk |
1,611,512,903,000 |
I'm trying to extend my root partition with the unallocated space, but gparted behaves as if the unallocated space didn't exist. When I reinstalled the system I had two separate free-space regions, one of 32GB and one of 19GB, but I wasn't able to merge them into the same partition.
I've tried with the solutions in other posts with no result...
|
Seems like you're running gparted from your Linux distro, and that means that some of your partitions - including your root partition - are in use (that's what that icon looking like a numerical keypad or whatever means). You can't move or resize a partition you're actively using (which you are here).
Try running gparted from a "live" DVD. It may use your swap partition, but should leave the root partition alone. When it's not in active use, you should be able to resize it into your free space (this will probably take a long while, since data will have to be moved too).
(I could add that the "mother program" - parted - may be a bit less restrictive than gparted... but then again, its interface is more difficult and it's easier to make a mistake.)
| Can't extend root partition with unallocated space [duplicate] |
1,611,512,903,000 |
I know that my chances are small (if there are any :().
I have a HDD of 500G with 1 logical volume (ext4). By accident I overwrote it with an ISO of 1,5G:
dd if=linuxmint-18-xfce-32bit.iso of=/dev/sdb
Now I lost all data. Is there some way to re-create the logical volume to get some data back? I have the LVM backup file.
The HDD was used as a system disk until 1 month ago. Then I added the former boot and swap partitions to the logical volume, so I presume that dd only overwrote the space that had been used by those partitions.
|
I did not succeed in restoring the partition, since its start was missing. What I did instead was recover most of my data with the tool photorec, which is from the same author as testdisk. Before I could use it I had to create a partition that started right after the area covered by the ISO and extended to the end of the disk. photorec searched this partition and recovered all file types known to the tool.
The only (small) problem is that the file names are all numeric, though with the correct suffix.
| Recover LVM after overwrite with dd |
1,430,793,528,000 |
Is it possible to safely reduce a mounted LV with an ext4 file system on RHEL 6/CentOS 6? If so, how can I do that?
|
As per Red Hat documentation, you can't reduce a mounted filesystem. Check here for the detailed document.
| Lvreduce online on RHEL 6 or centos 6 with ext4 file systems |
1,430,793,528,000 |
I have a Linux partition formatted to ext4 and I accidentally formatted it with Mac OS X Disk Utility to HFS+ (Mac OS X Extended Journaled).
I tried to recover it with http://www.r-studio.com/ on Windows but I can't get any files except few from trash or temporary browser files.
Now I'm trying another software on OS X: http://www.stellarinfo.com/mac-data-recovery.htm, however it takes some time to scan partitions.
Any ideas how to recover that partition? Should I recover it on Linux or Mac?
|
I don't have any knowledge of the tool you are using, but I have used UFS Explorer before and it was good (take a look here).
Now, if you managed to get some files with the tool r-studio, that's good: it means the HFS+ format didn't wipe your disk, it only prepared its initial data structures.
So, you can use UFS Explorer or the software you are using from stellarinfo (although I don't see that it supports ext4). And if it is slow, that is normal: basically, the software has to scan your whole HDD and identify which block belongs to which file.
Also, don't expect 100% data retrieval; for sure some data is gone by now.
| Recover Linux ext4 partition formatted to hfs+ |
1,430,793,528,000 |
We have a VM with disks sdb, sdc, sdd, etc.
We created an ext4 file system on the sdb disk as follows:
mkfs.ext4 -j -m 0 /dev/sdb -F
mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
262144 inodes, 1048576 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Now we want to check the filesystem type as follows:
lsblk -o NAME,FSTYPE | grep sdb
sdb
but we expected to get the following result:
lsblk -o NAME,FSTYPE | grep sdb
sdb ext4
We don't understand why, after creating an ext4 file system on our disks such as sdb or sdc, we don't get the file-system type (ext4).
|
lsblk uses UDev database to get information about filesystems so if it doesn't show the filesystem type, something is probably wrong with UDev. To check what UDev knows about the device, use udevadm info /dev/sdb and look for ID_FS_TYPE key.
It's also possible that UDev is just too slow on your device and the value is not yet updated in the database so lsblk prints the "old" value which is "no filesystem". You can try running udevadm settle before running lsblk.
Alternatively you can use sudo blkid -p /dev/sdb (or sudo blkid -p /dev/sdb -s TYPE -o value to print only the filesystem type value). blkid with -p option actually reads the metadata area of the device (that's why it needs sudo), so the information will always be correct/up to date.
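The same check can be tried without touching a real disk, on a small file-backed filesystem (a sketch; mkfs.ext4 and blkid are assumed to be installed, and no root is needed when probing a regular file):

```shell
# Create a small file-backed ext4 filesystem and probe its type directly
truncate -s 16M fs.img
mkfs.ext4 -q -F fs.img              # -F: allow formatting a regular file
fstype=$(blkid -p -s TYPE -o value fs.img)
echo "$fstype"                      # ext4
rm fs.img
```

Because -p reads the metadata from the image itself rather than asking the UDev database, the answer is always current.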
| lsblk + file system type not appears from lsblk |
1,430,793,528,000 |
I'm learning a bit about the Ext4 file system here. In the first table at this link they describe the fields of the inode. Each field entry has:
offset
size
name
description
In the description field the document states some of these values are Lower or Higher bits. What do lower/higher bits mean, and what is the explanation behind the use of this concept in this ext4 file system example?
|
Let's take one example, using the doc that you link to in your question for reference: the i_uid field has size __le16 and is described as Lower 16-bits of Owner UID. If the system which created this filesystem allows 16-bit user IDs only, then all of the user ID can fit into the i_uid field: __le16 does indeed stand for "little-endian 16 bits". In your analogy, if you can only have two-digit numbers and the cost is $29, you are all set because it fits.
If it uses 32-bit user IDs (I'm pretty sure no system uses bigger ones), then the 32-bit user ID will not fit into a field of size __le16, so the 32 bits are split into two 16-bit quantities. If we number the bits from 0 for the least significant bit to 31 for the most significant bit (which is just our convention here for making things unambiguous) then bits 0-15 (the "low-order" bits) are put into the i_uid field, but bits 16-31 (the "high-order" bits) don't fit and will have to go somewhere else: on Linux which uses 32-bit user IDs, they end up in the subfield l_i_uid_high of the osd2 field of the inode. In your analogy, if the cost is $129 but you have two-digit boxes, then 29 would fit into the low-order two-digit box, and 01 would go into the high-order two-digit box.
A couple of additional points: note that all the fields are "little-endian" - if the field consists of more than one byte (e.g. __le16 consists of two bytes), then the least-significant byte comes first and the most-significant byte comes second in the order, but they are adjacent. That is regardless of the endianness of the CPU of the system: that way, the way that the filesystem is laid out on disk is independent of the CPU that laid it out; you could read this filesystem on a different system with the opposite endianness if you wanted (with the caveat that the versions of ext4 running on the two systems should be compatible).
Note also that the low-order 16 bits (= 2 bytes) of the user ID and the high-order 16 bits of the user ID are stored at two places on the disk that are not adjacent: the first one is at offset 0x2 from the beginning of the inode, but the second one is at offset 0x74 + 0x4 from the beginning of the inode: 0x74 is where the 12-byte i_osd2 field starts and 0x4 is the offset of l_i_uid_high from the beginning of the i_osd2 field. That probably came about because at some point, "all the world was 16-bit user ids", so early filesystems only reserved the first field for the user id. When the necessity to use 32-bit user ids arose, the second 16 bits could not be placed adjacently, since other fields were already there (in this case the i_size field, which was originally limited to 32-bits, but that too proved too small, so eventually an i_size_high field was added to get another 32 bits of size - see offset 0x6C in the inode), so it was placed at (probably) the first location in the inode which was unused and available for use.
A lot of this complexity was necessitated by backward-compatibility considerations (ext4 wanted to be able to read ext3 filesystems without the user having to do anything special) and by the desire to accommodate future expansion. With 20/20 hindsight, all of the scattered pieces could be put together and you would see e.g. an i_uid of type __le32 instead of having to split it into two pieces. But that's the kind of thing you have to do to move forward without abandoning everything that has gone before.
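The low-/high-bits split described above can be sketched with ordinary shell arithmetic; the variable names here are just labels for the two on-disk fields:

```shell
uid=123456                               # a 32-bit owner UID (0x1E240)
i_uid_low=$(( uid & 0xFFFF ))            # bits 0-15, stored in the i_uid field
i_uid_high=$(( (uid >> 16) & 0xFFFF ))   # bits 16-31, stored in osd2.l_i_uid_high
echo "$i_uid_low $i_uid_high"            # 57920 1
# Recombining the two halves, as a driver would when reading the inode:
uid2=$(( (i_uid_high << 16) | i_uid_low ))
echo "$uid2"                             # 123456
```

For a UID that fits in 16 bits, the high half is simply zero, which is exactly why old 16-bit-only filesystems remain readable.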
| What does upper/lower bits mean? |
1,430,793,528,000 |
Is it safe to run tune2fs -l /dev/device on a mounted filesystem? That is, listing the current values (I'm trying to do this to see if the filesystem is marked as clean).
If it's ok, is there a definitive source where this is documented so that I can rest assured I won't corrupt something?
Thanks
|
The dumpe2fs command can be used on a mounted partition.
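For example, to read the clean/error state (sketched here on a file-backed filesystem so no root is needed; on a real device it would be e.g. sudo dumpe2fs -h /dev/sda1):

```shell
# Build a throwaway ext4 image and read its superblock state
truncate -s 16M fs.img
mkfs.ext4 -q -F fs.img
state=$(dumpe2fs -h fs.img 2>/dev/null | grep 'Filesystem state')
echo "$state"    # contains "clean" on a freshly created filesystem
rm fs.img
```

The -h flag prints only the superblock information, which includes the same "Filesystem state" line you would look for with tune2fs -l.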
| Is it safe to run tune2fs -l /dev/device on a mounted filesystem? |
1,430,793,528,000 |
I have a ~450 gigabyte ext4 filesystem, located at /dev/sda5 on my computer. The partition it resided in, however, was about a gigabyte bigger, so I used the command e2image -ra -p -O XXXX to move the filesystem to the left, so I could use the extra gigabyte on a partition at /dev/sda6. I didn't run that exact command, of course - I forget the decimal value after -O (I didn't run these commands manually; this was done by GParted). e2image got forcefully killed (signal nine) about 100/170 gigabytes through the process.

I mounted sda5 read-only, and got many errors about invalid inodes and bad structure files when I tried to access various files or list various directories. So I ran fsck (which I now realize probably damaged things further), and directories like /home and /run (which probably contained the most data on the whole partition), which I hadn't been able to run ls in successfully and had gotten lots of error messages about, were non-existent afterwards. I ran grep on sda5 and found various files from /home still intact, but I couldn't access them normally because /home was deleted by fsck.

How can I recover files without manually searching for them with grep, less, or a hex editor? I have very important files I need to get back.
|
I think the commenters are right that your file system is pretty well broken. I once accidentally wrote /dev/zero to my main hard drive (while operating) for about half a second before killing it. I was able to use Photorec (http://www.cgsecurity.org/wiki/PhotoRec) to recover my partition table (pretty much automagically) and I am still using that system.
I think you should take a look at Photorec, since it is open-source (yeah!), and because it reads directly from the disk (ignoring the filesystem), so I suspect it might easily save you many hours of searching for files manually.
Also, the commenter who says "image your current, messed-up hard drive right now" is definitely correct.
| Corrupt ext4 filesystem after e2image interrupted |
1,430,793,528,000 |
1] I have installed Kali Linux on my laptop, but it converted all my hard disk partitions to ext4 format, so my old data on the NTFS partitions is not showing on the ext4 partitions. How can I get my old data back?
2] And then I tried to install Windows 10 again, but it doesn't show any partitions during the installation process.
I want to install Windows 10 again and get all my data back.
Please help me.
|
If you repartition a hard disk drive, all the data on that hard disk drive is gone.
That's why it's always recommended to make a backup of your data before making drastic changes (like changing operating systems) to your computer.
| how i can get ntfs data on ext4 partition? |
1,430,793,528,000 |
I've been using Linux on ext4 file systems for many years - before that I used Windows on NTFS for many years. The ext4 file system strikes me as much more sensitive to crashes than NTFS. If I had a crash on Windows, the NTFS file system was always able to restore operation almost without problems, whereas the ext4 file system - well, you always hear that you must "never, never, ever!" just pull the plug on an ext4 file system, or it WILL be damaged! And of course, I've experienced that myself a number of times - sometimes, an ext4 system WILL lose power due to unforeseen events.
Why has a more crash-resilient filesystem not been adopted by various Linux distributions?
|
Both NTFS and ext2/ext4 are damaged by an unclean shutdown.
This is caused by cached metadata, open files, file left open after being deleted, partially written files, and many other issues.
The resilience of the repair process in both has improved dramatically over the last 20 years. It's just that ext4 is very noisy about its repairs, while NTFS just takes longer to boot while it silently does its repairs.
Some of the ext4 repairs (like finalizing deleted files that were open when the system crashed) cause lots of scary messages during the filesystem check at next boot, but this is completely harmless, and is something that would have eventually happened (but silently) anyway if the system hadn't crashed.
I could say that ext4 damage is more likely than NTFS damage on unclean system shutdown because linux has typically more things running at once and using the filesystem than a windows machine could, but this would just be mean and possibly not even true.
So it only seems like ext4 is more sensitive than NTFS. In reality, it's just that ext4 tells you what it is doing while NTFS (obviously rightly) assumes that mere users don't need to know what is going on.
| Why is the ext4 file system so sensitive to crashes? [closed] |
1,326,231,956,000 |
There is often a need in the open source or active developer community to publish large video segments online. (Meet-up videos, campouts, tech talks...) Being that I am a developer and not a videographer I have no desire to fork out the extra scratch on a premium Vimeo account. How then do I take a 12.5 GB (1:20:00) MPEG tech talk video and slice it into 00:10:00 segments for easy uploading to video sharing sites?
|
$ ffmpeg -i source-file.foo -ss 0 -t 600 first-10-min.m4v
$ ffmpeg -i source-file.foo -ss 600 -t 600 second-10-min.m4v
$ ffmpeg -i source-file.foo -ss 1200 -t 600 third-10-min.m4v
...
Wrapping this up into a script to do it in a loop wouldn't be hard.
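As one dry-run sketch of such a loop (it only prints the commands; remove the echo to actually run them - the total length is hardcoded here, whereas in practice you would query it with ffprobe):

```shell
in=source-file.foo
dur=600        # chunk length in seconds (10 minutes)
total=4800     # clip length in seconds, hardcoded for this sketch
start=0
i=1
while [ "$start" -lt "$total" ]; do
    # Each pass prints one ffmpeg command covering [start, start+dur)
    echo ffmpeg -i "$in" -ss "$start" -t "$dur" "part-$i.m4v"
    start=$(( start + dur ))
    i=$(( i + 1 ))
done
```

For an 80-minute clip this prints eight commands, one per 10-minute chunk.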
Beware that if you try to calculate the number of iterations based on the duration output from an ffprobe call that this is estimated from the average bit rate at the start of the clip and the clip's file size unless you give the -count_frames argument, which slows its operation considerably.
Another thing to be aware of is that the position of the -ss option on the command line matters. Where I have it now is slow but accurate. The linked article describes fast-but-inaccurate and slower-but-still-accurate alternative formulations. You pay for the latter with a certain complexity.
All that aside, I don't think you really want to be cutting at exactly 10 minutes for each clip. That will put cuts right in the middle of sentences, even words. I think you should be using a video editor or player to find natural cut points just shy of 10 minutes apart.
Assuming your file is in a format that YouTube can accept directly, you don't have to reencode to get segments. Just pass the natural cut point offsets to ffmpeg, telling it to pass the encoded A/V through untouched by using the "copy" codec:
$ ffmpeg -i source.m4v -ss 0 -t 593.3 -c copy part1.m4v
$ ffmpeg -i source.m4v -ss 593.3 -t 551.64 -c copy part2.m4v
$ ffmpeg -i source.m4v -ss 1144.94 -t 581.25 -c copy part3.m4v
...
The -c copy argument tells it to copy all input streams (audio, video, and potentially others, such as subtitles) into the output as-is. For simple A/V programs, it is equivalent to the more verbose flags -c:v copy -c:a copy or the old-style flags -vcodec copy -acodec copy. You would use the more verbose style when you want to copy only one of the streams, but re-encode the other. For example, many years ago there was a common practice with QuickTime files to compress the video with H.264 video but leave the audio as uncompressed PCM; if you ran across such a file today, you could modernize it with -c:v copy -c:a aac to reprocess just the audio stream, leaving the video untouched.
The start point for every command above after the first is the previous command's start point plus the previous command's duration.
| How can I use ffmpeg to split MPEG video into 10 minute chunks? |
1,326,231,956,000 |
I found something for videos, which looks like this.
ffmpeg -i * -c:v libx264 -crf 22 -map 0 -segment_time 1 -g 1 -sc_threshold 0 -force_key_frames "expr:gte(t,n_forced*9)" -f segment output%03d.mp4
I tried using that for an audio file, but only the first output file contained actual audio; the others were silent. Other than that it was good - it made a new audio file for every second. Does anyone know what to modify to make this work with audio files, or another command that can do the same?
|
This worked for me when I tried it on a mp3 file.
$ ffmpeg -i somefile.mp3 -f segment -segment_time 3 -c copy out%03d.mp3
Where -segment_time is the amount of time you want per each file (in seconds).
References
Splitting an audio file into chunks of a specified length
4.22 segment, stream_segment, ssegment - ffmpeg documentation
| How do I split an audio file into multiple? |
1,326,231,956,000 |
I have an FFmpeg command to trim audio:
ffmpeg -ss 01:43:46 -t 00:00:44.30 -i input.mp3 output.mp3
The problem I have with this command is that option -t requires a duration (in seconds) from 01:43:46. I want to trim audio using start/stop times, e.g. between 01:43:46 and 00:01:45.02.
Is this possible?
|
ffmpeg seems to have a new option -to in the documentation:
-to position (input/output)
Stop writing the output or reading the input at position. position
must be a time duration specification, see (ffmpeg-utils)the Time
duration section in the ffmpeg-utils(1) manual.
-to and -t are mutually exclusive and -t has priority.
Sample command with two time formats
ffmpeg -i file.mkv -ss 20 -to 40 -c copy file-2.mkv
ffmpeg -i file.mkv -ss 00:00:20 -to 00:00:40 -c copy file-2.mkv
This should create a copy (file-2.mkv) of file.mkv from the 20 second mark to the 40 second mark.
| Trim audio file using start and stop times |
1,326,231,956,000 |
I have two video clips. Both are 640x480 and last 10 minutes. One contains background audio, the other one a singing actor. I would like to create a single 10-minute video clip measuring 1280x480 - in other words, I want to place the videos next to each other and play them simultaneously, mixing the audio from both clips. I've tried to figure out how to do this with ffmpeg/avidemux, but so far I came up empty; they all refer to concatenating when I search for merging.
Any recommendations?
|
ffmpeg \
-i input1.mp4 \
-i input2.mp4 \
-filter_complex '[0:v]pad=iw*2:ih[int];[int][1:v]overlay=W/2:0[vid]' \
-map '[vid]' \
-c:v libx264 \
-crf 23 \
-preset veryfast \
output.mp4
This essentially doubles the size of input1.mp4 by padding the right side with black the same size as the original video, and then places input2.mp4 over the top of that black area with the overlay filter.
Source: https://superuser.com/questions/153160/join-videos-split-screen
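The question also asks for the audio of both clips to be mixed, which the command above does not do. One hedged sketch is to append an amix filter (assuming each input carries exactly one audio stream); shown as a dry run - drop the echo to execute:

```shell
# Video: pad + overlay as above; audio: blend both inputs with amix
filter='[0:v]pad=iw*2:ih[int];[int][1:v]overlay=W/2:0[vid];[0:a][1:a]amix=inputs=2[aud]'
echo ffmpeg -i input1.mp4 -i input2.mp4 \
    -filter_complex "$filter" \
    -map '[vid]' -map '[aud]' \
    -c:v libx264 -crf 23 -preset veryfast output.mp4
```

The extra -map '[aud]' is needed so the mixed audio stream actually ends up in the output file.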
| Merge two video clips into one, placing them next to each other |
1,326,231,956,000 |
What command line should I use to convert from AVI to MP4 (and vice versa) without changing the frame size, keeping the file about as small as the original or only a little bigger? Whenever I tried converting, the result was around 2 GB.
|
Depending on how your original file was encoded, it may not be possible to keep the file size.
This command should keep frame sizes and rates intact while making an mp4 file:
ffmpeg -i infile.avi youroutput.mp4
And this command will give you information about your input file - the frame size, codecs used, bitrate, etc.:
ffmpeg -i infile.avi
You can also play with the acodec and vcodec options when you generate your output. Remember also that mp4 and avi files can use various codecs and your mileage may vary according to which codec you pick.
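As one concrete sketch of playing with those options (H.264 video with CRF rate control plus AAC audio - the particular settings are an assumption to tune to taste), shown as a dry run so it can be pasted safely:

```shell
# Drop the `echo` to actually run it; a higher -crf value means a smaller file
echo ffmpeg -i infile.avi -c:v libx264 -crf 23 -preset medium -c:a aac outfile.mp4
```

CRF keeps quality roughly constant while letting the bitrate (and therefore file size) vary, which usually avoids the 2 GB blow-up from a fixed high bitrate.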
| Encode with ffmpeg using avi to mp4 |