On a new external hard drive (Intenso 05-1204-18A), I made two partitions with GParted:

Disk /dev/sdc: 931.5 GiB, 1000204883968 bytes, 1953525164 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfa00a60d

Device     Boot     Start        End    Sectors   Size Id Type
/dev/sdc1            2048   50794495   50792448  24.2G  b W95 FAT32
/dev/sdc2        50794496 1953523711 1902729216 907.3G 83 Linux

(I am using Linux 3.19.3-3-ARCH GNU/Linux.) When I mount the first partition (using the file manager, but it works from a terminal too), I see:

drwxr-x---+ 3 felicien felicien    60 Apr 16 16:31 .
drwxr-xr-x  3 root     root        60 Apr 16 15:58 ..
drwxr-xr-x  4 felicien felicien 16384 Jan  1  1970 INTENSO WIN

I can mkdir and do everything in this directory. When I mount the second:

drwxr-x---+ 4 felicien felicien    80 Apr 16 16:32 .
drwxr-xr-x  3 root     root        60 Apr 16 15:58 ..
drwxr-xr-x  4 felicien felicien 16384 Jan  1  1970 INTENSO WIN
drwxr-xr-x  3 root     root      4096 Apr 16 16:02 Intenso Linux

I have to chown the directory to be able to write into it. Why do I have write permission on the FAT32 partition but not on the ext4 one?
The FAT32 filesystem has no notion of ownership or permissions. The man page for mount lists these options that help make it look closer to what Unix users expect:

uid=value and gid=value
    Set the owner and group of all files. (Default: the uid and gid of the current process.)

umask=value
    Set the umask (the bitmask of the permissions that are not present). The default is the umask of the current process.

So when you mounted it, it was mounted with your userid, groupid, and umask (which I'm guessing is 022). All files and directories will be owned by you, and will have permissions rwxr-xr-x. ext4, on the other hand, is a classic Unix filesystem that stores userid, groupid, and permission information. If you create a directory while running as root, it will be owned by root until you use chown to change it. You can change the group or other permissions, using chmod, to make an object writable by multiple users.
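Since FAT32 stores no permission bits on disk, every mode the kernel reports is computed from the mount options. A minimal sketch of that computation, plus a hypothetical mount invocation (the device name is taken from the question; the uid/gid values and mount point are assumptions):

```shell
# FAT has no on-disk permissions; the kernel synthesizes them at mount
# time as 0777 minus the umask. With umask=022 that gives 0755,
# i.e. rwxr-xr-x, exactly what the listing in the question shows.
mode=$(printf '%03o' $(( 0777 & ~0022 )))
echo "$mode"

# Hypothetical mount making the partition group-writable as well
# (umask=002); uid/gid 1000 are assumed values for user felicien:
# mount -o uid=1000,gid=1000,umask=002 /dev/sdc1 /mnt/usb
```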
Can't write on recently created EXT4 partition
I copied some files from a data DVD to /home/emma (ext4), and all of the files are read-only. This is what all of the files are like:

emma@emma-W54-55SU1-SUW:~$ stat cd/Drivers/Drivers_List.rtf
  File: ‘cd/Drivers/Drivers_List.rtf’
  Size: 28120      Blocks: 56         IO Block: 4096   regular file
Device: 801h/2049d Inode: 656521      Links: 1
Access: (0400/-r--------)  Uid: ( 1000/ emma)   Gid: ( 1000/ emma)
Access: 2014-01-17 05:34:46.000000000 +0000
Modify: 2014-01-17 05:34:46.000000000 +0000
Change: 2015-02-01 23:11:04.226865424 +0000
 Birth: -

When I try to delete them, I get rm: cannot remove ‘cd/Drivers/Drivers_List.rtf’: Permission denied, even though I'm the owner. Changing the mode to 777 doesn't work either. The only thing that works is deleting them as root, using sudo. I thought only an i attribute made files impossible for their owner to delete, so what's going on? I'm using Xubuntu 14.10. Results of various commands below. (Please note: I created directory cd myself, and then copied directory Drivers to it from the DVD.)
emma@emma-W54-55SU1-SUW:~$ ls -dlh cd
drwxrwxr-x 3 emma emma 4.0K Feb  3 01:44 cd
emma@emma-W54-55SU1-SUW:~$ ls -dlh cd/Drivers
dr-x------ 11 emma emma 4.0K Feb  3 02:15 cd/Drivers
emma@emma-W54-55SU1-SUW:~$ ls -l cd/Drivers/Drivers_List.rtf
-r-------- 1 emma emma 28120 Jan 17  2014 cd/Drivers/Drivers_List.rtf
emma@emma-W54-55SU1-SUW:~$ rm cd/Drivers/Drivers_List.rtf
rm: cannot remove ‘cd/Drivers/Drivers_List.rtf’: Permission denied
emma@emma-W54-55SU1-SUW:~$ chmod 660 cd/Drivers/Drivers_List.rtf
emma@emma-W54-55SU1-SUW:~$ ls -l cd/Drivers/Drivers_List.rtf
-rw-rw---- 1 emma emma 28120 Jan 17  2014 cd/Drivers/Drivers_List.rtf
emma@emma-W54-55SU1-SUW:~$ rm cd/Drivers/Drivers_List.rtf
rm: cannot remove ‘cd/Drivers/Drivers_List.rtf’: Permission denied
emma@emma-W54-55SU1-SUW:~$ chmod 777 cd/Drivers/Drivers_List.rtf
emma@emma-W54-55SU1-SUW:~$ ls -l cd/Drivers/Drivers_List.rtf
-rwxrwxrwx 1 emma emma 28120 Jan 17  2014 cd/Drivers/Drivers_List.rtf
emma@emma-W54-55SU1-SUW:~$ rm cd/Drivers/Drivers_List.rtf
rm: cannot remove ‘cd/Drivers/Drivers_List.rtf’: Permission denied
emma@emma-W54-55SU1-SUW:~$ lsattr cd/Drivers/Drivers_List.rtf
-------------e-- cd/Drivers/Drivers_List.rtf
emma@emma-W54-55SU1-SUW:~$ ls -alh cd/Drivers
total 48K
dr-x------ 11 emma emma 4.0K Feb  3 02:15 .
drwxrwxr-x  3 emma emma 4.0K Feb  3 01:44 ..
dr-x------  7 emma emma 4.0K Jan 14  2014 01Chipset
dr-x------  3 emma emma 4.0K Jan 14  2014 02Video
dr-x------  9 emma emma 4.0K Jan 14  2014 03Lan
dr-x------  9 emma emma 4.0K Jan 14  2014 04CReader
dr-x------  3 emma emma 4.0K Jan 17  2014 05Touchpad
dr-x------  3 emma emma 4.0K Jan 14  2014 06Airplane
dr-x------  2 emma emma 4.0K Jan 17  2014 07Hotkey
dr-x------ 12 emma emma 4.0K Jan 14  2014 08IME
dr-x------  7 emma emma 4.0K Jan 14  2014 09Audio
-r--------  1 emma emma  162 Feb 24  2012 ~$ivers_List.rtf

(I've already deleted cd/Drivers/Drivers_List.rtf using sudo as a test.)
I've found the answer myself here: deleting a file removes its entry from the parent directory, so it requires write permission on that directory, not on the file itself. Because cd/Drivers has no write permission (dr-x------), only root, which bypasses permission checks, can delete from it.
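The rule at work can be demonstrated without root in a scratch directory; the file names below are made up, and only the permission pattern matches the question:

```shell
# Unlinking a file modifies its parent directory, so rm needs write
# permission on the directory; the file's own mode (even 777) is
# irrelevant. Files copied from a DVD keep the DVD's read-only modes.
tmp=$(mktemp -d)
mkdir "$tmp/Drivers"
touch "$tmp/Drivers/list.rtf"
chmod 500 "$tmp/Drivers"                 # dr-x------ like the copied tree
rm -f "$tmp/Drivers/list.rtf" 2>/dev/null || true  # denied for a normal user
chmod -R u+w "$tmp/Drivers"              # the actual fix
rm -f "$tmp/Drivers/list.rtf"            # now succeeds
rm -r "$tmp"
```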
Why can't I delete my files?
How would I write isolinux to an ext4 filesystem on a 1 GB SD card so that it would boot at startup?
You wouldn't, as an SD card normally doesn't look like an ISO image, which is what ISOLINUX expects. Instead, have a look at EXTLINUX. In short: mount your SD card, then run extlinux --install MOUNTPOINT/boot.
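A hedged sketch of the EXTLINUX route; the device name /dev/sdX is a placeholder, and the location of the MBR boot code file varies by distribution:

```shell
# All commands need root and a real device, so they are shown commented out.
# mkfs.ext4 /dev/sdX1                 # the ext4 filesystem from the question
# mount /dev/sdX1 /mnt/sd
# mkdir -p /mnt/sd/boot
# extlinux --install /mnt/sd/boot     # install the EXTLINUX boot loader
# Write MBR boot code so the firmware can hand off to the partition
# (path is an example; it differs between distributions):
# dd if=/usr/lib/syslinux/mbr/mbr.bin of=/dev/sdX bs=440 count=1
# Finally mark the partition bootable, e.g.:
# parted /dev/sdX set 1 boot on
```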
How to write isolinux to an ext4 filesystem
Last night my laptop went through a hard shutdown because the battery ran out of charge. After the reboot, Xorg hung, and troubleshooting was pretty hard. A first file system check of my ext4 partition yielded no errors. I then checked the related logs and found nothing irregular. Since I use xdm, I eventually looked in /var/log/xdm.log, where I found the following line, which I had overlooked several times before:

/usr/bin/X: symbol lookup error: /usr/lib/libpciaccess.so.0: undefined symbol: gzopen64

Then I ran apt-get install --reinstall libpciaccess, and after a reboot everything was fine again. I know that a hard shutdown can corrupt data because the disk cache can no longer be physically written out. Since I have no deeper understanding of filesystem internals, I wonder why /usr/lib/libpciaccess.so.0 was harmed by the shutdown, particularly given that the system only reads from a shared library, so corruption seems less likely in the sectors where it is stored. Furthermore, I would like to know which filesystems are more resistant to hard shutdowns and which are less.
It is not so much a question of the "right" filesystem as of how you use it and how you mount it. With ext3 and ext4 you can use the ro, sync, and dirsync mount options. Filesystems with an intent (write-ahead) log normally fare better, provided the metadata is synced before the actual write. In your case it might be that the library cache (/etc/ld.so.cache) was corrupted; a simple ldconfig might have fixed the problem. Sometimes you need to force a full filesystem check to find and correct errors, and sometimes you need a rescue boot via netboot or a CD/DVD image to do so: run fsck -f -y ... and write the output to a log file for later review. After that, go through every file and directory that was reported as damaged and check whether the owning package is still intact (on RPM-based systems: rpm -V); Debian has a comparable mechanism to determine which package a file belongs to and to verify the package's integrity.
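On Debian (which the poster is evidently using, given apt-get), the rpm -V analogue mentioned above is dpkg's verify mode (or the debsums package). A hedged sketch of the checks; the device name is a placeholder:

```shell
# Commented out: needs a real system, and root for the fsck step.
# Verify installed files of the suspect package against recorded md5sums;
# silent output means everything matches:
# dpkg -V libpciaccess0
# Rebuild the runtime linker cache in case /etc/ld.so.cache was corrupted:
# ldconfig
# Forced filesystem check from a rescue boot, with a log for later review:
# fsck -f -y /dev/sdXN 2>&1 | tee /root/fsck.log
```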
corrupted library after hard shutdown
I hope that this is not a duplicate question. I have seen several similar questions where the answer was to blacklist the respective device or partition, but in my case I can't do that (see below). Having said this: on a Debian buster x64 host, I have created a VM (based on QEMU). The VM runs on a block device partition, let's say /dev/sdc1. I installed the Debian system on that partition basically like this (some steps omitted):

#> mkfs.ext4 -j /dev/sdc1
#> mount /dev/sdc1 /mnt/target
#> debootstrap ... bullseye /mnt/target

Then I bind-mounted the necessary directories (/dev, /sys etc.), chrooted into /mnt/target, completed the guest OS installation and booted the VM. The VM first started without issues. But with every VM reboot, the VM developed more problems, which I kept repairing at the GRUB and initramfs prompts, until repairing was no longer possible because the ext4 file system had obviously been damaged. Because I originally thought that I had done something wrong, e.g. forgotten to unmount the ext4 partition before starting the VM, I repeated the whole installation from scratch multiple times. The result was the same in every case: after a few restarts, the ext4 file system was so damaged that I couldn't repair it. By accident, I have found the reason for this, but have no idea how to solve the problem. I noticed that e2fsck refused to operate on that partition, claiming that it was in use although it was not mounted and the VM was not running. Further investigation showed that there existed a kernel thread jbd2/sdc. That means that the host kernel accesses the journal on that partition / file system. When I start the VM, the guest kernel of course does the same. I am nearly sure that the corruption of the file system is due to both kernels accessing the file system, notably the journal, at the same time. How can I solve the problem?
I cannot blacklist the respective disk or the respective partition on the host, because I need to mount them there to prepare or complete the guest OS installation in a chroot. On the other hand, it doesn't seem possible to tell the host kernel to release the journal as soon as the VM starts. I have installed a lot of VMs in past years exactly the same way, but did not turn on the journal when creating their ext4 file systems. Consequently, I didn't have this issue with those VMs.

Edit 1

In case it is relevant: when mounting the partition and chrooting into it to complete the guest OS installation, I use the following commands:

cd /mnt
mkdir target
mount /dev/sdc1 target
mount --rbind /dev target/dev
mount --make-rslave target/dev
mount --rbind /proc target/proc
mount --make-rslave target/proc
mount --rbind /sys target/sys
mount --make-rslave target/sys
LANG=C.UTF-8 chroot target /bin/bash --login

When unmounting, I just do:

umount -R target

The umount command does not report any error.
By passing -o norecovery to mount, you could mount the filesystem without making use of the journal at all. From the man page for mount, ext3 section:

norecovery/noload
    Don't load the journal on mounting. Note that if the filesystem was not unmounted cleanly, skipping the journal replay will lead to the filesystem containing inconsistencies that can lead to any number of problems.
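Applied to the setup in the question (the device name comes from there), the workflow could look like this hedged sketch. Note that skipping the journal is only safe read-only, so it suits inspection rather than the installation phase:

```shell
# Commented out: needs the real block device and root.
# Inspect the guest filesystem without the host touching its journal;
# no jbd2 kernel thread should appear for this mount:
# mount -o ro,norecovery /dev/sdc1 /mnt/target
# For the read-write debootstrap/chroot phase, mount normally, but make
# sure the unmount has completed and the journal thread is gone before
# the VM is started, so host and guest never share the journal:
# umount -R /mnt/target
# ps ax | grep '[j]bd2'      # verify no thread remains for sdc1
```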
How to keep the kernel from accessing the journal on an ext4 partition?
Trying to format this LV, /dev/mapper/nvmeVg-var, which is not mounted (see findmnt below):

mkfs.ext3 /dev/mapper/nvmeVg-var
mke2fs 1.45.6 (20-Mar-2020)
/dev/mapper/nvmeVg-var contains a ext4 file system
    last mounted on /var on Mon Oct 11 23:18:35 2021
Proceed anyway? (y,N) y
/dev/mapper/nvmeVg-var is apparently in use by the system; will not make a filesystem here!

[root@localhost-live snapshots]# findmnt
TARGET SOURCE FSTYPE OPTIONS
/ /dev/mapper/live-rw ext4 rw,relatime,seclabel
├─/proc proc proc rw,nosuid,nodev,noexec,relatime
│ └─/proc/sys/fs/binfmt_misc systemd-1 autofs rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direc
│   └─/proc/sys/fs/binfmt_misc binfmt_misc binfmt_mis rw,nosuid,nodev,noexec,relatime
├─/sys sysfs sysfs rw,nosuid,nodev,noexec,relatime,seclabel
│ ├─/sys/kernel/security securityfs securityfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/cgroup cgroup2 cgroup2 rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_rec
│ ├─/sys/fs/pstore pstore pstore rw,nosuid,nodev,noexec,relatime,seclabel
│ ├─/sys/firmware/efi/efivars efivarfs efivarfs rw,nosuid,nodev,noexec,relatime
│ ├─/sys/fs/bpf none bpf rw,nosuid,nodev,noexec,relatime,mode=700
│ ├─/sys/fs/selinux selinuxfs selinuxfs rw,nosuid,noexec,relatime
│ ├─/sys/kernel/debug debugfs debugfs rw,nosuid,nodev,noexec,relatime,seclabel
│ │ └─/sys/kernel/debug/tracing tracefs tracefs rw,nosuid,nodev,noexec,relatime,seclabel
│ ├─/sys/kernel/tracing tracefs tracefs rw,nosuid,nodev,noexec,relatime,seclabel
│ ├─/sys/fs/fuse/connections fusectl fusectl rw,nosuid,nodev,noexec,relatime
│ └─/sys/kernel/config configfs configfs rw,nosuid,nodev,noexec,relatime
├─/dev devtmpfs devtmpfs rw,nosuid,seclabel,size=32845836k,nr_inodes=8211459,mode=755,i
│ ├─/dev/shm tmpfs tmpfs rw,nosuid,nodev,seclabel,inode64
│ ├─/dev/pts devpts devpts rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000
│ ├─/dev/mqueue mqueue mqueue rw,nosuid,nodev,noexec,relatime,seclabel
│ └─/dev/hugepages hugetlbfs hugetlbfs rw,relatime,seclabel,pagesize=2M
├─/run tmpfs tmpfs rw,nosuid,nodev,seclabel,size=13150860k,nr_inodes=819200,mode=
│ ├─/run/initramfs/live /dev/sdf1 iso9660 ro,relatime,nojoliet,check=s,map=n,blocksize=2048
│ ├─/run/media/liveuser/c90f13b9-f228-4051-a586-7b6083f50105 /dev/sdb1 ext4 rw,nosuid,nodev,relatime,seclabel
│ ├─/run/media/liveuser/Anaconda /dev/mapper/live-base ext4 ro,nosuid,nodev,relatime,seclabel
│ ├─/run/user/1000 tmpfs tmpfs rw,nosuid,nodev,relatime,seclabel,size=6575428k,nr_inodes=1643
│ │ └─/run/user/1000/gvfs gvfsd-fuse fuse.gvfsd rw,nosuid,nodev,relatime,user_id=1000,group_id=1000
│ ├─/run/media/liveuser/d52b3913-2ed2-4142-9309-3fdf641141f0 /dev/md127 ext4 rw,nosuid,nodev,relatime,seclabel,stripe=256
│ ├─/run/media/liveuser/disk /dev/loop0 squashfs ro,nosuid,nodev,relatime,seclabel
│ └─/run/media/liveuser/66a1a58a-c06f-4407-8d47-1fd4266c6b75 /dev/mapper/centos-root xfs rw,nosuid,nodev,relatime,seclabel,attr2,inode64,logbufs=8,logb
├─/var/lib/nfs/rpc_pipefs rpc_pipefs rpc_pipefs rw,relatime
├─/tmp tmpfs tmpfs rw,nosuid,nodev,seclabel,size=32877144k,nr_inodes=409600,inode
├─/var/tmp vartmp tmpfs rw,relatime,seclabel,inode64
└─/mnt /dev/mapper/centos-home xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noqu

[liveuser@localhost-live ~]$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 1.8G 1 loop
loop1 7:1 0 7.5G 1 loop
├─live-rw 253:6 0 7.5G 0 dm /
└─live-base 253:7 0 7.5G 1 dm
loop2 7:2 0 32G 0 loop
└─live-rw 253:6 0 7.5G 0 dm /
sda 8:0 0 447.1G 0 disk
├─sda1 8:1 0 200M 0 part
├─sda2 8:2 0 1G 0 part
└─sda3 8:3 0 445.9G 0 part
  ├─centos-swap 253:0 0 31.4G 0 lvm
  ├─centos-home 253:1 0 364.5G 0 lvm
  └─centos-root 253:2 0 50G 0 lvm
sdb 8:16 0 447.1G 0 disk
└─sdb1 8:17 0 447.1G 0 part
sdc 8:32 0 1.8T 0 disk
└─md127 9:127 0 3.6T 0 raid5
sdd 8:48 0 1.8T 0 disk
└─md127 9:127 0 3.6T 0 raid5
sde 8:64 0 1.8T 0 disk
└─md127 9:127 0 3.6T 0 raid5
sdf 8:80 1 3.6G 0 disk
├─sdf1 8:81 1 1.9G 0 part /run/initramfs/live
├─sdf2 8:82 1 9.9M 0 part
└─sdf3 8:83 1 20.9M 0 part
sr0 11:0 1 2K 0 rom
zram0 252:0 0 8G 0 disk [SWAP]
nvme1n1 259:0 0 953.9G 0 disk
├─nvme1n1p1 259:1 0 953M 0 part
├─nvme1n1p2 259:2 0 46.6G 0 part
│ ├─nvmeVg-var 253:3 0 44G 0 lvm
│ └─nvmeVg-home 253:4 0 181G 0 lvm
├─nvme1n1p3 259:3 0 46.6G 0 part
│ ├─nvmeVg-home 253:4 0 181G 0 lvm
│ └─nvmeVg-root 253:5 0 100G 0 lvm
├─nvme1n1p4 259:4 0 46.6G 0 part
│ └─nvmeVg-home 253:4 0 181G 0 lvm
├─nvme1n1p5 259:5 0 46.6G 0 part
│ └─nvmeVg-home 253:4 0 181G 0 lvm
├─nvme1n1p6 259:6 0 46.6G 0 part
│ └─nvmeVg-root 253:5 0 100G 0 lvm
├─nvme1n1p7 259:7 0 46.6G 0 part
│ └─nvmeVg-root 253:5 0 100G 0 lvm
├─nvme1n1p8 259:8 0 46.6G 0 part
│ └─nvmeVg-home 253:4 0 181G 0 lvm
├─nvme1n1p9 259:9 0 46.6G 0 part
├─nvme1n1p10 259:10 0 46.6G 0 part
├─nvme1n1p11 259:11 0 46.6G 0 part
└─nvme1n1p12 259:12 0 1G 0 part
nvme0n1 259:13 0 931.5G 0 disk

--- Logical volume ---
LV Path /dev/nvmeVg/var
LV Name var
VG Name nvmeVg
LV UUID 9WAde0-jcOC-ymG3-petc-cqjX-dBdS-fi4fXM
LV Write Access read/write
LV Creation host, time orcacomputers.orcainbox, 2021-01-25 18:37:42 -0500
LV Status available
# open 0
LV Size 44.00 GiB
Current LE 11264
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

--- Logical volume ---
LV Path /dev/nvmeVg/home
LV Name home
VG Name nvmeVg
LV UUID zdQoid-kIS8-98bk-BncS-eLvf-fTD8-t8cVQ9
LV Write Access read/write
LV Creation host, time orcacomputers.orcainbox, 2021-01-25 22:53:20 -0500
LV Status available
# open 0
LV Size 181.00 GiB
Current LE 46336
Segments 7
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4

--- Logical volume ---
LV Path /dev/nvmeVg/root
LV Name root
VG Name nvmeVg
LV UUID NcQmu9-17Kn-yBlu-PrzZ-xcyP-kDjm-afgKYI
LV Write Access read/write
LV Creation host, time orcacomputers.orcainbox, 2021-01-27 00:34:57 -0500
LV Status available
# open 0
LV Size 100.00 GiB
Current LE 25600
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5

[root@localhost-live liveuser]# vgdisplay
--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size 445.93 GiB
PE Size 4.00 MiB
Total PE 114159
Alloc PE / Size 114158 / <445.93 GiB
Free PE / Size 1 / 4.00 MiB
VG UUID h3Rhh8-1jGr-ylLe-Hagr-vJ8h-fibH-PxYOye

--- Volume group ---
VG Name nvmeVg
System ID
Format lvm2
Metadata Areas 7
Metadata Sequence No 11
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 0
Max PV 0
Cur PV 7
Act PV 7
VG Size <325.94 GiB
PE Size 4.00 MiB
Total PE 83440
Alloc PE / Size 83200 / 325.00 GiB
Free PE / Size 240 / 960.00 MiB
VG UUID sM2ZQz-ke7H-543U-EylK-pO25-0G6S-jhV57f

[root@localhost-live liveuser]# pvdisplay
--- Physical volume ---
PV Name /dev/sda3
VG Name centos
PV Size 445.93 GiB / not usable 0
Allocatable yes
PE Size 4.00 MiB
Total PE 114159
Free PE 1
Allocated PE 114158
PV UUID OjAFDa-Il7s-Vj0h-Lian-culw-97um-9GYjOo

--- Physical volume ---
PV Name /dev/nvme1n1p2
VG Name nvmeVg
PV Size <46.57 GiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 11920
Free PE 240
Allocated PE 11680
PV UUID M1em0l-TY0y-ZuIt-DK2i-0yJp-OHNz-7RfupC

--- Physical volume ---
PV Name /dev/nvme1n1p3
VG Name nvmeVg
PV Size <46.57 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 11920
Free PE 0
Allocated PE 11920
PV UUID qkaPsI-FLzs-wt4Y-bnhm-BpGK-aOcR-fheulP

--- Physical volume ---
PV Name /dev/nvme1n1p4
VG Name nvmeVg
PV Size <46.57 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 11920
Free PE 0
Allocated PE 11920
PV UUID CTkIFV-Ebvf-Ps5w-rysY-s7U0-VLhs-6jLVRV

--- Physical volume ---
PV Name /dev/nvme1n1p5
VG Name nvmeVg
PV Size <46.57 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 11920
Free PE 0
Allocated PE 11920
PV UUID Sjii2Q-zkwB-9Nhb-0g6o-4rt3-O9gy-4CMtEI

--- Physical volume ---
PV Name /dev/nvme1n1p6
VG Name nvmeVg
PV Size <46.57 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 11920
Free PE 0
Allocated PE 11920
PV UUID QLUYbk-TzNY-RZHz-ck60-gbqA-kPtk-QT2Tm4

--- Physical volume ---
PV Name /dev/nvme1n1p7
VG Name nvmeVg
PV Size <46.57 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 11920
Free PE 0
Allocated PE 11920
PV UUID nQg41G-8A3m-wMog-LBzJ-U09n-W1md-lgVEdQ

--- Physical volume ---
PV Name /dev/nvme1n1p8
VG Name nvmeVg
PV Size <46.57 GiB / not usable 3.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 11920
Free PE 0
Allocated PE 11920
PV UUID D5HOGp-nLA3-zypn-edIj-uPon-Pzrj-N6JcB5

"/dev/nvme1n1p1" is a new physical volume of "953.00 MiB"
--- NEW Physical volume ---
PV Name /dev/nvme1n1p1
VG Name
PV Size 953.00 MiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID CjuOUt-h2bH-EjCp-ALwd-c8BW-ZckJ-cpB322

"/dev/nvme1n1p10" is a new physical volume of "<46.57 GiB"
--- NEW Physical volume ---
PV Name /dev/nvme1n1p10
VG Name
PV Size <46.57 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 0XEQEc-pHGc-2B02-d4lp-581f-ZMYv-vKTgpG

"/dev/nvme1n1p11" is a new physical volume of "<46.57 GiB"
--- NEW Physical volume ---
PV Name /dev/nvme1n1p11
VG Name
PV Size <46.57 GiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID NF82AB-ZUaP-D9FF-PLVP-HMuA-pWFz-NIZFRG
I unplugged everything for the night and installed a backup battery, and this morning the following worked flawlessly:

sudo mkfs.ext4 /dev/mapper/nvmeVg-var
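If the error returns, the usual cause of "apparently in use by the system" is some kernel user still holding the device open. A hedged diagnostic sketch; the LV and VG names come from the question, and dm-3 is an assumption read from the major:minor pair 253:3 in the lsblk output:

```shell
# Commented out: needs the real devices and root.
# lsof /dev/mapper/nvmeVg-var       # userspace process holding it open?
# ls /sys/block/dm-3/holders/       # another dm/md device stacked on top?
# dmsetup info nvmeVg-var           # "Open count" should be 0
# Deactivate and reactivate the volume group to drop stale holds:
# vgchange -an nvmeVg && vgchange -ay nvmeVg
```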
"will not make a filesystem here!"
As SSD drives have a limited number of writes, I would like to know whether disabling access time logging still plays a significant role in 2021. Most websites I see on the subject are from 2015 and before, and SSDs might be more robust nowadays. I don't really know how SSD writes are managed on Linux systems with respect to caching, nor how many files are actually affected by this logging, or whether all or only some of the accessed files are updated to include access times. My final question concerns the disadvantages of disabling access time logging. What services use access times? Will something break? Is there something I should know? PS: I am using Ubuntu 21.04 for daily usage on a 300 GB partition on an SSD. My computer model was released mid-2020.
There are a number of optimizations in the kernel and ext4 to reduce the overhead of atime updates, such as relatime (only update atime when it is older than mtime, or more than a day old) and lazytime (delay atime updates, aggregating writes of multiple inodes in a single block, until needed or more than a day old). Even the cheapest consumer-grade flash devices are rated at 1 full Drive Write Per Day (DWPD) for 3 years. Inodes typically occupy 1/32 or less of the blocks in the filesystem, so the atime updates of inodes (limited to one atime write per day) are not going to be the deciding factor in exceeding the DWPD rating of the device.
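For reference, the options discussed above are chosen per mount; a hedged sketch of how to inspect and change them (the UUID in the fstab line is a placeholder):

```shell
# Check which atime policy a mounted filesystem currently uses
# (relatime is the modern kernel default):
# findmnt -no OPTIONS /             # e.g. rw,relatime,seclabel
# Opt into noatime and/or lazytime via /etc/fstab; example line:
# UUID=xxxx-xxxx  /  ext4  defaults,noatime,lazytime  0  1
```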
How useful is it to disable access time logging on SSD and are there disadvantages doing so?
The inode structure of some filesystems includes a list of pointers to the blocks used to store the file contents. This list should exist for ext2/3/4, as specified in the first comment to this question. The addresses of the blocks used by a file can be obtained with istat, one of the Sleuthkit tools, but that is not exactly the list of pointers inside the inode, which should be 15 entries at most, while in this example there are more. How can I obtain such a list for a given inode number?
If you have a file entry pointing to the inode, you can use debugfs:

$ debugfs /path/to/filesystem
debugfs: inode_dump -b fileentry
0000  0004 0000 0104 0000 0204 0000 0304 0000  ................
0020  0404 0000 0504 0000 0604 0000 0704 0000  ................
0040  0804 0000 0904 0000 0a04 0000 0b04 0000  ................
0060  2902 0000 2a02 0000 0000 0000            )...*.......

The -b flag causes inode_dump to output only the i_block values, so these can be interpreted directly. Here the block numbers are 0x0400 through 0x040B (file blocks), then the indirect block at 0x0229, and the double-indirect block at 0x022A.
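The dump is raw little-endian data, so each group of four bytes must be byte-swapped to read a block number. A small sanity check of the first entry shown above (this assumes a bash-style shell for the substring expansion):

```shell
# "0004 0000" in the dump is the byte sequence 00 04 00 00; read as a
# little-endian 32-bit integer that is 0x00000400, i.e. block 1024.
word="0004 0000"                      # first i_block entry from the dump
b=$(printf '%s' "$word" | tr -d ' ')  # byte stream: 00040000
val=$(( 0x${b:6:2}${b:4:2}${b:2:2}${b:0:2} ))  # swap bytes, convert to decimal
echo "$val"
```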
inode, list block pointers
Problem: I have two external hard disks where most partitions are formatted as ext4, to be used with my Linux workstation. But I also have a MacBook, and it seems that there is almost nothing to support ext4 file systems on macOS. So I thought I could create a VirtualBox virtual machine (or a Docker image?) containing a Linux system so small that it would have just these components:

- the capability to mount ext4 partitions on external USB drives;
- an internet connection;
- the capability to make the ext4 partitions accessible by running a server, e.g. an SSH server (to be used with SSH clients directly in a terminal, or for mounting those partitions using SSHFS) or perhaps a Samba server.

So the question is: how can I create or obtain such a minimal Linux system? Since I am only asking for the features listed above, it should be very small compared to a normal Linux distribution, perhaps just a couple of hundred megabytes or even less (I don't need any GUI, nor any service not directly related to the features above). And I guess it would not be resource-consuming if such a simple virtual machine ran constantly on my Mac (or at least when I need to use the external hard disks). Am I right?

An attempt I am making: I tried using this Docker image, which points to this repository and is supposed to create an OpenSSH server. I thought that once this was running, I could connect to the server using SSH and mount the ext4 partitions in the SSH session. I can run the Docker image correctly and I can start the server, and I am also able to run sudo commands during the SSH session (I modified the sudoers list in the Docker image), but I cannot access any external USB disk (none of them, not even the non-ext4 ones); they just do not appear in the /dev folder.
Install a live Linux on your VirtualBox and it should be enough; live distributions are rather small for that very reason. There are plenty to choose from (here is a small list with descriptions). I personally use Slax on USB, but there are others. Nothing prevents you from making your own live version that is absolutely minimalistic, once you get comfortable with a ready-made solution.
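One detail worth adding, since the question's Docker attempt failed at exactly this point: a container shares the Docker VM's kernel and does not see the Mac's USB block devices at all, which matches the empty /dev observation. With VirtualBox the disk has to be captured explicitly via a USB filter; a hedged sketch (the VM name "ext4server" and the vendor id are placeholders):

```shell
# Commented out: requires VirtualBox (and its Extension Pack for
# USB 2.0/3.0 controllers).
# VBoxManage modifyvm "ext4server" --usb on
# VBoxManage usbfilter add 0 --target "ext4server" --name "intenso" --vendorid XXXX
# Inside the guest the disk then shows up as a block device
# (e.g. /dev/sdb) and can be mounted and re-exported over SSHFS or Samba.
```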
Using a small Linux as an "Ext4 server"
I read about the XFS filesystem and found that it is good at storing large files. Why are some filesystems (XFS) good at storing large files and others (ext4/ext3) are not? Is it because of the physical architecture of XFS?
The reason is the design of XFS. If you dig into its history, you will see that SGI was famous for workstations designed for audio and video editing, and created XFS to handle huge files (xxx MB or more) very well. It uses extents (with a usual size of around 1 MB) to maintain good performance when handling big files. You can find more details here.
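The difference extents make can be observed with the filefrag tool from e2fsprogs (the path below is a placeholder):

```shell
# filefrag reports how many contiguous runs ("extents") a file occupies.
# On an extent-based filesystem (XFS, ext4) a sequentially written large
# file usually shows a handful of large extents; ext3's indirect-block
# scheme instead maps the file one block at a time.
# filefrag -v /path/to/large/file
```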
Why do some filesystems perform better at storing large files?
Here's the situation:

- I have a 1 TB drive mounted at /data.
- There are multiple local users on the desktop. All of them are in the localusers group.
- I have a VirtualBox VM with a 50 GB VDI disk stored at /data/common/vms.

I would like the VirtualBox VM to be available to all members of the localusers group. What I've done so far:

- As the primary user, created the VM.
- Moved the vbox machine folder to /data/common/vbox (so if the machine is Win10Pro, then I have the folder at /data/common/vbox/Win10Pro).
- Group perms: set the group of the folder to localusers and ran chmod -R g+rw /data/common/vbox /data/common/vms.
- Copied over ~/.Virtualbox/Virtualbox.xml and adjusted the Default machine folder and machine entry to point to /home/user/VirtualBox VMs.
- Ran ln -sf /data/common/vbox/Win10Pro ~/VirtualBox VMs/Win10Pro for each user.

The problem: this only works once. If, as user X, I open VirtualBox and launch the machine, then the permissions on the /data/common/Win10Pro/* file(s) revert to rw for the user only after the VirtualBox GUI exits.

PS: Earlier I used to have the disk formatted as exFAT and was able to achieve a shared disk/VM using the uid and gid masks, but that doesn't work for ext4.
For those landing here with a similar predicament: I posted the question on Reddit and was quickly pointed in the right direction. Basically:

- Set the setgid bit on the shared folder /data/common.
- Set a default ACL of rwx for user and group, like so: setfacl -d -m u::rwx,g::rwx,o::r-x /data/common

A more detailed walkthrough is available here: http://brunogirin.blogspot.com/2010/03/shared-folders-in-ubuntu-with-setgid.html. The article is from 2010, so the only differences were that I did not have to install any packages or set mount options; ACLs were on by default.
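The setgid half of the fix can be demonstrated in a scratch directory (stand-in for /data/common; the ACL step additionally needs the acl utilities installed, so it is left commented):

```shell
share=$(mktemp -d)           # mkdtemp always creates the directory mode 700
chmod g+s "$share"           # setgid: new entries inherit the directory's group
mode=$(stat -c %a "$share")  # leading 2 marks the setgid bit: 2700
echo "$mode"
# Default ACL so new files/dirs are group-writable (needs setfacl):
# setfacl -d -m u::rwx,g::rwx,o::r-x "$share"
rmdir "$share"
```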
Share folder/files between multiple users on ext4 disk
I have a 128 GB micro SD card in my Android 6 phone, formatted with a 10 GB ext4 partition (p2) for app2sd and a ~119 GB exFAT partition (p1) for my data. About 80% of my apps live on that ext4 partition. Now app2sd can't load this partition ("unable to mount ... invalid file"), and the main exFAT data partition is mounted read-only. I am rooted, on a Sony Xperia Z3.

A brief background: I was shooting a movie on my phone and it froze and restarted twice. After that, no luck mounting the ext4 partition.

Questions:

-- How can I get an fsck done on the ext4 file system and let the allocation table know there are bad blocks?
-- Is there some mechanism built into the card that detects bad blocks and corruption and puts one partition into RO mode, while the other (ext4) partition cannot be mounted on the phone at all?
-- Somewhere I read this means the card is on its last legs, and after some number of write cycles it becomes RO. Some tech support suggested this is what happened. I am not prepared to believe it: the card is only 4 months old, bought from Amazon, and I CAN write to it (as I will confirm later in this story).
-- I have the ADB driver and get to a # prompt over adb. I can possibly mount p1 as RW. I can confirm over adb that p2 is seen, but it will not mount.
-- How can I get app2sd to mount this partition and be back in business?

What I did on a live Linux CD:

-- I am able to mount both p1 and p2 as RW (after an explicit command), but only as su; a normal user is denied / gets RO. So on live Ubuntu I CAN read and write to both.
-- Backed up p2 and then tried to delete and reformat it (GParted + fdisk manually) - NO LUCK. Both report success, but after a refresh they show the SAME old partitions p1 and p2. Tried the same on Windows 7 with Paragon Partition Manager: it reports success, but after a refresh shows the old p1/p2 layout.

-- fsck:

it@it:~$ sudo fsck.ext4 -v /dev/sdc2
e2fsck 1.43.4 (31-Jan-2017)
app2sd: recovering journal
Superblock needs_recovery flag is clear, but journal has data.
Run journal anyway<y>? yes
fsck.ext4: unable to set superblock flags on app2sd
app2sd: ********** WARNING: Filesystem still has errors **********

-- Running the backup superblock fix does not help:

it@it:~$ sudo mke2fs -n /dev/sdc2
mke2fs 1.43.4 (31-Jan-2017)
/dev/sdc2 contains a ext4 file system labelled 'app2sd'
    last mounted on /data/sdext2 on Sun Jan 7 07:21:35 2018
Proceed anyway? (y,N) y
Creating filesystem with 2604538 4k blocks and 651520 inodes
Filesystem UUID: 53138787-e743-4160-9041-ac9613d44db8
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

I tried various superblocks - NO LUCK. I followed these guides - NO LUCK: 1) Superblock restore from backup, and 2) fsck.ext4 unable to set superblock flags after a bad unmount.

it@it:~$ sudo e2fsck -b 1605632 -B 4096 /dev/sdc2
e2fsck 1.43.4 (31-Jan-2017)
Superblock needs_recovery flag is clear, but journal has data.
Recovery flag not set in backup superblock, so running journal anyway.
app2sd: recovering journal
Superblock needs_recovery flag is clear, but journal has data.
Recovery flag not set in backup superblock, so running journal anyway.
Superblock needs_recovery flag is clear, but journal has data.
Recovery flag not set in backup superblock, so running journal anyway.
e2fsck: unable to set superblock flags on app2sd
app2sd: ***** FILE SYSTEM WAS MODIFIED *****
app2sd: ********** WARNING: Filesystem still has errors **********

I also tried this with -B 4096, which testdisk confirmed. From the mke2fs man page on -S:

-S  Write superblock and group descriptors only. This is useful if all of the superblock and backup superblocks are corrupted, and a last-ditch recovery method is desired. It causes mke2fs to reinitialize the superblock and group descriptors, while not touching the inode table and the block and inode bitmaps. The e2fsck program should be run immediately after this option is used, and there is no guarantee that any data will be salvageable.
It is critical to specify the correct filesystem blocksize when using this option.

Here's some more stuff.

Testdisk log:

Partition table type (auto): Intel
Disk /dev/sdb - 127 GB / 119 GiB - Generic- SD/MMC
Partition table type: Intel
Interface Advanced
Geometry from i386 MBR: head=255 sector=63
test_FAT()
1 P FAT32 LBA 0 32 33 14247 69 30 228880384
sector_size 512 cluster_size 64 reserved 126 fats 2 dir_entries 0 sectors 0 media F8 fat_length 0 secs_track 16 heads 4 hidden 2048 total_sect 228880384
check_part_i386 failed for partition type 0C
get_geometry_from_list_part_aux head=255 nbr=2
get_geometry_from_list_part_aux head=8 nbr=1
get_geometry_from_list_part_aux head=255 nbr=2
1 P FAT32 LBA 0 32 33 14247 69 30 228880384
2 * Linux 14248 0 1 15544 254 63 20836305 [app2sd]
ext4 blocksize=4096 Large_file Sparse_SB Recover, 10668 MB / 10173 MiB
search_superblock
recover_EXT2: s_block_group_nr=0/79, s_mnt_count=83/20, s_blocks_per_group=32768, s_inodes_per_group=32320
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 2604538
recover_EXT2: part_size 20836304
Ext2 superblock found at sector 2 (block=0, blocksize=4096)
block_group_nr 1
recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed
recover_EXT2: s_block_group_nr=1/79, s_mnt_count=0/20, s_blocks_per_group=32768, s_inodes_per_group=32320
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 2604538
recover_EXT2: part_size 20836304
Ext2 superblock found at sector 262144 (block=32768, blocksize=4096)
block_group_nr 3
recover_EXT2: "e2fsck -b 98304 -B 4096 device" may be needed
recover_EXT2: s_block_group_nr=3/79, s_mnt_count=0/20, s_blocks_per_group=32768, s_inodes_per_group=32320
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 2604538
recover_EXT2: part_size 20836304
Ext2 superblock found at sector 786432 (block=98304, blocksize=4096)
block_group_nr 5
recover_EXT2: "e2fsck -b 163840 -B 4096 device" may be needed
recover_EXT2: s_block_group_nr=5/79, s_mnt_count=0/20, s_blocks_per_group=32768, s_inodes_per_group=32320
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 2604538
recover_EXT2: part_size 20836304
Ext2 superblock found at sector 1310720 (block=163840, blocksize=4096)
block_group_nr 7
recover_EXT2: "e2fsck -b 229376 -B 4096 device" may be needed
recover_EXT2: s_block_group_nr=7/79, s_mnt_count=0/20, s_blocks_per_group=32768, s_inodes_per_group=32320
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 2604538
recover_EXT2: part_size 20836304
Ext2 superblock found at sector 1835008 (block=229376, blocksize=4096)
block_group_nr 9
recover_EXT2: "e2fsck -b 294912 -B 4096 device" may be needed
recover_EXT2: s_block_group_nr=9/79, s_mnt_count=0/20, s_blocks_per_group=32768, s_inodes_per_group=32320
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 2604538
recover_EXT2: part_size 20836304
Ext2 superblock found at sector 2359296 (block=294912, blocksize=4096)
block_group_nr 25
recover_EXT2: "e2fsck -b 819200 -B 4096 device" may be needed
recover_EXT2: s_block_group_nr=25/79, s_mnt_count=0/20, s_blocks_per_group=32768, s_inodes_per_group=32320
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 2604538
recover_EXT2: part_size 20836304
Ext2 superblock found at sector 6553600 (block=819200, blocksize=4096)
block_group_nr 27
recover_EXT2: "e2fsck -b 884736 -B 4096 device" may be needed
recover_EXT2: s_block_group_nr=27/79, s_mnt_count=0/20, s_blocks_per_group=32768, s_inodes_per_group=32320
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 2604538
recover_EXT2: part_size 20836304
Ext2 superblock found at sector 7077888 (block=884736, blocksize=4096)
block_group_nr 49
recover_EXT2: "e2fsck -b 1605632 -B 4096 device" may be needed
recover_EXT2: s_block_group_nr=49/79, s_mnt_count=0/20, s_blocks_per_group=32768, s_inodes_per_group=32320
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 2604538
recover_EXT2: part_size 20836304
Ext2 superblock found at sector 12845056 (block=1605632, blocksize=4096)
Linux 14248 0 1 15544 254 62 20836304 [app2sd]
superblock 0, blocksize=4096 [app2sd]
superblock 32768, blocksize=4096 [app2sd]
superblock 98304, blocksize=4096 [app2sd]
superblock 163840, blocksize=4096 [app2sd]
superblock 229376, blocksize=4096 [app2sd]
superblock 294912, blocksize=4096 [app2sd]
superblock 819200, blocksize=4096 [app2sd]
superblock 884736, blocksize=4096 [app2sd]
superblock 1605632, blocksize=4096 [app2sd]
To repair the filesystem using alternate superblock, run
fsck.ext4 -p -b superblock -B blocksize device
search_superblock
recover_EXT2: s_block_group_nr=0/79, s_mnt_count=83/20, s_blocks_per_group=32768, s_inodes_per_group=32320
recover_EXT2: s_blocksize=4096
recover_EXT2: s_blocks_count 2604538
recover_EXT2: part_size 20836304
Ext2 superblock found at sector 2 (block=0, blocksize=4096)
block_group_nr 1
recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed

Sample dump2fs log: I can't simply attach the entire file on Stack.

Loop through the superblock list. Another try: grab all the backup superblock numbers from the testdisk log and loop through them. This is interesting: when I run e2fsck -b <n> -B <N> /devpath by hand, I get the same standard output below every time. No difference.
+ sudo e2fsck -b 163840 -B 4096 -y /dev/sdc2
e2fsck 1.43.4 (31-Jan-2017)
Superblock needs_recovery flag is clear, but journal has data.
Recovery flag not set in backup superblock, so running journal anyway.
app2sd: recovering journal
Superblock needs_recovery flag is clear, but journal has data.
Recovery flag not set in backup superblock, so running journal anyway.
Superblock needs_recovery flag is clear, but journal has data.
Recovery flag not set in backup superblock, so running journal anyway.
e2fsck: unable to set superblock flags on app2sd
app2sd: ***** FILE SYSTEM WAS MODIFIED *****
app2sd: ********** WARNING: Filesystem still has errors **********

But when I loop through that list loaded from a file (testdisk.log), for certain superblocks it gives a long output as if it did some repair - a blink of hope. After that, I did a Linux REISUB shutdown and ran e2fsck again. NO LUCK.

e2fsck loop log, with the repair part. For those block numbers where it attempted a repair, I tried setting those very backup superblocks on the command line, but it does NOT help - back to square one!

+ for i in $(grep e2fsck testdisk.log | uniq | cut -d " " -f 4)
+ sudo e2fsck -b 229376 -B 4096 -y /dev/sdc2
e2fsck 1.43.4 (31-Jan-2017)
Superblock needs_recovery flag is clear, but journal has data.
Recovery flag not set in backup superblock, so running journal anyway.
app2sd: recovering journal
Pass 1: Checking inodes, blocks, and sizes
Inode 69709 extent tree (at level 1) could be shorter. Fix? yes
Inode 97187 extent tree (at level 1) could be shorter. Fix? yes
Inode 98194 extent tree (at level 1) could be shorter. Fix? yes
Inode 98215 extent tree (at level 1) could be shorter. Fix? yes
Inode 98646 extent tree (at level 1) could be shorter. Fix? yes
Inode 99795 extent tree (at level 1) could be shorter. Fix? yes
Inode 100170 extent tree (at level 1) could be shorter. Fix? yes
Inode 100186 extent tree (at level 1) could be shorter. Fix? yes
Inode 100825 extent tree (at level 1) could be shorter. Fix? yes
Inode 129341 extent tree (at level 1) could be shorter. Fix? yes
Inodes that were part of a corrupted orphan linked list found. Fix? yes
Inode 129343 was part of the orphaned inode list. FIXED.
Inode 129344 was part of the orphaned inode list. FIXED.
Inode 129345 was part of the orphaned inode list. FIXED.
Inode 129371 extent tree (at level 1) could be shorter. Fix? yes
Inode 129414 extent tree (at level 1) could be shorter. Fix? yes
Inode 129418 extent tree (at level 1) could be shorter. Fix? yes
Inode 129437 extent tree (at level 1) could be shorter. Fix? yes
Inode 162145 extent tree (at level 1) could be shorter. Fix? yes
Inode 162147 extent tree (at level 1) could be shorter. Fix? yes
Inode 162151 extent tree (at level 1) could be shorter. Fix? yes
Inode 194325 extent tree (at level 1) could be shorter. Fix? yes
Inode 194408 extent tree (at level 1) could be shorter. Fix? yes
Inode 194464 extent tree (at level 1) could be shorter. Fix? yes
Deleted inode 195640 has zero dtime. Fix? yes
Deleted inode 196040 has zero dtime. Fix? yes
Inode 235473 is in use, but has dtime set. Fix? yes
Inode 235473 has imagic flag set. Clear? yes
Inode 235473 has a extra size (25959) which is invalid Fix? yes
Inode 235474 has INLINE_DATA_FL flag on filesystem without inline data support. Clear? yes
Inode 235473, i_size is 7019251879657894515, should be 0. Fix? yes
Inode 235473, i_blocks is 81858393236329, should be 0. Fix? yes
Inode 388501 extent tree (at level 1) could be shorter. Fix? yes
Inode 420685 extent tree (at level 1) could be shorter. Fix? yes
Inode 452971 extent tree (at level 1) could be shorter. Fix? yes
Inode 452978 extent tree (at level 1) could be shorter. Fix? yes
Inode 452981 extent tree (at level 1) could be shorter. Fix? yes
Inode 550513 extent tree (at level 1) could be shorter. Fix? yes
Inode 550523 extent tree (at level 1) could be shorter. Fix? yes
Inode 550524 extent tree (at level 1) could be shorter. Fix? yes
Inode 550525 extent tree (at level 1) could be shorter. Fix? yes
Inode 551843 extent tree (at level 1) could be shorter. Fix? yes
Inode 582085 has an invalid extent node (blk 593131, lblk 0) Clear? yes
Inode 582085 extent tree (at level 1) could be shorter. Fix? yes
Inode 582085, i_blocks is 40, should be 0. Fix? yes
Inode 582132 extent tree (at level 1) could be shorter. Fix? yes
Pass 1E: Optimizing extent trees
Pass 2: Checking directory structure
Directory inode 97167, block #0, offset 0: directory corrupted
Salvage? yes
Missing '.' in directory inode 97167. Fix? yes
Setting filetype for entry '.' in ??? (97167) to 2.
Missing '..' in directory inode 97167. Fix? yes
Setting filetype for entry '..' in ??? (97167) to 2.
Directory inode 97176, block #0, offset 0: directory corrupted
Salvage? yes
Missing '.' in directory inode 97176. Fix? yes
Setting filetype for entry '.' in ??? (97176) to 2.
Missing '..' in directory inode 97176. Fix? yes
Setting filetype for entry '..' in ??? (97176) to 2.
Directory inode 97213, block #0, offset 0: directory corrupted
Salvage? yes
Missing '.' in directory inode 97213. Fix? yes
Setting filetype for entry '.' in ??? (97213) to 2.
Missing '..' in directory inode 97213. Fix? yes
Setting filetype for entry '..' in ??? (97213) to 2.
Directory inode 161950, block #0, offset 0: directory corrupted
Salvage? yes
Missing '.' in directory inode 161950. Fix? yes
Setting filetype for entry '.' in ??? (161950) to 2.
Missing '..' in directory inode 161950. Fix? yes
Setting filetype for entry '..' in ??? (161950) to 2.
Inode 235473 (/data/com.abhivyaktyapps.learn.sanskrit/app_Parse/CommandCache/CachedCommand_00000160cc4ef3d9_00000000_-1326099007) has invalid mode (0166654). Clear? yes

If it's a total dead horse, then why can I mount and read/write it on a live system while Android backs off? It looks like some kind of flag gets set when bad superblocks or corrupt blocks are detected, so that p1 is always loaded RO and p2 is not loaded at all. How do I clear that flag? I have a # shell via ADB on my Android phone. Why can I mount on Android the way I do on the live system?
I was able to mount both partitions correctly after adding the required commands to an init.d script. If init.d is not supported by your phone, there are alternatives: a Magisk module (Xposed is yet another option), and the Magisk mount-namespace setting must not be "isolated" for the mount to be propagated; su -mm (mount master) ought to be used. The exFAT partition had gone corrupt; I recovered the data and am now happily using ext4.
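The init.d approach described above can be sketched roughly like this. The block-device paths, mount points, and script name below are assumptions for illustration only - they vary per ROM, so check yours before using anything like this:

```shell
# Write a hypothetical init.d-style mount script. Device paths and mount
# points are ASSUMPTIONS, not taken from any specific device.
cat > 11-mount-app2sd <<'EOF'
#!/system/bin/sh
# Remount the exFAT data partition read-write (it comes up RO after errors).
mount -o remount,rw /dev/block/mmcblk1p1 /storage/sdcard1
# Mount the ext4 app2sd partition where app2sd expects it.
mount -t ext4 -o noatime /dev/block/mmcblk1p2 /data/sdext2
EOF
chmod 755 11-mount-app2sd
```

On a rooted shell without init.d support, the same two mount commands can instead be run through a Magisk boot script or via su -mm so the mounts propagate to the global namespace rather than an isolated one.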
EXT4 app2sd link2sd partition repair with bad superblock. Partition cannot be mounted on Android but will mount RW w/ su on Linux Live
1,485,714,442,000
I have the following file on an ext4 disk:

-r--r--r-- 1 root root 61440 Dec 20 15:30 ldlinux.sys

But rm, chmod and mv say "permission denied", even for root. Any ideas what the problem could be? FYI, it is a file from the boot sector of the distro Slax, but it is not used for booting; I just extracted the installation archive and want to remove the file.
You have probably made a mistake somewhere. Either:

You're trying to remove the file as an unprivileged user;
The file has file attributes set: see them with lsattr ldlinux.sys;
The directory you're trying to remove the file from has file attributes: see them with lsattr . (in the directory containing ldlinux.sys).

Other conditions may apply, for example a read-only filesystem, but those usually generate errors other than "permission denied". The superuser can override any permission checks in the kernel, so the file mode does not matter.
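The attribute check described above looks like this in practice. This is only a sketch: the ldlinux.sys here is a freshly created stand-in file (syslinux installers typically mark the real one immutable), and the chattr step is shown commented out because it needs root and an attribute-capable filesystem:

```shell
f=ldlinux.sys
touch "$f"
# Show the file attributes. An 'i' in the output means immutable, which
# makes rm/chmod/mv fail with "permission denied" even for root.
# lsattr itself may fail on filesystems without attribute support
# (e.g. tmpfs), hence the fallback message.
lsattr "$f" 2>/dev/null || echo "no attribute support on this filesystem"
# To clear the immutable flag (as root) before deleting the real file:
# chattr -i "$f"
rm -f "$f"
```

With the flag cleared, the ordinary rm succeeds.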
Root cannot remove file on ext4
1,485,714,442,000
I've read the Ceph OS recommendations document, but I still have a question: which file system is better for Ceph - XFS, ext4, or something else?
According to the Ceph documentation, they recommend configuring Ceph to use the XFS file system in the near term, and btrfs in the long term once it is stable enough for production; ext4 comes last. document 1 document 2 document 3
Which file system is better for Ceph?
1,485,714,442,000
I have a latency-sensitive application running on an embedded system, and I'm seeing some discrepancy between writing to an ext4 partition and an ext2 partition on the same physical device. Specifically, I see intermittent delays when performing many small updates on a memory map, but only on ext4. I've tried what seem to be the usual tricks for improving performance (especially variation in latency) by mounting ext4 with different options, and have settled on these mount options:

mount -t ext4 -o remount,rw,noatime,nodiratime,user_xattr,barrier=1,data=ordered,nodelalloc /dev/mmcblk0p6 /media/mmc/data

barrier=0 didn't seem to provide any improvement. For the ext2 partition, the following flags are used:

/dev/mmcblk0p3 on /media/mmc/data2 type ext2 (rw,relatime,errors=continue)

Here's the test program I'm using:

#include <stdio.h>
#include <cstring>
#include <cstdio>
#include <string.h>
#include <stdint.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>
#include <fcntl.h>
#include <cstdlib>
#include <time.h>
#include <signal.h>
#include <pthread.h>
#include <errno.h>
#include <stdlib.h>

uint32_t getMonotonicMillis()
{
    struct timespec time;
    clock_gettime(CLOCK_MONOTONIC, &time);
    uint32_t millis = (time.tv_nsec/1000000)+(time.tv_sec*1000);
    return millis;
}

void tune(const char* name, const char* value)
{
    FILE* tuneFd = fopen(name, "wb+");
    fwrite(value, strlen(value), 1, tuneFd);
    fclose(tuneFd);
}

void tuneForFasterWriteback()
{
    tune("/proc/sys/vm/dirty_writeback_centisecs", "25");
    tune("/proc/sys/vm/dirty_expire_centisecs",    "200");
    tune("/proc/sys/vm/dirty_background_ratio",    "5");
    tune("/proc/sys/vm/dirty_ratio",               "40");
    tune("/proc/sys/vm/swappiness",                "0");
}

class MMapper
{
public:
    const char* _backingPath;
    int _blockSize;
    int _blockCount;
    bool _isSparse;
    int _size;
    uint8_t *_data;
    int _backingFile;
    uint8_t *_buffer;

    MMapper(const char *backingPath, int blockSize, int blockCount, bool isSparse) :
        _backingPath(backingPath),
        _blockSize(blockSize),
        _blockCount(blockCount),
        _isSparse(isSparse),
        _size(blockSize*blockCount)
    {
        printf("Creating MMapper for %s with block size %i, block count %i and it is%s sparse\n",
               _backingPath, _blockSize, _blockCount, _isSparse ? "" : " not");
        _backingFile = open(_backingPath, O_CREAT | O_RDWR | O_TRUNC, 0600);
        if(_isSparse)
        {
            ftruncate(_backingFile, _size);
        }
        else
        {
            posix_fallocate(_backingFile, 0, _size);
            fsync(_backingFile);
        }
        _data = (uint8_t*)mmap(NULL, _size, PROT_READ | PROT_WRITE, MAP_SHARED, _backingFile, 0);
        _buffer = new uint8_t[blockSize];
        printf("MMapper %s created!\n", _backingPath);
    }

    ~MMapper()
    {
        printf("Destroying MMapper %s\n", _backingPath);
        if(_data)
        {
            msync(_data, _size, MS_SYNC);
            munmap(_data, _size);
            close(_backingFile);
            _data = NULL;
            delete [] _buffer;
            _buffer = NULL;
        }
        printf("Destroyed!\n");
    }

    void writeBlock(int whichBlock)
    {
        memcpy(&_data[whichBlock*_blockSize], _buffer, _blockSize);
    }
};

int main(int argc, char** argv)
{
    tuneForFasterWriteback();
    int timeBetweenBlocks = 40*1000;
    //2^12 x 2^16 = 2^28 = 2^10*2^10*2^8 = 256MB
    int blockSize = 4*1024;
    int blockCount = 64*1024;
    int bigBlockCount = 2*64*1024;
    int iterations = 25*40*60; //25 counts simulates 1 layer for one second, 5 minutes here
    uint32_t startMillis = getMonotonicMillis();
    int measureIterationCount = 50;
    MMapper mapper("sparse", blockSize, bigBlockCount, true);
    for(int i=0; i<iterations; i++)
    {
        int block = rand()%blockCount;
        mapper.writeBlock(block);
        usleep(timeBetweenBlocks);
        if(i%measureIterationCount==measureIterationCount-1)
        {
            uint32_t elapsedTime = getMonotonicMillis()-startMillis;
            printf("%i took %u\n", i, elapsedTime);
            startMillis = getMonotonicMillis();
        }
    }
    return 0;
}

Fairly simplistic test case. I don't expect terribly accurate timing; I'm more interested in general trends.
Before running the tests, I ensured that the system was in a fairly steady state, with very little disk write activity occurring, by doing something like:

watch grep -e Writeback: -e Dirty: /proc/meminfo

There is very little to no disk activity. This is also verified by seeing 0 or 1 in the wait column of the output of vmstat 1. I also perform a sync immediately before running the test. Note the aggressive writeback parameters being given to the vm subsystem as well.

When I run the test on the ext2 partition, the first one hundred batches of fifty writes yield a nice solid 2012 ms with a standard deviation of 8 ms. When I run the same test on the ext4 partition, I see an average of 2151 ms, but an abysmal standard deviation of 409 ms. My primary concern is variation in latency, so this is frustrating. The actual times for the ext4 partition test look like this:

{2372, 3291, 2025, 2020, 2019, 2019, 2019, 2019, 2019, 2020, 2019, 2019, 2019, 2019, 2020, 2021, 2037, 2019, 2021, 2021, 2020, 2152, 2020, 2021, 2019, 2019, 2020, 2153, 2020, 2020, 2021, 2020, 2020, 2020, 2043, 2021, 2019, 2019, 2019, 2053, 2019, 2020, 2023, 2020, 2020, 2021, 2019, 2022, 2019, 2020, 2020, 2020, 2019, 2020, 2019, 2019, 2021, 2023, 2019, 2023, 2025, 3574, 2019, 3013, 2019, 2021, 2019, 3755, 2021, 2020, 2020, 2019, 2020, 2020, 2019, 2799, 2020, 2019, 2019, 2020, 2020, 2143, 2088, 2026, 2017, 2310, 2020, 2485, 4214, 2023, 2020, 2023, 3405, 2020, 2019, 2020, 2020, 2019, 2020, 3591}

Unfortunately, I don't know if ext2 is an option for the end solution, so I'm trying to understand the difference in behavior between the file systems. I would most likely have control over at least the flags used to mount the ext4 system and could tweak those:

noatime/nodiratime don't seem to make much of a dent;
barrier=0/1 doesn't seem to matter;
nodelalloc helps a bit, but doesn't do nearly enough to smooth out the latency variation.

The ext4 partition is only about 10% full. Thanks for any thoughts on this issue!
One word: journaling. http://www.thegeekstuff.com/2011/05/ext2-ext3-ext4/

As you're talking about embedded, I'm assuming you have some form of flash memory? Performance is very spiky on journaled ext4 on flash. Ext2 is recommended. Here is a good article on disabling the journal and tweaking the filesystem for no journaling, if you must use ext4: http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html
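Removing the journal from an existing ext4 filesystem is done with tune2fs. A minimal sketch, run here against a scratch image file instead of a real device so it is safe to try (on a real device you would unmount it first and operate on /dev/sdXN; requires e2fsprogs):

```shell
# e2fsprogs tools often live in sbin directories.
PATH=$PATH:/sbin:/usr/sbin
# Work on a scratch image instead of a real block device.
truncate -s 64M scratch.img
mkfs.ext4 -q -F scratch.img
# Drop the journal: '^' in front of a feature name removes it.
tune2fs -O ^has_journal scratch.img
# Verify: 'has_journal' should no longer appear in the feature list.
dumpe2fs -h scratch.img 2>/dev/null | grep "Filesystem features"
```

Without a journal the filesystem avoids the double-write bursts that cause latency spikes, at the cost of crash consistency - after an unclean shutdown a full fsck is needed.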
Ext4 exhibits unexpected write latency variance vs. ext2
1,485,714,442,000
I have a hardware RAID-1 of two WDC SSDs, 500 GB each. The RAID controller is a Marvell 88SE9128 attached via two GSATA connections (the controller sits directly on the motherboard; it is not a PCIe expansion card).

When I try to install a Linux distribution on this RAID, every installer I have tried so far fails with a more or less unspecific error message. (So far I have tried Arch, Ubuntu, Ubuntu Server, Debian, CentOS and Rocky.) With GParted Live I could see that the creation of an ext4 partition fails: GParted reports success, but directly after closing the success message the partition is gone and GParted no longer detects it on the RAID volume. The interesting thing is that an NTFS partition stays and can be accessed! (I did not find any hard ties between Marvell and Microsoft.) I also tried mkfs from a live system, with the same result: a success message, but no partition.

I have already read the documentation for the Marvell chip in my motherboard's manual, as well as the following datasheets from Marvell:

https://www.marvell.com/content/dam/marvell/en/public-collateral/storage/marvell-storage-88se912x-product-brief-2010-08.pdf
https://www.marvell.com/content/dam/marvell/en/public-collateral/storage/marvell-storage-88se9130-datasheet-2018-08.pdf

None of them (and nobody/nothing else I found on the Internet) says that it is NTFS-only. Has something similar happened to anyone else, or does someone know how to get an ext4 partition onto a RAID like this? Many thanks in advance.

Edit: As suggested, I used an Ubuntu live system and looked at dmesg. This is the result:

[  780.345795] raid6: sse2x4   gen() 16061 MB/s
[  780.413795] raid6: sse2x4   xor()  8934 MB/s
[  780.481797] raid6: sse2x2   gen() 17492 MB/s
[  780.549807] raid6: sse2x2   xor()  9685 MB/s
[  780.617795] raid6: sse2x1   gen() 13970 MB/s
[  780.685797] raid6: sse2x1   xor()  8287 MB/s
[  780.685801] raid6: using algorithm sse2x2 gen() 17492 MB/s
[  780.685802] raid6: .... xor() 9685 MB/s, rmw enabled
[  780.685804] raid6: using ssse3x2 recovery algorithm
[  780.687460] xor: automatically using best checksumming function avx
[  780.722675] Btrfs loaded, crc32c=crc32c-intel, zoned=yes
[  780.751206] JFS: nTxBlock = 8192, nTxLock = 65536
[  780.804405] SGI XFS with ACLs, security attributes, realtime, quota, no debug enabled
[  938.260634] sdj:
[  938.723894] sdj:
[  938.799655] sdj:
[  952.400893] sdj:
[  952.524131] sdj: sdj1
[  953.237100] sdj: sdj1

lspci:

00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 05)
00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b5)
00:1c.2 PCI bridge: Intel Corporation 82801 PCI Bridge (rev b5)
00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b5)
00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b5)
00:1c.5 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 6 (rev b5)
00:1c.6 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 7 (rev b5)
00:1c.7 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 8 (rev b5)
00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
00:1f.0 ISA bridge: Intel Corporation Z68 Express Chipset LPC Controller (rev 05)
00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port Desktop SATA AHCI Controller (rev 05)
00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
01:00.0 PCI bridge: NVIDIA Corporation NF200 PCIe 2.0 switch (rev a3)
02:00.0 PCI bridge: NVIDIA Corporation NF200 PCIe 2.0 switch (rev a3)
02:02.0 PCI bridge: NVIDIA Corporation NF200 PCIe 2.0 switch (rev a3)
03:00.0 VGA compatible controller: NVIDIA Corporation GF100 [GeForce GTX 470] (rev a3)
03:00.1 Audio device: NVIDIA Corporation GF100 High Definition Audio Controller (rev a1)
06:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
07:00.0 PCI bridge: Integrated Technology Express, Inc. IT8892E PCIe to PCI Bridge (rev 10)
08:03.0 FireWire (IEEE 1394): Texas Instruments TSB43AB23 IEEE-1394a-2000 Controller (PHY/Link)
09:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04)
0a:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04)
0b:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 06)
0c:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9128 PCIe SATA 6 Gb/s RAID controller with HyperDuo (rev 11)
0d:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9128 PCIe SATA 6 Gb/s RAID controller with HyperDuo (rev 11)

blkid:

/dev/sdc1: LABEL="INTENSO" UUID="8C09-B4FF" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="c3072e18-01"
/dev/sdi1: BLOCK_SIZE="2048" UUID="2021-07-19-22-56-21-00" LABEL="GParted-live" TYPE="iso9660" PARTUUID="11ddad15-01"
/dev/loop0: TYPE="squashfs"
/dev/md126: UUID="f32d4848-9784-4912-b0ab-ff250bf69dfc" TYPE="crypto_LUKS"
/dev/md127: UUID="ec6af278-b3f8-4fc2-94c9-97029c302e14" TYPE="crypto_LUKS"
/dev/sda: UUID="14e609d1-2017-96b5-4cf8-af002dd538ed" UUID_SUB="3646b87c-950c-e4b3-a855-f93f3e5a88d2" LABEL="ubuntu-server:1" TYPE="linux_raid_member"
/dev/sdb: UUID="14e609d1-2017-96b5-4cf8-af002dd538ed" UUID_SUB="5a703c43-f3eb-203e-5bb4-3fa2a2acf223" LABEL="ubuntu-server:1" TYPE="linux_raid_member"
/dev/sdh: UUID="bcd188a7-5c15-8245-8c7a-761d509bad19" UUID_SUB="a72121fc-ebe3-88e2-cba9-9b74f198bed1" LABEL="ubuntu-server:2" TYPE="linux_raid_member"
/dev/sdj: UUID="bcd188a7-5c15-8245-8c7a-761d509bad19" UUID_SUB="419122c5-a305-a607-a0d9-9a5b2f1aed89" LABEL="ubuntu-server:2" TYPE="linux_raid_member"

lsblk:

NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
loop0       7:0    0 341,2M  1 loop  /usr/lib/live/mount/rootfs/filesystem.squashfs
sda         8:0    0  10,9T  0 disk
└─md127     9:127  0  10,9T  0 raid1
sdb         8:16   0  10,9T  0 disk
└─md127     9:127  0  10,9T  0 raid1
sdc         8:32   1 967,5M  0 disk
└─sdc1      8:33   1   966M  0 part  /media/intenso
sdh         8:112  0  14,6T  0 disk
└─md126     9:126  0  14,6T  0 raid1
sdi         8:128  1   7,5G  0 disk
└─sdi1      8:129  1   396M  0 part  /usr/lib/live/mount/medium
sdj         8:144  0  14,6T  0 disk
└─md126     9:126  0  14,6T  0 raid1
sdk         8:160  0 465,7G  0 disk

Some explanations: sda and sdb are combined into the software RAID-1 md127. sdh and sdj are combined into the software RAID-1 md126. md126 and md127 are combined into a LUKS-encrypted LVM. sdc and sdi are USB sticks needed for the live system. sdk is the hardware RAID-1 which won't hold an ext4 partition.
Thanks to the help of ljrk, we were able to figure out that Parted and GParted cannot partition the RAID disk. We still do not know what the exact problem for Parted/GParted is, because no error message appears, as mentioned before. This made it very difficult to install Ubuntu or any other Debian derivative, but fdisk and gdisk still work as expected. So we partitioned the RAID disk manually and installed Arch. (We tried Arch before, but with Parted, so it did not work out either.) It is not exactly what we wanted, but still good, so I am closing this question.
Can't make an ext4-partition on a specific raid
1,485,714,442,000
I tried to test the write speed of some SSDs, and writing to the disk directly is somehow slower than writing to the disk when it is formatted as ext4. How does this work? Is this correct, or am I measuring something wrong?

for i in {1..5}; do dd if=/dev/zero of=/dev/sda1 bs=1G count=1 oflag=dsync; done
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 7.18148 s, 150 MB/s
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 7.18312 s, 149 MB/s
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 7.1938 s, 149 MB/s
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 7.15976 s, 150 MB/s
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 7.2125 s, 149 MB/s

If I now format the disk as ext4:

mkfs.ext4 /dev/sda1
mount /dev/sda1 /tmp/test
mount -ls
/dev/sda1 on /tmp/test type ext4 (rw,relatime,data=ordered)

for i in {1..5}; do dd if=/dev/zero of=/tmp/test/test.txt bs=1G count=1 oflag=dsync; done
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.66437 s, 230 MB/s
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.60112 s, 233 MB/s
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.58899 s, 234 MB/s
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.61334 s, 233 MB/s
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.60241 s, 233 MB/s

Thanks, Johannes

Edit: When activating /proc/sys/vm/block_dump as frostschutz suggested and then copying to the ext4 drive, it becomes obvious that the data is split up differently by the kernel.
for i in {1..5}; do dd if=/dev/zero of=/tmp/test/test.txt bs=1G count=1 oflag=dsync; done
[  922.895200] dd(2571): READ block 74112 on unknown-block(8,0) (8 sectors)
[  922.903712] dd(2571): READ block 8448 on unknown-block(8,0) (8 sectors)
[  923.724470] dd(2571): dirtied inode 12 (test.txt) on sda
[  923.729762] dd(2571): dirtied inode 12 (test.txt) on sda
[  923.735005] dd(2571): dirtied inode 12 (test.txt) on sda
[  924.543323] kworker/u8:0(2560): READ block 8320 on unknown-block(8,0) (8 sectors)
[  924.553112] kworker/u8:0(2560): WRITE block 278528 on unknown-block(8,0) (2048 sectors)
[  924.561496] kworker/u8:0(2560): WRITE block 280576 on unknown-block(8,0) (2048 sectors)
[  924.570013] kworker/u8:0(2560): WRITE block 282624 on unknown-block(8,0) (2048 sectors)
[  924.578534] kworker/u8:0(2560): WRITE block 284672 on unknown-block(8,0) (2048 sectors)

for i in {1..5}; do dd if=/dev/zero of=/dev/sda bs=1G count=1 oflag=dsync; done
[ 1504.428021] kworker/u8:0(2560): WRITE block 0 on unknown-block(8,0) (8 sectors)
[ 1504.435320] kworker/u8:0(2560): WRITE block 8 on unknown-block(8,0) (8 sectors)
[ 1504.442589] kworker/u8:0(2560): WRITE block 16 on unknown-block(8,0) (8 sectors)
[ 1504.449955] kworker/u8:0(2560): WRITE block 24 on unknown-block(8,0) (8 sectors)
[ 1504.457342] kworker/u8:0(2560): WRITE block 32 on unknown-block(8,0) (8 sectors)
[ 1504.464720] kworker/u8:0(2560): WRITE block 40 on unknown-block(8,0) (8 sectors)
mkfs TRIMs/discards the entire device, thus providing optimal benchmark conditions.

Also, with /proc/sys/vm/block_dump enabled (warning - TONS of output), I'm seeing writes of 8 sectors (dd on the raw block device) vs. writes of 16384 sectors (dd on ext4), so it might be due to how the kernel decides to split things up, since you can't literally send 1G block writes out?

dd on ext4:

dd(12080): dirtied inode 12 (test.txt) on loop0
dd(12080): dirtied inode 12 (test.txt) on loop0
dd(12080): dirtied inode 12 (test.txt) on loop0
kworker/u8:4(10318): READ block 2056 on loop0 (8 sectors)
kworker/u8:4(10318): WRITE block 278528 on loop0 (16384 sectors)
kworker/u8:4(10318): WRITE block 294912 on loop0 (16384 sectors)
kworker/u8:4(10318): WRITE block 311296 on loop0 (16384 sectors)
kworker/u8:4(10318): WRITE block 327680 on loop0 (16384 sectors)
kworker/u8:4(10318): WRITE block 344064 on loop0 (16384 sectors)
kworker/u8:4(10318): WRITE block 360448 on loop0 (16384 sectors)
...

dd directly:

dd(12116): WRITE block 0 on loop0 (8 sectors)
dd(12116): WRITE block 8 on loop0 (8 sectors)
dd(12116): WRITE block 16 on loop0 (8 sectors)
dd(12116): WRITE block 24 on loop0 (8 sectors)
dd(12116): WRITE block 32 on loop0 (8 sectors)
dd(12116): WRITE block 40 on loop0 (8 sectors)
dd(12116): WRITE block 48 on loop0 (8 sectors)
dd(12116): WRITE block 56 on loop0 (8 sectors)
dd(12116): WRITE block 64 on loop0 (8 sectors)
dd(12116): WRITE block 72 on loop0 (8 sectors)
dd(12116): WRITE block 80 on loop0 (8 sectors)
dd(12116): WRITE block 88 on loop0 (8 sectors)
dd(12116): WRITE block 96 on loop0 (8 sectors)
dd(12116): WRITE block 104 on loop0 (8 sectors)
dd(12116): WRITE block 112 on loop0 (8 sectors)
dd(12116): WRITE block 120 on loop0 (8 sectors)
dd(12116): WRITE block 128 on loop0 (8 sectors)
...

Now I only tested a loop device, not a real SSD, so... it might not be accurate.
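For reference, block_dump reports request sizes in 512-byte sectors, so the two patterns above translate into very different per-request I/O sizes. A quick sketch of the arithmetic (assuming the usual 512-byte sector unit):

```python
SECTOR_SIZE = 512  # block_dump reports request sizes in 512-byte sectors

def sectors_to_bytes(sectors):
    """Convert a sector count from block_dump output into bytes."""
    return sectors * SECTOR_SIZE

# The raw-device dd issues tiny 8-sector requests:
print(sectors_to_bytes(8))      # 4096 -> one 4 KiB page per write
# The sda trace above shows 2048-sector writeback requests:
print(sectors_to_bytes(2048))   # 1048576 -> 1 MiB per write
# The loop-device ext4 trace batches up to 16384 sectors:
print(sectors_to_bytes(16384))  # 8388608 -> 8 MiB per write
```

So the ext4 writeback path is issuing requests hundreds of times larger than the page-sized writes hitting the raw device.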
Writing to EXT4 faster than writing to disk directly?
1,485,714,442,000
I have encrypted my home partition via cryptsetup. Inside /dev/mapper, I have created an ext4 fs via

mkfs.ext4 /dev/mapper/home

Now, my home partition only has 91 GB. My LUKS partition has ~270 GB. Is this normal? How can I resize my ext4 home partition?

Some fdisk -l data:

Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1  *         2048 195314547 195312500  93,1G 83 Linux
/dev/sda2       781252606 928055295 146802690    70G  5 Extended
/dev/sda3       195315712 781250559 585934848 279,4G 83 Linux
/dev/sda5       781252608 894498815 113246208    54G 83 Linux
/dev/sda6       894500864 928055295  33554432    16G 82 Linux swap / Solaris

/dev/sda3 is the LUKS-encrypted partition.

Disk /dev/mapper/home: 279,4 GiB, 299981864960 bytes, 585902080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

So in fdisk, /dev/mapper/home and /dev/sda3 have the same size.

Some df -h /home data:

Filesystem Size Used Free Used % mounted
/dev/sda1  92G  47G  41G  54%    /

So here we only have 92 GB.

Some parted /dev/mapper/home data:

GNU Parted 3.2
Using /dev/mapper/home
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Linux device-mapper (crypt) (dm)
Disk /dev/mapper/home: 300GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:
Number Start  End   Size  File system Flags
1      0,00B  300GB 300GB ext4

Here it seems to have the 300 GB size?
To make an encrypted partition mount automatically, you'll first need /etc/crypttab set up properly. In your case, that means a line like this: home /dev/sda3 none luks (Here I'm assuming you used LUKS; if you used some other mode of cryptsetup, the two last parameters on the line may need to be different.) This should cause the system to prompt for encryption passphrase at boot time to unlock the encrypted volume and create the /dev/mapper/home device to access it through the encryption layer. Before proceeding further, boot once to verify that this actually works. Then you'll need a line in /etc/fstab to mount it: /dev/mapper/home /home ext4 defaults 0 2 Both in /etc/crypttab and in /etc/fstab you'll have the option of using the UUID= syntax instead of the corresponding device name. Please note that the UUID of the /home filesystem (as viewed through /dev/mapper/home for mounting) is extremely unlikely to be the same as the UUID of the encrypted container /dev/sda3.
Wrong EXT4 Partition size in luks encrypted partition
1,485,714,442,000
We have installed Debian 8 on a new Dell PowerEdge T330. There are two partitions, / and /var, in a RAID1 array using mdadm. During testing, the primary applications mysql and tomcat were stopped. We are getting abysmal write performance from both partitions, although the read performance is adequate. These are the observations from one of two identical servers set up the same way. Any help would be appreciated.

Performance

root@bcmdit-519:/home/bcmdit# FILE=/tmp/test_data && dd bs=16k \
count=102400 oflag=direct if=/dev/zero of=$FILE && \
rm $FILE && FILE=/var/tmp/test_data && dd bs=16k \
count=102400 oflag=direct if=/dev/zero of=$FILE && rm $FILE
102400+0 records in
102400+0 records out
1677721600 bytes (1.7 GB) copied, 886.418 s, 1.9 MB/s
102400+0 records in
102400+0 records out
1677721600 bytes (1.7 GB) copied, 894.832 s, 1.9 MB/s

root@bcmdit-519:/home/bcmdit# hdparm -t /dev/sda ; hdparm -t /dev/sdb ; hdparm -t /dev/md0 ; hdparm -t /dev/md1

/dev/sda:
Timing buffered disk reads: 394 MB in 3.00 seconds = 131.15 MB/sec

/dev/sdb:
Timing buffered disk reads: 394 MB in 3.01 seconds = 131.05 MB/sec

/dev/md0:
Timing buffered disk reads: 398 MB in 3.00 seconds = 132.45 MB/sec

/dev/md1:
Timing buffered disk reads: 318 MB in 3.00 seconds = 106.00 MB/sec

References

https://severfault.com/questions/832117/how-increase-write-speed-of-raid1-mdadm
https://wiki.mikejung.biz/Software_RAID
Write access time slow on RAID1
https://bbs.archlinux.org/viewtopic.php?id=173791
et al...
Configuration Encryption was setup using: root@bcmdit-519:/home/bcmdit# cryptsetup luksDump UUID=1e7b64ac-f187-4fac-9712-8e0dacadfca7|grep -E 'Cipher|Hash' Cipher name: aes Cipher mode: xts-plain64 Hash spec: sha1 Config snippets root@bcmdit-519:/home/bcmdit# facter virtual productname lsbdistid \ lsbdistrelease processor0 blockdevice_sda_model \ blockdevice_sdb_model bios_version && uname -a && uptime ---------- bios_version => 2.4.3 blockdevice_sda_model => ST1000NX0423 blockdevice_sdb_model => ST1000NX0423 lsbdistid => Debian lsbdistrelease => 8.10 processor0 => Intel(R) Xeon(R) CPU E3-1230 v6 @ 3.50GHz productname => PowerEdge T330 virtual => physical Linux bcmdit-519 3.16.0-4-amd64 #1 SMP Debian 3.16.51-3 (2017-12-13) x86_64 GNU/Linux 14:45:58 up 2:49, 2 users, load average: 0.06, 0.17, 0.44 root@bcmdit-519:/home/bcmdit# grep GRUB_CMDLINE_LINUX_DEFAULT /etc/default/grub GRUB_CMDLINE_LINUX_DEFAULT="quiet erst_disable elevator=deadline" root@bcmdit-519:/home/bcmdit# free -m total used free shared buffers cached Mem: 32202 1532 30670 9 17 369 -/+ buffers/cache: 1145 31056 Swap: 0 0 0 root@bcmdit-519:/home/bcmdit# parted /dev/sda print Model: ATA ST1000NX0423 (scsi) Disk /dev/sda: 1000GB Sector size (logical/physical): 512B/512B Partition Table: msdos Disk Flags: Number Start End Size Type File system Flags 1 1049kB 500GB 500GB primary boot, raid 2 500GB 1000GB 500GB extended 5 500GB 1000GB 500GB logical raid root@bcmdit-519:/home/bcmdit# parted /dev/sdb print Model: ATA ST1000NX0423 (scsi) Disk /dev/sdb: 1000GB Sector size (logical/physical): 512B/512B Partition Table: msdos Disk Flags: Number Start End Size Type File system Flags 1 1049kB 500GB 500GB primary raid 2 500GB 1000GB 500GB extended 5 500GB 1000GB 500GB logical raid ---------- root@bcmdit-519:/home/bcmdit# cat /proc/mdstat Personalities : [raid1] md1 : active raid1 sda5[0] sdb5[1] 488249344 blocks super 1.2 [2/2] [UU] bitmap: 3/4 pages [12KB], 65536KB chunk md0 : active raid1 sda1[0] sdb1[1] 488248320 
blocks super 1.2 [2/2] [UU] bitmap: 2/4 pages [8KB], 65536KB chunk unused devices: <none> root@bcmdit-519:/home/bcmdit# mdadm --query --detail /dev/md0 /dev/md0: Version : 1.2 Creation Time : Mon Apr 16 13:46:51 2018 Raid Level : raid1 Array Size : 488248320 (465.63 GiB 499.97 GB) Used Dev Size : 488248320 (465.63 GiB 499.97 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Tue May 15 14:26:47 2018 State : clean Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Name : bcmdit-519:0 (local to host bcmdit-519) UUID : afd3968c:2e8b191d:4504f21e:255b6470 Events : 1703 Number Major Minor RaidDevice State 0 8 1 0 active sync /dev/sda1 1 8 17 1 active sync /dev/sdb1 root@bcmdit-519:/home/bcmdit# mdadm --query --detail /dev/md1 /dev/md1: Version : 1.2 Creation Time : Mon Apr 16 13:47:06 2018 Raid Level : raid1 Array Size : 488249344 (465.63 GiB 499.97 GB) Used Dev Size : 488249344 (465.63 GiB 499.97 GB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Tue May 15 14:15:11 2018 State : active Active Devices : 2 Working Devices : 2 Failed Devices : 0 Spare Devices : 0 Name : bcmdit-519:1 (local to host bcmdit-519) UUID : e46f968a:e8fff775:ecee9cfb:4ad88574 Events : 2659 Number Major Minor RaidDevice State 0 8 5 0 active sync /dev/sda5 1 8 21 1 active sync /dev/sdb5 root@bcmdit-519:/home/bcmdit# cat /etc/crypttab crypt1 UUID=1e7b64ac-f187-4fac-9712-8e0dacadfca7 /root/.crypt1 luks root@bcmdit-519:/home/bcmdit# grep -v '^#' /etc/fstab UUID=c6baa173-8ea6-4598-a965-eee728a93d69 / ext4 defaults,errors=remount-ro 0 1 /dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0 /dev/mapper/crypt1 /var ext4 defaults,errors=remount-ro 0 2 /var/swapfile1 none swap sw,nofail 0 0 root@bcmdit-519:/home/bcmdit# smartctl -a /dev/sda|head -n 20 smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build) Copyright (C) 2002-14, Bruce 
Allen, Christian Franke, www.smartmontools.org === START OF INFORMATION SECTION === Device Model: ST1000NX0423 Serial Number: W4713QXE LU WWN Device Id: 5 000c50 0abb06247 Add. Product Id: DELL(tm) Firmware Version: NA07 User Capacity: 1,000,204,886,016 bytes [1.00 TB] Sector Size: 512 bytes logical/physical Rotation Rate: 7200 rpm Form Factor: 2.5 inches Device is: Not in smartctl database [for details use: -P showall] ATA Version is: ACS-3 (minor revision not indicated) SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s) Local Time is: Tue May 15 14:29:03 2018 PDT SMART support is: Available - device has SMART capability. SMART support is: Enabled root@bcmdit-519:/home/bcmdit# smartctl -a /dev/sdb|head -n 20 smartctl 6.4 2014-10-07 r4002 [x86_64-linux-3.16.0-4-amd64] (local build) Copyright (C) 2002-14, Bruce Allen, Christian Franke, www.smartmontools.org === START OF INFORMATION SECTION === Device Model: ST1000NX0423 Serial Number: W4714VDQ LU WWN Device Id: 5 000c50 0abf99927 Add. Product Id: DELL(tm) Firmware Version: NA07 User Capacity: 1,000,204,886,016 bytes [1.00 TB] Sector Size: 512 bytes logical/physical Rotation Rate: 7200 rpm Form Factor: 2.5 inches Device is: Not in smartctl database [for details use: -P showall] ATA Version is: ACS-3 (minor revision not indicated) SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s) Local Time is: Tue May 15 14:29:11 2018 PDT SMART support is: Available - device has SMART capability. 
SMART support is: Enabled

Update 1

With bs=16M:

root@bcmdit-519:/tmp# FILE=/tmp/test_data \
&& dd bs=16M count=102 oflag=direct if=/dev/zero of=$FILE \
&& rm $FILE \
&& FILE=/var/tmp/test_data \
&& dd bs=16M count=102 oflag=direct if=/dev/zero of=$FILE \
&& rm $FILE
102+0 records in
102+0 records out
1711276032 bytes (1.7 GB) copied, 16.6394 s, 103 MB/s
102+0 records in
102+0 records out
1711276032 bytes (1.7 GB) copied, 17.8649 s, 95.8 MB/s

Update 2

The Seagate drive serial number found via SMART indicates an enterprise-grade drive: https://www.cnet.com/products/seagate-enterprise-capacity-2-5-hdd-v-3-1tb-sata-512n/specs/

Update 3

I found that drive write caching was off. After turning it on with:

hdparm -W1 /dev/sd*

I now get much better results with bs=16k:

root@bcmdit-519:/home/bcmdit# FILE=/tmp/test_data && dd bs=16k count=102400 oflag=direct if=/dev/zero of=$FILE && rm $FILE
102400+0 records in
102400+0 records out
1677721600 bytes (1.7 GB) copied, 14.0708 s, 119 MB/s

Update 4

root@ecm-oscar-519:/home/bcmdit# cryptsetup benchmark
# Tests are approximate using memory only (no storage IO).
PBKDF2-sha1      1394382 iterations per second
PBKDF2-sha256     923042 iterations per second
PBKDF2-sha512     728177 iterations per second
PBKDF2-ripemd160  804122 iterations per second
PBKDF2-whirlpool  313569 iterations per second
# Algorithm   | Key |  Encryption  |  Decryption
aes-cbc        128b  1149.9 MiB/s  3655.8 MiB/s
serpent-cbc    128b    99.6 MiB/s   743.4 MiB/s
twofish-cbc    128b   219.0 MiB/s   400.0 MiB/s
aes-cbc        256b   867.5 MiB/s  2904.5 MiB/s
serpent-cbc    256b    99.6 MiB/s   742.6 MiB/s
twofish-cbc    256b   218.9 MiB/s   399.8 MiB/s
aes-xts        256b  3615.1 MiB/s  3617.3 MiB/s
serpent-xts    256b   710.8 MiB/s   705.0 MiB/s
twofish-xts    256b   388.1 MiB/s   394.5 MiB/s
aes-xts        512b  2884.9 MiB/s  2888.1 MiB/s
serpent-xts    512b   710.7 MiB/s   704.7 MiB/s
twofish-xts    512b   388.0 MiB/s   394.3 MiB/s
When you ask dd for bs=16k and oflag=direct, you are asking for many small synchronous writes; this is what HDDs are bad at and what SSDs are good at. You can use LVMCache to get the benefit of both (up to the SSD size).

If you use bs=16M, or no oflag, then the writes are split/combined/cached in RAM and written out at an optimal size. This is why dd with oflag=direct is slower when writing to disk than to a file. For example:

> dd if=/dev/zero of=test.bin bs=16k count=1000 oflag=direct
1000+0 records in
1000+0 records out
16384000 bytes (16 MB, 16 MiB) copied, 0.0815558 s... no, 3.19453 s, 5.1 MB/s

> dd if=/dev/zero of=test.bin bs=16M count=1 oflag=direct
1+0 records in
1+0 records out
16777216 bytes (17 MB, 16 MiB) copied, 0.291366 s, 57.6 MB/s

> dd if=/dev/zero of=test.bin bs=16k count=1000
1000+0 records in
1000+0 records out
16384000 bytes (16 MB, 16 MiB) copied, 0.0815558 s, 201 MB/s

> uname -r
4.14.41-130
Slow Write Speed(<2 MB/s) on Unencrypted and LUKS Encrypted ext4 filesystems using mdadm Software RAID1 Debian 8 Dell PowerEdge T330 Server
1,485,714,442,000
I just resized a .vdi on my host from 15.5G to 120G. I tried to resize the partition from the guest (Ubuntu Server) using resize2fs:

root@ubuntu:~# sudo resize2fs /dev/sda2 115G
resize2fs 1.42.13 (17-May-2015)
resize2fs: Attempt to read block from filesystem resulted in short read while trying to open /dev/sda2
Couldn't find valid filesystem superblock.

Now, according to my understanding of the situation, /dev/sda2 is corrupt. However, my server VM still works fine and has no problems running on the partition. fdisk -l /dev/sda outputs:

Disk /dev/sda: 120 GiB, 128849018880 bytes, 251658240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x32955267

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sda1  *       2048   999423   997376  487M 83 Linux
/dev/sda2       1001470 33552383 32550914 15.5G  5 Extended
/dev/sda5       1001472 33552383 32550912 15.5G 8e Linux LVM

Now to my question: Is this normal and healthy for the server, and if not, how do I fix it?
resize2fs and other tools in the e2fsprogs suite assume that the read system call either returns the whole requested size or encounters an error. This is not true in general: read is allowed to return less, and you're supposed to call it in a loop. I think the Linux kernel guarantees that read returns all the data from block devices in some circumstances, but I've been bitten by this in the past, with e2fsprogs making wrong assumptions about the kernel.

The fact that e2fsprogs doesn't loop around read is really a bug. At best it's a limitation: maybe the way the code is written is correct for some versions of the Linux kernel when accessing a block device. But that limitation is not documented anywhere. The code is definitely buggy when accessing an image file.

Check the kernel logs for errors. If the kernel reports an error, then the problem is not a bug in resize2fs (or at least, nothing more than poor error reporting). If the kernel doesn't report disk errors, run

strace -o resize2fs.strace sudo resize2fs /dev/sda2 115G

and check the read system calls:

read(3, "…", REQUESTED) = READ

There is a short read if REQUESTED ≠ READ. If you observe this, and you can reproduce it, it would be worth making a bug report. Explain exactly how you triggered it: exact kernel version, how the kernel was compiled, exact version of resize2fs, what hardware driver manages /dev/sda, and what virtual machine software it's running on. I recommend reporting the bug to Ubuntu rather than to upstream, as upstream is often not good at talking to non-experts.
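The loop described above ("you're supposed to call this in a loop") looks roughly like the following sketch. This is illustrative Python, not the actual e2fsprogs code; the helper name is made up:

```python
import os

def read_fully(fd, count):
    """Read exactly `count` bytes from fd, retrying over short reads.
    Returns fewer bytes only at end-of-file."""
    chunks = []
    remaining = count
    while remaining > 0:
        data = os.read(fd, remaining)  # may legally return less than asked
        if not data:                   # empty read means EOF
            break
        chunks.append(data)
        remaining -= len(data)
    return b"".join(chunks)
```

A single os.read that returns less than `count` is not an error; code that treats it as one (which is effectively what the answer says e2fsprogs does) will misbehave whenever a short read occurs.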
Short read while trying to open /dev/sda2
1,485,714,442,000
I'm formatting a disk with following command switches, I can format the disk to ext4. sudo mke2fs -F -E lazy_itable_init=0,lazy_journal_init=0,discard -t ext4 -b 4096 ... However, once I added this switch: -O ^has_journal It will be formatted to ext2. Could you explain why?
Because ext4 is an extension of the ext2 and ext3 filesystems; one of the features that it extended was the use of a journal. References: https://ext4.wiki.kernel.org/index.php/Frequently_Asked_Questions#What_is_the_difference_between_ext2.2C_ext3.2C_and_ext4.3F https://unix.stackexchange.com/a/60757/117549
Why this switch will effectively format disk to ext2 instead of ext4?
1,485,714,442,000
I'm quite sure it didn't happen before I started using LUKS and LVM on my disk (2 or 3 weeks ago), but now the root user is constantly (every 2-3 seconds) writing (not reading, just writing) to the disk, and I can't figure out why. The disk has one LUKS partition with one LVM group, which contains both root (ext4) and home (ext4) logical partitions (besides the swap one).

I've used the iotop command to check which processes are accessing the disk, and the process writing constantly to the disk is "jbd2/dm-X-8", executed by root. It only happens on the disk containing the LVM; I have two more ext4 disks mounted (just for storage purposes; they also use LUKS encryption, but not LVM) and they "stay quiet" while no file operations are made on them.

I've checked log files to see if this writing activity could be due to some kind of logging, but it doesn't seem to be the case. I've also read questions like this one: LVM keeping harddisk awake? But I can't understand why the system would keep on writing to the disk even when no changes are being made to it. On the other hand, I've had some issues with the disk which I thought could be related to other programs' crashes, but now I don't know whether they could be related to this constant writing to disk, so I'm quite worried about it.

Is it normal? Can I do something to avoid it, or is it something that just "comes with" LUKS/LVM? Or maybe it has nothing to do with LUKS/LVM and I should check something else?
I've finally managed to reduce this constant writing activity on the disk dramatically, though I still don't totally understand how all this is linked. I guess this is not a perfect solution to the problem, but it fixed it for me in some way, so in case it helps someone:

I've edited the /etc/fstab file, changing the options column for the root partition to "defaults", since it was set by default to "error..." (I was just experimenting; I don't understand why it makes any difference, but apparently it does).

I've set swappiness to 0. But again, even though this seems to affect the writing rate, I don't understand why it does in my case, since the computer has 16 GB of RAM, so IMHO this (as with the fstab issue) shouldn't make any difference.

With this new low-frequency writing rate, at least I've managed to be less worried about the disk's health, but I'm still concerned about the logic behind this behaviour, so any explanations and/or better solutions will be really welcome.
Why is OS constantly writing to disk (ext4) on a Ubuntu 14.04 machine? Is it normal?
1,398,618,849,000
I have a problem with a filesystem on a storage machine here. We noticed that much of the data coming out of the system seems to be corrupt, but only with minor problems, like CRC errors in self-verifying installers or small picture errors in movies. While tracking down the problem, I ended up testing with 3 files of about 900 MB each. The ext4 filesystem is mounted read-only, but every time I run md5sum on the files, the result differs:

$ ls -l
-rw-rw-r-- 1 samba samba 922789695 Jan  7 21:47 File1
-rw-rw-r-- 1 samba samba 939080225 Jan  7 21:54 File2
-rw-rw-r-- 1 samba samba 996515494 Jan 14 21:13 File3
$ md5sum *
9449c8e4fd2869a7969017db266451b0 File1
016b5c2e8b535ec922f5efb4ec9082bc File2
5576aeb34575e07171fa835a79fec147 File3
$ echo 3 > /proc/sys/vm/drop_caches # (clear file cache of the kernel)
$ md5sum *
3f03edec64e22de384fd3d2cff0e3730 File1
32b53ee1dd3f5c9796322cabe4f8c0da File2
35af5c433d0725ab0892d4517faeceea File3
$ echo 3 > /proc/sys/vm/drop_caches
$ md5sum *
593d83e084387a8d5bd9b445032a5669 File1
4f8b76249b96a1a29bdd748167c41bda File2
8b5bab8a153eb6e33dc3cd7d23362090 File3
$ echo 3 > /proc/sys/vm/drop_caches
$ md5sum *
d716d9c4acbd3ade450bab46903810d9 File1
68ede84d1396075ffe8a9228966cc148 File2
b8d75123b2d5b18c0d2827a448f53086 File3
$ echo 3 > /proc/sys/vm/drop_caches
$ md5sum *
c991bcca3bc2f39fdd143f8460935646 File1
73e6301b28c3b1b0bb95df52ea5794dd File2
a202e88343d6e7bc4dce808b885ad013 File3

First I let e2fsck check the whole disk. It found a few problems, but it finds different errors on every new run. I think it gets different reads each time, same as md5sum, and the problem is on another layer. The whole thing is inside a Xen VM, but I don't think that detail matters.
The architecture is like:

        ext4
          |
       dm-crypt
          |
  (xen blk between here)
          |
   md-raid5 (softraid)
          |
          +---+-----------------------------+
          |                                 |
   mainboard sata               +---------pcie---------+
          |                     |                      |
       3 disks         sata controller (jbod)  sata controller (jbod)
      (1 failed)                |                      |
                             2 disks                2 disks

lspci output of the SATA controllers:

00:12.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB600 Non-Raid-5 SATA
02:00.0 RAID bus controller: Silicon Image, Inc. SiI 3132 Serial ATA Raid II Controller (rev 01)
03:00.0 SATA controller: JMicron Technology Corp. JMB363 SATA/IDE Controller (rev 03)

While I was searching for the problem, one of the 7 disks failed, and the RAID is currently running with only 6 disks until the replacement arrives. Maybe this is part of the problem? The corruption definitely existed before the failure, but now the RAID should be in a vulnerable but stable state...?

What's going on here?
I think I found the problem: after a while of playing around with different setups, I replaced the SiI controller with an old PCI one, and the problem seems to be solved.
Same file with different content on every read [closed]
1,398,618,849,000
Possible Duplicate: ext4: How to account for the filesystem space?

After googling a little, I found that ext4 reserves 5% for root: Reserved space for root on a filesystem - why?, ext2/3/4 reserved blocks percentage purpose. And if I use tune2fs to set the number of reserved blocks to 0, it gives that space back. But I did that on a 100 MB volume and only got back half of the reserved space: the used space was 10.5 MB, and after setting the reservation to 0, the used space became 5.5 MB. Why is the used space still not zero? I used the following command to set the reserved space:

tune2fs -m 0 /dev/mapper/truecrypt1
There is still space required for the filesystem's internal usage (superblocks, etc). This is merely the way ext4 works and defines "used space".
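You can watch this bookkeeping from userspace via statvfs, the same interface df uses: the gap between total size and free space on an empty filesystem is exactly that internal usage. A sketch (the path "/" here is just an example mount point):

```python
import os

def fs_usage(path):
    """Report sizes the way df does, computed from statvfs fields."""
    st = os.statvfs(path)
    total = st.f_blocks * st.f_frsize  # total size of the filesystem
    free  = st.f_bfree  * st.f_frsize  # free, including root-reserved blocks
    avail = st.f_bavail * st.f_frsize  # free for unprivileged users
    used  = total - free               # includes metadata overhead
    return total, used, free, avail

total, used, free, avail = fs_usage("/")
# Even on a freshly created ext4 volume, `used` stays above zero:
# superblocks, group descriptors, inode tables etc. count as used space.
```

With the reserved-blocks percentage set to 0, f_bfree and f_bavail converge, but `used` never reaches zero for the reason given above.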
Why is some filesystem space used even though the filesystem is empty? [duplicate]
1,398,618,849,000
While playing with filesystems and partitions, I realized that when I created an ext filesystem on my USB drive and plugged it into Windows, I was forced to format it. On the other hand, when building a FAT partition on Windows and plugging it into my virtual machine, Linux is perfectly able to read and mount my FAT partition.

1 - Why can't Windows read Linux filesystems?
2 - What's the key difference that allows Linux to do it, yet Windows can't?
Windows can’t read “Linux” file systems (such as Ext4 or XFS) by default because it doesn’t ship with drivers for them. You can install software such as Ext2fsd to gain read access to Ext2/3/4 file systems. Linux can access FAT file systems because the kernel has a FAT file system driver, and most distributions enable it by default. There are cases where Linux distributions won’t be able to access a Windows-formatted USB key by default: large keys are typically formatted using ExFAT, and the Linux kernel doesn’t support that. You would have to install a separate ExFAT driver in this situation. There’s nothing inherent in Windows or Linux which limits their ability to support file systems; it’s really down to the availability of drivers. Linux supports Windows file systems because they are very popular; this then provides a common basis for file exchange, meaning that there is less need for Windows to support Linux file systems.
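On the Linux side, you can see which filesystem drivers the running kernel currently knows about by reading /proc/filesystems; if vfat appears there, the kernel can mount FAT volumes. A small sketch (the parsing assumes the usual tab-separated format of that file):

```python
def kernel_filesystems(path="/proc/filesystems"):
    """Return the filesystem names the running kernel supports.
    Lines look like 'nodev\tproc' (virtual) or '\text4' (block-backed)."""
    names = []
    with open(path) as f:
        for line in f:
            fields = line.rstrip("\n").split("\t")
            names.append(fields[-1])  # the last field is the fs name
    return names

# On a Linux box:
#   supported = kernel_filesystems()
#   "vfat" in supported  -> can this kernel mount FAT?
```

The list grows as filesystem modules are loaded, which is exactly the "availability of drivers" point above.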
Why can't Windows read Linux filesystems? [closed]
1,398,618,849,000
As per my understanding, READ/WRITE etc. are filesystem operations in Linux. The filesystems register callbacks with the kernel (VFS) and are in turn called by it when that particular FS is involved in a READ/WRITE operation. For example:

EXT4 write: VFS write request -> ext4_writepages()
F2FS write: VFS write request -> f2fs_write_data_page()

But what if the storage medium is not formatted and has no filesystem at all? When a READ/WRITE operation is performed on it, which filesystem operation is selected by default?
In order for the VFS layer to be able to do read/write operations on a file, then that file must be opened in one way or another. If you have a medium that has no filesystem, then you cannot mount it. If you cannot mount it, then you cannot have a path to it for use by open(). If you cannot open a file on it, then you cannot perform read/write operations on it. Thus, you cannot do read/write operations on a medium with no filesystem. You would have the block device (assuming all necessary drivers are available), which would enable you to do I/O on the device itself in order to format it.
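To illustrate the last point: I/O on the device itself is just positional reads and writes on the device node, with no paths, directories, or inodes involved. A sketch, using a plain file as a stand-in for a real /dev node (opening an actual block device would typically need root):

```python
import os

def raw_write(dev, offset, data):
    """Write bytes at a byte offset on a device -- no filesystem,
    just positions on the underlying storage."""
    fd = os.open(dev, os.O_RDWR)
    try:
        os.pwrite(fd, data, offset)
    finally:
        os.close(fd)

def raw_read(dev, offset, count):
    """Read `count` bytes starting at a byte offset on a device."""
    fd = os.open(dev, os.O_RDONLY)
    try:
        return os.pread(fd, count, offset)
    finally:
        os.close(fd)
```

This is essentially what mkfs does when formatting: it writes filesystem structures at fixed offsets on the raw device.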
Can VFS read/write operations be performed on an unformatted storage device in Linux?
1,398,618,849,000
Here is the origin of my question: I'm running Linux containers with the LXD snap version on Ubuntu 22.04 on a VPS. The root filesystem of the VPS is ext4, and there is no additional storage attached, so the default LXD storage pool is created with the dir option. When I take snapshots of these LXCs, the whole data is duplicated - i.e. if the container is 6G, the snapshot becomes another 6G. I think if it were an LVM filesystem, the snapshots would be created in a different way.

So my question is: Is it possible to do something like fallocate -l 16G /lvm.fs, then format it as LVM, mount it and use it as a storage pool for LXD? And of course, how can I do that if it is possible?

Some notes: The solution provided by @larsks works as expected! Later I found that lxc storage create pool-name lvm without additional options and parameters does almost the same. I didn't test it before I published the question because I was thinking the lvm driver would mandatorily require a separate partition. However, in both cases this approach, in my opinion, has many more cons than pros, for example:

The write speed is decreased by about 10% compared to using the dir driver.

It is hard to recover from situations where no space is left on the disk, even when the overflowing data is located in /tmp... In contrast, when the dir driver is used, LXD prevents the consumption of the entire host's filesystem space, so your system and containers remain operational. This is much more convenient in my VPS case.
It possible to do something like fallocate -l 16G /lvm.fs, then format it as LVM, mount it and use it as storage pool for LXD? And of course, how can I do that if it is possible? Start by making your file. I like to place it in a directory other than /, so I created a /vol directory for this purpose: truncate -s 16G /vol/pv0 (As @LustreOne notes in comments, using truncate rather than fallocate doesn't preallocate blocks for the file, so it starts out using zero bytes and only consumes as much disk space as is written to it). Configure that file as a block device using losetup: losetup -fnP --show /vol/pv0 That will output the name of a loop device (probably /dev/loop0, but if not, adjust the following commands to match). Set up LVM on that device: pvcreate /dev/loop0 vgcreate vg0 /dev/loop0 lvcreate ... Congratulations, you have a filed-backed LVM VG! Unfortunately, if you were to reboot at this point, you would find that the VG was missing: loop devices aren't persistent, so we need to add some tooling to configure things when the system starts up. Put the following into /usr/local/bin/activate-vg.sh: #!/bin/sh losetup -fnP /vol/pv0 vgchange -ay And make sure it's executable: chmod a+x /usr/local/bin/activate-vg.sh Add a systemd unit to activate the service. Put the following into /etc/systemd/system/activate-vg.service: [Unit] DefaultDependencies=no Requires=local-fs.target local-fs-pre.target After=local-fs-pre.target Before=local-fs.target [Service] Type=oneshot ExecStart=/usr/local/bin/activate-vg.sh [Install] WantedBy=local-fs.target Enable the service: systemctl enable activate-vg Now your file-backed LVM VG should be available when you reboot.
Is it possible to use a file as a filesystem?
1,398,618,849,000
I have a problem with a very large number of files in one directory. The file system is ext4. I reached the 2**32 file limit and couldn't even write any file to this partition. The problem is serious: I don't know how I can move some of the files to other resources. The classic ls and mv are not working; there are too many files. Is there any way to quickly output an arbitrary file in bash, one file from a directory that holds almost 2**32 files? If I can retrieve one file quickly, I can write a script. Any ideas?
Please try running: ls --sort=none --no-group or limit to some number of files, e.g. ls --sort=none --no-group | head -500
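If that works, one hedged way to drain the directory in batches (assuming GNU coreutils; both paths below are placeholders) is to pipe the unsorted listing into mv:

```shell
# Move 500 directory entries at a time to another filesystem without
# sorting the huge directory first. /big/dir and /other/fs are examples.
cd /big/dir
ls --sort=none --no-group | head -500 | xargs -d '\n' -r mv -t /other/fs/
```

head closes the pipe after 500 names, so ls is stopped early; repeat the command (or wrap it in a loop) until enough entries have been moved.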
Ext4 and Linux - very large number of files in one directory - operations
1,398,618,849,000
I'm currently looking at an issue with data loss and was using the tune2fs utility, and I was wondering what 'Last write time:' refers to. The volume is written to constantly and backups confirm that the data that I lost has been backed up, but I just want to understand what that field means as it can't be the last time data was written to disk (there are files on the disk with newer creation times than the last write time).
The “Last write time” in tune2fs’ output reflects the last time the super block was written. This doesn’t change when files are written to the device, only when certain pieces of information stored in the super block change: in particular, when the device is mounted, when its recovery status changes, or when an error is encountered.
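To see this behaviour for yourself, compare the superblock timestamps before and after a remount; reading them is harmless, and /dev/sdb1 below is just a placeholder device:

```shell
# Print the superblock timestamps tracked in the ext4 metadata.
# /dev/sdb1 is an example; substitute your own device.
sudo tune2fs -l /dev/sdb1 | grep -E 'Last (mount|write) time'
```

Both timestamps should advance across an unmount/mount cycle even if no file was written in between.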
Ext4 - Last write time
1,398,618,849,000
I know there are tools like df which show the disk space remaining on disks but I could not find any info on how this tool actually gets this info. I would imagine the filesystem keeps track of this information somehow but I can't find info on this either. Is there a plain explanation on how this information is gathered from filesystem (specifically ext4) or any terms that would help for finding this information?
You can see what df does using strace:

$ strace df / |& grep -i ext
statfs("/", {f_type=EXT2_SUPER_MAGIC, f_bsize=4096, f_blocks=4611519, f_bfree=864281, f_bavail=624269, f_files=1179648, f_ffree=620737, f_fsid={126240841, 1491846125}, f_namelen=255, f_frsize=4096, f_flags=ST_VALID|ST_RELATIME}) = 0

And from man 2 statfs:

The statfs() system call returns information about a mounted filesystem. path is the pathname of any file within the mounted filesystem. buf is a pointer to a statfs structure defined approximately as follows:

struct statfs {
    __fsword_t f_type;    /* Type of filesystem (see below) */
    __fsword_t f_bsize;   /* Optimal transfer block size */
    fsblkcnt_t f_blocks;  /* Total data blocks in filesystem */
    fsblkcnt_t f_bfree;   /* Free blocks in filesystem */
    fsblkcnt_t f_bavail;  /* Free blocks available to unprivileged user */
    fsfilcnt_t f_files;   /* Total file nodes in filesystem */
    fsfilcnt_t f_ffree;   /* Free file nodes in filesystem */
    fsid_t     f_fsid;    /* Filesystem ID */
    __fsword_t f_namelen; /* Maximum length of filenames */
    __fsword_t f_frsize;  /* Fragment size (since Linux 2.6) */
    __fsword_t f_flags;   /* Mount flags of filesystem (since Linux 2.6.36) */
    __fsword_t f_spare[xxx]; /* Padding bytes reserved for future use */
};

If you just want the free space of a mount point, statfs seems to be the way to go. Free blocks * block size = free space (minus reserved space, etc.). I imagine ext4 must keep the count of free blocks somewhere in the superblock, which you then use with the block size to get the free space.
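The same statfs fields are exposed on the command line by GNU coreutils' stat -f, so a rough sketch of df's available-space arithmetic, without writing any C, might be:

```shell
# Approximate df's "Avail" column from statfs fields.
# %a = free blocks available to unprivileged users, %S = fundamental block size.
mountpoint=/
avail_blocks=$(stat -f -c %a "$mountpoint")
block_size=$(stat -f -c %S "$mountpoint")
echo "$(( avail_blocks * block_size )) bytes available on $mountpoint"
```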
How is free disk space on ext4 calculated?
1,398,618,849,000
I have mounted an ext4 file system on the Dir directory and tweaked some directory-read code, which for its testing requires the directory to contain files with non-sequential inode numbers. Since I am creating the files with a shell script, they get sequential inode numbers: for files created at the same time, inodes are allocated one after another from the inode freelist, which generally yields sequential numbers. I have used the following shell script to create files in Dir:

#! /bin/bash
for n in {1..1000}; do
    dd if=/dev/urandom of=file$( printf %03d "$n" ).bin bs=1 count=$(( RANDOM + 1024 ))
done

ls -i Dir gives the following output:

567 file001.bin
568 file002.bin
569 file003.bin
570 file004.bin
571 file005.bin
572 file006.bin
573 file007.bin
574 file008.bin
575 file009.bin
576 file010.bin
..

How can I make these files have non-sequential inodes?
Well, a straightforward approach would be to just create a bunch of temporary files after each .bin file:

function randomFiles() {
    for (( i=1; i <= RANDOM % $1 + 1; i++ )); do
        mktemp -q --tmpdir=.
    done
}

for n in {1..1000}; do
    dd if=/dev/urandom of=file$( printf %03d "$n" ).bin bs=1 count=$(( RANDOM + 1024 ))
    randomFiles 10
done
rm -f tmp.*

This will create 1 to 10 temporary files after each .bin file, shifting the next inode number forward.
How can I create files in the directory to have inodes allocated to files with non-sequential inode numbers?
1,398,618,849,000
I searched the Internet, but I was not able to find a satisfying answer to my problem. The Problem I'm encountering currently is, that I'm transitioning my data from a NTFS Partition to a ext4 partition. What surprised me was the fact, that I could store less data on the same harddrive with the ext4 filesystem. After investigating a little I found out that this might have something to do with the Inodes of ext4. me@server:/media$ LANG=C df -i Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda1 3815552 31480 3784072 1% /media/storage /dev/sdb1 1905792 1452 1904340 1% /mnt When running the command me@server:~$ sudo find /mnt -type f | wc -l 1431 it tells me that I have 1431 files on the harddrive, each being around 4-8GB. So basically I have too much Inodes for very few files. My questions are: How can I change the number of Inodes now? Is there maybe a better filesystem for just storing files?
By default, ext2/ext3/ext4 filesystems have 5% of the space reserved for the root user. This makes sense for the root filesystem in a typical configuration: it means that the system won't grind to a halt if a user fills up the disk, critical functionality will still work and in particular logs can still be written. It doesn't make sense in most other scenarios. To avoid reserving 5% for the root user, pass -m 0 to mkfs when creating the filesystem, or call tune2fs with the option -m 0 afterwards. Though if your filesystem is 95% full, you should look into expanding it. Most filesystems (including both NTFS and the ext? family) don't operate efficiently when they're very nearly full.
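For concreteness, a sketch of both options (the device name is an example only, and note that mkfs.ext4 destroys whatever is on the target):

```shell
# Create a fresh ext4 filesystem with no blocks reserved for root
# (WARNING: erases the device; /dev/sdb1 is a placeholder).
mkfs.ext4 -m 0 /dev/sdb1

# Or drop the reservation on an existing filesystem, non-destructively:
tune2fs -m 0 /dev/sdb1

# Verify the result:
tune2fs -l /dev/sdb1 | grep 'Reserved block count'
```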
ext4 file system tuning for storage partition
1,398,618,849,000
Somehow my .vdi (Linux guest OS) file got corrupted. Now I have some files in it (Inside the vdi file) and I want to recover those files. How can I do that?
Assuming you're on a Linux host as well (you didn't mention that), you can always try the Network Block Device (NBD) option:

sudo modprobe nbd max_part=16
sudo qemu-nbd -c /dev/nbd0 <path to your vdi file>
ls -lh /dev/nbd0*    # lists all the partitions in the vdi

Choose which of the partitions you want to mount (e.g. the 1st partition), then:

sudo mount /dev/nbd0p1 /mnt

That may work, depending on how corrupt your vdi file is. You can use normal filesystem tools on this mount and/or device node. When done, unmount it and:

sudo qemu-nbd -d /dev/nbd0

Note: You may have to install qemu-nbd depending on your distro. It is in package qemu-utils on Ubuntu and qemu-img on Fedora. If you're on Windows you may have some success by following this post. An alternative Windows way would be to quickly install another Linux VM and then add your vdi file as an additional disk to that VM. You can then use the NBD procedure above on it.
Linux: Recover files from .vdi file
1,398,618,849,000
I ran df, and the output appears almost instantly: (FS Size Used Avail Use%) /dev/sda1 145G 8.4G 130G 7% sda1 is an ext4 partition. Without summing the size of all files, how can df give me the space information almost instantly?
Like traditional Unix File Systems, ext2, ext3 and ext4 have a segment of metadata called a superblock, which contains information about the configuration of the file system. The primary superblock is stored at a fixed offset from the start of the partition, and since the information it contains is so important, backup copies of the superblock are stored throughout the file system. The information the superblock contains includes the total number of inodes and blocks in the filesystem and how many are free. This information can be used to calculate the used and available space of the file system efficiently.
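You can read those superblock counters directly with dumpe2fs (a read-only operation; /dev/sda1 below is a placeholder for your ext4 partition):

```shell
# Dump only the superblock (-h) and pull out the fields df's numbers
# are derived from.
sudo dumpe2fs -h /dev/sda1 | grep -E 'Block count|Free blocks|Free inodes|Block size'
```

Used space is then (total blocks minus free blocks) times the block size, which is essentially the arithmetic df performs via the statfs() system call.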
How does my partition (ext4) know its size of used/free space?
1,398,618,849,000
By the following question: Is there some universally recommended Reserved block count (for root) for large Ext4 drives? I specifically mean the following: Let us consider (almost) everyone has a rather large root drive (partition) nowadays. Let us consider for example 2TB drive with a 1.8TiB root partition, meaning the whole drive is used, except for the 1st boot partition. Further, let us assume only I have access to this computer and that I have direct access to HW, and OS. As an addition I have set in GRUB: GRUB_CMDLINE_LINUX_DEFAULT="rootflags=data=journal", the particular documentation I did not manage to find, feel free to add it here. To put all this into a working example, here is my NVMe drive in my laptop: # fdisk -l /dev/nvme0n1 Disk /dev/nvme0n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors Disk model: Samsung SSD 970 EVO Plus 2TB Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: gpt Disk identifier: 989573D5-37E7-437A-B680-9410F7234A94 Device Start End Sectors Size Type /dev/nvme0n1p1 2048 194559 192512 94M EFI System /dev/nvme0n1p2 194560 3907028991 3906834432 1.8T Linux filesystem The 2nd partition /dev/nvme0n1p2 is of Ext4 filesystem type, and here is the full list of values considering it: # tune2fs -l /dev/nvme0n1p2 tune2fs 1.46.5 (30-Dec-2021) Filesystem volume name: <none> Last mounted on: / Filesystem UUID: f1fc7345-be7a-4c6b-9559-fc6e2d445bfa Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 122093568 Block count: 488354304 Reserved block count: 20068825 Free blocks: 
388970513 Free inodes: 121209636 First block: 0 Block size: 4096 Fragment size: 4096 Group descriptor size: 64 Reserved GDT blocks: 817 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 Flex block group size: 16 Filesystem created: Sat Jun 16 11:26:24 2018 Last mount time: Thu Oct 26 09:14:38 2023 Last write time: Thu Oct 26 09:14:38 2023 Mount count: 102 Maximum mount count: -1 Last checked: Tue Sep 26 03:05:31 2023 Check interval: 0 (<none>) Lifetime writes: 43 TB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 32 Desired extra isize: 32 Journal inode: 8 First orphan inode: 134214 Default directory hash: half_md4 Directory Hash Seed: 48360d76-0cfb-4aed-892e-a8f3a30dd794 Journal backup: inode blocks Checksum type: crc32c Checksum: 0x58d12a63 I would like to evaluate if I need some reserved root space, and if yes, how much. I would like this question not to be opinion-based, so if you decide to add an answer please include some references, thank you.
So, there are a few reasons that count exists at all. The easiest reason is that you would want a root-run service that saves logs or data to that volume to continue to work even if a user floods the drive with data. This obviously is only relevant when:

1. the volume you're referring to actually is used by root-running processes, and
2. it is also used by users who should not be able to deny operation of these services by filling the disk with files, and
3. it is more important that services continue to function than that unprivileged users are able to write user data.

Yours seems to be a desktop system, so at the very least, I'd say 3. is questionable, if not the exact opposite of what you need. So, I consider that a very weak argument, especially because many services these days don't run as root to begin with. Then, the other argument (here brought forward by the "main" developer of ext4) is that not having much free space makes it harder for the file system to find contiguous areas of blocks to use for new or growing files. That leads to so-called fragmentation, which was a hardware performance issue on rotating disk storage (due to large seek times) and still is an overhead issue due to the much more complicated and redirective ways files that are fragmented are stored and read. The file system driver needs to make a file that is scattered all over your storage look contiguous to the application, and that in the end comes at the expense of the ability to prefetch and cache; still a minor concern on SSDs. It also increases metadata size. I'm not versed enough in ext4's internal structures to tell you whether this increased need for metadata reduces the available space or just means more lookups when accessing a fragmented file.
https://listman.redhat.com/archives/ext3-users/2009-January/msg00026.html If you set the reserved block count to zero, it won't affect performance much except if you run for long periods of time (with lots of file creates and deletes) while the filesystem is almost full (i.e., say above 95%), at which point you'll be subject to fragmentation problems […] If you are just using the filesystem for long-term archive, where files aren't changing very often (i.e., a huge mp3 or video store), it obviously won't matter. So, it really depends on your use case. Seeing that your file system was mounted at /, it's probably the file system where all your installed software resides – and large software updates are exactly these periods of mass deletion and creation of files. So, reserving enough space so that, on average, when the sizes of created and deleted files are in balance, you're free to chose from enough contiguous parts of your storage, makes sense. So, how much would that be? Hard to tell. But say, a large update process doing maybe 20 GB (10 GB of new files, 10 GB of old files getting deleted afterwards) of changes would seem realistically a sensible upper bound. So, that would seem to be a good value for reserved space. Your file system is 1.86 TB in size, which means your NVMe is probably a consumer/prosumer 1.92TB device. These currently run at 45 to 80€ per TB. I recommend mentally checking whether even thinking about "optimizing" the reserved space is worth your mental headspace, monetarily. Sure, 78 GB is probably much more than you'll need, but do you care enough to find out whether less actually suffices if this equates to less than 6.60€ in storage space?
Is there some universally recommended Reserved block count (for root) for large Ext4 drives?
1,398,618,849,000
I have a Toshiba HDD with an ext4 file system that I have been using extensively until yesterday. Suddenly, it has become a read-only file system, and when I run fdisk -l, it shows the type as HPFS/NTFS/exFAT. Reading Files system become suddently read only; how to debug this?, I tried dmesg and, among other lines it shows (as suggested in the answers there) 367.274847] EXT4-fs error (device sdb1): ext4_validate_block_bitmap:390: comm nextcloud: bg 7398: bad block bitmap checksum [ 367.285558] Aborting journal on device sdb1-8. [ 367.297425] EXT4-fs (sdb1): Remounting filesystem read-only [ 513.153456] EXT4-fs (sdb1): error count since last fsck: 5 [ 513.153491] EXT4-fs (sdb1): initial error at time 1685397473: ext4_validate_block_bitmap:390 [ 513.153509] EXT4-fs (sdb1): last error at time 1685418194: ext4_validate_block_bitmap:390 and it's true I unmounted it not cleanly probably the last time it worked. How can I solve this? UPDATE Output of sudo smartctl -a /dev/sdb is smartctl 7.2 2020-12-30 r5155 [x86_64-linux-5.10.0-23-amd64] (local build) Copyright (C) 2002-20, Bruce Allen, Christian Franke, www.smartmontools.org === START OF INFORMATION SECTION === Model Family: Toshiba 2.5" HDD MQ04UBF... (USB 3.0) Device Model: TOSHIBA MQ04UBF100 Serial Number: Z0IKT0JIT LU WWN Device Id: 0 000000 000000000 Firmware Version: JU003U User Capacity: 1,000,204,886,016 bytes [1.00 TB] Sector Sizes: 512 bytes logical, 4096 bytes physical Rotation Rate: 5400 rpm Form Factor: 2.5 inches Zoned Device: Device managed zones Device is: In smartctl database [for details use: -P show] ATA Version is: ACS-3 T13/2161-D revision 5 SATA Version is: SATA 3.3, 3.0 Gb/s (current: 3.0 Gb/s) Local Time is: Tue May 30 16:22:45 2023 CEST SMART support is: Available - device has SMART capability. 
SMART support is: Enabled === START OF READ SMART DATA SECTION === SMART Status not supported: Incomplete response, ATA output registers missing SMART overall-health self-assessment test result: PASSED Warning: This result is based on an Attribute check. General SMART Values: Offline data collection status: (0x00) Offline data collection activity was never started. Auto Offline Data Collection: Disabled. Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run. Total time to complete Offline data collection: ( 120) seconds. Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported. SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer. Error logging capability: (0x01) Error logging supported. General Purpose Logging supported. Short self-test routine recommended polling time: ( 2) minutes. Extended self-test routine recommended polling time: ( 175) minutes. SCT capabilities: (0x003d) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported. 
SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0 2 Throughput_Performance 0x0005 100 100 050 Pre-fail Offline - 0 3 Spin_Up_Time 0x0027 100 100 001 Pre-fail Always - 2455 4 Start_Stop_Count 0x0032 100 100 000 Old_age Always - 5725 5 Reallocated_Sector_Ct 0x0033 100 100 050 Pre-fail Always - 0 7 Seek_Error_Rate 0x000b 100 100 050 Pre-fail Always - 0 8 Seek_Time_Performance 0x0005 100 100 050 Pre-fail Offline - 0 9 Power_On_Hours 0x0032 093 093 000 Old_age Always - 2976 10 Spin_Retry_Count 0x0033 214 100 030 Pre-fail Always - 0 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 960 191 G-Sense_Error_Rate 0x0032 100 100 000 Old_age Always - 17 192 Power-Off_Retract_Count 0x0032 100 100 000 Old_age Always - 120 193 Load_Cycle_Count 0x0032 099 099 000 Old_age Always - 13534 194 Temperature_Celsius 0x0022 100 100 000 Old_age Always - 26 (Min/Max 15/57) 196 Reallocated_Event_Count 0x0032 100 100 000 Old_age Always - 0 197 Current_Pending_Sector 0x0032 100 100 000 Old_age Always - 0 198 Offline_Uncorrectable 0x0030 100 100 000 Old_age Offline - 0 199 UDMA_CRC_Error_Count 0x0032 200 253 000 Old_age Always - 0 220 Disk_Shift 0x0002 100 100 000 Old_age Always - 0 222 Loaded_Hours 0x0032 100 100 000 Old_age Always - 182 223 Load_Retry_Count 0x0032 100 100 000 Old_age Always - 0 224 Load_Friction 0x0022 100 100 000 Old_age Always - 0 226 Load-in_Time 0x0026 100 100 000 Old_age Always - 280 240 Head_Flying_Hours 0x0001 100 100 001 Pre-fail Offline - 0 SMART Error Log Version: 1 No Errors Logged SMART Self-test log structure revision number 1 Num Test_Description Status Remaining LifeTime(hours) LBA_of_first_error # 1 Short offline Completed without error 00% 1003 - SMART Selective self-test log data structure revision number 1 SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS 1 0 0 Not_testing 
2 0 0 Not_testing 3 0 0 Not_testing 4 0 0 Not_testing 5 0 0 Not_testing Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk. If Selective self-test is pending on power-up, resume after 0 minute delay. Output of fdisk -l Disk /dev/sdb: 931.51 GiB, 1000204883968 bytes, 1953525164 sectors Disk model: External USB 3.0 Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x558ebb3c Device Boot Start End Sectors Size Id Type /dev/sdb1 * 2048 1953522863 1953520816 931.5G 7 HPFS/NTFS/exFAT
Unmount the filesystem first, then let e2fsck check and repair the damaged block bitmaps (-v for verbose output, -C 0 for a progress indicator, -t for timing statistics):

sudo umount /dev/sdb1
sudo e2fsck -v -C 0 -t /dev/sdb1
external HDD drive become Read-only file system with wrong file system type
1,398,618,849,000
We have servers which have been running for a long time. When they reboot, we see this message: kernel: EXT4-fs (sda3): warning: maximal mount count reached, running e2fsck is recommended My question is: what if you never ever run e2fsck? Man page does not shed enough light. The warning message says "is recommended" - but does not say it is mandatory. What are consequences of not running it? What does it mean to have maximal count reached?
An ext* filesystem has a couple of values in the metadata: how many times a filesystem can be mounted before it should be checked, and how long between checks should be allowed. These values can be checked with the dumpe2fs command; e.g.

% sudo dumpe2fs -h /dev/vdb | egrep -i 'check|mount count'
dumpe2fs 1.42.9 (28-Dec-2013)
Mount count:              15
Maximum mount count:      25
Last checked:             Sun Jan  2 22:03:00 2022
Check interval:           15552000 (6 months)
Next check after:         Fri Jul  1 23:03:00 2022

This says the filesystem has been mounted 15 times and needs to be checked after 25 mounts; a check should be run every 6 months; the last check was Jan 2022, so the next check should be Jul 2022. These values can be changed with the tune2fs command (-i and -c options). And they can be turned off, e.g.

% sudo dumpe2fs -h /dev/vda3 | egrep -i 'check|mount count'
dumpe2fs 1.42.9 (28-Dec-2013)
Mount count:              138
Maximum mount count:      -1
Last checked:             Sun Jul 12 17:23:17 2015
Check interval:           0 (<none>)

This basically says "the disks never should be checked". So now the question; should we run it regularly? Essentially the rationale for regular-ish checking is to try and discover filesystem inconsistencies and try and fix them. On a modern system that doesn't shut down abnormally (e.g. crash, power failure) there's little risk, so it may not need to be done. Indeed, on large filesystems or ones with a large number of files this could take a long time! Potentially hours! Contrariwise, on small filesystems with the correct entries in /etc/fstab it can happen automatically on reboot and only slows the reboot down a small amount. So you might want to let small filesystems be checked via fstab but not allow large ones or ones with lots of files. Red Hat, for example, recommends "In general Red Hat does not suggest disabling the fsck except in situations where the machine does not boot, the file system is extremely large or the file system is on remote storage." (https://access.redhat.com/solutions/281123)
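As a concrete sketch of adjusting those counters with tune2fs (the device name is a placeholder):

```shell
# Check after every 30 mounts or every 3 months, whichever comes first.
# /dev/vdb is an example device; substitute your own.
tune2fs -c 30 -i 3m /dev/vdb

# Or disable both the mount-count and time-based checks entirely:
tune2fs -c -1 -i 0 /dev/vdb
```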
What happens if you never ever run e2fsck?
1,398,618,849,000
I found an older external hd I want to reuse for something else. I was doing an rsync of it to a NAS I run over the network but it was taking ages. So I decided to rsync to my local drive first (SSD) and do the final backup to NAS later. I ran rsync -avvz --progress /media/ubuntu/9AB4-7DB9/ubuntu/ bak. This seems to have terminated fine. But when I compare the two dirs for their sizes, they are VASTLY different. du -kh bak 29G bak du -kh /media/ubuntu/9AB4-7DB9/ubuntu/ 56G /media/ubuntu/9AB4-7DB9/ubuntu/ How is this possible? I first assumed that the vfat file system might be to blame - but to this extent? I can't believe it to be nearly doubling the size I also thought it could be the -z compress option of rsync, but that should only compress during transfer as I understand: -z, --compress compress file data during the transfer Any ideas? I am baffled, and just want to make sure my backup was complete. Thanks,
The du command measures file size in blocks, not bytes. Since vfat and ext4 use completely different block sizes, a size change of 2x or even 8x would not be even slightly surprising. ext4 typically uses 4k blocks but both ext4 and vfat use a variable block size set when the disk is formatted. vfat supports logical block sizes between 512b and 32k; ext4 supports block sizes between 1k and 4k. If it is an old disk, it could be 512b blocks or maybe 2k, depending on disk size and properties. If you have a lot of files that are below 2k, each of those could double in size on copy to a 4k block size ext4.
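One way to convince yourself the copy is complete despite the differing block counts is to compare apparent sizes (bytes of file content) instead of allocated blocks:

```shell
# --apparent-size counts file bytes rather than allocated blocks, so the
# totals should (nearly) match even across filesystems with different
# block sizes. Paths are those from the question.
du -sh --apparent-size /media/ubuntu/9AB4-7DB9/ubuntu/ bak
```

A recursive diff -r between the two trees remains the definitive completeness check.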
rsync from external vfat disk to local ext4 yields VASTLY different sizes [duplicate]
1,398,618,849,000
$sudo blkid /dev/sda1: UUID="F959-61DE" TYPE="vfat" PARTUUID="950b18a0-1501-48b4-92ef-ba1dd15aaf21" /dev/sda2: UUID="6dfcfc23-b076-4eeb-8fba-a1261b4ea399" TYPE="ext4" PARTUUID="ddc69ee8-40b0-49c9-9dcb-0b9064caca7d" /dev/sda3: UUID="fec0af18-d28e-4f2a-acb7-6380ddee3dc2" TYPE="ext4" PARTUUID="e19628dc-c04a-4c9d-a3c6-469511e89480" /dev/sda4: UUID="a6f7669b-6e86-432a-b91c-f39780c849ac" TYPE="swap" PARTUUID="e45cf647-3d78-4fea-a950-022a3ae9b4e0" /dev/sda5: UUID="5a75937f-8a83-44a9-b5c5-502b7e3884f2" TYPE="ext4" PARTUUID="3e086aff-105f-48b3-a384-1eb1d18c6fb3" /dev/sda6: UUID="04460cd2-a1bb-4a3e-94df-1ad10080f356" TYPE="ext4" PARTUUID="d37fdea8-a386-4f6f-8016-fa2764a71b60" $pwd /home/milad $touch a $ls -i a 3935203 a $sudo /sbin/debugfs/ -R 'stat 3935203' /dev/sda6 debugfs 1.44.5 (15-Dec-2018) 3935203: File not found by ext2_lookup How to get birth date my file in ext4 partition drive? Thanks for helping
debugfs’s stat command expects a path name, or an inode number “quoted” using angle brackets; you might as well use stat milad/a instead: sudo /sbin/debugfs -R 'stat milad/a' /dev/sda6 The file path is relative to the root of the file system; since that is mounted at /home, /home/milad/a becomes milad/a. If your version of the stat utility is recent enough, you can use that instead of debugfs: run stat a from your shell, and you’ll see its birth time (if your kernel is also recent enough to record it and make it available).
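As a quick check with the plain stat utility (assuming GNU coreutils with the %w format specifier and a kernel/filesystem that records creation times):

```shell
# Print the birth (creation) time of the file, or "-" if the kernel or
# filesystem does not report it.
stat --format='Birth: %w' a
```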
debugfs not working | file not found by ext2_lookup
1,398,618,849,000
Both of the hard drives where all my data is stored are failing. My system inconsistently refuses to load the disks and mount the partitions. I moved one hard drive to another computer, where it is recognized with less trouble, but the partition has many errors, and I still get I/O errors in dmesg for that drive. For a start, the partition has a bad superblock; it can be read with an alternative superblock, where it shows even more errors, so the first thing I did was a master backup of the partition to an external hard drive. I did two passes with ddrescue for this reason and it exited with only one error of 512 bytes according to the log, which I think is promising. Listing the backup with lsblk looks even more promising. lsblk for the damaged partition shows:

$ lsblk -f
NAME   FSTYPE LABEL UUID MOUNTPOINT
...
sda
└─sda1
...

while the new master shows:

sdc
├─sdc1 ext4   new   8cab6f75-1ea7-4451-9f48-2bbcce167184

Then I made another backup from this master partition to the end of the same drive, so the actual output of lsblk is now:

lsblk -f
NAME   FSTYPE   LABEL        UUID                                 MOUNTPOINT
fd0
loop0  squashfs                                                   /snap/anbox-installer/25
loop2  squashfs                                                   /snap/core/9669
loop3  squashfs                                                   /snap/core/10911
sda
└─sda1
sdb
├─sdb1 ext4     Debian_copia ce2c8e8f-f3ef-4005-9cb1-0bb9d5870f43 /
└─sdb2 swap                  d60a8ad0-5528-4bbc-af5e-092b96282df4 [SWAP]
sdc
├─sdc1 ext4     new          8cab6f75-1ea7-4451-9f48-2bbcce167184
└─sdc2 ext4     new          8cab6f75-1ea7-4451-9f48-2bbcce167184
sr0

Now, here is where I messed things up: I mistook fsck's -p option for -f, so I ran fsck -fy /dev/sdc2, which screwed some things up and deleted many inodes; after mounting, it listed only half of the files that should be there. Fortunately this is a copy of a copy of the damaged hard drive, so this time I will be more cautious. Could you please tell me some good practices? All my data is at stake right now, so please be precise. Does lsblk make any changes to the partitions? Can I mount a partition without making any changes to it?
I have this link handy, by the way: https://www.sans.org/blog/how-to-mount-dirty-ext4-file-systems/ How can I safely do a fsck so I can buy some time here? Does fsck -n still make changes to the partition? Does it make any difference where on the disk a copy of a partition is located? Is there any way of recovering the files without dealing with the filesystem? I have read about photorec, but I have many Audacity files it would not recognize. Isn't there anything more generic?
Don't panic. It appears you are dealing with failing hard drives with dirty ext4 filesystems on them. Do you have backups? Restore from backups if you have them. If you don't have backups, you must tread very carefully here. The first thing to do is to take your hands away from the keyboard and develop a game plan. And make sure to fire up info or man for each command you're going to run, especially tools that touch the hard disk directly. Limit access to the damaged media. If the hard disks are failing, you should cease any further attempt to access files directly off the disk. You should cease any attempt to run fsck. The more activity you throw at the hard disk, the more wear you are putting on the possibly-failing hard disks. If you are booting an OS off one of these disks, cease this activity as well. Boot from live media such as GRML Linux. You should instead try to image your failing hard drives. This involves copying the hard disk bit-for-bit into a file on another storage device. Ideally that other storage device should be pretty large, so you can store multiple copies of the image. Once your recovery tool has completed recovering as much data as possible, mark this image as read-only. This will become the master copy. You don't touch this image. Instead, make a copy of the master copy and run fsck and mount on this working copy. If you make a mistake, it's not a big deal - you just create a new working copy from the master copy. Creating the master copy: see also the unix SE answer that Pourko linked. GNU ddrescue is well suited to recovering data from hard disks. Run it something like:

ddrescue --idirect /dev/sdX /mnt/big-storage-filesystem/sdX.img /mnt/big-storage-filesystem/sdX.mapfile

(The --idirect gives ddrescue more control over disk access.) Once ddrescue has finished, I recommend running chmod a-w sdX.img sdX.mapfile. These shouldn't be modified afterwards.
Attempting to recover from a working copy. First make your working copy:

cp /mnt/big-storage-filesystem/sdX.img /mnt/big-storage-filesystem/work/work-sdX.img

Then use losetup to map the image to a loop block device:

losetup -fP --show /mnt/big-storage-filesystem/work/work-sdX.img

(-f picks the first free loop device, --show prints its name, and -P scans the image's partition table so the partitions appear as /dev/loopNp1, /dev/loopNp2, and so on.) If -P is unavailable, you might need to run kpartx -a /dev/loopN, where /dev/loopN is the loopback device indicated by the above command's output. Now you can access the image as if it were just another hard disk. Check lsblk, you should be able to do fsck -y /dev/loop0p1 or the like. If you're lucky, you can just do a mount /dev/loop0p1 /mnt/recovery then go from there. If you're not so lucky, you may need to use forensic tools to grab data off the corrupted filesystem. See this unix SE post for an example. Learn from this experience: make backups & verify your backups. Imagine what you could be doing if you weren't asking this question on unix SE and tearing your hair out trying to recover irreplaceable data. Technology is always changing, and technology does not age well, so it's a good idea to anticipate data loss.
Please help me rescue a failing hard drive
1,398,618,849,000
We have a RHEL server, version 7.5, and from lsblk we can see only the following disks; all disks have ext4 filesystems.

lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 278.9G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 278.4G 0 part
  ├─vgN-lv_root 253:0 0 50G 0 lvm /
  ├─vgN-lv_swap 253:1 0 16G 0 lvm [SWAP]
  └─vgN-lv_var 253:2 0 100G 0 lvm /var
sdb 8:16 0 1.7T 0 disk /gr/sdb
sdc 8:32 0 1.7T 0 disk /gr/sdc
sdd 8:48 0 1.7T 0 disk /gr/sdd
sde 8:64 0 1.7T 0 disk /gr/sde

But the interesting thing is that when we performed mount -a we got:

mount -a
mount: special device /dev/sdf does not exist
mount: special device /dev/sdg does not exist

We don't understand where mount -a gets these disks from, because they appear neither in lsblk nor in /etc/fstab nor in /etc/mtab. So why is mount -a complaining about these disks, and how can we fix this?
Perhaps your /etc/fstab specifies some mounts by either UUID= or LABEL= (causing mount to loop through all block devices it finds) and you have some garbage files as /dev/sdf and /dev/sdg that are not actual device nodes? Run ls -l /dev/sdf /dev/sdg. If it displays anything, and the letter in the very first column of the permissions string is not b, those are not real block devices. They might have been created by an accidentally mistyped command or two earlier.
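The shell's -b test makes this check scriptable: it is true only for real block device nodes. A small sketch, using a temporary regular file as a stand-in for a bogus /dev/sdf entry:

```shell
# A plain file, like one left behind by a mistyped redirection to /dev/sdf.
bogus=$(mktemp)

if [ -b "$bogus" ]; then
    echo "real block device"
else
    echo "not a block device"   # a stray file; safe to investigate and remove
fi

rm -f "$bogus"
```

On a healthy system, [ -b /dev/sda ] succeeds while the same test on a leftover regular file fails.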
mount + mount: special device /dev/sdX does not exist
1,398,618,849,000
Here I read:

$ cd /media/mmcblk1p2
$ tar xf /media/sda1/mfg_images/st-image-bootfs-openstlinux-weston-stm32mp1-som.tar.xz

but as the source file I don't have a compressed archive; instead I have an ext4 image for that partition. Should I use dd with of=/dev/mmcblk1p2, or do I need to use another approach?
You can mount the filesystem image directly into your filesystem: mkdir -p /mnt/img mount -o ro,noload imagefile.img /mnt/img and then you can retrieve the file directly from the appropriate place underneath /mnt/img. (The ro,noload options mount the filesystem read-only. Omit them both if you want read/write access.) Unmount the file afterwards with umount /mnt/img
Extract file from ext4 image and copy file to device
1,398,618,849,000
What does +0200 mean after the Access/Modify/Change timestamps? File: task-system.md Size: 197 Blocks: 24 IO Block: 4096 regular file Device: 33h/51d Inode: 14155787 Links: 1 Access: (0664/-rw-rw-r--) Uid: ( 1000/ tom) Gid: ( 1000/ tom) Access: 2018-08-26 15:19:07.047602175 +0200 Modify: 2018-08-26 15:18:59.531538750 +0200 Change: 2018-08-26 15:18:59.535538783 +0200 Birth: -
That’s the timezone. The times are given in a UTC+2 timezone (the timestamps are stored as seconds since the Unix epoch, and translated to whatever the current user’s timezone is for display).
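You can see that this is purely a display convention with GNU date: the same stored epoch value renders with a different offset depending on TZ. The epoch value below corresponds to the Access time in the question; CEST-2 is a fixed-offset POSIX zone string standing in for the asker's actual UTC+2 timezone:

```shell
ts=1535289547                            # seconds since the Unix epoch
TZ=UTC0 date -d "@$ts" '+%F %T %z'       # 2018-08-26 13:19:07 +0000
TZ=CEST-2 date -d "@$ts" '+%F %T %z'     # 2018-08-26 15:19:07 +0200
```

Same instant, two renderings; only the %z suffix and the clock time change.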
Inode Timestamp Plus/Minus Interpretation
1,398,618,849,000
When I run the following:

mv -v foo /mnt/bar

the directory foo and all its contents are moved into the directory /mnt/bar. The order in which files are moved appears to be directory order (ls -U). Is there a good way to perform this same operation but ensure that the files/directories inside foo are moved in alphabetical order? I realise I can use find -exec mv to iterate over the contents in alphabetical order, but this requires some annoying gymnastics to maintain the same subdirectory structure in the target. I was hoping for a flag on GNU mv, but the man page shows nothing useful.
$ \ls foo | xargs -I% mv -v foo/% bar 'foo/one' -> 'bar/one' 'foo/sie' -> 'bar/sie' 'foo/two' -> 'bar/two' 'foo/uve' -> 'bar/uve' 'foo/wox' -> 'bar/wox' 'foo/zanzibar' -> 'bar/zanzibar' Use ls to list items alphabetically. To make sure you're running pure ls (with no additional characters added by an alias hidden away in your .bashrc or .bash_aliases), run the command as \ls. Send output of ls to xargs Give each item a variable name with the -I variable (this just gives you something to "see" in your mv command) Move your item (called %) from its location in foo to the new destination.
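One caveat with the ls | xargs pipeline: xargs splits its input on whitespace, so file names containing spaces break. Since a shell glob already expands in sorted (collation) order, a hedged alternative sketch that handles such names looks like this (directory and file names are illustrative):

```shell
mkdir -p foo bar
touch foo/two foo/one "foo/a name with spaces"

# foo/* expands in alphabetical order, and quoting "$f" keeps
# names with spaces intact.
for f in foo/*; do
    mv -v "$f" bar/
done
```

The exact mv -v output format varies between coreutils versions, but the moves happen in sorted order either way.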
How can I move a directory (and all sub files/directories) but move items in alphabetical order?
1,398,618,849,000
I want to use MySQL 5.7's page compression feature, but this feature requires Linux's hole punching feature, and according to the documentation, this was introduced in 2.6.39. But my server's kernel version is 2.6.32, and I verified that the page compression feature does work there, which is strange! So I want to be sure whether my server supports this hole punching feature.
You can test it by punching a hole yourself. $ dd if=/dev/zero of=punch bs=100M count=1 creates a 100MiB file, with no holes, as can be checked with du: $ du -h punch 100M punch Now punch a 10MiB hole in it: $ fallocate -p -o 2M -l 10M punch The file’s size won’t change (as indicated by ls -lh), but it will take less space on disk if your kernel and file system support the necessary system calls: $ du -h punch 90M punch man fallocate will tell you more; your 2.6.32-based system might well have a kernel where the relevant support has been back-ported.
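For a scripted check, fallocate's exit status can stand in for eyeballing du: the command fails when the kernel or the filesystem lacks hole-punching support. A small sketch (the echoed messages are just illustrative):

```shell
f=$(mktemp)
truncate -s 4M "$f"          # a 4 MiB test file

# fallocate -p (--punch-hole) returns non-zero if unsupported here.
if fallocate -p -o 0 -l 1M "$f" 2>/dev/null; then
    echo "hole punching supported"
else
    echo "hole punching not supported"
fi

rm -f "$f"
```

Note that support depends on both the kernel and the specific filesystem the test file lives on, so run it on the filesystem you actually care about.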
How could I confirm if my server supports hole punching?
1,398,618,849,000
I have an ext4 partition containing a rootfs. I need to implement a system update (in U-Boot) which just extracts and writes the new rootfs image. This works like dd-ing the image into MMC flash at the offset where the rootfs ext4 partition is said to be. We first erase the MMC and then write to it. The erase operation is very slow (1-2 minutes). I am thinking that it may not be necessary and that just writing the new rootfs will do the trick. The question is: suppose I am writing a rootfs image that is smaller than the previous one; then there will be some residual data at the end, right? Wouldn't this cause some problem when we, for example, run fsck?
If the image that you are writing to the MMC is a complete filesystem image, including the filesystem's own metadata, then NO, you do not need to erase or zero out the old space. The old 'random' data left over is not part of any file and will be overwritten as the space is used. Remember that an MMC device has a finite number of writes in its life, and that number is much smaller than that of, say, a hard drive.
overwrite ext4 partition data without previous erase
1,398,618,849,000
Is there a way to restore a folder (or rather, the files and folders it contained) after it was replaced by an empty one with the same name? FileSystem: Ext4. OS: openSUSE 42.1. If it is possible, what is the easiest way? Can I do this from the running system itself?
Don't do it from the running system. You should run a live CD or USB, mount the hard drive read-only, then try extundelete; or don't mount anything and try foremost or photorec. The more you use the system, the less likely it is that you will recover your data. Good luck.
Recover files inside a folder which was replaced by an empty one. (openSUSE, Ext4)
1,398,618,849,000
A part of our homework assignment is to recover deleted files from a partition with the ext4 file system. I've tried using the extundelete tool, by following this tutorial. The tool recovered a lot of files which couldn't be opened, so I guess this doesn't do me any good. Is there any other tool I could try to recover the deleted files, or is this tool the best there is? I didn't write anything to the partition before the recovery process, and I recovered the files to another partition.
If you use software that risks modifying the deleted partition, you should first make an image of the disk using dd, dd_rescue or the like. When it comes to tools, you could try out TestDisk and PhotoRec. TestDisk, now at version 6.13, has had support for ext4 since version 6.11. TestDisk is geared towards partition recovery, whilst PhotoRec uses file carving. If platform is not an issue, you could also have a look at e.g. Hiren's BootCD. Besides TestDisk and PhotoRec v 6.14b it has a long list of other nice tools. Look under e.g. "Hard Disk Tools" and "Recovery Tools" (expanded via "See CD Contents »" on the above link).
Recovering files from partition (ext4)
1,398,618,849,000
I use partimage to back up my ext4 partition, but during the backup the partition was detected as an ext3 partition, so I'm wondering whether this can cause any problems.
http://www.partimage.org/Main_Page Limitations - Partimage does not support ext4 or btrfs filesystems. It is unwise to use it for ext4 as long as that message is on their website.
Is it safe to back up an ext4 partition with partimage when it is detected as an ext3 partition?
1,398,618,849,000
I have an issue very similar to the question here: https://askubuntu.com/questions/1370421/restore-ext4-hd-after-creating-gpt-partition-table

My problem seems to be that I had an ext4 filesystem which sat directly on a block device, and installing Windows to an entirely different drive decided to mess with that device's partition table (or, seemingly, its lack of a partition table). When I booted, this drive had a GPT partition table which looks like so:

λ sudo fdisk -l /dev/nvme1n1
Disk /dev/nvme1n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: Samsung SSD 970 EVO 1TB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 73405727-65E8-485F-99F8-C2D65E99D767

Device Start End Sectors Size Type
/dev/nvme1n1p1 2048 1953525127 1953523080 931.5G Linux filesystem

But this partition is unmountable and appears to have an invalid filesystem. I can, however, get all my data back by running fsck.ext4 /dev/nvme1n1 - but seemingly since this is the whole device rather than the partition, doing this then blows up the GPT table:

λ sudo fdisk -l /dev/nvme1n1
The primary GPT table is corrupt, but the backup appears OK, so that will be used.
...

I can re-write the table with gdisk, but then I'm back to having a broken file system. I can toggle back and forth like this, but I can't figure out how to do what I actually want: create a valid GPT partition table and recover my existing filesystem onto it. I have tried passing explicit superblocks, without good results:

λ sudo fsck.ext4 -p -b 32765 -B 4096 /dev/nvme1n1p1
fsck.ext4: Bad magic number in super-block while trying to open /dev/nvme1n1p1
It's not possible (that way). You can't have a partition table, and a filesystem, on the same block device. When you create a partition on /dev/nvme1n1, it gives you a new block device /dev/nvme1n1p1 — you have to use the new block device for the filesystem. And that means shifting all data by the partition offset. Keeping the filesystem data at the old offset won't work. fsck won't fix that for you. So it can't be done (the way you're trying to do it). So your options are: keep using the bare drive as is and remove the msdos/gpt partition table headers entirely (use wipefs to remove msdos/gpt partition headers only) shrink the filesystem by 2MiB then move it by 1MiB (or whatever your partition offset is). Shrinking is necessary to make room for GPT headers at start and end of drive. backup all files, set it up properly from scratch with new partitions and filesystems, then restore files to it I recommend the last option. While shifting data offsets can be done in theory (and tools like gparted might help you), it's actually very risky to do so and when anything goes wrong, you're left with a device that is unusable and there is no trivial fix. Using bare drives directly is possible in theory but in practice, you run into this exact case that something else "helpfully" creates a partition table for you, damaging your data in the process. Thus having a partition table is not optional; it's mandatory.
Fixing an ext4 whole-device filesystem and corrupt GPT partition table
1,398,618,849,000
When I use the dumpe2fs command to look at the Block Group of the ext4 filesystem, I see "free inodes" and "unused inodes". I want to know the difference between them ? Why do they have different values in Group 0 ? Group 0: (Blocks 0-32767) [ITABLE_ZEROED] Checksum 0xd1a1, unused inodes 0 Primary superblock at 0, Group descriptors at 1-3 Reserved GDT blocks at 4-350 Block bitmap at 351 (+351), Inode bitmap at 367 (+367) Inode table at 383-892 (+383) 12 free blocks, 1 free inodes, 1088 directories Free blocks: 9564, 12379-12380, 12401-12408, 12411 Free inodes: 168 Group 1: (Blocks 32768-65535) [ITABLE_ZEROED] Checksum 0x0432, unused inodes 0 Backup superblock at 32768, Group descriptors at 32769-32771 Reserved GDT blocks at 32772-33118 Block bitmap at 352 (+4294934880), Inode bitmap at 368 (+4294934896) Inode table at 893-1402 (+4294935421) 30 free blocks, 0 free inodes, 420 directories Free blocks: 37379-37384, 37386-37397, 42822-42823, 42856-42859, 42954-42955, 44946-44947, 45014-45015 Free inodes:
The "unused inodes" reported are inodes at the end of the inode table for each group that have never been used in the lifetime of the filesystem, so e2fsck does not need to scan them during repair. This can speed up e2fsck pass-1 scanning significantly. The "free inodes" are the current unallocated inodes in the group. This number includes the "unused inodes" number, so that they will still be used if there are many (typically very small) inodes allocated in a single group.
Ext4 "unused inodes" vs "free inodes": what's the difference?
1,398,618,849,000
I delete a file from disk, then do some write operations on the same disk, and then run a recovery program to recover said file. Is there any way to check the integrity of the file? Let the recovered file be a bitmap image. In my understanding, since the data blocks that store the image's pixel data may be overwritten, some pixels of the image may contain wrong information. If the file header data is corrupted, the file simply won't open. But how can you tell if the pixel data is corrupted, other than by visually inspecting each pixel individually? The same idea applies to checking the integrity of text or video files.
Some file systems have metadata blocks with checksums. With a lot of luck, these might still be intact, but typically the metadata would be gone, so all you have is the intrinsic ability of the file itself to detect errors.

First things first: images are relatively large files, and deleted files without remaining metadata that were fragmented can basically only be rearranged into the same order again through luck and trial and error. Luckily, writing images often happens in a very unfragmented manner. But if it happens in an unfragmented manner, then only a fragmented write to the middle of the existing image data would lead to a corruption as you describe, and it would not be "a few pixels"; it'd be, e.g., a 4 kB block of image data. You very rarely store uncompressed imagery, so honestly, your "failure mode" is not that realistic. Now, we do see corrupted files, especially on SD cards from cameras and such. But these are different failure modes, and they really only affect smaller parts of an image, or cut it short.

But how can you tell if the pixel data is corrupted, if not visually inspecting each pixel individually?

Teach an algorithm to do your visual inspection for you. Or use file formats with checksums; I bet a few of the medical image formats would make sense. Also, again, there are not going to be corrupted individual pixels; 4 kB blocks will just be completely randomly broken. But nobody designs file formats to be "after-deletion recoverable"; that's nonsense. If you need that, you actually need to stop deleting things you don't have a backup of - and snapshots of modern file systems and storage subsystems make having a backup trivial and not very space-intense.

Same idea for checking integrity of text or video files.

As said, if the only thing that can assess the content of a piece of data is a human, then that's it. For typical photographic content, e.g. cognitive vision through deep/convolutional neural networks might be an appropriate way to detect 4 kB of "unsuitable" data being decoded, e.g., by a JPEG decoder.
How to tell the integrity of a recovered file, specifically if pixel data of recovered image is corrupted
1,398,618,849,000
I installed debian strech through the installer in a software raid 10 configuration.There are 4 drives, each is 14TB. Partition was formatted by the installer with ext4. The inode ratio defaults to 16384. cat /proc/mdstat Personalities : [raid10] [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] md3 : active raid10 sdc4[1] sda4[0] sdb4[2] sdd4[3] 27326918656 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU] bitmap: 5/204 pages [20KB], 65536KB chunk md2 : active raid1 sdd3[3] sdc3[1] sda3[0] sdb3[2] 976320 blocks super 1.2 [4/4] [UUUU] md1 : active raid10 sdd2[3] sdc2[1] sda2[0] sdb2[2] 15616000 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU] unused devices: mdadm --detail /dev/md3 /dev/md3: Version : 1.2 Creation Time : Sun Mar 8 16:21:02 2020 Raid Level : raid10 Array Size : 27326918656 (26060.98 GiB 27982.76 GB) Used Dev Size : 13663459328 (13030.49 GiB 13991.38 GB) Raid Devices : 4 Total Devices : 4 Persistence : Superblock is persistent Intent Bitmap : Internal Update Time : Wed Apr 1 01:00:06 2020 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : near=2 Chunk Size : 512K Name : aaaaaaa:2 (local to host aaaaaaa) UUID : xxxxxxxxxxxxxxxxxxxxxxxx Events : 26835 Number Major Minor RaidDevice State 0 8 4 0 active sync set-A /dev/sda4 1 8 36 1 active sync set-B /dev/sdc4 2 8 20 2 active sync set-A /dev/sdb4 3 8 52 3 active sync set-B /dev/sdd4 cat /etc/mke2fs.conf [defaults] base_features = sparse_super,large_file,filetype,resize_inode,dir_index,ext_attr default_mntopts = acl,user_xattr enable_periodic_fsck = 0 blocksize = 4096 inode_size = 256 inode_ratio = 16384 Now i run: tune2fs -l /dev/md3 tune2fs 1.43.4 (31-Jan-2017) Filesystem volume name: Last mounted on: / Filesystem UUID: xxxxxxxxxxxxxxxxxxxxxxxxxxx Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file 
huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 426983424
Block count: 6831729664
Reserved block count: 341586483
Free blocks: 6803907222
Free inodes: 426931027
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 2048
Inode blocks per group: 128
RAID stride: 128
RAID stripe width: 256
Flex block group size: 16
Filesystem created: Sun Mar 8 16:24:38 2020
Last mount time: Tue Mar 31 12:06:30 2020
Last write time: Tue Mar 31 12:06:21 2020
Mount count: 17
Maximum mount count: -1
Last checked: Sun Mar 8 16:24:38 2020
Check interval: 0 ()
Lifetime writes: 27 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: xxxxxxxxxxxxxxxxxxxxxxxxxxx
Journal backup: inode blocks
Checksum type: crc32c
Checksum: 0x30808089

bytes-per-inode = (blocks/inodes) * block_size

In my case: bytes-per-inode = (6831729664/426983424) * 4096 ≈ 16 * 4096 = 65536

Why is the ratio showing as 65536 in the tune2fs -l output? It should be 16384. I have the same Debian Stretch distribution installed on my notebook, and there is no discrepancy between /etc/mke2fs.conf and tune2fs -l.
Your file system is over 16 TiB in size, so mke2fs defaulted to the “huge” file system type, with an inode ratio of 65,536 bytes. See the -T option in the linked manpage, and the huge type in mke2fs.conf: huge = { inode_ratio = 65536 }
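The reported ratio can be reproduced from the tune2fs -l numbers; the division isn't exact, so the sketch below rounds to the nearest integer:

```shell
blocks=6831729664      # Block count from tune2fs -l
inodes=426983424       # Inode count from tune2fs -l
bs=4096                # Block size

# bytes-per-inode = blocks * bs / inodes, rounded to the nearest integer
echo $(( (blocks * bs + inodes / 2) / inodes ))    # prints 65536
```

That 65536 is exactly the `huge` type's inode_ratio, confirming that mke2fs applied the >16 TiB default rather than the generic 16384.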
ext4 inode ratio discrepancy between /etc/mke2fs.conf and tune2fs
1,568,027,654,000
I'm trying to recover data from a damaged 3TB drive. I'm using ddrescue to make an image of it, but it takes forever as there are lots of read errors. I was wondering if I can have some luck with the first 200GB (with large holes in it) that I already copied. I read the partition table with gdisk and found the offset of the ext4 file system that I'm interested in reading. Then I created a loop device to have a nice way of interacting with the partition:

sudo losetup -f --show -o $((xxxxxxxxxxx*512))

sudo tune2fs -l /dev/loop16 gives me some info, so I think I'm on the right track. Unfortunately I can't mount it because of the file system errors, and e2fsck won't fix anything as it's trying to read beyond the image file boundaries. I suppose there may be some important file system data in later areas of the partition. Do you have any advice on how I could trick the system into ignoring the errors and working with the incomplete inode structure within the truncated image? Thanks.
If the image size is too small, you can use fallocate or truncate to make it larger, or use dmsetup to create a linear device mapping to create a virtual larger device. $ ls -lh somefile -rw-r--r-- 1 user user 200G Sep 9 13:27 somefile $ truncate -s 2T somefile $ ls -lh somefile -rw-r--r-- 1 user user 2.0T Sep 9 13:28 somefile To make ddrescue skip bad areas in the first pass, try something like --min-read-rate=10M. As for the loop device, it should be read-only, or read-write on a copy of the image, or use a copy-on-write overlay for experiments. Otherwise you might end up modifying the image and have to do it over which is a bad idea since the source drive is already dying.
Recover data from truncated partition image
1,568,027,654,000
Background The first block of an ext4 filesystem is called the superblock - it contains essential metadata. There are backup copies of the superblock scattered throughout the filesystem; they can be used to recover if original superblock gets corrupted. They can be located with dumpe2fs and repairs can be attempted with e2fsck. I've found a lot of info on the normal recovery process itself so this question isn't about that. Question What if all the superblock backups get corrupted? Does it make sense to manually create a backup superblock and to store it on a separate drive? How do you go about making such a copy? Or does it not make any sense because in the event of all backups being corrupt the filesystem is so far gone there is no point in trying to repair the superblock?
...in the event of all backups being corrupt the filesystem is so far gone there is no point in trying to repair... Exactly that, just use normal automatic offsite backups and ignore "superblocks".
Manually backup superblock - how to & does it make sense?
1,568,027,654,000
When using Linux in VirtualBox with a dynamically-allocated disk, the disk file keeps growing even though almost half of the space inside is free:

Filesystem Size Used Avail Use% Mounted on
/dev/sda2 94G 12G 78G 13% /

This disk takes >24G on the host and keeps growing; the filesystem is ext4.
To discard unused blocks on a filesystem there is the command fstrim, part of the util-linux package. But to use it with VirtualBox, you need to enable the discard option on your virtual disk by stopping your VM and running the following command:

VBoxManage storageattach <VM name> --storagectl "SATA" --port 0 --discard on

where "SATA" and 0 are the parameters of your disk controller; they can be checked in the VirtualBox settings for your specific VM. Then boot your machine and run

# fstrim /

To automate this process, add this command to cron; once a week is usually enough.
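For the cron step, an illustrative root crontab entry (added via `crontab -e`; the fstrim path may differ by distribution) that trims / every Sunday at 03:00:

```
# m h dom mon dow  command
0 3 * * 0  /sbin/fstrim /
```

Weekly is usually often enough; trimming more frequently adds little benefit for a desktop VM.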
Virtualbox dynamically-allocated disk *.vdi keeps growing
1,568,027,654,000
noinit_itable
Do not initialize any uninitialized inode table blocks in the background. This feature may be used by installation CD's so that the install process can complete as quickly as possible; the inode table initialization process would then be deferred until the next time the file system is unmounted.

Should I always use the noinit_itable option whenever I mount an ext4 device? If not, why?
I would interpret this as inode initialization being a task that can impose latencies and degraded throughput. The goal of the code would be to arrange for it to run during a relatively idle period. Initializing the inode tables in advance would avoid a latency hit ("lag") when you actually need the inode tables.

I think the suggestion is that it's better to have a quick install process, and then slightly degraded throughput for a while. While the install process is running, it's likely blocking you from doing useful things with the computer at the same time, for example:

- checking your email
- reading the documentation from the packages installed on your system
- finding your favourite desktop theme
- configuring your professional workspace
- booting back into Windows where you have all your stuff

The ext4 mkfs option lazy_itable_init, which is now activated automatically when kernel support is detected, speeds up formatting ext4 filesystems during install. When the fs is mounted, the kernel begins zeroing the inode tables in the background. During install, this is a somewhat wasted effort and interferes with the copy process. Mounting the filesystem with the noinit_itable mount option disables the background initialization. This should help the install go a bit faster, and after rebooting when the fs is mounted without the flag, the background initialization will be completed.

https://bugs.launchpad.net/ubuntu/+source/partman-ext3/+bug/733652

This also points to a thread consisting mainly of rants by Ted Ts'o. The main point seems to be that inode checksums hadn't been implemented yet, which meant that a filesystem with un-zeroed inode tables would be significantly less robust against errors. Fortunately inode checksums were implemented within a year or so of that comment.
What is risk or cost for the "noinit_itable" option of ext4?
1,568,027,654,000
I have a production ESXI server with loads of VMs. I had a power outage a few hours ago that was so long my UPS's battery drained. The automatic shutdown mechanism wasn't working for some reason so the power was cut off for the whole system. After the outage everything came up, except the mysql server VM. Now it spams the console with I/O errors. end_request: critical medium error, dev sda, sector X end_request: I/O error, dev sda, sector X .... EXT4-fs error (device dm-1): ext4_wait_block_bitmap:476 comm bounce: Cannot read block bitmap - block_group = X, block_bitmap = X Aborting journal on device dm-1-8 EXT4-fs (dm-1): Remounting filesystem read-only The VM is setup using encrypted LVM. What do these errors mean? Is it hardware? What can I do? I searched Google for hours, but can't figure out what to do. I booted from live CD, run fsck on the unmounted root partition, fixed it, rebooted, but the issue is the same. EDIT #1 I tried this, but nothing happened. root@ubuntu:~# sudo cryptsetup --key-file=/media/ubuntu/7b225e2d-9c0f-4bd4-a4de-1d2f7a0b4c58/keyfile luksOpen /dev/sda5 myvolume root@ubuntu:~# vgscan Reading all physical volumes. This may take a while... Found volume group "mysql-server-vg" using metadata type lvm2 root@ubuntu:~# tune2fs -O ^has_journal /dev/mysql-server-vg/root tune2fs 1.42.13 (17-May-2015) The needs_recovery flag is set. Please run e2fsck before clearing the has_journal flag. root@ubuntu:~# e2fsck -f /dev/mysql-server-vg/root \e2fsck 1.42.13 (17-May-2015) /dev/mysql-server-vg/root: recovering journal Pass 1: Checking inodes, blocks, and sizes Deleted inode 391687 has zero dtime. Fix? yes Inodes that were part of a corrupted orphan linked list found. Fix? yes Inode 391697 was part of the orphaned inode list. FIXED. Inode 391699 was part of the orphaned inode list. FIXED. Inode 391700 was part of the orphaned inode list. FIXED. 
Pass 2: Checking directory structure Pass 3: Checking directory connectivity Pass 4: Checking reference counts Pass 5: Checking group summary information Free blocks count wrong (5462594, counted=5462792). Fix? yes Inode bitmap differences: -391687 -391697 -(391699--391700) Fix? yes Free inodes count wrong for group #48 (7946, counted=7950). Fix? yes Free inodes count wrong (1854371, counted=1854370). Fix? yes /dev/mysql-server-vg/root: ***** FILE SYSTEM WAS MODIFIED ***** /dev/mysql-server-vg/root: 95870/1950240 files (0.8% non-contiguous), 2337016/7799808 blocks root@ubuntu:~# tune2fs -O ^has_journal /dev/mysql-server-vg/root tune2fs 1.42.13 (17-May-2015) root@ubuntu:~# e2fsck -f /dev/mysql-server-vg/root e2fsck 1.42.13 (17-May-2015) Pass 1: Checking inodes, blocks, and sizes Pass 2: Checking directory structure Pass 3: Checking directory connectivity Pass 4: Checking reference counts Pass 5: Checking group summary information /dev/mysql-server-vg/root: 95870/1950240 files (0.8% non-contiguous), 2304248/7799808 blocks root@ubuntu:~# tune2fs -j /dev/mysql-server-vg/root tune2fs 1.42.13 (17-May-2015) Creating journal inode: done
OK, I figured it out and successfully fixed it. It cost me two days. First I verified that the storage controller, the datastore hardware (mechanical drive) and the cables are not faulty. Please note that I couldn't access the vmdk file on the filesystem properly. I tried to copy it locally, with scp and with the vSphere Client, but after a while all of them gave me Input/Output error. I even tried to clone the virtual disk to a separate datastore. cd /vmfs/volumes/ vmkfstools -i datastore1/vm/vm.vmdk datastore2/vm/vm.vmdk -d thin -a lsilogic It gave me Input/Output error after 16%. I figured the power outage caused some corruption, stale locks and whatnots on the vmfs filesystem (datastore). Using the vSphere On-disk Metadata Analyzer (VOMA) I checked the VMFS metadata consistency. Please note that the datastore have to be unmounted before running this command. voma -m vmfs -f check /vmfs/devices/disks/disk_name:1 It found 34 errors. The voma bundled in vSphere Hypervisor version 5.5 can only check the filesystem. I cloned the datastore to a new hard drive with clonezilla in rescue mode (cloning disk with bad sectors). After that I upgraded to VMware ESXi version 6.5, because it has a newer version of the voma command. It can fix errors, so I ran the following command: voma -m vmfs -f fix /vmfs/devices/disks/disk_name:1 It sure did something. Booted up the VM, but cannot get console connection because of the new vCenter vSphere WebClient nonsense and vSphere Client deprecation, so I went back to my original VMware ESXi 5.5 installation. I cloned the mentioned vmdk file successfully. I booted up the VM with the cloned disk, ran fsck once, rebooted and voila. It works like expected. The server came online with all of my data. It involved a lot of fiddling around, but I cannot figure out anything else. If somebody knows an easier way, please don't hesitate to leave a comment. 
I did have a database backup taken 12 hours before the incident, but wanted to recover the live data if possible.
I/O error after power failure, filesystem remounting as read-only [closed]
1,568,027,654,000
I am prototyping a new embedded system that uses ext4 on flash memory. These systems will be remotely deployed with no local sysadmin, so any diagnostics must also be done remotely via a network. The default mount option for ext4 is to set the FS to read-only when it encounters an error. I think this is too severe for my case, as it can cause many operations to cease working and prevent remote logins. I would prefer to keep the system running (and tolerate some FS errors). So for my case the mount option "errors=continue" seems more appropriate. However, I would like my application to be notified when any FS errors occur so it can log these as high-priority problems and send that info back to our servers. Does anyone know if this can be done with the stock Linux kernel (4.8.1 on x86_64)?
I would prefer to keep the system running (and tolerate some FS errors) This is a contradiction in terms. When you get FS errors, your system won't be running for long. In fact, running with errors=continue is very likely to further damage a corrupt filesystem until there is not even any hope of sensible recovery. If you want your application to make a best stab at continuing operation even if there are FS errors, it should have a script that detects when / has gone read-only, and reboot with a forced fsck. At some point everything goes bust. It's the law of increase of entropy. There isn't anything you can really do about it, other than adhere to solid engineering principles and get high-quality parts for mission-critical use cases.
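The remount-detection script this answer suggests can be sketched by parsing /proc/mounts: when ext4 aborts, the first mount option of the affected filesystem flips from rw to ro. The function below defaults to /proc/mounts but accepts an alternate file so it can be demonstrated on canned data (all device and path names are illustrative):

```shell
# Print every ext4 mount whose first option is "ro".
# /proc/mounts fields: device mountpoint fstype options dump pass
check_ro() {
    awk '$3 == "ext4" {
        split($4, opts, ",")
        if (opts[1] == "ro") print $2 " is read-only"
    }' "${1:-/proc/mounts}"
}

# Demonstrate against a canned mounts table:
cat > mounts.sample <<'EOF'
/dev/sda1 / ext4 rw,relatime 0 0
/dev/sdb1 /data ext4 ro,relatime 0 0
EOF
check_ro mounts.sample    # prints: /data is read-only
```

Run something like this from a periodic job (or a watchdog loop) and have it trigger your logging/phone-home path, then a reboot with a forced fsck, when output appears.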
Any way to be notified of EXT4 errors when mounted with "errors=continue"
1,568,027,654,000
I created a 4 GByte sized LV on a RHEL6 machine, then created a 4 GByte sized EXT4 FS on it. [root@server ~]# lvcreate -n newlvnamehere -L 4096M rootvg Logical volume "newlvnamehere" created. [root@server ~]# mkdir /newfshere [root@server ~]# mkfs.ext4 /dev/mapper/rootvg-newlvnamehere mke2fs 1.41.12 (17-May-2010) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=1 blocks, Stripe width=0 blocks 262144 inodes, 1048576 blocks 52428 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=1073741824 32 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736 Writing inode tables: done Creating journal (32768 blocks): done Writing superblocks and filesystem accounting information: done This filesystem will be automatically checked every 35 mounts or 180 days, whichever comes first. Use tune2fs -c or -i to override. [root@server ~]# mount /dev/mapper/rootvg-newlvnamehere /newfshere [root@server ~]# df -m /newfshere Filesystem 1M-blocks Used Available Use% Mounted on /dev/mapper/rootvg-newlvnamehere 3904 8 3691 1% /newfshere If I later use resize2fs it says there is nothing to do. Question: Why doesn't the EXT4 FS have the exact same size as the LV? It is only 3904 MByte and the LV is 4096. Where did the 192 MByte (4096-3904) go? PE size in the rootvg is 32 MByte. FS Journal size: 128M
Your filesystem does have exactly the same size as the LV: mkfs.ext4 says 262144 inodes, 1048576 blocks which is 4GB. The missing 192MB are accounted for by the journal (128MB) and the filesystem data structures (superblocks and backups). Why are there so many different ways to measure disk usage? has lots more detail.
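The arithmetic can be checked directly in the shell (all numbers taken from the mkfs output above):

```shell
# 1048576 blocks of 4096 bytes each = exactly 4 GiB, matching the LV size
blocks=1048576
block_size=4096
fs_mib=$(( blocks * block_size / 1024 / 1024 ))
echo "filesystem size: ${fs_mib} MiB"

# df reported 3904 MiB; the difference is the journal plus fs structures
journal_mib=128   # 32768 journal blocks * 4096 bytes
echo "left for superblocks etc.: $(( fs_mib - 3904 - journal_mib )) MiB"
```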
New FS size differs from LV size
1,568,027,654,000
I've been wondering, what are the explanation of the columns in the /proc/fs/ext4/device_name/mb_groups file. The columns are: #group: free frags first [ 2^0 2^1 2^2 2^3 2^4 2^5 2^6 2^7 2^8 2^9 2^10 2^11 2^12 2^13 ] What's the meaning of every column ?
The file contains information on the buddy group cache of that specific disk, and it's useful for checking the fragmentation status of said disk. The fields which I found are for a slightly different output, but at least it's a little more info:

#group: free free frags first pa [ 2^0 2^1 2^2 2^3 2^4 2^5 2^6 2^7 2^8 2^9 2^10 2^11 2^12 2^13]

#group — group number
free — available blocks in the group
free — blocks free on a disk
frags — number of free fragments
first — first free block in the group
pa — number of preallocated chunks (not blocks)
[ 2^0 … 2^13 ] — a series of available chunks of different sizes

I got my info from here and here
Content Explanation Of: /proc/fs/ext4/device_name/mb_groups
1,568,027,654,000
I deleted a big extended partition containing an ntfs logical partition with a high percentage of occupied space, and from that extended partition I made a new, smaller extended partition. In it I created an ext4 logical partition. The newly created ext4 logical partition however comes with 1.75 GB already occupied. I have tried deleting and recreating the partition but the occupied space just keeps coming back. I did the following to search for clues but no joy. sudo du -h -s /media/hrmount/ 20K /media/hrmount/ and sudo du -h -a /media/hrmount 16K /media/hrmount/lost+found 20K /media/hrmount/ and sudo du -h -a /media/hrmount/lost+found/ 16K /media/hrmount/lost+found/ the commands might seem redundant but I'm just blindly trying to figure this out. I also ran: fsck -V /dev/sdb5 fsck from util-linux 2.20.1 [/sbin/fsck.ext4 (1) -- /media/hrmount] fsck.ext4 /dev/sdb5 e2fsck 1.42 (29-Nov-2011) /dev/sdb5: clean, 11/6553600 files, 459349/26214400 blocks and the relevant output from df -h /dev/sdb5 100G 1.7G 94G 2% /media/hrmount I am quite sure that I would get rid of that occupied space by formatting the partition, but what I want to know is what is causing that occupied space and also what it actually contains. Please help me find more clues to solve this puzzle. Thank you.
The used space reported by df is reserved space. This reserved space is used by ext filesystems to prevent data fragmentation as well as to allow critical applications such as syslog to continue functioning when the disk is "full". You can view information about the reserved space using the tune2fs command: # tune2fs -l /dev/mapper/newvg-root tune2fs 1.42.5 (29-Jul-2012) Filesystem volume name: <none> Last mounted on: /mnt/oldroot Filesystem UUID: d41eefc5-60d6-4e18-98e8-d08d9111fbe0 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: (none) Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 3932160 Block count: 15728640 Reserved block count: 786304 Free blocks: 11086596 Free inodes: 3312928 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 1020 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 Flex block group size: 16 Filesystem created: Tue Feb 8 16:28:29 2011 Last mount time: Mon Dec 9 23:28:11 2013 Last write time: Mon Dec 9 23:48:24 2013 Mount count: 19 Maximum mount count: 20 Last checked: Tue Sep 3 23:00:06 2013 Check interval: 15552000 (6 months) Next check after: Sun Mar 2 22:00:06 2014 Lifetime writes: 375 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: 80cf2748-584a-4fe8-ab8c-6abff528c2c2 Journal backup: inode blocks Here you can see that 786304 blocks are reserved and the block size is 4096. This means that 3220701184 bytes or 3GB is reserved. 
You can adjust the percentage of reserved blocks using the tune2fs command (though reducing it is generally not recommended): tune2fs -m 1 /dev/sdb5
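Reproducing the reserved-space arithmetic from the tune2fs output above in the shell:

```shell
# Reserved block count and block size as reported by tune2fs -l
reserved_blocks=786304
block_size=4096
reserved_bytes=$(( reserved_blocks * block_size ))
echo "reserved: ${reserved_bytes} bytes"
# roughly 3 GB in decimal units (it is just under 3 GiB in binary units)
echo "reserved: $(( reserved_bytes / 1000000000 )) GB"
```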
What is this data that keeps reappearing after partition delete + new partition creation?
1,568,027,654,000
I am playing with LVM, and while doing lvreduce I got this error: [root@localhost raja]# e2fsck -f /dev/vg1/lvol2 e2fsck 1.41.12 (17-May-2010) e2fsck: Superblock invalid, trying backup blocks... e2fsck: Bad magic number in super-block while trying to open /dev/vg1/lvol2 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> How can I fix this?
If the filesystem is really on that device, running mkfs.ext4 with the same arguments plus a -n will give you a list of superblocks that you can use as alternates. Eg: # mkfs.ext4 -n /dev/vg1/lvol2 ... Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208 Then you can run e2fsck -b 32768 /dev/vg1/lvol2 or other backup superblock to see if it will fix it. PS: 32768 is a typical backup block while the other locations depend on the size of the partition.
e2fsck giving some error
1,568,027,654,000
From /. I found this worrisome post by Theodore Ts'o. Turns out ext4 has some journalling problems. How can I quickly find out the version numbers of susceptible kernels for this and other bugs?
You can track (and submit) kernel bugs in the Kernel Bug Tracker .
What is a generic way of finding out whether the kernel has ext4 (or other) bugs?
1,568,027,654,000
Given any file on an ext4 filesystem, it is possible (using filefrag -v) to get the list of real offsets+lengths where that file is located on the underlying block device. Is it safe to open the device and write to them, all that while the filesystem is mounted read-write? Can it cause fs corruption? I'm asking because I'm going to implement an alternative loop driver, which will bypass the filesystem layer completely, therefore having much better performance. If I remember correctly, swapfile is implemented exactly that way. Please correct me if I've said something wrong. Is the answer filesystem-dependent? What can happen if the file is suddenly deleted, and these offsets become reused for some metadata? Finally, is there a way to lock a file from being relocated by e4defrag or similar things? What is the best way to prevent a file from being deleted (being in kernel space)? Are there some kernel internals I can use to get the list of a file's extents?
If the writes are only to the blocks of the file, then it wouldn't corrupt the ext4 filesystem. However, there is definitely a bigger risk that some error in the code could corrupt the filesystem, which wouldn't happen with a regular loop device that is only using the file mapping. The question is whether writing directly to the block device will actually make a difference in performance. You can prevent the file from being deleted by marking it immutable with chattr +i FILENAME.
Is it safe to write to file's extents directly while the FS is r/w?
1,568,027,654,000
A question was given to us by a lecturer: How many data blocks are needed to collect all the data in an EXT4 file system using inodes if the file size is 54 KB and there is a block size of 4KB. Answer: 15 The only explanation I can find is 54/4 = 13.5, which rounds up to 14 data blocks, and we add 1 inode block, so 15 blocks in total. What confuses me is that the question asks explicitly for data blocks, not inode blocks. Does this mean that an inode block is the same as a data block? Regardless of that, is the statement each file gets one inode block true, and does that apply only to the EXT4 filesystem? I have not yet gotten the explanation from the lecturer, nor could I find one on the internet, thus I am asking it here. Please let me know if this is not the right place to ask. Thanks in advance.
It's hard to know what they're thinking exactly (you'd have to ask them), especially since they talk about "all data on the FS" (not just one file), and mention "using inodes" (in plural). But, one thing they might be referring to, would be the basic block addressing, which addresses the first 12 data blocks directly from the inode, and then allocates an extra block to contain the addresses of the next 1024 data blocks (assuming the usual 4 kB filesystem block size). For 14 data blocks, you'd need that one indirect block in addition to the inode itself, for a total of 15 data blocks. However, that's a bit dated, since AFAIK ext4 usually uses extent-based mappings nowadays, meaning it stores just one entry for each contiguous run of data blocks. That means the amount of metadata needed depends on how fragmented the file is, but I'd assume the common case is that there are only a few extents needed, and they can be stored directly in the inode: The root node of the extent tree is stored in inode.i_block, which allows for the first four extents to be recorded without the use of extra metadata blocks. See "The Contents of inode.i_block" in the Ext4 Disk Layout document on wiki.kernel.org.
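The counting in the indirect-addressing interpretation above can be sketched numerically (ceiling division in POSIX shell):

```shell
file_kb=54
block_kb=4
# ceiling division: blocks needed to hold the file data
data_blocks=$(( (file_kb + block_kb - 1) / block_kb ))
echo "data blocks: $data_blocks"
# 12 blocks are addressed directly from the inode; the remaining 2
# need one additional indirect block
total=$(( data_blocks + 1 ))
echo "total blocks: $total"
```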
Each file gets one inode block
1,568,027,654,000
I'm following along in this article to install arch in vmware on my m1 mac. I'm able to do fdisk just fine, and get the following partition table: I then create the filesystem for partition 2 per the article with mkfs.ext4 /dev/nvme0n1p2. When I mount this with mount /dev/nvme0n1p2 /mnt, it works fine. But when I attempt to mount the efi filesystem, I get the following: dmesg shows: Anybody have any thoughts on where to go from here? I tried specifying mount -t ext4 ... but got VFS: can't find ext4 filesystem
You need to create a FAT filesystem on /dev/nvme0n1p1 before you can mount the partition: mkfs.fat -F 32 /dev/nvme0n1p1 This step is missing in the linked tutorial.
Installing ArchLinux in VmWare Fusion on M1 Mac
1,568,027,654,000
My question is in reference to this excellent answer here. I need some more info: if I change the Root Reserve Blocks (RRB) to any amount other than the default in some version of Linux, will that be consistent if the HDD is moved to another Linux build on a different machine? Even for virtualized operations? Can anyone please indicate where the RRB data is stored on the hard drive? Due to my low reputation, I am unable to comment on the main question; apologies for any inconvenience caused.
This value is not stored in any file but in the Ext4 filesystem's superblock:

Offset   Size     Name                  Description
[...]
0x8      __le32   s_r_blocks_count_lo   This number of blocks can only be allocated by the super-user.
[...]
0x154    __le32   s_r_blocks_count_hi   High 32-bits of the reserved block count.
[...]

(Note: there are also uid and gid values to override this reservation; they default to root / uid/gid 0, though they can also be changed to another user or group.) So moving the disk moves the filesystem along with its superblock and this value, which will be used by the other system mounting the filesystem too.
Is changing root reserve blocks effects the Hard disk or its a OS dependent operation?
1,568,027,654,000
We erased the disk signature as follows (this was after we performed umount): wipefs -a /dev/sde /dev/sde: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef Then we checked that the disk has no filesystem: lsblk -f sde ext4 20eba791-c9c9-4462-aa23-74c40a41b8a0 But despite erasing the signature, lsblk still shows an ext4 filesystem on the sde disk.
From the man page of wipefs (emphasis mine): DESCRIPTION wipefs can erase filesystem, raid or partition-table signatures (magic strings) from the specified device to make the signatures invisible for libblkid. wipefs does not erase the filesystem itself nor any other data from the device. So, the only thing it guarantees is that after wipefs, the blkid command (or anything else that uses libblkid for identifying contents of block devices) will no longer detect that filesystem, RAID set, or partition table. lsblk does use libblkid, but apparently that is not its only way to detect filesystems.
wipefs + disk not cleaned
1,568,027,654,000
I have hundreds of folders with modification timestamps I'd like to preserve. Now I need to copy a single file into them. Aside from this way... timestamp=$(stat -c %y /foldername) cp /tmp/file.jpg /foldername/file.jpg touch -d "$timestamp" /foldername ...is there a better way to suppress the folder modification timestamp?
Another approach is to use touch -r. In zsh: () { touch -r $2:h -- $3 && cp -T -- $1 $2 && touch -r $3 -- $2:h } /tmp/file.jpg /foldername/file.jpg =(:) Where =(:) creates an empty temporary file that is deleted as soon as the anonymous function terminates. -T (to force the cp to be a copy-to and never a copy-into) is a GNU extension. Or make it a function, here allowing extra options to be passed along to cp: copy_while_preserving_mtime_of_target_directory() { # Usage: ... [cp options] source dest () { touch -r "$@[-1]:h" -- "$1" && cp -T "$@[2,-1]" && touch -r "$1" -- "$@[-1]:h" } =(:) "$@" } Another approach could be some function that takes arbitrary shell code as an argument and wraps its execution inside something that saves and restores the directory's mtime: run_while_preserving_mtime_of() { # Usage: ... directory shell-code () { touch -r "$2" -- "$1" || return { eval -- "$@[3,-1]" } always { touch -r "$1" -- "$2" } } =(:) "$@" } To use as: run_while_preserving_mtime_of /foldername ' cp /tmp/file.jpg /foldername/file.jpg ' for instance.
Inserting a file without altering the folder's modification timestamp?
1,568,027,654,000
When I trace the function graph when calling write(), I find that within the function ext4_file_write_iter() it locks inode->i_rwsem by calling inode_lock(inode) at the beginning. After that it calls __generic_file_write_iter() to write data to the file, and unlocks the inode at the end. So is inode->i_rwsem used to protect concurrent writes to the same file? But I wrote a program that concurrently writes data to the same region of a file (pwrite(fd,buf,SIZE,0)) and the result shows that writes are not serialized. And I found that one has to use flock/fcntl to serialize concurrent writes, which works depending on inode->i_flctx. What I want to ask is: what's the purpose of inode->i_rwsem, and what is the difference among inode->i_rwsem, inode->i_flctx and inode->i_lock? Thanks.
inode->i_rwsem is used internally by the kernel to ensure that the kernel itself doesn't read or write from/to a file at the same time, to avoid any corruption or race conditions. It doesn't affect userspace; you can still have the file opened for read/write by multiple processes at the same time. But if multiple processes try to read/write from/to the file simultaneously, the kernel will actually do it serially behind the scenes. In your case, if there are two processes that are trying to write to the same region with pwrite(fd,buf,SIZE,0), without an internal locking mechanism such as what i_rwsem is used for, the kernel might start writing some of the data from the first process and at the same time start writing the data from the second process, before the write operation of the first process has completed. That would impact the integrity of the entire filesystem, and might even lead to the kernel crashing due to a race condition. The internal locking in the kernel prevents those situations. The first write from the first process will complete, and only then will the second write be performed (and probably overwrite the data from the first process, if they both write to exactly the same region in the file). inode->i_flctx, as you've already found out, is controlled by flock/fcntl calls from userspace, when the process itself wants to limit the number of processes that can have the file open at the same time. For instance, one process can lock the file for writing, and if another one wants to lock the same file before the first one releases it, it will be denied or blocked. Let's take this case of two processes that write to the same file, and perform different writes. Each process could overwrite the data written by the other process. In order to avoid that in userspace, the application itself could use flock/fcntl to prevent two processes from opening the same file.
Here's another example: one process writes to a file, and a second process reads from the same file. The second process could read partial data because the first one hasn't completed the write. To prevent this situation:

The first process acquires a lock on the file, to prevent other processes from opening it until it finishes the write.
The second process tries to acquire a lock on the same file, and is blocked (or fails, depending on how it tried to lock the file) because the file is already locked by another process.
The first process finishes the write and releases the lock (again, explicitly in userspace by calling one of the system calls mentioned).
Only then can the second process lock the file for reading. While the second process is reading the file, other processes that try to acquire a lock on the file will again be blocked until the reading process finishes.

So with flock/fcntl you can handle those cases programmatically in the application's source code, and the kernel uses i_flctx to know whether a certain process has acquired a lock on the file, and to prevent other processes from acquiring another lock until the first process releases it. inode->i_lock, just like inode->i_rwsem, is used only by the kernel to protect itself from race conditions when dealing with the inode's state in the kernel: i_rwsem is used to protect the writing, i_lock is used to protect changes in the inode state. In other words, unless you're a kernel developer, you shouldn't worry about inode->i_lock or inode->i_rwsem, which are only parts of the kernel's implementation mechanism of an inode, or about inode->i_flctx, which is part of the kernel's internal implementation mechanism of file locking from userspace.
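A userspace sketch of the advisory locking described above, using the flock(1) utility from util-linux (the file paths are examples, not from the question): concurrent invocations of this block wait for the lock instead of interleaving their writes.

```shell
# Serialize writers on /tmp/shared.file via an advisory lock file.
# Inside the subshell, fd 9 holds an exclusive flock; the kernel tracks
# that lock in the inode's i_flctx until fd 9 is closed.
(
    flock -x 9                       # blocks until the lock is free
    echo "critical write" >> /tmp/shared.file
) 9>/tmp/shared.lock
```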
What's the difference between inode->i_rwsem and i_flctx?
1,568,027,654,000
TL;DR I have a dd image disk.dd which has multiple partitions. The end goal is to reduce the size of this dd image file. After deleting and recreating a higher numbered partition with a start sector offset lower than it was before (i.e. expanding the partition to the left), I have a partition which has a filesystem in it and whose primary superblock is somewhere inside this partition, and I know the sector at which this primary superblock resides. How can I e2fsck this filesystem so that it moves to the beginning of the partition? So that afterwards I can shrink this filesystem with resize2fs and then shrink this partition from the right (i.e. recreating this partition with a lower end sector offset). Then I'll repeat this process with the partitions after that until the last partition, effectively shrinking all partitions and hence reducing the size of the dd image. Please do not suggest gparted; I'm looking for a command-line solution. Also, I know this would've been easier with LVM, but this is a legacy system. Long version I have a dd image disk.dd that I took using the following dd if=/dev/sda of=/path/to/disk.dd of a system which has the following layout Disk /dev/loop15: 465.78 GiB, 500107862016 bytes, 976773168 sectors Units: sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disklabel type: dos Disk identifier: 0x54093dd5 Device Boot Start End Sectors Size Id Type /dev/loop15p1 * 2048 81922047 81920000 39.1G 83 Linux /dev/loop15p2 81922048 143362047 61440000 29.3G 82 Linux swap / Solaris /dev/loop15p3 143362048 163842047 20480000 9.8G 83 Linux /dev/loop15p4 163842048 976773167 812931120 387.7G 5 Extended /dev/loop15p5 163844096 976773119 812929024 387.7G 83 Linux Now, on a different system, I'm accessing disk.dd through a loop device using losetup --find --partscan disk.dd I resized all of the ext4 filesystems with resize2fs -M /dev/loopNpartX resize2fs /dev/loopNpartX FSsize i.e. the
partitions p1, p3 and p5 With dumpe2fs, I can see the logical block size of the filesystem, which is 4096 bytes for all ext4 filesystems, which in my case as I showed above are hosted on 3 partitions Now if I'm reading this correctly (correct me if I'm wrong here) The primary superblock of a filesystem is "usually expected" to be located at block 0 of the partition So, I can dump superblock information with dumpe2fs -h -o superblock=0 -o blocksize=4096 /dev/loopNpartX Now it's time to shrink partitions in order to reduce the size of the disk.dd file I got the block count for each filesystem again using dumpe2fs fdisk works on the physical block size OR sectors of the device, which in my case is 512 bytes So, in order to find how many sectors should be enough to accommodate the filesystem, I used the following formula Required Sectors = ( ( Block Count + 100 ) * Logical Block Size ) / Physical Block Size 100 acting as a buffer just in case I'm missing something about the organization of the filesystem, which should be enough I did this for every filesystem Now With lsblk -f, I get the UUIDs of existing filesystems With fdisk -l, I get which partition to keep the boot flag on Now to shrink partitions, I would delete and recreate them using fdisk -- First partition start sector offset = 2048 last sector offset = 2048 + "Required Sectors" for this filesystem -- Second partition Second partition on the existing disk is swap, so I'll not shrink it, just move it left start sector offset = "last sector offset" of first partition + 1 last sector offset = "start sector offset" + Total sectors as on the existing partition I then change its type to Swap And then with tune2fs -U change the UUID back to what was on the dd image -- Third partition start sector offset = "last sector offset" of second partition + 1 last sector offset = "start sector offset" + "Required Sectors" for this filesystem Here is where I'm stuck: after expanding the third partition to the left, this partition has a filesystem whose
starting sector I know (i.e. the sector holding the primary superblock), but I don't know how to run e2fsck on this filesystem so that it is moved to the beginning of the partition.
It's not possible with fsck. In a filesystem, everything has offsets and if you change the start sector, all of these offsets change. fsck simply has no facility to re-write all offsets for everything (superblocks, journals, directories, file segments, etc.). And even if you could do that, it would only work if the new start sector aligns with internal filesystem structures. So this is not done. Instead, you'd have to shift all data to the left with dd (essentially what gparted does). Only by shifting the filesystem entirely, would the offsets within it remain intact. In principle the dd command could work like this. It reads and writes to the same device, at different offsets. This can only work for shifting to the left, so seek (write to) must be smaller than skip (read from). All units in 512b sectors (if you specify bs=1M, your partitions must be MiB aligned and all units in MiB instead) dd if=/dev/sdx of=/dev/sdx \ seek=newpartitionstart \ skip=oldpartitionstart \ count=filesystemsize However, this is very dangerous. Use it at your own risk. Do take the time to backup your data first. Shifting to the right would be more complicated. You'd have to work backwards, otherwise you overwrite data that has yet to be read, and corrupt everything in the process. The only tool I know that does it (more or less) without shifting data is blocks --lvmify, which achieves it by converting the existing filesystem partition to LVM. With LVM, you can logically expand to the right while it's physically stored on the left. Without LVM, you could also set up a linear device mapping manually, but then you are stuck with a non-standard solution. The most sensible approach to this type of problem (if you don't want to use gparted) would be to backup all data, then make new partitions and filesystems in any layout you like, and then restore your data. If this dd image is your approach to a backup solution, consider backing up files instead. 
Disk images can be hard to handle, especially if you want to transform them afterwards. If your main goal is reduce the storage requirement of the image file, what you could do is fstrim (for loop mounted filesystem - losing all free space), or blkdiscard (for loop swap partition - losing all data). Provided the filesystem that stores the image supports sparse files and hole punching, it would make the dd image use less storage space w/o changing any layout, as any free space within the image would also be freed for the backing filesystem. Similarly, this is dangerous, if you discard the wrong parts of the image file, the image file is irrecoverably damaged. The simple act of creating a loop device for an image file, and mounting it, already modifies/damages the image file. If the source disk is SSD, and it's already using fstrim regularly, and reads trimmed areas as binary ZERO, you can create an already sparse dd image in the first place using dd conv=sparse if=/dev/ssd of=ssd.img. This way any binary zero area would not take up space in the ssd.img file. Note that conv=sparse can lead to corrupt results when used in the other direction when restoring to a non-zero target drive.
Move filesystem to the left after expanding partition to the left
1,568,027,654,000
According to this Seagate presentation there are some ongoing (?) efforts aimed at modifying the ext4 file system, introducing SMRFS-EXT4 - support for host-managed HDDs. The goal is to provide a layer that will hide the specifics of ZAC commands from applications (I believe). There is also this document that claims that "As of kernel v 4.7... hm drives are exposed as SG node - No block device file". What does that mean? Maybe these documents are outdated and ext4 (or another common Linux file system) has added support for host-aware HDDs. Which Linux distros support host-managed HDDs at the file system level? If such support exists, what steps are needed to get a host-managed HDD up and running without changes to applications (where the file system hides all specifics)? General applications like DBs are my concern - not log-style ones. Also there is this video (SDC2020: Improve Distributed Storage System TCO with SMR HDDs) that claims that starting from the 4.10 Linux kernel, f2fs supports such drives already - have you used that approach? Maybe f2fs is not the best match for random operations, but I hope f2fs can fulfill such tasks with acceptable performance (where reading is dominant).
"As of kernel v 4.7... Host managed drives are exposed as SG node - No block device file". What does it mean? You'll get only the /dev/sgX SCSI generic device; it's a character device which allows you to send SCSI commands to the drive. I'm not sure what the correct use case is when only the SG node exists -- the solutions mentioned below require the block device node to be present to work. I wasn't able to find any information about progress in zoned device support in ext4. f2fs claims to support it; calling mkfs.f2fs with -m should be all you need, but I have no personal experience with that. You can solve the zone "problem" on the block level with Device Mapper and the dm-zoned target, basically creating a "normal" block device on top of the drive that can be used by all filesystems, because for them it's just a regular block device. Looks like the only major distribution that packages the userspace dm-zoned tools is SUSE; kernel support in various distributions is summarized here.
What linux distro support SMR HDD by file system?
1,568,027,654,000
I noticed poor I/O utilization when rsync'ing from an external HDD (connected with USB 3.0) to a RAID6 (4 HDDs) with ext4. iostat shows that reading from the USB HDD happens for most part at 110 MB/s (that's in line with specs). iostat also shows that for about 50% of the time, nothing is written to the RAID. At some point writing to the RAID starts and soon after reading from the USB HDD stops (0 MB/s). This goes on for a few seconds, then reading from the USB resumes and writing to the RAID stops. It seems like a write cache is blocking. How do I debug this issue? System is Ubuntu 18.04, kernel 4.15.0-136-generic
It turned out that the system was configured to minimise disk writes through a large write cache. The settings vm.dirty_ratio (% of cache that has to fill up before a blocking write-out happens) and vm.dirty_background_ratio (% of cache at which a non-blocking write-out starts) were set to 90 which queues data until it is either flushed or 90% of available memory is full. Setting vm.dirty_background_ratio to 1 solved the issue.
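A sketch of how such a tuning can be inspected and applied (the sysctl names are real kernel knobs; the chosen value follows the answer, and the sysctl.d file name is an example):

```shell
# Show the current writeback thresholds
cat /proc/sys/vm/dirty_ratio             # % of memory before writers block
cat /proc/sys/vm/dirty_background_ratio  # % at which background flush starts

# Apply the fix from the answer (needs root), and persist it via sysctl.d:
#   sysctl -w vm.dirty_background_ratio=1
#   echo 'vm.dirty_background_ratio = 1' > /etc/sysctl.d/99-writeback.conf
```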
Poor I/O utilisation with rsync, RAID6 and ext4
1,568,027,654,000
In trying to understand why kworker flush uses 99% I/O and hangs file writes on the machine, I've disabled journalling on the ext4 data partitions using: tune2fs -O ^has_journal /dev/sdg1 After a reboot, automatic mounting of the partitions via /etc/fstab entries failed: # mount /mnt/das.f mount: /mnt/das.f: wrong fs type, bad option, bad superblock on /dev/sdf1, missing codepage or helper program, or other error. Odd. Amongst our machines, the older were formatted as ext3 (long ago) and the newer ones have ext4. Because of this mixture, the common admin scripts use parted to automatically create /etc/fstab based on the actual partition types present. However, after removing the journal on the ext4 partitions, parted reports it as ext2. Other tools still report it as ext4. Which is correct? Does removing the journal "transform" the filesystem from ext4 to ext2, or is this a bug in parted and file? # parted /dev/sdc1 p Model: Unknown (unknown) Disk /dev/sdc1: 7580GB Sector size (logical/physical): 512B/512B Partition Table: loop Disk Flags: Number Start End Size File system Flags 1 0.00B 7580GB 7580GB ext2 # file -sL /dev/sdc1 /dev/sdc1: Linux rev 1.0 ext2 filesystem data, UUID=8fde102f-1047-4b3b-83f9-43c40face046 (extents) (large files) # blkid /dev/sdc1 /dev/sdc1: UUID="8fde102f-1047-4b3b-83f9-43c40face046" TYPE="ext4" PARTUUID="8935788a-939d-4d2c-8495-dc38afc47164" (The volume mounts ok if /etc/fstab is manually changed to ext4...) Machine: Centos 8.1, 4.18.0-147.el8.x86_64
Looking at the code, there is a difference in how libparted and libblkid detect the ext version. The version is not written in the superblock, and both tools use supported features to distinguish between versions. For ext3 without a journal, both tools will report ext2, which makes sense because the difference between those two is basically only the journal support. For ext4, libblkid checks for ext4-specific features like large file or file type support, and if these are present it will report the device as ext4. Libparted does similar checks, but only if the journal is present, so it will report every ext filesystem without a journal as ext2. I'd say this is a bug, but I guess it depends. The libblkid ext superblock scanning code is available here; notice that probe_ext3 checks for journal support but probe_ext4 does not. The libparted ext code is available here; in _ext2_generic_probe it checks for ext4 only when the ext3 check passes, so it will never try to detect ext4 on devices without a journal.
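The behaviour is easy to reproduce on a file-backed image, with no real device needed. A sketch assuming e2fsprogs and util-linux are installed (runs as a normal user):

```shell
# Create a small ext4 image, strip its journal, and see what blkid says
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=64 status=none
mkfs.ext4 -q -F "$img"
tune2fs -O ^has_journal "$img" >/dev/null
fstype=$(blkid -o value -s TYPE "$img")
echo "$fstype"   # still ext4: the extent/large-file features give it away
rm -f "$img"
```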
Inconsistent filesystem type reported after disabling journalling
1,568,027,654,000
A passwords file was in use by KeePassXC. After a restart, the file is gone. Normal operation is: the file is open, and when the system gets rebooted, KeePassXC closes it safely (so far). KeePassXC always autosaves; I've rarely seen "Save" available from the menu. Could KeePassXC have mishandled the file so badly that it just disappeared? Is there some other possibility? KeePass has never screwed up a file before now.
Unix, unlike Windows, allows files to be deleted while they are open and in use by an application. It is even more likely that KeePass has just read the file into memory and is not holding the file open at all: because the on-disk data is encrypted, the contents are decrypted into memory and the file is then closed. There may be any number of different reasons why the password file was deleted, independent of what the application was doing.
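The delete-while-open behaviour is easy to demonstrate in a shell. A self-contained sketch using a temp file:

```shell
# A Unix file's data survives unlinking for as long as someone holds it open
tmp=$(mktemp)
echo "secret data" > "$tmp"
exec 3< "$tmp"            # hold a read descriptor open
rm "$tmp"                 # the directory entry is gone...
[ -e "$tmp" ] || echo "name is gone"
IFS= read -r line <&3     # ...but the open descriptor still sees the bytes
echo "$line"              # prints: secret data
exec 3<&-
```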
Can a file disappear from an ext4 partition if an application was using it?
1,568,027,654,000
I heard a group discussing which file system (btrfs vs ext4, something like that) to use in Linux on computers with little disk space (like a 32GB IdeaPad or notebook). Does the filesystem choice really affect the space used by the same files? I mean, can we have more available disk space, for exactly the same data, by choosing a different file system?
Yes, it can make a lot of difference... Usually it makes the most difference on file systems with a lot of smaller files. So it may not make a difference to your video collection (mostly GB files) or even your music collection (mostly MB files). But a file system filled with many files of only a few KB each will definitely see a difference.

There are some differences in the metadata required per file. Here metadata means everything that's not contained in the file data, such as the file's name, its permissions, timestamps, and custom file system properties. It really depends on the features of the file system and sometimes the way it's configured, but this data can be bytes to kilobytes per file, irrespective of file size.

Files are not stored as individual bytes but as blocks of bytes. On ext4 a block is by default 4 KiB for filesystems over 512 MB. So a file that's exactly 4096 bytes will take up exactly 4096 bytes on disk, but so will a file that's 4095 bytes or even just 1 byte. A file that's 4097 bytes will again take up 8192. This is known as padding. It is possible to format many file systems with custom (smaller) block sizes. This can reduce the padding, but there can also be side effects. Some modern hard drives perform badly with smaller blocks. Theoretically it could actually reduce capacity, as more space needs to be used to track which blocks are allocated, though I've never seen this happen myself.

Now not all file systems will waste the block in padding. Some filesystems, including btrfs, will allocate more than one file to a block. See block suballocation.

Then there's a more obvious feature. Some file systems can transparently compress the contents of files. There's no guarantee that this will successfully compress file contents, but it can be very successful. An example file system here is zfs. See enabling compression in zfs.
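The block-padding effect is easy to see with GNU du, which can report both the logical and the allocated size. A sketch (the allocated size depends on your filesystem's block size):

```shell
# A 1-byte file still occupies a whole filesystem block on ext4-like systems
f=$(mktemp)
printf x > "$f"
apparent=$(du --apparent-size -B1 "$f" | cut -f1)  # logical size: 1 byte
ondisk=$(du -B1 "$f" | cut -f1)                    # allocated size: usually one block
echo "apparent=$apparent bytes, on disk=$ondisk bytes"
rm -f "$f"
```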
Do file systems affect the available storage space?
1,568,027,654,000
I am using a NetGear ReadyNAS machine as a NAS for our server. The server is a Linux CentOS 6.6 machine run using Rocks cluster, with all our users' home directories located on the NAS. My understanding is that the home directories are automounted to /home when a user logs on. Recently we have been facing the infamous, intermittent 'no space left on device' error while our drive is nowhere near full. It is not a case of full virtual memory either. Yet, the issue usually gets resolved (temporarily) after deleting or compressing some files. I'd like to check if my inodes are full, but for some reason the share where our user directories are located does not report inode information and shows only 0's. Could someone please explain why this is the case, and how I can check the inodes on this share of my NAS? The NAS is an NFS file system in a RAID 10 configuration, while my Linux cluster uses ext4. Below is the output of df -h performed on our master node:

Filesystem Size Used Avail Use% Mounted on
/dev/sda2 20G 16G 2.5G 87% /
tmpfs 7.9G 12K 7.9G 1% /dev/shm
/dev/sda1 190M 103M 78M 57% /boot
/dev/sda6 4.7G 12M 4.5G 1% /tmp
/dev/sda3 12G 2.0G 9.0G 18% /var
tmpfs 3.9G 63M 3.8G 2% /var/lib/ganglia/rrds
nas-0-1:/nas/nas-home/user1 15T 8.4T 6.3T 58% /home/user1
nas-0-1:/nas/nas-home/user2 15T 8.4T 6.3T 58% /home/user2

and df -i:

Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda2 1281120 365426 915694 29% /
tmpfs 2057769 4 2057765 1% /dev/shm
/dev/sda1 51200 50 51150 1% /boot
/dev/sda6 320000 797 319203 1% /tmp
/dev/sda3 768544 20175 748369 3% /var
tmpfs 2057769 596 2057173 1% /var/lib/ganglia/rrds
nas-0-1:/nas/nas-home/user1 0 0 0 - /home/user1
nas-0-1:/nas/nas-home/user2 0 0 0 - /home/user2

Now if I ssh into the NAS itself and repeat, here is the output of df -h performed on the NAS:

Filesystem Size Used Avail Use% Mounted on
udev 10M 4.0K 10M 1% /dev
/dev/md0 4.0G 578M 3.1G 16% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 2.0G 5.9M 2.0G 1% /run
tmpfs 978M 1.5M 977M 1% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/md127 15T 8.4T 6.3T 58% /nas
/dev/md127 15T 8.4T 6.3T 58% /home
/dev/md127 15T 8.4T 6.3T 58% /apps
/dev/md127 15T 8.4T 6.3T 58% /var/ftp/nas-home

and df -i performed on the NAS:

Filesystem Inodes IUsed IFree IUse% Mounted on
udev 499834 446 499388 1% /dev
/dev/md0 0 0 0 - /
tmpfs 500472 1 500471 1% /dev/shm
tmpfs 500472 593 499879 1% /run
tmpfs 500472 22 500450 1% /run/lock
tmpfs 500472 15 500457 1% /sys/fs/cgroup
/dev/md127 0 0 0 - /nas
/dev/md127 0 0 0 - /home
/dev/md127 0 0 0 - /apps
/dev/md127 0 0 0 - /var/ftp/nas-home

The share on my NAS in question is /nas; why is it shown to contain 0 inodes? Thank you in advance for any help you can offer. This problem has been driving me nuts and hindering our work.
The NAS is probably using a filesystem which doesn't use static inode tables. The most notable modern examples of such filesystems are BTRFS and ZFS, but most newer filesystems use dynamic inode allocation, and many (including BTRFS) have opted to just not report anything for inode usage because it just doesn't matter (since running out of inodes means you're out of space on the filesystem itself, so you couldn't create a new file regardless).
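You can see what the kernel actually reports for any mounted filesystem with GNU stat's filesystem mode; on filesystems with dynamic inodes, the totals may legitimately come back as 0, just like in your df -i output (a sketch against the current directory):

```shell
# f_files/f_ffree from statfs(2) are what df -i prints; a filesystem is
# free to report 0 here when inode counts are meaningless for it
total=$(stat -f -c '%c' .)   # total file nodes in the filesystem
free=$(stat -f -c '%d' .)    # free file nodes
echo "inodes total=$total free=$free"
```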
Why is mounted nas showing 0 inodes on users partition?
1,568,027,654,000
Say I've got two dual-boot Linux systems on the same computer. Both share the same /home mount point. Amy, the only user on system 1, has a UID of 1000. She stored some files in /home/amy. Bill, the only user on system 2, also has a UID of 1000. Can Bill access /home/amy without any restrictions? Also, is this situation even worse on a portable HDD formatted to ext4?
Yes. Bill will access Amy's files with no restrictions. Unix security is based on UID, not user names. If your HDD ext4 partition contains sensitive data, you may want to encrypt it as any root user on a foreign machine may access it anyway.
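That ownership is stored numerically can be seen with ls -n, which skips the name lookup entirely (a quick self-contained sketch using GNU coreutils):

```shell
# Ownership on disk is just a number; user names are looked up at display time
f=$(mktemp)
ls -n "$f"                 # prints the raw UID/GID instead of names
uid=$(stat -c %u "$f")     # GNU stat: numeric owner
echo "this file belongs to UID $uid, whatever name that maps to on a given system"
rm -f "$f"
```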
Data accessible from other systems with the same UID?
1,480,039,976,000
I recently used Clonezilla to move from a very old HDD (160GB) to a new SSD (480GB). Clonezilla did a fine job, but it left a lot of empty space unused. I made an attempt to extend the primary OS partition using GParted, but it didn't work. At this moment I don't have physical access to the server, but I can work on it remotely (SSH). How would I go about increasing the primary partition size? This is how the partition table looks at this moment:

Disk /dev/sda: 480.1 GB, 480103981056 bytes
255 heads, 63 sectors/track, 58369 cylinders, total 937703088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000cd8c5

Device Boot Start End Blocks Id System
/dev/sda1 * 2048 279676927 139837440 83 Linux
/dev/sda2 279678974 312578047 16449537 5 Extended
Partition 2 does not start on physical sector boundary.
/dev/sda3 312578048 937701375 312561664 83 Linux
/dev/sda5 279678976 312578047 16449536 82 Linux swap / Solaris

df | grep -v tmpfs

Filesystem 1K-blocks Used Available Use% Mounted on
udev 8141076 12 8141064 1% /dev
/dev/sda1 137512016 80994792 49508968 63% /
none 4 0 4 0% /sys/fs/cgroup
none 5120 0 5120 0% /run/lock
none 8151916 144 8151772 1% /run/shm
none 102400 32 102368 1% /run/user

Server running Ubuntu 14.04 LTS.
Your problem is the swap partition in the middle of the drive. The good news is you have enough free space to make a swap file, so you don't have to go swapless (as root):

dd if=/dev/zero of=/swap bs=1M count=8192
chmod 0000 /swap
mkswap /swap
swapon /swap
swapoff /dev/sda5

Now you can use the fdisk interface to reset the partition table and create a first partition, assigning blocks 2048 up to the rest of the disk to it. Commit these changes to disk. This should also reload the partition table. Check that blockdev --getsize64 /dev/sda1 returns the expected value, and then online-resize the root filesystem with resize2fs /dev/sda1.
Resizing (increasing) primary OS partition
1,480,039,976,000
I run Ubuntu 16 Desktop as the host, and in VirtualBox I run Ubuntu 16 Server as a guest which uses a raw partition on a different disk from the one used by the host. I am searching for a solution that will allow me safe read-write access to the guest's FS (or at least to some directory on the guest partition!). I'd like to know about each possibility, even if it sacrifices some ext4 features (security/performance) and results in an actually unsafe FS on the guest side. I am not experienced in the Unix environment, but I guess that it is achievable through proper mounting configuration for the host partition (from fstab) and proper root mounting on the guest side. I have tried mounting on both sides with the "defaults" option, but when I create a file from the host it does not show up on the guest FS, although it is read-write accessible from the host! When a file is edited, the change is not actually reflected on the guest.
Don't do this... If two operating systems try to access the same raw block device at the same time, then you should expect to see data corruption. Even if one of them is read-only, that read-only instance will cache data (e.g. directory contents, file contents) and won't know that the underlying data blocks have changed. At best this may result in perceived corruption inside the OS; at worst this may cause the OS to treat the filesystem as bad. If both OSes have write access to the device, then the worst-case scenario is that you can expect the filesystem itself to be corrupted. (There are some filesystems that allow multi-server access, but they are not common.) Instead you should have one OS access the block device and then NFS-export it to the other OS, which can then mount the filesystem over the network.
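For instance, a minimal /etc/exports entry on the OS that owns the disk might look like this (the path and client address are hypothetical; see exports(5) for the options):

```
# /etc/exports on the OS that owns the block device
/srv/shared  192.168.56.1(rw,sync,no_subtree_check)
```

After editing /etc/exports, run exportfs -ra on the server, and the other OS can then mount the share over the network in the usual way.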
How to get read-write access (safe) to ext4 filesystem used by second OS running from virtualbox
1,480,039,976,000
The netrw plugin in Vim allows one to see directory files. For example, here I start Vim with vim .: " ============================================================================ " Netrw Directory Listing (netrw v155) " /root/vim/code/files " Sorted by name " Sort sequence: [\/]$,\<core\%(\.\d\+\)\=\>,\.h$,\.c$,\.cpp$,\~\=\*$,*,\.o$,\.obj$,\.info$,\.swp$,\.bak$,\~$ " Quick Help: <F1>:help -:go up dir D:delete R:rename s:sort-by x:special " ============================================================================== ../ ./ letters/ mvc/ .chapters a.txt b.txt mvc_paths.vim Is it really possible for Vim to see the content of directory files? For example cat ., less ., hexdump ., etc. all fail with an "Is a directory" error message. Or does the netrw plugin simply list the content of the directory and thus give the impression that an actual directory file is opened?
Reading a directory does not really happen as with a regular file. It happens with the readdir system call. See man 2 readdir for the old implementation as a pure system call and man 3 readdir for the library wrapper (please, don't use the old implementation). Yet, Vim's netrw does not perform anything like that. It simply calls ls or performs globbing (read below to understand when it does one and when the other) and parses the output of that. Configuring netrw, you have options for how it calls ls on remote systems. You can set the listing command in your vimrc for SSH and FTP connections as follows (these are the defaults):

let g:netrw_list_cmd = 'ssh HOSTNAME ls -Fa'
let g:netrw_ftp_list_cmd = 'ls -lF'

(You can even set that to something different from ls for FTP systems that do not have ls; yes, there are some, rare ones, that don't have it.) For local listings netrw performs globbing and then calls getftype() to decorate the file (/ for directories, @ for links, etc.). In autoload/netrw.vim, in the s:LocalListing() procedure, the following is performed:

let dirname = b:netrw_curdir
let dirnamelen = strlen(b:netrw_curdir)
let filelist = s:NetrwGlob(dirname,"*",0)             " here is the globbing of `*`
let filelist = filelist + s:NetrwGlob(dirname,".*",0) " and here `.*`

And then getftype() is called on every file in filelist. All in all, netrw relies on the fact that Vim has the glob() function, and Vim in turn performs a glob call (man 3 glob).
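The distinction the answer describes — a directory is listable but not readable as a byte stream — can be seen from any shell. A self-contained sketch:

```shell
# A directory can be globbed/listed, but not read like a file
export LC_ALL=C
d=$(mktemp -d)
touch "$d/a.txt" "$d/.hidden"
cat "$d" 2>/dev/null || echo "cat refused: it is a directory"
entries=$(ls -A "$d" | sort | tr '\n' ' ')
echo "listing: $entries"
rm -rf "$d"
```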
Does Vim netrw plugin actually display the content of directory file?
1,480,039,976,000
I have Win XP on the first primary partition. I am leaving another two primaries for a possible future Windows install. I am planning my logical partitions to install Linux Mint. I read that a separate /home gives re-usability across future Linuxes. I thought it might be clever to keep /home as NTFS, to smoothly share data across Windows and Linux too. I am anxious, though, as most of the forums declare NTFS inferior to ext4. But some forums said that the performance of a file system is not driven so much by its structure as by the kernel driver and its read and write algorithms. Questions: If Linux has good algorithms (as implemented for ext4), won't it perform the same on NTFS too? Maybe the inferiority of NTFS only shows on Windows, but the same NTFS performs as well as ext4 on Linux? If not, is the performance low enough to stay away from NTFS and think of other ways to share data across Windows and Linux? Do ways exist to read ext4 from Win XP or 7? Any other recommendations from practical experience are welcome.
The Linux implementation of NTFS is not very good. There is some write support, but it is slow. This is due to the fact that the best NTFS-for-Linux implementation, NTFS-3G, is a FUSE filesystem, where every filesystem call gets redirected to a userspace program, a strategy which carries with it a severe performance penalty. Apart from that, the NTFS filesystem is written to implement Windows security principles, rather than Linux ones. As a result, mapping Linux usernames and groups to Windows filesystem security properties is going to be complicated at best. You don't want to have that issue on a home directory. In all, NTFS-3G is useful as a way to share data between Windows and Linux, but beyond that I wouldn't use it. If you want to share your Linux home directory with a Windows operating system, rather than trying to use NTFS as your home directory's file system, it's better to install something like ext2fsd on your Windows machine, which supports reading the ext2, ext3 and ext4 file systems from Windows. Combined with NTFS-3G, this should allow you to easily share data between Windows and Linux without ever having to reboot to get at data from your other operating system. On a side note, if you're still on Windows XP, you should stop using it and upgrade now. Windows XP has not been receiving security updates in over a year, which means your XP machine is probably part of a botnet that is spamming me (and everyone else in the world) now. Additionally, Windows XP does not support several SSL algorithms that are going to be very much required going forward (e.g., the SHA-2 set of hashing algorithms) if you want to still be able to use SSL-enabled websites.
Choosing File Format of /home between NTFS and Ext4, Understanding Trade Offs in Performance vs Data sharing with WinXP in Dual boot [closed]
1,480,039,976,000
[nathanb /mnt/work] sudo du -hs . 23G . [nathanb /mnt/work] df -h . Filesystem Size Used Avail Use% Mounted on /dev/sdb1 40G 38G 6.4M 100% /mnt/work Where is the other 15 GB? /dev/sdb1 on /mnt/work type ext4 (rw,nosuid,nodev,relatime,data=ordered) Updating to respond to comments [nathanb /mnt/work] sudo tune2fs -l /dev/sdb1 tune2fs 1.42.5 (29-Jul-2012) Last mounted on: /mnt/work Inode count: 2621440 Block count: 10485752 Reserved block count: 524287 Free blocks: 3955615 Free inodes: 2522921 First block: 0 Block size: 4096 Fragment size: 4096 And [nathanb /mnt/work] df -i . Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sdb1 2621440 29764 2591676 2% /mnt/work And [nathanb /mnt/work] sudo fsck -n /dev/sdb1 fsck from util-linux 2.20.1 e2fsck 1.42.5 (29-Jul-2012) Warning! /dev/sdb1 is mounted. Warning: skipping journal recovery because doing a read-only filesystem check. /dev/sdb1: clean, 98519/2621440 files, 6530137/10485752 blocks And [nathanb /mnt/work] sudo lsof | grep deleted [nathanb /mnt/work] There are no mount points below /mnt/work [nathanb /mnt/work] grep /mnt/work /proc/self/mountinfo 22 19 8:17 / /mnt/work rw,nosuid,nodev,relatime - ext4 /dev/sdb1 rw,data=ordered Well, of all the things...seems to be working again. And just like I have no idea what caused the problem, I have no idea what fixed it. [nathanb /mnt/work] df -h . Filesystem Size Used Avail Use% Mounted on /dev/sdb1 40G 22G 16G 59% /mnt/work I had unmounted a couple of the NFS clients hitting up the volume in preparation for umounting and fscking it, but I hadn't unmounted all of them...and I checked right after the unmount and the space hadn't gone down. But then I got back from doing some other work and noticed it was unwedged. Annoying and unfulfilling...wish I knew what the problem had been so I could award some points to some folks...thanks for all the help, though, and if it happens again I'll try to get more forensics.
Given that the filesystem was exported via NFS, there’s a fair chance that the discrepancy was due to deleted files... If files are deleted while open on NFS clients, lsof on the server won’t see them because there is no /proc/.../fd entry corresponding to them; but they will still occupy disk space as seen by df. Diagnosing this requires running lsof with the -N option on every client. (This doesn’t explain the delay you saw in recovering the space after unmounting the volume from the clients, but it’s the best explanation I can think of for the rest of the symptoms.)
15 GB of unaccounted-for space in filesystem
1,480,039,976,000
I have a problem and I'd appreciate it if anyone can help me.

1: fdisk -l:

Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000d89a5

Device Boot Start End Blocks Id System
/dev/sda1 88086528 625141759 268527616 5 Extended
/dev/sda2 * 2048 80273407 40135680 83 Linux
/dev/sda4 80273408 88086527 3906560 82 Linux swap / Solaris

Partition table entries are not in disk order

2: df -h:

Filesystem Size Used Avail Use% Mounted on
rootfs 38G 35G 1.1G 98% /
udev 10M 0 10M 0% /dev
tmpfs 397M 968K 396M 1% /run
/dev/disk/by-uuid/bcc39c18-9057-488c-a281-68377e15ce7f 38G 35G 1.1G 98% /
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.6G 1.4M 1.6G 1% /run/shm

3: mount:

sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=505836,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=405884k,mode=755)
/dev/disk/by-uuid/bcc39c18-9057-488c-a281-68377e15ce7f on / type ext4 (rw,relatime,errors=remount-ro,user_xattr,barrier=1,data=ordered)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=1593060k)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)

I've read the post "How can I expand ext4 partition size on debian" in which Stéphane Chazelas came up with a good solution, by using fdisk -u /dev/sda.
But as you can see, the swap partition is placed exactly after the Linux partition (/dev/sda2), which means I can't extend sda2 by adding more space at the END because it would overlap with swap. Now, is it possible to extend sda2 into the swap's space and then START the swap exactly after the new END of sda2? In other words, the swap would be moved forward so that sda2 can grow as much as needed, with the swap starting exactly after it. If this is completely wrong, would anyone please help me? Thanks
First, back everything up, as you should always do when faffing about with partitions. Turn off the swap with swapoff /path/to/swap_partition (optional), then boot up a GParted LiveCD or other live distro with GParted. Remove the swap partition, extend your sda2 partition as desired, and create a new swap partition in the remaining space if desired.
How to increase the size of a Linux partition (EXT4) without losing data when the swap partition is exactly after it?
1,480,039,976,000
How can I check and mark bad blocks on Linux startup? There are a number of options available, and most of them require physical access to the hardware and running live CDs. But I would like to avoid that and perform everything from the command line, without going to the server room and talking to the admins who have the keys. I have Arch Linux with / on ext4.
If your init scripts support it, you can add -c -c to /fsckoptions (then create /forcefsck and reboot). Unfortunately, this feature isn't available everywhere. You probably have to reboot to a LiveCD/LiveUSB instead. See: Perform Bad Blocks Scan on Root Partition in Linux
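Another command-line-only angle is tune2fs: lowering the maximum mount count forces an fsck at the next boot, and combined with the -c fsck option above, that check will also run a badblocks scan. A sketch demonstrated on a throwaway file-backed image so nothing real is touched (assumes e2fsprogs is installed):

```shell
# Force a check on the next mount by dropping the max mount count to 1
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 status=none
mkfs.ext4 -q -F "$img"
tune2fs -c 1 "$img" >/dev/null
maxcount=$(tune2fs -l "$img" | awk -F: '/Maximum mount count/ {gsub(/ /, "", $2); print $2}')
echo "max mount count: $maxcount"
rm -f "$img"
```

On the real system you would point tune2fs at the root device instead of an image, then reboot.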
archlinux check disks on boot
1,480,039,976,000
I'm using the ext4 filesystem on an SSD. When mounting the SSD, can I use the flag "barrier=0"? I know this flag is safe with a BBU (battery backup unit), but is it safe for an SSD?
You can use it, but it's only as safe as it would be on rotating rust: just because an SSD has no moving parts doesn't make it invulnerable to a power outage. A small caveat: depending on the model, it's possible the SSD has enough capacitors to finish transferring any cached data to non-volatile storage, but this is never guaranteed. Of course, if the SSD has no cache, all writes are synchronous, so barriers have no meaning.
Could I set flag "barrier=0" when using SSD?
1,480,039,976,000
I know that it isn't possible to change the inode count of an ext filesystem after its creation, but I haven't been able to find any explanation of why it isn't. Can anyone enlighten me?
Why? Because no one has written a tool that does it. And that's probably because it's a not entirely trivial change to the filesystem metadata. There are other issues like this; for example you can't resize ext4 to >16TB. That needs 64bit structures which aren't used by default. Same with other filesystems, for example you can't shrink XFS. None of these things are impossible, but it seems that no tools exist to do it either, at least not directly. Someone would have to develop them... and that usually requires in depth knowledge of the specific filesystem.
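The count is chosen at mkfs time and can only be set then; mkfs.ext4's -N flag makes that explicit. A sketch on a file-backed image (mkfs may round the requested number up to fit the block-group layout; assumes e2fsprogs):

```shell
# The inode table is sized once, at format time
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=16 status=none
mkfs.ext4 -q -F -N 4096 "$img"
inodes=$(tune2fs -l "$img" | awk -F: '/^Inode count/ {gsub(/ /, "", $2); print $2}')
echo "inode count: $inodes"   # fixed for the life of the filesystem
rm -f "$img"
```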
Why is it impossible to change the inode count of an ext filesystem?
1,480,039,976,000
I'm recovering a PV on an mdadm RAID 1 with a single VG containing several LVs. The underlying devices have several bad sectors (one just a few, the other really many) and a silly typo made it necessary to restore the LVM configuration by grepping through the devices. Luckily I found it, and the restored configuration looks like the original one. The only problem is that the logical volumes have no valid file system. With e2sl I found that one of the superblocks of my target fs is in the wrong logical volume. Sadly I have no idea how to correct or circumvent this issue.

root@rescue ~/e2sl # ./ext2-superblock -d /dev/vg0/tmp | grep 131072000
Found: block 20711426 (cyl 1369, head 192, sector 50), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)

root@rescue ~/e2sl # ./ext2-superblock -d /dev/vg0/home | grep 131072000
Found: block 2048 (cyl 0, head 32, sector 32), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 526336 (cyl 34, head 194, sector 34), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 1050624 (cyl 69, head 116, sector 36), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 1574912 (cyl 104, head 38, sector 38), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 2099200 (cyl 138, head 200, sector 40), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 6293504 (cyl 416, head 56, sector 56), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 6817792 (cyl 450, head 218, sector 58), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 12584960 (cyl 832, head 81, sector 17), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 20973568 (cyl 1387, head 33, sector 49), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 32507904 (cyl 2149, head 238, sector 30), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 63440896 (cyl 4195, head 198, sector 22), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
Found: block 89655296 (cyl 5929, head 139, sector 59), 131072000 blocks, 129988776 free blocks, 4096 block size, (null)
^C

I'm feeling just an inch away from accessing my filesystem(s) again to recover some non-backed-up data.

LVM configuration:

root@rescue ~ # pvs
PV VG Fmt Attr PSize PFree
/dev/md1 vg0 lvm2 a-- 2.71t 767.52g

root@rescue ~ # vgs
VG #PV #LV #SN Attr VSize VFree
vg0 1 5 0 wz--n- 2.71t 767.52g

root@rescue ~ # lvs
LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert
backup vg0 -wi-a--- 500.00g
container vg0 -wi-a--- 500.00g
home vg0 -wi-a--- 500.00g
root vg0 -wi-a--- 500.00g
tmp vg0 -wi-a--- 10.00g

VG configuration:

# Generated by LVM2 version 2.02.95(2) (2012-03-06): Sun Oct 13 23:56:33 2013
contents = "Text Format Volume Group"
version = 1
description = "Created *after* executing 'vgs'"
creation_host = "rescue" # Linux rescue 3.10.12 #29 SMP Mon Sep 23 13:18:39 CEST 2013 x86_64
creation_time = 1381701393 # Sun Oct 13 23:56:33 2013

vg0 {
    id = "7p0Aiw-pBpd-rn6Y-geFb-jyZe-gide-Anc9ag"
    seqno = 19
    format = "lvm2" # informational
    status = ["RESIZEABLE", "READ", "WRITE"]
    flags = []
    extent_size = 8192 # 4 Megabytes
    max_lv = 0
    max_pv = 0
    metadata_copies = 0

    physical_volumes {
        pv0 {
            id = "GBIwI4-AxBa-6faf-aLfB-UZiP-iSS9-FaOrhH"
            device = "/dev/md1" # Hint only
            status = ["ALLOCATABLE"]
            flags = []
            dev_size = 5824875134 # 2.71242 Terabytes
            pe_start = 384
            pe_count = 711044 # 2.71242 Terabytes
        }
    }

    logical_volumes {
        root {
            id = "1e3gvq-IJnX-Aimz-ziiY-zucE-soCO-YU2ayp"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            segment_count = 1
            segment1 {
                start_extent = 0
                extent_count = 128000 # 500 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [ "pv0", 0 ]
            }
        }
        tmp {
            id = "px8JAy-JnkP-Amry-uHtf-lCUB-rfdx-Z8y11y"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            segment_count = 1
            segment1 {
                start_extent = 0
                extent_count = 2560 # 10 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [ "pv0", 128000 ]
            }
        }
        home {
            id = "e0AZbd-22Ss-RLrF-TgvF-CSDN-Nw6w-Gj7dal"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            segment_count = 1
            segment1 {
                start_extent = 0
                extent_count = 128000 # 500 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [ "pv0", 130560 ]
            }
        }
        backup {
            id = "ZXNcbK-gYKj-LJfm-f193-Ozsi-Rm3Y-kZL37c"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_host = "new.bountin.net"
            creation_time = 1341852222 # 2012-07-09 18:43:42 +0200
            segment_count = 1
            segment1 {
                start_extent = 0
                extent_count = 128000 # 500 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [ "pv0", 258560 ]
            }
        }
        container {
            id = "X9wheh-3ADB-Fiau-j7SR-pcH9-hXne-K2NVAc"
            status = ["READ", "WRITE", "VISIBLE"]
            flags = []
            creation_host = "new.bountin.net"
            creation_time = 1341852988 # 2012-07-09 18:56:28 +0200
            segment_count = 1
            segment1 {
                start_extent = 0
                extent_count = 128000 # 500 Gigabytes
                type = "striped"
                stripe_count = 1 # linear
                stripes = [ "pv0", 386560 ]
            }
        }
    }
}
For anyone who has a similar problem: I used e2sl [1] to find candidates for the filesystems directly on one of the RAID devices, and mounted the filesystem using a loop device [2], skipping LVM and the software RAID. I had to fiddle a bit with the offset (the superblock position has an offset of 1 KB from the start of the partition!) but in the end I managed to do it. From there the rescue was easy as pie: mounting the loop device to a mount point, and everything was there to be copied. [1] http://schumann.cx/e2sl/ [2] mount --loop; see also losetup
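For the offset arithmetic: the primary ext superblock lives 1024 bytes past the start of the filesystem, so a hit at 4 KiB block B of the scanned device puts the filesystem start at B*4096 - 1024 bytes. A hedged sketch (the device names are placeholders; the losetup/mount steps require root):

```shell
# Turn an e2sl "Found: block N" hit into a loop-device offset
block=2048                          # e.g. the "Found: block 2048" hit above
offset=$((block * 4096 - 1024))     # superblock sits 1024 bytes into the fs
echo "offset: $offset"
# Then, as root (placeholders):
#   losetup --find --show --read-only --offset "$offset" /dev/mdX
#   mount -o ro /dev/loopN /mnt/rescue
```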
Restored VG has shifted volumes
1,480,039,976,000
I had an encrypted ext4 partition on a Samsung T7 1TB USB drive. It was LUKS (I believe this may be called a LUKS container?) The partition was 850GB. It had about 130GB in use. I also had an unencrypted 100GB NTFS partition and a very small amount of unallocated space. I tried to use KDE Partition Manager to shrink the encrypted partition to 550GB so that I could expand my 100GB NTFS partition by 300GB. I set up both operations and then clicked Apply. The operation began and was in progress for a couple of minutes (but the percentage did not progress). Then it reported that the operation had failed with an error. Stupidly, I did not save the log of the error, my reason being that as far as I could see, no details of, or reason for the error were provided and I made the assumption that no changes had been made. However, I can now no longer mount this encrypted 850GB ext4 partition, although it is visible in Dolphin and in KDE Partition Manager. When plugging the drive in, it correctly identifies it and gives me the password prompt, which is pre-filled with the password I have asked it to remember, so the system recognises the drive (I've tried re-entering this password, in case there is a problem there). The NTFS partition does not seem to be visible at all. In the log that I did not save from KDE Partition Manager, I believe it also showed progress as the operations were happening, and I believe the first operation was for it to initially shrink the ext4 partition by a small amount - by 0.04 GB, if I remember correctly. So let's say the size was initially 830.08 GB; I believe the first operation was to shrink it to 830.04 GB, which I believe it succeeded in doing. Please do not take these sizes or the 0.04 size to be the truth - it is only what I seem to remember. 
The error message in Dolphin is:

An error occurred while accessing '830.0 GiB Encrypted Drive', the system responded: The requested operation has failed: Error mounting /dev/dm-1 at /media/wesley/WG-T7-E: wrong fs type, bad option, bad superblock on /dev/mapper/luks-5c9cfaa5-0576-4b47-8e65-05f7d8b52d39, missing codepage or helper program, or other error.

In KDE Partition Manager I can see the two partitions and the unallocated space, with sizes and used space correctly reported (or near enough, based on what I know they were; i.e. they are not listed using the new sizes I had chosen).

Properties of the encrypted partition (/dev/sdb1) in KDE Partition Manager show:

Label: WG-T7-E (which is correct)
Mount point: (none found)
Partition type: primary
Status: idle
UUID: 9ffc3bef-5df8-4dd5-b4de-d2ff45aa6322
Partition Label: (none)
Partition UUID: 75E6E7E1-FA4F-0F40-BAB4-85F5F4A5BD30
Size: 830.04 GiB
Available: 84% - 699.21 GiB
Used: 16% - 130.83 GiB
First sector: 2,048
Last sector: 1,740,728,319
Number of sectors: 1,740,726,272
Flags: bios-grub and boot checkboxes shown, but neither checked.

Properties of the unallocated space in KDE Partition Manager show:

Label: (none)
Mount point: (none found)
Partition type: unallocated
Status: idle
Partition Label: (none)
Partition UUID: (none)
Size: 36.00 MiB
First sector: 1,740,728,320
Last sector: 1,740,802,047
Number of sectors: 73,728

Properties of the unencrypted partition (/dev/sdb2) in KDE Partition Manager show:

File system: ntfs
Label: WG-T7-U (which is correct)
Mount point: /media/wesley/WG-T7-U
Partition type: primary
Status: idle
UUID: 05DBF9124869C198
Partition Label: (none)
Partition UUID: 8CC612F8-30FA-6449-8FA2-754C82E8B0C3
Size: 101.43 GiB
Available: 99% - 101.37 GiB
Used: 1% - 67.61 MiB
First sector: 1,740,802,048
Last sector: 1,953,523,711
Number of sectors: 212,721,664
Flags: bios-grub and boot checkboxes shown, but neither checked.
dmesg after plugging in the USB drive and entering the password:

[ 6049.158336] usb 2-4: new SuperSpeed USB device number 8 using xhci_hcd
[ 6049.171380] usb 2-4: New USB device found, idVendor=04e8, idProduct=61fb, bcdDevice= 1.00
[ 6049.171394] usb 2-4: New USB device strings: Mfr=2, Product=3, SerialNumber=1
[ 6049.171400] usb 2-4: Product: PSSD T7 Shield
[ 6049.171405] usb 2-4: Manufacturer: Samsung
[ 6049.171409] usb 2-4: SerialNumber: S6YJNS0TA00012H
[ 6049.180592] scsi host2: uas
[ 6049.181476] scsi 2:0:0:0: Direct-Access Samsung PSSD T7 Shield 0 PQ: 0 ANSI: 6
[ 6049.182964] sd 2:0:0:0: Attached scsi generic sg2 type 0
[ 6049.183565] sd 2:0:0:0: [sdb] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
[ 6049.183710] sd 2:0:0:0: [sdb] Write Protect is off
[ 6049.183717] sd 2:0:0:0: [sdb] Mode Sense: 43 00 00 00
[ 6049.183951] sd 2:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
[ 6049.184282] sd 2:0:0:0: [sdb] Optimal transfer size 33553920 bytes
[ 6049.207124] sdb: sdb1 sdb2
[ 6049.208449] sd 2:0:0:0: [sdb] Attached SCSI disk
[ 6049.430150] audit: type=1107 audit(1708641903.652:169): pid=1274 uid=102 auid=4294967295 ses=4294967295 subj=unconfined msg='apparmor="DENIED" operation="dbus_signal" bus="system" path="/org/freedesktop/login1" interface="org.freedesktop.DBus.Properties" member="PropertiesChanged" name=":1.2" mask="receive" pid=2912 label="snap.firefox.firefox" peer_pid=1318 peer_label="unconfined" exe="/usr/bin/dbus-daemon" sauid=102 hostname=? addr=? terminal=?'
[ 6049.430683] ntfs3: Unknown parameter 'windows_names'
[ 6049.494120] audit: type=1107 audit(1708641903.716:170): pid=1274 uid=102 auid=4294967295 ses=4294967295 subj=unconfined msg='apparmor="DENIED" operation="dbus_signal" bus="system" path="/org/freedesktop/login1" interface="org.freedesktop.DBus.Properties" member="PropertiesChanged" name=":1.2" mask="receive" pid=2912 label="snap.firefox.firefox" peer_pid=1318 peer_label="unconfined" exe="/usr/bin/dbus-daemon" sauid=102 hostname=? addr=? terminal=?'
[ 6052.079215] audit: type=1107 audit(1708641906.301:171): pid=1274 uid=102 auid=4294967295 ses=4294967295 subj=unconfined msg='apparmor="DENIED" operation="dbus_signal" bus="system" path="/org/freedesktop/login1" interface="org.freedesktop.DBus.Properties" member="PropertiesChanged" name=":1.2" mask="receive" pid=2912 label="snap.firefox.firefox" peer_pid=1318 peer_label="unconfined" exe="/usr/bin/dbus-daemon" sauid=102 hostname=? addr=? terminal=?'
[ 6054.523583] audit: type=1107 audit(1708641908.745:172): pid=1274 uid=102 auid=4294967295 ses=4294967295 subj=unconfined msg='apparmor="DENIED" operation="dbus_signal" bus="system" path="/org/freedesktop/login1" interface="org.freedesktop.DBus.Properties" member="PropertiesChanged" name=":1.2" mask="receive" pid=2912 label="snap.firefox.firefox" peer_pid=1318 peer_label="unconfined" exe="/usr/bin/dbus-daemon" sauid=102 hostname=? addr=? terminal=?'
[ 6054.598701] audit: type=1107 audit(1708641908.820:173): pid=1274 uid=102 auid=4294967295 ses=4294967295 subj=unconfined msg='apparmor="DENIED" operation="dbus_signal" bus="system" path="/org/freedesktop/login1" interface="org.freedesktop.DBus.Properties" member="PropertiesChanged" name=":1.2" mask="receive" pid=2912 label="snap.firefox.firefox" peer_pid=1318 peer_label="unconfined" exe="/usr/bin/dbus-daemon" sauid=102 hostname=? addr=? terminal=?'
[ 6054.599943] EXT4-fs (dm-1): bad geometry: block count 217599488 exceeds size of device (217590272 blocks)
[ 6054.600795] audit: type=1107 audit(1708641908.822:174): pid=1274 uid=102 auid=4294967295 ses=4294967295 subj=unconfined msg='apparmor="DENIED" operation="dbus_signal" bus="system" path="/org/freedesktop/login1" interface="org.freedesktop.DBus.Properties" member="PropertiesChanged" name=":1.2" mask="receive" pid=2912 label="snap.firefox.firefox" peer_pid=1318 peer_label="unconfined" exe="/usr/bin/dbus-daemon" sauid=102 hostname=? addr=? terminal=?'
[ 6054.798116] audit: type=1107 audit(1708641909.020:175): pid=1274 uid=102 auid=4294967295 ses=4294967295 subj=unconfined msg='apparmor="DENIED" operation="dbus_signal" bus="system" path="/org/freedesktop/login1" interface="org.freedesktop.DBus.Properties" member="PropertiesChanged" name=":1.2" mask="receive" pid=2912 label="snap.firefox.firefox" peer_pid=1318 peer_label="unconfined" exe="/usr/bin/dbus-daemon" sauid=102 hostname=? addr=? terminal=?'
[ 6054.804803] EXT4-fs (dm-1): bad geometry: block count 217599488 exceeds size of device (217590272 blocks)
[ 6054.809500] audit: type=1107 audit(1708641909.031:176): pid=1274 uid=102 auid=4294967295 ses=4294967295 subj=unconfined msg='apparmor="DENIED" operation="dbus_signal" bus="system" path="/org/freedesktop/login1" interface="org.freedesktop.DBus.Properties" member="PropertiesChanged" name=":1.2" mask="receive" pid=2912 label="snap.firefox.firefox" peer_pid=1318 peer_label="unconfined" exe="/usr/bin/dbus-daemon" sauid=102 hostname=? addr=? terminal=?'

A financial reward is offered to the person, or split between the persons, who can help me successfully recover this. Thank you.
To recap, after resizing partitions, you got this mount error:

wrong fs type, bad option, bad superblock on /dev/mapper/luks-5c9cfaa5-0576-4b47-8e65-05f7d8b52d39, missing codepage or helper program, or other error.

Getting this far already means there is no issue with the LUKS header itself. According to dmesg, the real error message is:

EXT4-fs (dm-1): bad geometry: block count 217599488 exceeds size of device (217590272 blocks)

This seems to be a standard case of shrinking a partition without shrinking the filesystem first, and the discrepancy is just a few thousand blocks of data (9216 4K blocks = 73728 512-byte sectors). And it seems you have exactly the necessary unallocated space in your partition table. According to your partition manager outputs, in parted it should roughly look like this:

# parted /dev/sdb unit s print free
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start        End          Size         File system  Name  Flags
        34s          2047s        2014s        Free Space
 1      2048s        1740728319s  1740726272s               luks
        1740728320s  1740802047s  73728s       Free Space
 2      1740802048s  1953523711s  212721664s   ntfs

Partition numbers, names, and flags may differ; Start, End, and Size should match.

If all assumptions are correct, then the fix is to reclaim the free space behind partition 1. Make sure the LUKS container is closed first and the drive unmounted, or re-reading the partition table might fail (alternatively, reboot):

# cryptsetup close luks-5c9cfaa5-0576-4b47-8e65-05f7d8b52d39
# umount /dev/sdb*

Since there is unallocated space in the partition table, you can reclaim it using parted's resizepart command:

# parted /dev/sdb resizepart 1 1740802047s

This will only alter the partition table without altering any data. With any luck, the filesystem will just work normally afterwards.
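The resizepart target can be sanity-checked against the numbers in the dmesg error. This is a sketch; the 4096-sector LUKS data offset is an assumption inferred from the difference between the current partition size and the size of dm-1 (cryptsetup luksDump would confirm the real offset):

```shell
# All figures come from the question's logs; luks_offset is inferred.
fs_blocks=217599488   # ext4 block count (4 KiB blocks) from the dmesg error
luks_offset=4096      # assumed LUKS data offset in 512-byte sectors (2 MiB)
start=2048            # first sector of partition 1
end=$(( start + fs_blocks * 8 + luks_offset - 1 ))   # 8 sectors per 4 KiB block
echo "$end"   # prints 1740802047, the end sector of the following free space
```

That the arithmetic lands exactly on the last free sector before partition 2 is good evidence that only the partition table, not the data, needs fixing.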
How do I rescue an encrypted LUKS partition after failed shrink
1,480,039,976,000
Can you make a trustworthy ext4 filesystem check of an online filesystem, e.g. root, by taking an LVM snapshot and then running the check on the snapshot? Something like:

Make a snapshot:
lvcreate --snapshot --size 1G --name lv_root_SS --chunksize 4k /dev/VG1/lv_root

ext4 check (the device name of the new snapshot is dm-3):
e2fsck -f /dev/dm-3

Remove the snapshot:
lvremove --yes VG1/lv_root_SS

Will that work? e2fsck doesn't complain and seems to do the check just fine.
Yes, you can do this, and there's even a tool for that: lvcheck. This follows the same approach as your description, with some additions:

- it lists all active LVs (which can be checked using a snapshot)
- it checks how long it's been since the last check for each LV
- for each LV, it snapshots it, runs fsck, and deletes the snapshot
- LVs which pass the check have their last-check timestamp updated (in the real volume)
- LVs which fail can be listed in an email

You can set this up in a periodic job (using cron or a systemd timer, for example), and it will make sure your file systems are checked and updated as appropriate.
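A minimal sketch of such a periodic check, using the VG/LV names from the question. The lvcreate/lvremove/e2fsck lines are shown as comments since they need root and a real volume group; the part worth getting right in a script is interpreting e2fsck's exit status, which is a bitmask rather than a simple pass/fail:

```shell
# Snapshot-based check (sketch; device names are from the question):
# lvcreate --snapshot --size 1G --name lv_root_SS /dev/VG1/lv_root
# e2fsck -f /dev/VG1/lv_root_SS; status=$?
# lvremove --yes VG1/lv_root_SS

# e2fsck exit status bits: 0 = clean, 1 = errors corrected,
# 2 = reboot needed, 4 = errors left uncorrected, 8 = operational error.
fsck_verdict() {
    if [ $(( $1 & 12 )) -ne 0 ]; then echo "FAIL"    # bits 4 or 8 set
    elif [ "$1" -eq 0 ]; then echo "CLEAN"
    else echo "FIXED"                                 # bits 1 and/or 2
    fi
}

fsck_verdict 0
fsck_verdict 4
```

A cron job could then email only the LVs whose verdict is FAIL, mirroring what lvcheck does.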
Scan online EXT4 using LVM snapshot?
1,480,039,976,000
On my server, I had an SSD as the boot drive with 11 6TB HDDs in a RAID6 setup as additional storage. However, after running into some issues with the motherboard, I switched to a motherboard with only 4 SATA ports, so I reduced the RAID6 setup from 11 to 4 drives. With less than 6TB of actual data stored on the array, the data should fit in the reduced storage space.

I believe I used the instructions on the following pages to shrink the array. Since it was quite a while ago, I don't actually remember if these were the pages or instructions used, nor do I remember many of the fine details:

https://superuser.com/questions/834100/shrink-raid-by-removing-a-disk
https://delightlylinux.wordpress.com/2020/12/22/how-to-remove-a-drive-from-a-raid-array/

On the 7 unused drives, I believe I zeroed the superblocks: sudo mdadm --zero-superblock. For the 4 drives I want to use, I am unable to mount the array. I do not believe I used any partitions on the array.

sudo mount /dev/md127 /mnt/md127
mount: /mnt/md127: wrong fs type, bad option, bad superblock on /dev/md127, missing codepage or helper program, or other error.

From /var/log/syslog:

kernel: [ 1894.040670] EXT4-fs (md127): bad geometry: block count 13185878400 exceeds size of device (2930195200 blocks)

Since 13185878400 / 2930195200 = 4.5 = 9 / 2, I assume there is a problem with shrinking the file system or something similar. Since the RAID6 has 2 spare drives, going from 11 (9 active, 2 spare) to 11 (2 active, 9 spare)? to 4 (2 active, 2 spare) would explain why the block count is much higher than the size of the device by an exact multiple of 4.5.
Other information from the devices:

sudo mdadm --detail /dev/md127
/dev/md127:
           Version : 1.2
     Creation Time : Wed Nov 24 22:28:38 2021
        Raid Level : raid6
        Array Size : 11720780800 (10.92 TiB 12.00 TB)
     Used Dev Size : 5860390400 (5.46 TiB 6.00 TB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Sun Apr 9 04:57:29 2023
             State : clean
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0
            Layout : left-symmetric
        Chunk Size : 512K
Consistency Policy : bitmap
              Name : nao0:0 (local to host nao0)
              UUID : ffff85d2:b7936b45:f19fc1ba:29c7b438
            Events : 199564

    Number   Major   Minor   RaidDevice State
       9       8       16        0      active sync   /dev/sdb
       1       8       48        1      active sync   /dev/sdd
       2       8       32        2      active sync   /dev/sdc
      10       8        0        3      active sync   /dev/sda

sudo mdadm --examine /dev/sd[a-d]
/dev/sda:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : ffff85d2:b7936b45:f19fc1ba:29c7b438
           Name : nao0:0 (local to host nao0)
  Creation Time : Wed Nov 24 22:28:38 2021
     Raid Level : raid6
   Raid Devices : 4
 Avail Dev Size : 11720780976 sectors (5.46 TiB 6.00 TB)
     Array Size : 11720780800 KiB (10.92 TiB 12.00 TB)
  Used Dev Size : 11720780800 sectors (5.46 TiB 6.00 TB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=176 sectors
          State : clean
    Device UUID : 07f76b7f:f4818c5a:3f0d761d:b2d0ba79
Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Apr 9 04:57:29 2023
  Bad Block Log : 512 entries available at offset 32 sectors
       Checksum : 914741c4 - correct
         Events : 199564
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 3
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : ffff85d2:b7936b45:f19fc1ba:29c7b438
           Name : nao0:0 (local to host nao0)
  Creation Time : Wed Nov 24 22:28:38 2021
     Raid Level : raid6
   Raid Devices : 4
 Avail Dev Size : 11720780976 sectors (5.46 TiB 6.00 TB)
     Array Size : 11720780800 KiB (10.92 TiB 12.00 TB)
  Used Dev Size : 11720780800 sectors (5.46 TiB 6.00 TB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=176 sectors
          State : clean
    Device UUID : 3b51a0c9:b9f4f844:68d267ed:03892b0d
Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Apr 9 04:57:29 2023
  Bad Block Log : 512 entries available at offset 32 sectors
       Checksum : 294a8c37 - correct
         Events : 199564
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 0
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : ffff85d2:b7936b45:f19fc1ba:29c7b438
           Name : nao0:0 (local to host nao0)
  Creation Time : Wed Nov 24 22:28:38 2021
     Raid Level : raid6
   Raid Devices : 4
 Avail Dev Size : 11720780976 sectors (5.46 TiB 6.00 TB)
     Array Size : 11720780800 KiB (10.92 TiB 12.00 TB)
  Used Dev Size : 11720780800 sectors (5.46 TiB 6.00 TB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=176 sectors
          State : clean
    Device UUID : 0fcca5ee:605740dc:1726070d:0cef3b39
Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Apr 9 04:57:29 2023
  Bad Block Log : 512 entries available at offset 32 sectors
       Checksum : 31472363 - correct
         Events : 199564
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 2
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)
/dev/sdd:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : ffff85d2:b7936b45:f19fc1ba:29c7b438
           Name : nao0:0 (local to host nao0)
  Creation Time : Wed Nov 24 22:28:38 2021
     Raid Level : raid6
   Raid Devices : 4
 Avail Dev Size : 11720780976 sectors (5.46 TiB 6.00 TB)
     Array Size : 11720780800 KiB (10.92 TiB 12.00 TB)
  Used Dev Size : 11720780800 sectors (5.46 TiB 6.00 TB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=176 sectors
          State : clean
    Device UUID : e1912abb:ba98a568:8effaa66:c1440bd8
Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Apr 9 04:57:29 2023
  Bad Block Log : 512 entries available at offset 32 sectors
       Checksum : 82a459ba - correct
         Events : 199564
         Layout : left-symmetric
     Chunk Size : 512K
    Device Role : Active device 1
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

After looking online, I tried to use fsck, e2fsck, and resize2fs to try to resolve the issue. However, I did not make any progress, and I may have made the problem worse by accidentally changing the data on the disk.

With resize2fs:

sudo resize2fs /dev/md127
resize2fs 1.46.5 (30-Dec-2021)
Please run 'e2fsck -f /dev/md127' first.

Since I could not use resize2fs to actually do anything, I used e2fsck, which ran into many errors. Since there were thousands of errors, I quit before the program was able to finish.

sudo e2fsck -f /dev/md127
e2fsck 1.46.5 (30-Dec-2021)
The filesystem size (according to the superblock) is 13185878400 blocks
The physical size of the device is 2930195200 blocks
Either the superblock or the partition table is likely to be corrupt!
Abort<y>? no
Pass 1: Checking inodes, blocks, and sizes
Error reading block 3401580576 (Invalid argument) while getting next inode from scan.  Ignore error<y>? yes
Force rewrite<y>? yes
Error reading block 3401580577 (Invalid argument) while getting next inode from scan.  Ignore error<y>? yes
Force rewrite<y>? yes
Error reading block 3401580578 (Invalid argument) while getting next inode from scan.  Ignore error<y>? yes
Force rewrite<y>? yes
Error reading block 3401580579 (Invalid argument) while getting next inode from scan.  Ignore error<y>? yes
Force rewrite<y>? yes
Error reading block 3401580580 (Invalid argument) while getting next inode from scan.  Ignore error<y>? yes
Force rewrite<y>? yes
Error reading block 3401580581 (Invalid argument) while getting next inode from scan.  Ignore error<y>? yes
Force rewrite<y>? yes
Error reading block 3401580582 (Invalid argument) while getting next inode from scan.  Ignore error<y>? yes
Force rewrite<y>?

My hypothesis is that there is probably some inconsistency in the reported size of the drives. I do not believe I had any partitions on the RAID, nor any LVM volumes.

sudo fdisk -l
...
Disk /dev/sda: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: WDC WD60EZAZ-00S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdb: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: WDC WD60EZAZ-00S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdc: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: WDC WD60EZAZ-00S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdd: 5.46 TiB, 6001175126016 bytes, 11721045168 sectors
Disk model: WDC WD60EZAZ-00S
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/md127: 10.92 TiB, 12002079539200 bytes, 23441561600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes

The data on the 4 drives currently in use may or may not have been altered by fsck / e2fsck, but the data should also be on the other 7 unused drives with the zeroed superblocks. It is not important to me which drives I recover the data from, so working solutions to recover from any grouping of the drives would be highly appreciated! If any additional information is needed, I would be more than happy to provide it.
Your ext4 filesystem is (much) larger than your block device (a 54 TB filesystem on a 12 TB block device). e2fsck and resize2fs can be quite uncooperative in this situation. Filesystems hate it when huge chunks are missing.

For a quick data recovery, you can try your luck with debugfs in catastrophic mode:

# debugfs -c /dev/md127
debugfs 1.47.0 (5-Feb-2023)
debugfs: ls -l
| (this should list some files)
| (damaged files usually show with 0 bytes and a 1-Jan-1970 timestamp)
debugfs: rdump / /some/recovery/dir/

This should copy out files (use an unrelated HDD for recovery storage), but some files might result in errors such as "Attempt to read block from filesystem resulted in short read" or similar.

In order to actually fix the filesystem, it's usually best to restore the original device size, and then go from there. Sometimes shrinking a block device is reversible; in your case, it's not. You could grow the RAID back to 11 devices, but even with the correct drive order, it would not give back any of the missing data, and it would even overwrite any that might have been left on the leftover disks: mdadm shifts offsets in every grow operation, so the layout would be all wrong. So anything beyond the cutoff point is lost. Furthermore, it would take ages to reshape all this data (again), and the result won't be any better than just tacking on some virtual drive capacity (all zeroes, with loop devices and dm-linear, or LVM thin volumes, or similar).

At best you could reverse it partially, by re-creating (using mdadm --create on copy-on-write overlays) your original 11-drive RAID 6 with 4 drives missing (treating those drives as fully zeroed out). But at most this would give you disconnected chunks of data with many gaps in between, since this is beyond what RAID 6 can recover from. It's even more complicated because you no longer have the metadata (you need to know the original offset, which was already changed on your current RAID, as well as the drive order).

If you could manage to do it, you could stitch your current RAID (0 to 12 TB) and the restored RAID (12 TB to 54 TB) together with dm-linear (all on top of copy-on-write overlays) and see what can be found. But this process is complicated, and the probability of success is low. For any data that was stored outside the 12 TB that were kept by your shrink operation, some files smaller than a chunk/stripe could have survived, while larger files would all be damaged.
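The "exact multiple of 4.5" hypothesis from the question can be checked directly against RAID6 geometry (usable capacity = (N - 2) data disks times the per-member size). Both block counts from the logs resolve to the same per-member capacity:

```shell
# Block counts (4 KiB ext4 blocks) taken from the question's e2fsck output.
fs_blocks=13185878400    # filesystem size recorded in the superblock (11-drive era)
dev_blocks=2930195200    # current size of md127 (4 drives)

# RAID6 keeps (N - 2) data disks, so per-member capacity should agree:
per_member_old=$(( fs_blocks / (11 - 2) ))
per_member_new=$(( dev_blocks / (4 - 2) ))
echo "$per_member_old $per_member_new"   # both print 1465097600
```

The two figures matching exactly confirms the filesystem was never shrunk: only the array under it was, which is why the superblock still claims 9 data disks' worth of blocks.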
RAID6 unable to mount EXT4-fs: bad geometry: block count exceeds size of device
1,480,039,976,000
Recently my computer's PSU died. I'm not sure exactly how this happened, but I restored the computer to working order. I run PopOS 22.04 LTS, and my OS also became unbootable; I repaired it with a bootable USB.

When booting in, though, I discovered my 2 TB data HDD is also gone. Its partitions have been replaced with nearly 2 TB of "Unallocated Space" and an 18 MB FAT12 Windows partition (I forgot the actual name it had, but I think it was something like "Windows Disk Management"). I did boot into Windows once by accident before repairing Linux, as I had a dual-boot setup; I don't know if Windows did anything to the drive.

My previous partition was a 2 TB ext4 partition, nothing else. It was full of data, and I think that data is still there.

What I've tried since is to (perhaps foolishly, I know nothing about this) add a partition, but upon seeing that I had to format the partition to make it functional, I stopped. I also tried testdisk, which finds a bunch of old Linux partitions tagged [Data] but ultimately fails, reporting that the partitions are too big to fit on the disk; they carry the notice "This partition ends after the disk limits."

Here is a hastebin of my testdisk log file: https://www.toptal.com/developers/hastebin/ahetuwuquv.yaml

Is there any way I can recover my files, or get testdisk to recover my partition or read the files? Thanks in advance, and please let me know if I need to add any logs or information.
I decided to use R-Studio for Linux to recover my drives. I purchased another 2 TB drive to move the files onto, and to later use in a RAID 1 array. The software was amazingly able to find all my files, with the exception of a few filenames. I guess recovering through TestDisk would have been possible, but I am not knowledgeable enough to do so, and R-Studio greatly simplified the process for me.
ext4 partition on 2TB HDD gone and replaced with Windows FAT12 partition
1,480,039,976,000
I'm trying to boot an STM32MP157a-dk1 using an image that I made with Buildroot, but when I boot I get this message:

Unable to write "/uboot.env" from mmc0:4

Any help is appreciated.
H.M
I changed my configuration via menuconfig: in EXT2_MKFS_OPTIONS I used -O ^metadata_csum,^64bit, so that the root filesystem is generated without the metadata_csum and 64bit ext4 features.
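For reference, in a Buildroot .config this corresponds to something like the fragment below. The exact symbol name is an assumption here and should be checked against your Buildroot version (it lives under "Filesystem images" in menuconfig):

```
# Filesystem images -> ext2/3/4 root filesystem -> additional mke2fs options
BR2_TARGET_ROOTFS_EXT2_MKFS_OPTIONS="-O ^metadata_csum,^64bit"
```

The ^feature syntax tells mke2fs to clear that feature, which keeps the resulting image writable by U-Boot's ext4 support.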
Failed to boot STM32mp157a-dk1 using a buildroot image
1,594,968,503,000
The output of df -h is below:

Filesystem      Size  Used Avail Use% Mounted on
udev            3.9G     0  3.9G   0% /dev
tmpfs           796M  1.7M  794M   1% /run
/dev/sda7        85G  6.2G   74G   8% /
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/loop0      1.0M  1.0M     0 100% /snap/gnome-logs/61
/dev/loop2       43M   43M     0 100% /snap/gtk-common-themes/1313
/dev/loop1      150M  150M     0 100% /snap/gnome-3-28-1804/67
/dev/loop3       89M   89M     0 100% /snap/core/7270
/dev/loop5      4.2M  4.2M     0 100% /snap/gnome-calculator/406
/dev/loop4       15M   15M     0 100% /snap/gnome-characters/296
/dev/loop6       55M   55M     0 100% /snap/core18/1066
/dev/loop7      3.8M  3.8M     0 100% /snap/gnome-system-monitor/100
/dev/sda1       453M  113M  313M  27% /boot
/dev/sda6       9.4G  993M  7.9G  11% /home
tmpfs           796M   16K  796M   1% /run/user/121
tmpfs           796M     0  796M   0% /run/user/1001

So, /home is mounted on /dev/sda6, and / is mounted on /dev/sda7. As you can see, my /home is very small. Then I execute parted /dev/sda -l and here is the output:

Model: VMware Virtual disk (scsi)
Disk /dev/sda: 183GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type      File system     Flags
 1      1049kB  500MB   499MB   primary   ext4            boot
 2      501MB   107GB   107GB   extended
 5      501MB   4596MB  4095MB  logical   linux-swap(v1)
 6      4597MB  14.8GB  10.2GB  logical   ext4
 7      14.8GB  107GB   92.5GB  logical   ext4

So, 1 is /boot, 6 is /home, and 7 is /. 2 is large enough; it seems that 5 (swap) uses part of 2, but the size of 5 is only 4095 MB. There appears to be a large amount of unused space in 2. I've tried to format 2 with the mkfs.ext4 command, but I got an error:

mke2fs 1.44.1 (24-Mar-2018)
Found a dos partition table in /dev/sda2
Proceed anyway? (y,N) y
mkfs.ext4: inode_size (128) * inodes_count (0) too big for a filesystem with 0 blocks, specify higher inode_ratio (-i) or lower inode count (-N).

Is this because 2 contains 5? How could I use 2 as my /home directory?
The output of parted shows that partition 2 (the extended partition) is built up of 5, 6 and 7. Maybe that would be clearer if the start and end addresses of the sectors were not shown in human-readable form (compare it to the output of fdisk -l /dev/sda). That means using 2 for your home directories is not a solution, since they are already part of it.

When you set up a new computer or filesystem, consider using LVM; it makes dealing with this kind of problem much easier, since changes to the partitions are easier to implement.
How to format and use the extended partition
1,594,968,503,000
On the same model of industrial PCs, I see that the UUID of the main SSD changed. These two IPCs were restored from two similar but different Linux disk images. The question is as per the title: the UUID of the main disk /dev/sda2 is different. Both run Ubuntu 16.04.

Linux disk image A: kernel 4.15.0-65, UUID bc96e844-27c1-4ccb-af66-053cce7cecdb. Users m and n exist; user n's home folder is encrypted.

Linux disk image B: kernel 4.15.0-96, UUID 19e10365-d0b9-44c1-ac5d-a7acd5941bae. Only user m exists; some packages are newer.

By the way, we manufactured many IPCs with disk image A. While I haven't checked all of them, I randomly checked some, and they all show the same UUID. On one host that was restored from image A, /var/log/syslog output this UUID:

Apr 16 13:59:03 poodle_noodle kernel: [ 0.000000] Command line: BOOT_IMAGE=/boot/vmlinuz-4.15.0-58-generic root=UUID=bc96e844-27c1-4ccb-af66-053cce7cecdb ro quiet splash vt.handoff=7
:

(In fact, in the log above I was doing some experiments, so the kernel version was 4.15.0-58, not even 4.15.0-65, but the UUID is the same; so the kernel version is ruled out.)

On a host restored from image B:

$ sudo blkid
:
/dev/sda2: UUID="19e10365-d0b9-44c1-ac5d-a7acd5941bae" TYPE="ext4" PARTUUID="d1cf8631-f3f7-4b8d-baba-86c6fcebe232"
:
Update: Here's what's happening. The images are themselves copies of raw disks, with partitions and filesystems. The disk layout, filesystems, contents and all are then written to the disk of the computer you're imaging. At some point someone ran mkfs to create the filesystems used in the image, and a UUID was generated then. The images have different UUIDs from each other because they were generated from the contents of different filesystems. This makes sense, because you'd normally do a clean install, and thus repartition/reformat, to generate each image.

This will only happen with image-based installs; when you do a normal (install-root/debootstrap/pacstrap/etc.) install, you'd typically reformat to delete old contents and thus generate a new UUID for the new filesystem.

Old answer: I'm not 100% sure I understand the question, but the way I'm parsing it is: "I have two of the same model of PC; why are the UUIDs of the 'same' partitions different?"

UUID stands for Universally Unique Identifier. They are, as it says on the tin, designed to be universally unique. The UUIDs are randomly generated at creation time, and you'd have to take some kind of affirmative action to make them the same. As for what would cause a UUID to change: filesystem formatting, for example. So yes, the partitions should have different UUIDs; this is what we'd expect.
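The "new UUID on every mkfs" behaviour is easy to demonstrate without touching a real disk, using a file-backed image. This sketch assumes e2fsprogs (mke2fs/tune2fs) is installed; no root is needed for image files:

```shell
# Each mkfs run generates a brand-new random UUID, even on the same "device".
img=$(mktemp)
truncate -s 16M "$img"

mke2fs -q -F -t ext4 "$img"
u1=$(tune2fs -l "$img" | sed -n 's/^Filesystem UUID: *//p')

mke2fs -q -F -t ext4 "$img"   # "reformat" the same image
u2=$(tune2fs -l "$img" | sed -n 's/^Filesystem UUID: *//p')

[ "$u1" != "$u2" ] && echo "UUID changed"
rm -f "$img"
```

Conversely, if you ever need identically-imaged machines to carry distinct (or matching) UUIDs, tune2fs -U lets you set the filesystem UUID explicitly after imaging.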
What triggers disk's UUID to change?
1,594,968,503,000
I'm trying to install Linux on a 20-year-old Compaq HP Pavilion ze4300 with 256 MB RAM (I think) and no internet connection. I can boot from CD but not from USB. Unfortunately I ran into this excruciating problem: my attempt at installing Lubuntu left the hard disk with an ext4 filesystem, but Lubuntu doesn't have enough memory to run. So then I tried Puppy Linux. Everything is great; I can boot live and install, but the bootloader GRUB4DOS can't read ext4. The instruction in the link to use GParted doesn't work, because I can't boot a live GParted, presumably because the machine is too old.

Question: Is there something extremely conservative that I can use on a 20-year-old machine to change the ext4 back to ext3? I know I can use a CD boot loader, but I really want to boot from the HD.
You can use mke2fs -t ext3 to format a filesystem that is compatible with ext3 on older kernels. This will leave out the newer ext4-only features (such as extents) that old kernels and bootloaders cannot read.
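A quick way to verify what mke2fs -t ext3 actually produces, using a throwaway file-backed image rather than the real disk (assumes e2fsprogs is installed; no root needed):

```shell
img=$(mktemp)
truncate -s 16M "$img"
mke2fs -q -F -t ext3 "$img"

# ext3 keeps the journal but none of the ext4-only features
# (no "extent", "64bit" or "metadata_csum" in this line):
tune2fs -l "$img" | grep '^Filesystem features:'
rm -f "$img"
```

Once the feature list looks right on a test image, the same mke2fs invocation can be pointed at the real partition (which destroys its contents, so back up first).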
ext4 issue on very old machine
1,594,968,503,000
Is there any way to mount, from my Ubuntu live session, an ext4 partition sitting in another PC (running Windows) on the same network?

My HD died earlier today, and I need to use live distros until I get a new one. I chose Ubuntu 18.10. I customized my Ubuntu live image, and to do that I needed to make an ext4 partition on my notebook's HD (running Windows). I took the HD out and put it in my PC. I want to connect remotely so I won't need to take it out again.

My workaround (no success!): I tried to mount remotely, via a Windows share, a virtual HD image (created with the Windows version of dd). This way I managed to create the partition and edit my Ubuntu live .iso. The problem came when I tried to copy my edited iso out of the virtual HD image: no matter where I tried to copy it, I got an I/O error at the end of the copy.

I can't set up a virtual machine on my notebook; it has only 2 GB of RAM.
Unfortunately¹, Windows cannot even read EXT4 partitions without third-party software. There are a few of them out there that can do local read-only mounting of EXT4 partitions but only one (commercial) that can do both reads and writes. However, none of those will allow you to share these on a Windows Network: they're for local reading (or writing in one case) only. So to have full access to your drive remotely you'll have to: create an NTFS volume on your USB stick as Linux can easily read and write to NTFS volumes. keep data that you want to access remotely on the NTFS volume (Documents, Videos, Music, whatever) as that's just native on Windows and Windows can share it just fine. keep the data that you needed to be on the EXT4 where it is now. Note¹: Actually for us Linux admins that's a fortunately because this way, Windows cannot mess up EXT4 partitions...
Mount an ext4 partition in another PC(Windows) at the same network in my Ubuntu Live?
1,594,968,503,000
I love xfs_copy's ability to clone an xfs file system from disk to disk. Is there an equivalent tool to clone an ext4 file system? I've tried dump/restore, but it requires the destination file system to be created and mounted. So it is not an equivalent to xfs_copy. What is the "xfs_copy" equivalent for ext4?
The ext4 wiki suggests e2image -rap. The last two options are not otherwise documented; personally this would make me wary of using it. The manual page only describes its use to send debugging information to the filesystem developers. partclone.ext4 --dev-to-dev will also work. partclone is third-party software, not included in common distributions. It is part of the Clonezilla project, which is a very powerful disk imaging solution. I believe it differs from xfs_copy in that it does not change the filesystem UUID. I'm not sure why this model would be considered superior to dump/restore in principle.
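A sketch of both approaches mentioned above. The device names /dev/sdX1 and /dev/sdY1 are placeholders; the target must be at least as large as the source, and both filesystems must be unmounted while copying:

```shell
# e2fsprogs route, as suggested by the ext4 wiki
# (-r raw mode, combined here with the undocumented -a and -p)
e2image -rap /dev/sdX1 /dev/sdY1

# partclone route: -b / --dev-to-dev clones device to device,
# -s is the source, -o the output
partclone.ext4 -b -s /dev/sdX1 -o /dev/sdY1
```

Since neither tool changes the filesystem UUID, don't mount source and clone at the same time without regenerating the UUID on one of them (e.g. with tune2fs -U random).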
"xfs_copy" equivalent for ext4?
1,594,968,503,000
I have a completely standard, single hard drive Fedora 23 desktop. The dual-boot installer set up the Linux partition as LVM, with root, swap and home logical volumes; root and home were both ext4. Having recently added an additional 4 GB of ram, I decided to expand the swap volume by shrinking the home volume by 4 GB and then adding that to swap. Everything seemed to go fine, and my computer ran for several days with no problems. However, I didn't reboot or shutdown after doing the above, and then there was a power failure. When I next booted up I was dropped into emergency recovery mode as the home ext4 volume was corrupted. I tried using fsck several times, but was unable to fully repair the problem. I ended up reformatting the home volume and restoring from a recent backup. My questions: Was the corruption due to me screwing up the swap resizing, or due to not rebooting right after the resizing? The home filesystem had around 240 GB free when I shrank it by 4 GB, and it continued to be usable for several days afterwards, so I think I didn't screw it up, but that was the first time I've ever used LVM. If I did the LVM stuff right and the problem was due to the power failure, was there any LVM command I could have issued to flush the changes to the hard drive, or is the only proper way to do it to reboot after the change?
You don't have to run anything after resizing, but you cannot just resize the logical volume, even if you have unmounted the filesystem on it. You have to resize the filesystem first (for ext4 you can use resize2fs), to make sure there are unused blocks at the end of the logical volume that can be freed up (to transfer to swap). This normally requires some calculation, and you should not shrink the filesystem by less than you are going to shrink the LV. To avoid the calculation and the possible errors, what I would do is the following. If the filesystem is originally 100G and you want to shrink it by 4G:

resize2fs /dev/mapper/vg0-home 95G
lvreduce -L 96G /dev/mapper/vg0-home
# resize to fill available space
resize2fs -p /dev/mapper/vg0-home

(adjust vg0-home to your actual LV)
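Alternatively, lvreduce's -r/--resizefs option drives the filesystem resize itself (it runs fsck and resize2fs for you), which avoids the manual size arithmetic entirely. A sketch, reusing the same hypothetical vg0-home/vg0-swap names:

```shell
# Shrink home by 4G, filesystem included (-r calls resize2fs for you)
lvreduce -r -L -4G /dev/mapper/vg0-home

# Hand the freed space to swap, then re-create and re-enable it
swapoff /dev/mapper/vg0-swap
lvextend -L +4G /dev/mapper/vg0-swap
mkswap /dev/mapper/vg0-swap
swapon /dev/mapper/vg0-swap
```

As with any shrink operation, take a backup first; a power failure mid-resize can leave the filesystem unrecoverable, as the question illustrates.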
Ext4 corrupted and unrecoverable after LVM resizing + power failure