| date | question_description | accepted_answer | question_title |
|---|---|---|---|
1,393,437,350,000 |
This problem turned out to be related to Samba; working with inodes is not actually necessary.
I have a problem handling a certain file that has some special characters in it. If I search it by its inode it will list the file:
$ find . -inum 90505400 -exec ls {} \;
./12 String Quartet No. 16 in F Major Op. 135: Der schwer gefa?te Entschlu?: Grave, ma non troppo tratto (Mu? es sein ?) - Allegro (Es mu? sein !).flac
However, if I then proceed to use cp or rm on the file it will throw a file not found error (in German 'Datei oder Verzeichnis nicht gefunden'):
$ find . -inum 90505400 -exec cp {} ne.flac \;
cp: './12 String Quartet No. 16 in F Major Op. 135: Der schwer gefa?te Entschlu?: Grave, ma non troppo tratto (Mu? es sein ?) - Allegro (Es mu? sein !).flac' kann nicht zum Lesen geöffnet werden: Datei oder Verzeichnis nicht gefunden
I wonder if I can copy the file with another command that uses the inode directly. I have had this problem for some time now. I could remove all files with rm *, but I would like to fix the broken filename instead.
It is an ext4 filesystem which I mount on a Raspi from an external USB HDD with this fstab line (paths and IPs obfuscated):
UUID=e3f9d42a-9703-4e47-9185-33be24b81c46 /mnt/test ext4 rw,auto,defaults,nofail,x-systemd.device-timeout=15 0 2
I then share it with samba:
[mybook]
path=/mnt/test
public = yes
browseable = yes
writeable = yes
comment = test
printable = no
guest ok = no
And I mount this on a Lubuntu 16 with this:
//192.168.1.190/test /home/ben/test cifs auto,nofail,username=XXX,password=XXX,uid=1000,gid=1000
I connect to the Lubuntu 16 through VNC from a Macbook, or I SSH directly into it. I am just mentioning this for completeness.
I also mount the share on that Macbook (and others) in Finder. Finder does not display the filename correctly.
After a useful comment from a user, I realized I should try to manipulate the file on the host with the original filesystem instead of trying to do it over samba.
SSHing into the host reveals this filename (look at the sign with 0xF022 after '135'):
'12 String Quartet No. 16 in F Major Op. 135 Der schwer gefa?te Entschlu? Grave, ma non troppo tratto (Mu? es sein ) - Allegro (Es mu? sein !).flac'
I then was able to copy the file with cp on the host itself.
(In case anybody wonders how I came by the filename: I split a single combined flac file into separate tracks using its cue sheet, and the files were named automatically.)
|
All of open() (for copying), rename() and unlink() (removal) work by filenames. There's really nothing that would work on an inode directly, apart from low-level tools like debugfs.
If you can remove the file with rm *, you should be able to rename it with mv ./12* someothername.flac, or copy it with cp ./12* newfile.flac (assuming ./12* matches just that file). find in itself shouldn't be that different.
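If typing the broken name is the problem, the glob approach can be tried safely in a scratch directory first. This sketch fabricates a name containing the private-use character U+F022 mentioned in the question (the directory and filenames are made up; ls -b is GNU ls), then renames it without ever typing the odd bytes:

```shell
# Create a throwaway directory with one file whose name contains
# U+F022 (octal \357\200\242 in UTF-8), then rename it via a glob.
tmp=$(mktemp -d)
cd "$tmp"
: > "$(printf '12 broken\357\200\242name.flac')"
ls -b                      # -b escapes non-printable bytes so you can see them
mv ./12* fixed-name.flac   # the glob matches without typing the odd bytes
ls
```

The same `mv ./12* newname` pattern should work on the host holding the original ext4 filesystem, as long as the glob matches only the one broken file.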
But you mentioned Mac, and I think macOS requires filenames to be valid UTF-8, which might cause issues if the filenames are broken. Linux doesn't mind names that are invalid UTF-8, but of course there, too, some tools might react oddly. (I haven't tested.) Having Samba in between might not help either.
Assuming that has something to do with the issue, you could try to SSH in to the host with the filesystem, skipping the intermediary parts, and rename the files there.
| How to copy a file by using its inode number? |
1,401,431,048,000 |
I've just found some interesting information for me in man stat:
a, m, c, B
The time the file was last accessed or modified, when the inode was last changed, or the birth time of the inode.
But what's the difference between the file's last-modified time and the inode's last-changed time? I'm writing a backup bash script which copies only the most recently modified files between two nearly identical directories, so I need to know which value I should prefer to use :)
|
Access: 2014-05-20 11:04:27.012146373 -0700
Modify: 2014-04-05 20:59:32.000000000 -0700
Change: 2014-05-20 11:04:22.405479507 -0700
Access: last time the contents of the file were examined.
Modify: Last time the contents of the file were changed.
Change: Last time the file's inode was changed.
The change time includes things like modifying the permissions and ownership, while the modify time refers specifically to the file's contents.
Or more precisely (from man 2 stat):
The field st_atime is changed by file accesses, for example, by execve(2), mknod(2), pipe(2), utime(2) and read(2) (of more than zero bytes). Other routines, like mmap(2), may or may not update st_atime.
The field st_mtime is changed by file modifications, for example, by mknod(2), truncate(2), utime(2) and write(2) (of more than zero bytes). Moreover, st_mtime of a directory is changed by the creation or deletion of files in that directory. The st_mtime field is not changed for changes in owner, group, hard link count, or mode.
The field st_ctime is changed by writing or by setting inode information (i.e., owner, group, link count, mode, etc.).
Interestingly, direct manipulation of the file times counts as modification of the inode, which will bump the ctime to the current clock time. So you can set the ctime to the current time, but you can't set it to any other time, as you can the other two. This makes the ctime a useful canary to spot when the file's mtime might have been moved back.
Also, while you can change the inode without changing the file contents (that is, the ctime can change without the mtime changing), the reverse is not true. Every time you modify the contents of the file you will necessarily also end up bumping the ctime.
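The asymmetry is easy to demonstrate with GNU stat (a sketch; %Y and %Z print mtime and ctime as epoch seconds):

```shell
tmp=$(mktemp -d)
cd "$tmp"
touch f
m1=$(stat -c %Y f)          # mtime, seconds since epoch
c1=$(stat -c %Z f)          # ctime, seconds since epoch
sleep 2
chmod 600 f                 # metadata-only change: touches ctime, not mtime
m2=$(stat -c %Y f)
c2=$(stat -c %Z f)
echo "mtime changed: $((m2 != m1))"   # 0: contents untouched
echo "ctime changed: $((c2 != c1))"   # 1: the inode was updated
```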
| What's the difference between modification date and inode's modification date? |
1,401,431,048,000 |
At work we use sparse files as part of our Oracle VM environment for the guest disk images. After some questions from a colleague (which have since been answered) I am left with more questions about sparse files, and perhaps more widely about inode structure - reading the man pages of stat(2) and statfs(2) (on FreeBSD) I get the impression that I'd understand more readily if I knew more C, but alas my knowledge of C is minimal at best...
I understand that some of this is dependent on the file system type. I'm mostly interested in UFS on FreeBSD/Solaris and ext4 - ZFS would be a plus but I'm not going to hold out hope :)
I am using Solaris 10, FreeBSD 10.3, and CentOS 6.7 regularly. The commands here are being run on a CentOS 6.7 VM, but have been cross referenced with FreeBSD.
If possible, I'm interested in gaining an understanding from a POSIX viewpoint, and favouring FreeBSD over Linux if that isn't possible.
Consider the following set of commands:
printf "BIL" > /tmp/BIL
dd of=/tmp/sparse bs=1 count=0 seek=10
dd if=/tmp/BIL of=/tmp/sparse bs=1 count=3 seek=10
dd if=/tmp/BIL of=/tmp/sparse bs=1 count=3 seek=17
dd of=/tmp/sparse bs=1 count=0 seek=30
dd if=/tmp/BIL of=/tmp/sparse bs=1 count=3 seek=30
The file /tmp/BIL should have the contents (in hex) of 4942 004c, so when I hexdump the file /tmp/sparse I should see a smattering of this combination throughout:
%>hexdump sparse
0000000 0000 4942 004c 0000 0000 4942 004c 0000
0000010 4200 4c49 0000 0000 0000 0000 0000 4942
0000020 004c
0000021
%>cat sparse
BILBILBILBIL%
1. Why does the second occurrence of "BIL" appear out of order? i.e. 4200 4c49 rather than 4942 004c? This was written by the third dd command.
2. How does cat and other tools know to print in the correct order?
Using ls we can see the space allegedly used and the blocks allocated:
%>ls -ls /tmp/sparse
8.0K -rw-r--r--. 1 bil bil 33 May 26 14:17 /tmp/sparse
We can see that the alleged size is 33 bytes, but allocated size is 8 kilobytes (file system block size is 4K).
3. How do programs like ls discern between the "alleged" size and the allocated size?
I wondered if the "alleged" figure is stored in the inode while the allocated size is calculated by walking the direct and indirect blocks - though this cannot be correct, since such a walk would take time, and tools such as ls return quickly even for very large files.
4. What tools can I use to interrogate inode information?
I know of stat, but it doesn't seem to print out the values of all of the fields in an inode...
5. Is there a tool where I can walk the direct and indirect blocks?
It would be interesting to see each address on disk, and the contents to gain a bit more understanding of how data is stored
If I run the following command after the others above, the file /tmp/sparse is truncated:
%>dd of=/tmp/sparse bs=1 count=0 seek=5
%>hexdump sparse
0000000 0000 4942 004c
0000005
6. Why does dd truncate my file and can dd or another tool write into the middle of a file?
Lastly, sparse files seem like a Good Idea for preallocating space, but there don't appear to be any file-system or operating-system level assurances that a command won't truncate or arbitrarily grow the file.
7. Are there mechanisms to prevent sparse files from being shrunk or grown? And if not, why are sparse files useful?
While each question above could possibly be a separate SO question, I cannot dissect them as they are all related to the underlying understanding.
|
Some quick answers: first, you didn't create a sparse file. Try these extra commands
dd if=/tmp/BIL of=/tmp/sparse seek=1000
ls -ls /tmp/sparse
You will see the size is 512003 bytes, but it only takes 8 blocks. The null bytes have to occupy whole blocks, aligned on block boundaries, for the filesystem to be able to store them sparsely.
Why does the second occurrence of "BIL" appear out of order?
because you are on a little-endian system and hexdump displays its output as 16-bit words by default. Use a byte-oriented view (e.g. hexdump -C), like cat effectively does.
How does cat and other tools know to print in the correct order?
they work on bytes.
How do programs like ls discern between the "alleged" size and the allocated size?
ls and so on use the stat(2) system call which returns 2 values:
st_size; /* total size, in bytes */
blkcnt_t st_blocks; /* number of 512B blocks allocated */
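Both values are visible from the shell with GNU stat and du (a sketch; on typical filesystems such as ext4 or tmpfs, a freshly truncated file has an apparent size but zero allocated blocks):

```shell
tmp=$(mktemp -d)
cd "$tmp"
truncate -s 1M sparse                  # 1 MiB apparent size, nothing written
stat -c 'bytes=%s 512B-blocks=%b' sparse
du -k --apparent-size sparse           # reports st_size
du -k sparse                           # reports the allocated blocks
```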
What tools can I use to interrogate inode information?
stat is good.
Is there a tool where I can walk the direct and indirect blocks?
On ext2/3/4 you can use hdparm --fibmap with the filename:
$ sudo hdparm --fibmap ~/sparse
filesystem blocksize 4096, begins at LBA 25167872; assuming 512 byte sectors.
byte_offset begin_LBA end_LBA sectors
512000 226080744 226080751 8
You can also use debugfs:
$ sudo debugfs /dev/sda3
debugfs: stat <1040667>
Inode: 1040667 Type: regular Mode: 0644 Flags: 0x0
Generation: 1161905167 Version: 0x00000000
User: 127 Group: 500 Size: 335360
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 664
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x4dd61e6c -- Fri May 20 09:55:24 2011
atime: 0x4dd61e29 -- Fri May 20 09:54:17 2011
mtime: 0x4dd61e6c -- Fri May 20 09:55:24 2011
Size of extra inode fields: 4
BLOCKS:
(0-11):4182714-4182725, (IND):4182726, (12-81):4182727-4182796
TOTAL: 83
Why does dd truncate my file and can dd or another tool write into the middle of a file?
Yes, dd can write into the middle. Add conv=notrunc.
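A quick sketch of the difference, with and without conv=notrunc:

```shell
tmp=$(mktemp -d)
cd "$tmp"
printf 'AAAAAAAAAA' > f                              # 10 bytes
printf 'XX' | dd of=f bs=1 seek=4 conv=notrunc 2>/dev/null
cat f; echo                                          # AAAAXXAAAA: middle overwritten in place
mid=$(cat f)
printf 'XX' | dd of=f bs=1 seek=4 2>/dev/null        # same write without notrunc
wc -c < f                                            # 6: everything past the write is gone
```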
Are there mechanisms to prevent sparse files from being shrunk or grown? And if not, why are sparse files useful?
No. Because they take less space.
The sparse aspect of a file should be totally transparent to a program, which sometimes means the sparseness may be lost when the program updates a file.
Some copying utilities have options to preserve sparseness, eg tar --sparse, rsync --sparse.
Note, you can explicitly convert the suitably aligned zero blocks in a file to sparseness by using cp --sparse=always and the reverse, converting sparse space into real zeros, with cp --sparse=never.
| Understanding sparse files, dd, seek, inode block structure |
1,401,431,048,000 |
I understand there are 12 permission bits, of which 9 form three groups of 3 bits (RWX) for each of user, group, and others. R and W are read and write; X is execute for files and search for directories.
Here is what I don't get:
What are the 3 remaining mode bits and are they all stored in the inode?
I know the directory itself is considered a file as well, since all things in UNIX are files (is this true?), and that the file system represents a directory as a list of filename-inode_number pairs. Where does a directory store its own inode number and filename?
|
stat /bin/su shows on one system:
Access: (4755/-rwsr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
There's the octal representation 4755 of all 12 mode bits. The number corresponds to the bits:
octal 4 7 5 5
bits 100 111 101 101
sst uuu ggg ooo
ug rwx rwx rwx
Where uuu, ggg and ooo are the permission bits for the user, group and others. The remaining group (the first one in order) contains the setuid (su), setgid (sg) and sticky (t) bits.
The setuid, setgid and sticky bits are often not mentioned, since they're zero for most files. They're still there for every file, saved along with the other bits.
If we really get down to it, some filesystems and interfaces store the file type along the mode bits, in the still-higher bits. The above only accounts for 12 bits, so with a 16-bit field, there's 4 left over. See, for example, the description of st_mode in stat(2).
| What are the final 3 bits in the UNIX permission mode bits? |
1,401,431,048,000 |
I want to run a task with limits on the kernel objects that they will indirectly trigger. Note that this is not about the memory, threads, etc. used by the application, but about memory used by the kernel. Specifically, I want to limit the amount of inode cache that the task can use.
My motivating example is updatedb. It can use a considerable amount of inode cache, for things that mostly won't be needed afterwards. Specifically, I want to limit the value that is indicated by the ext4_inode_cache line in /proc/slabinfo. (Note that this is not included in the “buffers” or “cache” lines shown by free: that's only file content cache, the slab content is kernel memory and recorded in the “used” column.)
echo 2 >/proc/sys/vm/drop_caches afterwards frees the cache, but that doesn't do me any good: the useless stuff has displaced things that I wanted to keep in memory, such as running applications and their frequently-used files.
The system is Linux with a recent (≥ 3.8) kernel. I can use root access to set things up.
How can I run a command in a limited environment (a container?) such that the contribution of that environment to the (ext4) inode cache is limited to a value that I set?
|
Following my own question on LKML, this can be achieved using Control Group v2:
Prerequisites
Make sure your Linux kernel has MEMCG_KMEM enabled, e.g. grep CONFIG_MEMCG_KMEM "/boot/config-$(uname -r)"
Depending on the OS (and systemd version) enable the use of cgroups2 by specifying systemd.unified_cgroup_hierarchy=1 on the Linux kernel command line, e.g. via /boot/grub/grub.cfg.
Make sure the cgroup2 file system is mounted on /sys/fs/cgroup/, e.g. mount -t cgroup2 none /sys/fs/cgroup or the equivalent in /etc/fstab. (systemd will do this for you automatically by default)
Invocation
Create a new group my-find (once per boot) for your process: mkdir /sys/fs/cgroup/my-find
Attach the (current) process (and all its future child processes) to that group: echo $$ >/sys/fs/cgroup/my-find/cgroup.procs
Configure a soft-limit, e.g. 2 MiB: echo 2M >/sys/fs/cgroup/my-find/memory.high
Finding the right value requires tuning and experimenting. You can get the current values from memory.current and/or memory.stat. Over time you should see high incrementing in memory.events, as the Linux kernel is now repeatedly forced to shrink the caches.
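The invocation steps above can be collected into a small wrapper script (a sketch: the group name my-find and the 2M limit are examples, and actually running it requires root plus a mounted cgroup2 hierarchy, so here we only generate and syntax-check the script):

```shell
# Write the wrapper to a file; run it later as e.g.:
#   sudo /tmp/limited-find.sh updatedb
cat > /tmp/limited-find.sh <<'EOF'
#!/bin/sh
set -eu
cg=/sys/fs/cgroup/my-find
mkdir -p "$cg"
echo 2M > "$cg/memory.high"     # soft limit: reclaim starts above this
echo $$ > "$cg/cgroup.procs"    # this shell and all future children join
exec "$@"                       # run the command inside the limited group
EOF
chmod +x /tmp/limited-find.sh
sh -n /tmp/limited-find.sh && echo OK
```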
Appendix
Notice that the limit applies both to user-space memory and kernel memory. It also applies to all processes in the group, including the child processes started by updatedb, which basically does find | sort | frcode, where:
find is the program thrashing the dentry and inode caches, which we want to constrain. Its own user-space memory requirement is (theoretically) constant.
sort wants lots of memory; otherwise it falls back to using temporary files, which results in additional IO.
frcode writes the result to disk - e.g. a single file - which requires constant memory.
So basically you should put only find into a separate cgroup to limit its cache thrashing, but not sort and frcode.
Post scriptum
It does not work with cgroup v1 as setting memory.kmem.limit_in_bytes is both deprecated and results in an "out-of-memory" event as soon as the processes go over the configured limit, which gets your processes killed immediately instead of forcing the Linux kernel to shrink the memory usage by dropping old data.
Quoting from section CONFIG_MEMCG_KMEM
Currently no soft limit is implemented for kernel memory. It is future work to trigger slab reclaim when those limits are reached.
| Limit the inode cache used by a command |
1,401,431,048,000 |
This Debian server was running just fine until a week or so ago. Now it does not allow files to be allocated, despite there still being room.
The root volume is configured with LVM.
Kernel is Linux 3.16.0-4-amd64 #1 SMP Debian 3.16.51-3 (2017-12-13) x86_64 GNU/Linux
A fsck and reboot did not help. Deleting some files did not help either.
df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vg-root 0 0 0 - /
udev 2051270 380 2050890 1% /dev
tmpfs 2053627 632 2052995 1% /run
tmpfs 2053627 5 2053622 1% /dev/shm
tmpfs 2053627 4 2053623 1% /run/lock
tmpfs 2053627 13 2053614 1% /sys/fs/cgroup
/dev/sda1 62248 328 61920 1% /boot
tmpfs 2053627 13 2053614 1% /run/user/117
tmpfs 2053627 4 2053623 1% /run/user/0
tmpfs 2053627 4 2053623 1% /run/user/1000
Meanwhile, there is plenty of room on the device
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg-root 447G 293G 154G 66% /
The lvm2 setup is one volume group of 465G, which is fully used by / and swap (15G).
/ is formatted as btrfs:
btrfs filesystem df /
Data, single: total=444.63GiB, used=290.67GiB
System, DUP: total=8.00MiB, used=64.00KiB
System, single: total=4.00MiB, used=0.00B
Metadata, DUP: total=1.00GiB, used=764.58MiB
Metadata, single: total=8.00MiB, used=0.00B
GlobalReserve, single: total=256.00MiB, used=0.00B
The btrfs volume is indeed full:
btrfs filesystem show
Label: none uuid: 82ff2c95-6c6d-48c8-a0a0-3219e5cc2845
Total devices 1 FS bytes used 212.50GiB
devid 1 size 446.66GiB used 446.66GiB path /dev/mapper/vg-root
After deleting a huge logfile, the volume usage did not change; it is still full.
Any ideas on what happened and how to fix it?
|
As a modern filesystem, btrfs has no fixed inode limit at all, which is why it reports the inode counts as all zeroes.
Check the status of btrfs subvolumes:
btrfs subvolume list -s /
If it turns out that you have snapshots hogging your disk space, you might need something like this to remove them:
btrfs subvolume delete -c /.snapshots/NNN/snapshot
See also this link for another user's adventure with btrfs and snapshots. The comments on that webpage include useful btrfs management commands among all the salt.
| Debian btrfs filesystem shows a total of zero inodes total, zero used, zero free |
1,401,431,048,000 |
Can I use a free inode without creating a file?
I want to write a script that will use up all the free inodes on the system. Is that possible?
|
Yes you can consume all the inodes of a system. They are a limited resource just like diskspace is, and they're pre-allocated when you perform a mkfs.ext4, for example.
You can use tools such as tune2fs -l <device> or df -i <path> to see how many are allocated and used.
Example
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 3276800 355850 2920950 11% /
So this filesystem has 2920950 inodes free. If I started making files, directories, or links on the filesystem, that would be all I needed to do to consume them all. Realize that I could consume all these inodes with small files or links and still have roughly all the diskspace available to me.
Consuming inodes without files?
I'm not sure what you're getting at here, but the only way I'm aware of, where you can consume inodes is to create files, directories, or links. I'm not familiar with any other way to consume them.
Example
Here you can see I'm consuming one inode when I create an empty directory.
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 3276800 355850 2920950 11% /
$ sudo mkdir /somedir
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 3276800 355851 2920949 11% /
The easiest way to consume the inodes is likely to make a directory tree of directories.
$ sudo mkdir /somedir/1
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 3276800 355852 2920948 11% /
$ sudo mkdir /somedir/2
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 3276800 355853 2920947 11% /
$ sudo mkdir /somedir/3
$ df -i /
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora-root 3276800 355854 2920946 11% /
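A loop makes the same point at scale (a sketch; df -i only reports meaningful numbers on filesystems with fixed inode tables, so not on btrfs, and each empty file costs one inode while using essentially no data blocks):

```shell
tmp=$(mktemp -d)
cd "$tmp"
df -i . | awk 'NR==2 {print "free inodes before:", $4}'
i=0
while [ "$i" -lt 1000 ]; do
    : > "f$i"               # one empty file = one inode, zero data blocks
    i=$((i + 1))
done
df -i . | awk 'NR==2 {print "free inodes after: ", $4}'
ls | wc -l                  # 1000
```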
Here's another example where I'm consuming inodes by creating several symbolic links with ln -s to the same file. Each symlink is itself a new inode.
$ ln -s afile ln1
$ df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora_greeneggs-home 26722304 1153662 25568642 5% /home
$ ln -s afile ln2
$ df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora_greeneggs-home 26722304 1153663 25568641 5% /home
$ ln -s afile ln3
$ df -i .
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/fedora_greeneggs-home 26722304 1153664 25568640 5% /home
| How to use all the inodes |
1,401,431,048,000 |
How can I work out the link count from an inode number? If I know the inode number is, say, 592255, what can I do to find out its link count?
I know directories have a link count of at least 2, but I don't know how to work it out.
|
Finding the link count using the name
You can use the stat command to get a link count on a given file/directory:
$ stat lib/
File: ‘lib/’
Size: 4096 Blocks: 8 IO Block: 4096 directory
Device: fd02h/64770d Inode: 11666186 Links: 3
Access: (0755/drwxr-xr-x) Uid: ( 1000/ saml) Gid: ( 1000/ saml)
Context: unconfined_u:object_r:user_home_t:s0
Access: 2014-03-21 18:16:10.521963381 -0400
Modify: 2014-01-13 17:16:49.438408973 -0500
Change: 2014-01-14 17:57:46.636255446 -0500
Birth: -
Taking a look at the man page for stat:
%h number of hard links
%i inode number
So you can get just this value directly using stat's --printf or --format output capabilities:
$ stat --printf="%h\n" lib/
3
$ stat --format="%h" lib/
3
$ stat -c "%h" lib/
3
Finding the link count using the inode
If on the other hand you know the inode number only you can work backwards like so:
$ ls -id lib
11666186 lib
$ find -inum 11666186 -exec stat -c "%h" {} +
3
References
Hard links and Unix file system nodes (inodes)
| Work out the link count of inode number? |
1,401,431,048,000 |
It appears that it is possible on a network mount to set a quota on how much space a user can consume.
# edquota ramesh
Disk quotas for user ramesh (uid 500):
Filesystem blocks soft hard inodes soft hard
/dev/sda3 1419352 0 0 1686 0 0
You also can set a soft and a hard limit on how many inodes a user has.
Why would you ever need to limit how many inodes a user has access to?
Wouldn't the user still be able to fill up the disk with 1 really large file?
|
The reason you limit the number of inodes a user can access is so they don't make the system as a whole run out of inodes by creating a huge number of 0-byte files.
With most Linux file systems (e.g. ext3 and ext4), each file (including device files) and directory has an inode, referenced by an inode number. If a system runs out of inodes, it doesn't matter how much free space the hard disk has; it's impossible to create a new file until inodes are freed up.
To see how many inodes each filesystem has left:
df -i
The number of inodes a filesystem has is determined by the -i argument when formatting the file system. Examples:
mkfs -t ext4 -i 1024 /dev/foo # One inode per 1024 bytes
mkfs -t ext4 -i 2048 /dev/foo # One inode per 2048 bytes
mkfs -t ext4 -i 8192 /dev/foo # One inode per 8192 bytes
The filesystem created with the -i 1024 option will have eight times as many inodes as the filesystem created with the -i 8192 option (assuming both file systems are the same size). Sometimes, especially with some mail servers (that use "maildir") or old-school Usenet spools, one needs more inodes, since those use cases create a lot of small files.
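The arithmetic behind that claim, for a hypothetical 1 GiB filesystem:

```shell
# -i sets the bytes-per-inode ratio, so inode count = size / ratio.
fs_bytes=$((1024 * 1024 * 1024))
echo $((fs_bytes / 1024))                           # -i 1024: 1048576 inodes
echo $((fs_bytes / 8192))                           # -i 8192: 131072 inodes
echo $(( (fs_bytes / 1024) / (fs_bytes / 8192) ))   # 8 times as many
```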
Note that some Linux filesystems, such as Reiserfs, are able to dynamically assign inodes and do not create all of them at filesystem creation time.
| Why restrict the number of inodes a user can access? |
1,401,431,048,000 |
I'm confused because all dentries have pointers to inode objects. As far as I know, you always look through dentries to find your inode. Then, why is there an inode cache?
|
You're asking about the inode cache implemented as part of the Linux Virtual File System (VFS). Caches, including the inode cache, are not there merely to provide functionality such as looking up inode entries; as you point out, there are other mechanisms for that.
Caches can be used to improve performance and in this case looking up inode data from an io device such as a disk is very slow, so storing previously accessed inode data in memory makes file system access much quicker.
| Why is the inode cache needed? |
1,401,431,048,000 |
Where does a *nix system store the number of hard links to a specific inode? I can't find any information about that: everywhere explains what a hard link is, but rarely anything more advanced about the inode-related details.
An inode stores the number of links, but where does that number come from? Can I locate all the links (both hard and soft) knowing only the inode number?
|
The hard link count is stored in the inode. It starts at 1 when the file is created, increases by 1 each time the link system call is successful, and decreases by 1 each time the unlink system call is successful.
The only way to find all the hard links to the same file, i.e. to find all the pathnames leading to a given inode, is to go through the whole filesystem and compare inode numbers. The inode does not point back to the directory entries.
Directories are a special case: their hard links obey strict rules. (Some unix variants allow root to bypass these rules at the administrator's peril.) The hard links to a directory are its . entry, its children's .. entry, and one entry in its parent directory (the parent being the directory reached by the directory's .. entry).
There is no way to find all the symbolic links pointing to a file. They could be anywhere, including on a filesystem that isn't mounted.
With GNU or FreeBSD find, you can use find /some/dir -samefile /path/to/foo to find all the hard links to the file /path/to/foo that are under /some/dir. With the -L option, you can find all the soft and hard links to that file. You can find an inode by number with the -inum predicate instead of -samefile.
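A quick sketch of the link count changing as link(2) and unlink(2) are invoked via ln and rm (GNU stat syntax; foo and bar are made-up names):

```shell
tmp=$(mktemp -d)
cd "$tmp"
touch foo
stat -c %h foo              # 1: freshly created
ln foo bar                  # link(2) bumps the count in the inode
stat -c %h foo              # 2
find . -samefile foo        # lists both ./foo and ./bar
rm bar                      # unlink(2) drops it again
stat -c %h foo              # 1
```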
| Where is information about hard/soft links stored? |
1,401,431,048,000 |
The normal way to safely, atomically write a file X on Unix is:
Write the new file contents to a temporary file Y.
rename(2) Y to X
In two steps it appears that we have done nothing but change X "in-place".
It is protected against race conditions and unintentional data loss (where X is destroyed but Y is incomplete or destroyed).
The drawback (in this case) of this is that it doesn't write the inode referred to by X in-place; rename(2) makes X refer to a new inode number.
When X was a file with link count > 1 (an explicit hard link), it no longer refers to the same inode as before: the hard link is broken.
The obvious way to eliminate the drawback is to write the file in-place, but this is not atomic, can fail, might result in data loss, etc.
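The broken-link behaviour can be reproduced in a few lines (a sketch using GNU stat; X, Y, and hardlink are made-up names):

```shell
tmp=$(mktemp -d)
cd "$tmp"
echo v1 > X
ln X hardlink               # X's link count is now 2
old_inode=$(stat -c %i X)
echo v2 > Y                 # the temporary file with the new contents
mv Y X                      # rename(2): atomic, but X now names Y's inode
cat X                       # v2
cat hardlink                # v1: still the old inode and old contents
stat -c %h hardlink         # 1: the hard link has been severed
echo "inode changed: $((old_inode != $(stat -c %i X)))"   # 1
```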
Is there some way to do it atomically like rename(2) but preserve hard links?
Perhaps to change the inode number of Y (the temporary file) to the same as X, and give it X's name? An inode-level "rename."
This would effectively write the inode referred to by X with Y's new contents, but would not break its hard-link property, and would keep the old name.
If the hypothetical inode "rename" was atomic, then I think this would be atomic and protected against data loss / races.
|
The issue
You have a (mostly) exhaustive list of systems calls here.
You will notice that there is no "replace the content of this inode" call. Modifying that content always implies:
Opening the file to get a file descriptor.
optional seek to the desired write offset
Writing to the file.
optional Truncating old data, if new data is smaller.
Step 4 can be done earlier. There are some shortcuts as well, such as pwrite, which directly write at a specified offset, combining steps #2 and #3, or scatter writing.
An alternate way is to use a memory mapping, but it gets worse as every byte written may be sent to the underlying file independently (conceptually as if every write was a 1-byte write call).
→ The point is the very best scenario you can have is still 2 operations: one write and one truncate.
Whatever order you perform them in, you still risk having another process mess with the file in between, leaving you with a corrupted file.
Solutions
Normal solution
As you have noted, this is why the canonical approach is to create a new file that you know you are the only writer of (you can even guarantee this by combining O_TMPFILE and linkat), then atomically redirect the old name to the new file.
There are two other options, however both fail in some way:
Mandatory locking
It enables file access to be denied to other processes by setting a special flag combination. Sounds like the tool for the job, right? However:
It must be enabled at the filesystem level (it's a flag when mounting).
Warning: the Linux implementation of mandatory locking is unreliable.
Since Linux 4.5, mandatory locking has been made an optional feature. This is an initial step toward removing this feature completely.
This is only logical, as Unix has always shied away from mandatory locks. They are error prone, and it is impossible to cover all edge cases and guarantee the absence of deadlocks.
Advisory locking
It is set using the fcntl system call. However, it is only advisory, and most programs simply ignore it.
In fact it is only good for managing locks on a shared file among several cooperating processes.
Conclusion
Is there some way to do it atomically like rename(2) but preserve hard links?
No.
Inodes are low level, almost an implementation detail. Very few APIs acknowledge their existence (I believe the stat family of calls is the only one).
Whatever you try to do probably relies on either misusing the design of Unix filesystems or simply asking too much of them.
Could this be somewhat of an XY-problem?
| Atomically write a file without changing inodes (preserve hard link) |
1,401,431,048,000 |
Just started reading a bit about the linux file system. In several places I found quotes like this one:
Unix directories are lists of association structures, each of which contains one filename and one inode number.
So I expected to find out that each directory would contain the names of the files under it, with each file mapped to an inode. But when I do vim directory_name in ubuntu, I get something like this:
" ============================================================================
" Netrw Directory Listing (netrw v156)
" /Users/user/workspace/folder
" Sorted by name
" Sort sequence: [\/]$,\<core\%(\.\d\+\)\=\>,\.h$,\.c$,\.cpp$,\~\=\*$,*,\.o$,\.obj$,\.info$,\.swp$,\.bak$,\~$
" Quick Help: <F1>:help -:go up dir D:delete R:rename s:sort-by x:special
" ==============================================================================
../
./
folder1/
folder2/
file1
file2
I expected to see an inode number next to each file name, why isn't this the case?
|
A directory is, semantically speaking, a mapping from file name to inode. This is how the directory tree abstraction is designed, corresponding to the interface between applications and filesystems. Applications can designate files by name and enumerate the list of files in a directory, and each file has a unique designator which is called an “inode”.
How this semantics is implemented depends on the filesystem type. It's up to each filesystem how the directory is encoded. In most Unix filesystems, a directory is a mapping from filenames to inode numbers, and there's a separate table mapping inode numbers to inode data. (The inode data contains file metadata such as permissions and timestamps, the location of file contents, etc.) The mapping can be a list, a hash table, a tree...
You can't see this mapping with Vim. Vim doesn't show the storage area that represents the directory. Linux, like many other modern Unix systems, doesn't allow applications to see the directory representation directly. Directories act like ordinary files when it comes to their directory entry and to their metadata, but not when it comes to their content. Applications read from ordinary files with system calls such as open, read, write, close; for directories there are other system calls: opendir, readdir, closedir, and modifying a directory is done by creating, moving and deleting files. An application like cat uses open, read, close to read a file's content; an application like ls uses opendir, readdir, closedir to read a directory's content. Vim normally works like cat to read a file's content, but if you ask it to open a directory, it works like ls and prints the data in a nicely-formatted way.
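You can observe that split from the shell (the scratch directory name is arbitrary): ls goes through opendir/readdir and succeeds, while cat tries open/read and the kernel refuses the read with EISDIR.

```shell
rm -rf /tmp/dirdemo && mkdir /tmp/dirdemo && touch /tmp/dirdemo/file1
ls /tmp/dirdemo               # readdir path: prints "file1"
cat /tmp/dirdemo 2>&1         # read path: "cat: /tmp/dirdemo: Is a directory"
echo "cat exit status: $?"    # non-zero
```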
If you want to see what a directory looks like under the hood, you can use a tool such as debugfs for ext2/ext3/ext4. Make sure you don't modify anything! A tool like debugfs bypasses the filesystem and can destroy it utterly. The ext2/ext3/ext4 debugfs is safe because it's in read-only mode unless you explicitly allow writing through a command line option.
# debugfs /dev/root
debugfs 1.42.12 (29-Aug-2014)
debugfs: dump / /tmp/root.bin
debugfs: quit
# od -t x1 /tmp/root.bin
You'll see the names of the directory entries in / amidst a bunch of other characters, some unprintable. To make sense of it, you'd need to know the details of the filesystem format.
| How does linux store the mapping folder -> file_name -> inode? |
1,401,431,048,000 |
I'm assuming that this means that if the average file stored (including directories etc) is less than 16384 bytes, it may be possible to run out of inodes before using the full storage capacity of the filesystem. However, should the files being stored consume over 16384 bytes, on average, a physical space storage limit should be reached before one would run out of inodes.
|
Yes that is about right. A couple of minor points to note are:
As far as I can see, the overhead of the filesystem itself isn't considered when calculating the number of inodes from this ratio, so the actual average size that a file can be will be slightly lower than 16384 once you account for the overhead of the superblock, inode table, etc. Each inode itself is 256 bytes by default on ext4. So if this ratio is very low, the size of the inodes themselves is substantial.
Symlinks also count as inodes, so remember that a large number will bring down the average file size.
16384 is the default inode_ratio on Linux and should suit most needs. Only change it if you have a good reason to. There are other values defined in /etc/mke2fs.conf for specific usage types. Consider if one of these suits your needs (specify it with the -T option to mkfs.ext4) before defining your own.
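The arithmetic behind the ratio is plain division: the number of inodes mke2fs creates is roughly the filesystem size divided by bytes-per-inode. For a 1 GiB filesystem at the default ratio:

```shell
fs_bytes=$((1024 * 1024 * 1024))    # 1 GiB filesystem
ratio=16384                          # default bytes-per-inode
echo $((fs_bytes / ratio))           # 65536 inodes (64K)
```

which matches the roughly 64K inodes commonly seen on a 1 GB ext4 filesystem.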
| What are the implications of using an inode_ratio of 16384 in terms of storage use on ext4? |
1,401,431,048,000 |
A directory's inode isn't substantially different from a regular file's inode. What I understand from the Ext4 Disk Layout documentation is that:
Directory Entries:
Therefore, it is more accurate to say that a directory is a series of data blocks and that each block contains a linear array of directory entries.
The directory entry stores the filename together with a pointer to its inode. Hence, if the documentation says each block contains directory entries, why does debugfs report something different, namely filenames stored in the directory's inode? This is a debugging session on an ext4-formatted flash drive:
debugfs: cat /sub
�
.
..�
spam�spam2�spam3��spam4
I don't think inode.i_block can store those filenames: I've created files with really long filenames, more than 60 bytes in size. Running cat on the inode from debugfs displayed the filenames too, so the long filenames were in the inode again!
The Contents of inode.i_block:
Depending on the type of file an inode describes, the 60 bytes of storage in inode.i_block can be used in different ways. In general, regular files and directories will use it for file block indexing information, and special files will use it for special purposes.
Also, there's no reference to the inode storing the filenames in Hash Tree Directories
section which is the newer implementation. I feel I missed something in that document.
The main question is: if a directory's inode contains filenames, what do its data blocks store then?
|
Directory entries are stored both in inode.i_block and the data blocks. See "Inline Data" and "Inline Directories" in the document you linked to.
| How come that inodes of directories store filenames in ext4 filesystem? |
1,401,431,048,000 |
I checked the manual of find, and there's an option -ctime which specifies the last modification time of the inode.
But on what occasion would an inode change? When is that option useful?
|
To simplify:
Any change in file contents changes both the mtime and the ctime.
Any change in metadata (permissions and other information shown by stat) changes only the ctime.
When is it useful: I don't know… But for example, if you want an over-approximation for the time when the last link (ln) to the inode was created, you should check ctime not mtime.
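A quick demonstration of the two rules using GNU stat (%Y is the mtime, %Z the ctime, both as epoch seconds; the file name is arbitrary):

```shell
f=/tmp/ctime-demo
echo data > "$f"
m0=$(stat -c %Y "$f"); c0=$(stat -c %Z "$f")
sleep 1.1                           # make sure we cross a one-second boundary
chmod 600 "$f"                      # metadata-only change
[ "$(stat -c %Y "$f")" = "$m0" ] && echo "mtime unchanged"
[ "$(stat -c %Z "$f")" != "$c0" ] && echo "ctime changed"
```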
| On what occasion will inode change? |
1,401,431,048,000 |
I want to list all inodes in the current directory that are regular files (i.e. not directories, links, or special files), with ls -la (ll).
I went to the man ls searching for type and found only this which I didn't quite understand in that regard:
--indicator-style=WORD
append indicator with style WORD to entry names: none (default), slash
(-p), file-type (--file-type),
classify (-F)
How could I list only regular files with ls -la (ll as my shortcut in Ubuntu 18.04)?
|
find . -maxdepth 1 -type f -ls
This would give you the regular files in the current directory in a format similar to what you would get with ls -lisa (but only showing regular files, thanks to -type f on the command line).
Note that -ls (introduced by BSDs) and -maxdepth (introduced by GNU find) are non-standard (though now common) extensions. POSIXly, you can write it:
find . ! -name . -prune -type f -exec ls -ldi {} +
(which also has the benefit of sorting the file list, though possibly in big independent chunks if there's a large number of files in the current directory).
| List only regular files [duplicate] |
1,401,431,048,000 |
I wonder if storing the information about files in inodes instead of directly in the directory is worth the additional overhead. It may be well that I'm overestimating the overhead or overlooking some important thing, but that's why I'm asking.
I see that something like "inodes" is necessary for hardlinks, but in case the overhead is really as big as I think, I wonder if any of the reasons justifies it:
using hardlinks for backups is clever, but efficiency of backups is not important enough when compared to the efficiency of normal operations
having neither speed nor size penalty for hardlinks can really matter, as this advantage holds only for the few files making use of hardlinks while the access to all files suffers the overhead
saving some space for a couple of equally named binaries like bunzip2 and bcat is negligible
I'm not saying that inodes/hardlinks are bad or useless, but can it justify the cost of the extra indirection (caching helps surely a lot, but it's no silver bullet)?
|
Hard links are beside the point. They are not the reason to have inodes. They're a byproduct: basically, any reasonable unix-like filesystem design (and even NTFS is close enough on this point) has hard links for free.
The inode is where all the metadata of a file is stored: its modification time, its permissions, and so on. It is also where the location of the file data on the disk is stored. This data has to be stored somewhere.
Storing the inode data inside the directory carries its own overhead. It makes the directory larger, so that obtaining a directory listing is slower. You save a seek for each file access, but each directory traversal (of which several are needed to access a file, one per directory on the file path) costs a little more. Most importantly, it makes it a lot more difficult to move a file from one directory to another: instead of moving only a pointer to the inode, you need to move all the metadata around.
Unix systems always allow you to rename or delete a file, even if a process has it open. (On some unix variants, make that "almost always".) This is a very important property in practice: it means that an application cannot “hijack” a file. Renaming or removing the file doesn't affect the application, which can continue reading and writing to the file. If the file is deleted, the data remains around until no process has the file open anymore. This is facilitated by associating the process with the inode. The process cannot be associated with the file name, since that may change or even disappear at any time.
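This is easy to see from the shell: hold the file open on a spare file descriptor, unlink the name, and keep reading (the file name is arbitrary):

```shell
echo "still here" > /tmp/unlink-demo
exec 3< /tmp/unlink-demo     # open the file on fd 3
rm /tmp/unlink-demo          # the name is gone...
cat <&3                      # ...but the inode lives on: prints "still here"
exec 3<&-                    # closing the last reference lets the data be freed
```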
See also What is a Superblock, Inode, Dentry and a File?
| What are inodes good for? |
1,401,431,048,000 |
Each file has an inode. Is there an inode for every directory? If not, how does Linux manage directories?
|
Directories are special files, hence they have inodes.
You can test that with ls:
ls -li
or using stat:
stat -c '%F : %i : %n' *
Example:
% stat -c '%F : %i : %n' *
regular file : 670637 : bar.csv
regular file : 656301 : file.txt
directory : 729178 : foobar
The number in the middle is the inode number.
| Is there an inode for a directory? |
1,401,431,048,000 |
Two known facts:
In Linux, moving a file from one location to another on the same file system doesn't change the inode (the file remains at "the same place", only the directories involved are changed)
Copying, however, generates a truly new file, with a new inode.
Armed with this information, I observed the following phenomenon:
$ ls -li /tmp/*.db
1452722 -rw-r--r-- 1 omerda omerda 245760 Jul 7 12:33 /tmp/vf4.db
$
$ cp /tmp/vf4.db /tmp/vf4.2.db
$ ls -li /tmp/*.db # New inode introduced
1452719 -rw-r--r-- 1 omerda omerda 245760 Jul 7 12:38 /tmp/vf4.2.db
1452722 -rw-r--r-- 1 omerda omerda 245760 Jul 7 12:33 /tmp/vf4.db
$
$ mv /tmp/vf4.2.db /tmp/vf4.db
$ ls -li /tmp/*.db
1452719 -rw-r--r-- 1 omerda omerda 245760 Jul 7 12:38 /tmp/vf4.db
$
$ cp /tmp/vf4.db /tmp/vf4.2.db
$ ls -li /tmp/*.db # Original inode appears again! (1452722)
1452722 -rw-r--r-- 1 omerda omerda 245760 Jul 7 12:41 /tmp/vf4.2.db
1452719 -rw-r--r-- 1 omerda omerda 245760 Jul 7 12:41 /tmp/vf4.db
$
$ mv /tmp/vf4.2.db /tmp/vf4.db
$ ls -li /tmp/*.db
1452722 -rw-r--r-- 1 omerda omerda 245760 Jul 7 12:41 /tmp/vf4.db
This "round trip" always results in the original inode being attached to the original file again. I would have expected a fresh new inode being used in each copy.
How does it come to reuse the same inode?
EDIT
In the comments section some asked for a context. So the context is that this bad practice is used by some sqlite wrappers to replace db files without sqlite3 showing errors about the replacement. However, this is not a question about sqlite, please stick to the topic and question.
|
How does it come to reuse the same inode?
In ext4, the inode numbers are just indexes to a table that contains the actual inode data. The lore tells that's what the "i" means, "index". It's not actually stored as a single consecutive table, but that doesn't matter.
The particular inode number you get is one that happens to be free at the time, and it makes sense for the filesystem code to implement the choice deterministically, so if it chose 1452722 before 1452719 the first time, it makes sense for it to pick 1452722 again if it's now free, and if no other changes were done in between than making the copy. It can't reserve the inode number permanently for a particular incarnation of a file, since that would quickly lead to a filesystem full of unusable inode entries reserved for deleted files.
You can't count on getting a particular inode number, mostly because some other process on the system might be creating files at the same time, reserving the inode number you expected to get before you recreate the file. Or the other process could remove files, causing the filesystem code to give out one of those next. The way the filesystem is organized into block groups with their associated inodes might also mean that simply growing a file could change where the filesystem goes to look for a free inode. Or it might not. All the inode number does is identify the file as it currently exists, and when creating a new file, you just get one that doesn't correspond to any existing file.
And also, on other types of filesystems, like VFAT, there are no static inode numbers but instead you might just get a running counter.
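What you can rely on is only the weaker invariant from the question: within one filesystem, mv keeps the inode and cp allocates a fresh one; which number the fresh one gets is up to the filesystem. A sketch (file names are arbitrary):

```shell
echo x > /tmp/ino-a
orig=$(stat -c %i /tmp/ino-a)
mv /tmp/ino-a /tmp/ino-b                 # rename: same inode
[ "$(stat -c %i /tmp/ino-b)" = "$orig" ] && echo "mv kept the inode"
cp /tmp/ino-b /tmp/ino-c                 # copy: a new, unpredictable inode
[ "$(stat -c %i /tmp/ino-c)" != "$orig" ] && echo "cp got a different one"
```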
| How inodes numbers are assigned |
1,401,431,048,000 |
I'm aware that Linux does not allow hard-linking to a directory. I read somewhere,
that this is to prevent unintentional loops (or graphs, instead of the more desirable tree structure) in the file-system.
that some *nix systems do allow the root user to hard-link to directories.
So, if we are on one such system (that does allow hard-linking to a directory) and if we are the root user, then how is the parent directory entry, .., handled following the deletion of the (hard-link's) target and its parent?
a (200)
\-- . (200)
\-- .. (100)
\-- b (300)
| \-- . (300)
| \-- .. (200)
| \-- c (400)
| \-- . (400)
| \-- .. (300)
| \-- d (500)
<snip>
|
\-- H (400)
(In the above figure, the numbers in the parentheses are the inode addresses.)
If a/H is an (attempted) hard-link to the directory a/b/c, then
What should be the reference count stored in the inode 400: 2, 3, or 4? In other words, does hard-linking to a directory increase the reference count of the target directory's inode by 1 or by 2?
If we delete a/b/c, the . and .. entries in inode 400 continue to point to valid inodes 400 and 300, respectively. But what happens to the reference count stored in inode 400 if the directory tree a/b is recursively deleted?
Even if the inode 400 could be kept intact via a non-zero reference count (of either 1 or 2 - see the preceding question) in it, the inode address corresponding to .. inside inode 400 would still become invalid!
Thus, after the directory tree b stands deleted, if the user changes into the a/H directory and then does a cd .. from there, what is supposed to happen?
Note: If the default file-system on Linux (ext4) does not allow hard-linking to directories even by a root user, then I'd still be interested in knowing the answer to the above question for an inode-based file-system that does allow this feature.
|
Hard links to directories aren't fundamentally different to hard links for files. In fact, many filesystems do have hard links on directories, but only in a very disciplined way.
In a filesystem that doesn't allow users to create hard links to directories, a directory's links are exactly
the . entry in the directory itself;
the .. entries in all the directories that have this directory as their parent;
one entry in the directory that .. points to.
An additional constraint in such filesystems is that from any directory, following .. nodes must eventually lead to the root. This ensures that the filesystem is presented as a single tree. This constraint is violated on filesystems that allow hard links to directories.
Filesystems that allow hard links to directories allow more cases than the three above. However they maintain the constraint that these cases do exist: a directory's . always exists and points to itself; a directory's .. always points to a directory that has it as an entry. Unlinking a directory entry that is a directory only removes it if it contains no entry other than . and ...
Thus a dangling .. cannot happen. What can go wrong is that a part of the filesystem can become detached: a directory's .. can end up pointing to one of its descendants, so that following ../../../.. eventually forms a loop. (As seen above, filesystems that don't allow hard link manipulations prevent this.) If all the paths from the root to such a directory are unlinked, the part of the filesystem containing this directory cannot be reached anymore, unless there are processes that still have their current directory on it. That part can't even be deleted since there's no way to get at it.
GCFS allows directory hard links and runs a garbage collector to delete such detached parts of the filesystem. You should read its specification, which addresses your concerns in details. This is an interesting intellectual exercise, but I don't know of any filesystem that's used in practice that provides garbage collection.
| Linux: How does hard-linking to a directory work? |
1,401,431,048,000 |
If I create a small filesystem, and grow it when I need to, will the number of inodes increase proportionally?
I want to use Docker with the overlay storage driver. This can be very inode hungry because it uses hardlinks to merge lower layers. (The original aufs driver effectively stacked union mounts, which didn't require extra inodes, but instead caused extra directory lookups at runtime). EDIT: hardlinks don't use extra inodes themselves; I can only think the issue is the extra directories which have to be created.
(Closed question here. I believe the answer is incorrect. However it says the question is closed, and that I need to create a new one).
|
Yes. See man mkfs.ext4:
-i bytes-per-inode
Specify the bytes/inode ratio. mke2fs creates an inode for
every bytes-per-inode bytes of space on the disk. The larger
the bytes-per-inode ratio, the fewer inodes will be created.
This value generally shouldn't be smaller than the blocksize of
the filesystem, since in that case more inodes would be made
than can ever be used. Be warned that it is not possible to
change this ratio on a filesystem after it is created, so be
careful deciding the correct value for this parameter. Note
that resizing a filesystem changes the number of inodes to maintain this ratio.
I verified this experimentally, resizing from 1G to 10G and looking at tune2fs /dev/X | grep Inode. The inode count went from 64K to about 640K.
I believe it's a natural consequence of Unix filesystems which use "block groups". The partition is divided into block groups, each of which has their own inode table. When you extend the filesystem, you're adding new block groups.
| If I grow an ext4 partition, will it increase the number of inodes available? |
1,401,431,048,000 |
Say I want to observe how the flow from file name to cluster on hard disc goes.
I get the inode number of a file (which is mapped in the directory's data):
1863 autorun.inf
So now I know that I have to look for the inode numbered 1863, which will contain the pointers to the data on the hard disc.
Where is the inode data located and how does the os know where to find it?
|
Inode data are usually scattered around the disk (in order to cut down on seeks). Being able to tell where the inode structures are is the core functionality of a filesystem driver - check LXR for the current implementation of ext3 in Linux, or the e2fsprogs sources, if you are interested in the details.
From a user's perspective you might want to take a look at dumpe2fs, which will give you some information about an ext2-based (ext3/ext4) filesystem structure.
| Location of inodes (ext)? |
1,401,431,048,000 |
I'm trying to understand how inode numbers (as displayed by ls -i) work with ext4 partitions.
I'm trying to understand whether they are a construct of the linux kernel and mapped to inodes on disk, or if they actually are the same numbers stored on disk.
Questions:
Do inode numbers change when a computer is rebooted?
When two partitions are mounted, can ls -i produce the same inode number for two different files as long as they are on different partitions?
Can inode numbers be recycled without rebooting or re-mounting partitions?
Why I'm asking...
I want to create a secondary index on a USB hard drive with 1.5TB of data and around 20 million files (filenames). Files range from 10s of bytes to 100s of GB. Many of them are hard linked multiple times, so a single file (blob on disk) might have anything up to 200 file names.
My task is to save space on disk by detecting duplicates and replacing the duplication with even more hard links.
Now as a single exercise, I think I can create a database of every file on disk: its shasum, permissions, etc. Once built, detecting duplication should be trivial. But I need to be certain I am using the right unique key. Filenames are inappropriate due to the large number of existing hard links. My hope is that I can use inode numbers.
What I would like to understand is whether or not the inode number is going to change when I next reboot my machine. Or if they are even more volatile (will they change while I'm building my database?)
All the documentation I read fudges the distinction between inode numbers as presented by the kernel and inodes on disk. Whether or not these are the same thing is unclear based on the articles I've already read.
|
I'm trying to understand how inode numbers (as displayed by ls -i) work with ext4 partitions.
Essentially, an inode is a reference within a filesystem(!), a bridge between the actual data on disk (the bits and bytes) and the name associated with that data (/etc/passwd for instance). Filenames are organized into directories, where a directory entry is a filename with its corresponding inode number.
The inode then contains the actual information - permissions, which blocks are occupied on disk, owner, group, etc. In How are directory structures stored in UNIX filesystem, there is a very nice diagram that explains the relation between files and inodes a bit better.
And when you have a file in another directory pointing to the same inode number, you have what is known as hard link.
Now, notice I've emphasized that inode is reference specific to filesystem, and here's the reason to be mindful of that:
The inode number of any given file is unique to the filesystem, but not necessarily unique to all filesystems mounted on a given host. When you have multiple filesystems, you will see duplicate inode numbers between filesystems, this is normal.
This is in contrast to devices. You may have multiple filesystems on the same device, such as /var filesystem and /, and yet they're on the same drive.
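For this reason, the truly unique key for a file on a running system is the (device, inode) pair rather than the inode number alone; stat can print both (shown here for / and /proc, assuming they are on different filesystems, as is normal on Linux):

```shell
stat -c '%d:%i  %n' / /proc    # %d is the device number, %i the inode number
# the device fields differ, so equal inode numbers on the two
# filesystems would still not collide as a (device, inode) key
```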
Now, can an inode number change? Sort of. The filesystem is responsible for managing inodes, so unless there are underlying issues with the filesystem, an inode number shouldn't change. In certain tricky cases it can, though: the vim text editor, for example,
renames the old file, then writes a new file with the original name, if it thinks it can re-create the original file's attributes. If you want to reuse the existing inode (and so risk losing data, or waste more time making a backup copy), add set backupcopy=yes to your .vimrc.
The key point to remember is that where data might be the same to the user, under the hood it actually is written to new location on disk, hence the change in inode number.
So, to make things short:
Do inode numbers change when a computer is rebooted?
Not unless there's something wrong with the filesystem after the reboot.
When two partitions are mounted, can ls -i produce the same inode number for two different files as long as they are on different partitions?
Yes, since two different partitions will have different filesystems. I don't know a lot about LVM, but under that type of storage management two physical volumes could be combined into a single logical volume, which would, at my theoretical guess, be a case where ls -i would produce one inode per file.
Can inode numbers be recycled without rebooting or re-mounting partitions?
The filesystem does that when a file is removed (that is, when all links to the file are removed, and there's nothing pointing to that inode).
My task is to save space on disk by detecting duplicates and replacing the duplication with even more hard links.
Well, detecting duplication can be done via md5sum or another checksum command. In that case you're examining the actual data, which may or may not live under different inodes on disk. One example is from heemayl's answer:
find . ! -empty -type f -exec md5sum {} + | sort | uniq -w32 -dD
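A minimal sketch of the replace-with-hardlink step that follows duplicate detection (file names are hypothetical; a careful tool would also compare contents byte-for-byte with cmp before linking, since checksums can in principle collide):

```shell
mkdir -p /tmp/dedup && cd /tmp/dedup
echo "same content" > a
echo "same content" > b                   # duplicate data, separate inode
if cmp -s a b; then                       # contents really are identical
    ln -f a b                             # replace b with a hard link to a
fi
[ "$(stat -c %i a)" = "$(stat -c %i b)" ] && echo "one inode, two names"
```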
| How do inode numbers from ls -i relate to inodes on disk |
1,401,431,048,000 |
I want to make an ext2 file system. I want to set the "number-of-inodes" option to some number. I tried several values:
if -N 99000 then Inode count: 99552
if -N 3500 then Inode count: 3904
But the resulting value is never the same as the one I requested. Why?
I call mkfs this way
sudo mkfs -q -t ext2 -F /dev/sda2 -b 4096 -N 99000 -O none,sparse_super,large_file,filetype
I check results this way
$ sudo tune2fs -l /dev/sda2
tune2fs 1.46.5 (30-Dec-2021)
Filesystem volume name: <none>
Last mounted on: <not available>
Filesystem UUID: 11111111-2222-3333-4444-555555555555
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: filetype sparse_super large_file
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 99552
Block count: 1973720
Reserved block count: 98686
Overhead clusters: 6362
Free blocks: 1967353
Free inodes: 99541
First block: 0
Block size: 4096
Fragment size: 4096
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 1632
Inode blocks per group: 102
Filesystem created: Thu Apr 6 20:00:45 2023
Last mount time: n/a
Last write time: Thu Apr 6 20:01:49 2023
Mount count: 0
Maximum mount count: -1
Last checked: Thu Apr 6 20:00:45 2023
Check interval: 0 (<none>)
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Default directory hash: half_md4
Directory Hash Seed: 61ff1bad-c6c8-409f-b334-f277fb29df54
|
The number hasn't been ignored, it's been rounded up. It looks like space for inodes is allocated in groups. See in your output:
Inodes per group: 1632
When you request 99,000 inodes, that's not divisible by 1,632. So to ensure that you get the number of inodes you requested, the number has been rounded up to 99,552 which is divisible by 1,632.
It looks like this limit might be somehow derived from the number of block groups, where the number of inodes in each group is uniform across all block groups. My guess is that the number of inodes per block group is calculated as the number of inodes requested divided by the number of block groups and then rounded up to a whole number. See Ext2 on OSDev Wiki
What is a Block Group? Blocks, along with inodes, are divided up into "block groups." These are nothing more than contiguous groups of
blocks.
Each block group reserves a few of its blocks for special purposes
such as:
...
A table of inode structures that belong to the group
..
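Putting numbers to that guess using the tune2fs output above (a sketch; it assumes the per-group inode count is additionally rounded up so the inode table fills whole 4096-byte blocks of 256-byte inodes, i.e. to a multiple of 16 - which is consistent with the reported 102 inode blocks per group):

```shell
blocks=1973720; blocks_per_group=32768
groups=$(( (blocks + blocks_per_group - 1) / blocks_per_group ))      # 61 block groups
requested=99000
per_group=$(( (requested + groups - 1) / groups ))                    # ceiling -> 1623
per_block=$(( 4096 / 256 ))                                           # 16 inodes per 4 KiB block
per_group=$(( (per_group + per_block - 1) / per_block * per_block ))  # round up -> 1632
echo $(( per_group * groups ))                                        # 99552, as tune2fs reports
```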
| mkfs ext2 ignore number-of-inodes |
1,401,431,048,000 |
I use Vim 8.2 to edit my files on Ubuntu 18.04. When I open a file, make some changes and quit with Vim, the inode number of this file changes.
As I understand it, this is because the backup mechanism of my Vim is enabled, so each edit creates a new file (.swp file) to replace the old one. A new file has a new inode number. That's it.
But I found something weird.
As you can see below, after the first vim 11.cpp, the inode changed: 409980 became 409978. However, after creating a hard link to the file 11.cpp, no matter how I modify 11.cpp with Vim, its inode number doesn't change anymore. And if I delete the hard link xxx, its inode number changes with each Vim edit again.
This really makes me confused.
$ ll -i ./11.cpp
409980 -rw-rw-r-- 1 zyh zyh 504 Dec 22 17:23 ./11.cpp
$ vim 11.cpp # append a string "abc" to the file 11.cpp
$ ll -i ./11.cpp
409978 -rw-rw-r-- 1 zyh zyh 508 Dec 22 17:25 ./11.cpp
$ vim ./11.cpp # remove the appended "abc"
$ ll -i ./11.cpp
409980 -rw-rw-r-- 1 zyh zyh 504 Dec 22 17:26 ./11.cpp
$ ln ./11.cpp ./xxx # create a hard link
$ ll -i ./11.cpp
409980 -rw-rw-r-- 2 zyh zyh 504 Dec 22 17:26 ./11.cpp
$ vim 11.cpp # append a string "abc" to the file 11.cpp
$ ll -i ./11.cpp
409980 -rw-rw-r-- 2 zyh zyh 508 Dec 22 17:26 ./11.cpp
$ vim 11.cpp # remove the appended "abc"
$ ll -i ./11.cpp
409980 -rw-rw-r-- 2 zyh zyh 504 Dec 22 17:26 ./11.cpp
|
It seems the setting backupcopy is auto (run :set backupcopy? in Vim to confirm).
The main values are:
yes make a copy of the file and overwrite the original one
no rename the file and write a new one
auto one of the previous, what works best
[…]
The auto value is the middle way: When Vim sees that renaming the file is possible without side effects (the attributes can be passed on and the file is not a link) that is used. When problems are expected, a copy will be made.
In case it's not clear: yes (copy and overwrite) does not change the inode number, no (rename and write anew) does change it.
In your case at first auto was like no. After ln ./11.cpp ./xxx Vim noticed there is another link and auto worked like yes.
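The two strategies are easy to imitate outside Vim (file names are arbitrary): overwriting in place keeps the inode, while writing a new file and renaming it over the old one changes it.

```shell
echo v1 > /tmp/bc-demo
i0=$(stat -c %i /tmp/bc-demo)

# backupcopy=yes style: overwrite the existing file in place
echo v2 > /tmp/bc-demo
[ "$(stat -c %i /tmp/bc-demo)" = "$i0" ] && echo "overwrite kept the inode"

# backupcopy=no style: write a new file, then rename it over the old one
echo v3 > /tmp/bc-demo.new && mv /tmp/bc-demo.new /tmp/bc-demo
[ "$(stat -c %i /tmp/bc-demo)" != "$i0" ] && echo "rename changed the inode"
```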
| Why didn't inode change anymore with a hard link |
1,401,431,048,000 |
I get the feeling I'm going insane, or tmpfs is very broken for long term use.
I have a workload that very rapidly creates and unlinks files in /dev/shm/[some directory tree]. Linux Slab usage (in size-64 and size-128) increases linearly with inodes allocated/unlinked and never drops (the memory is listed as unreclaimable via meminfo, and slabinfo shows many millions of active objects).
This memory is never reclaimed, and if allowed to continue, OOM. The only fix is unmounting and remounting /dev/shm.
Another user asked this question a few years ago, but the answer did not actually cover the problem in question (operation in /dev/shm causes overflow).
Is this simply a design decision for tmpfs or is there something else going on here? It feels terribly broken that inodes would never be freed once allocated.
Timeline: Process creates 5 million files, one at a time, and unlinks each immediately after creation. All user processes killed at this point. Memory usage is as if 5 million inodes are still in /dev/shm, although df -i and df -h report that /dev/shm is essentially empty. Further iterations of the process loop linearly increase memory usage until the system is totally out of memory and OOMs.
EDIT: For anyone stumbling on this later, this seems to be an artifact of the older kernel I was running (SLES 11, 2.6.32-something). Newer kernels cannot reproduce the problem.
|
For the sake of clarity I'm adding a more-or-less scripted test of what we talked about in the comments. This is on kernel 4.7.2 where the issue does not happen either:
$ cd /dev/shm
$ free
total used free shared buff/cache available
Mem: 1794788 673948 873668 19300 247172 963316
Swap: 2097148 0 2097148
$ for i in `seq 100000`; do touch node$i; done
$ ls -1|wc -l # oops, there are three extra pulseaudio files here
100003
$ free
total used free shared buff/cache available
Mem: 1794788 738240 811944 19300 244604 890184
Swap: 2097148 0 2097148
OK, we get the memory footprint. But rm clears it
$ rm node*
$ free
total used free shared buff/cache available
Mem: 1794788 671484 896524 19300 226780 965884
Swap: 2097148 0 2097148
The match is not perfect because I cleaned some caches in the meantime. But the amount of free memory and memory in cache is the same at the beginning and at the end of this little experiment.
So yes, the issue happens only in an old kernel version, which would indicate that there was a bug, but it has since been fixed.
| Unlinking files in /dev/shm not freeing memory? |
1,401,431,048,000 |
I know a little about the Linux kernel. In FreeBSD, the "vnode" is actually similar to the "inode" in the Linux kernel.
Yet there is also an "inode" concept in FreeBSD and Solaris.
So my question is: what is the "inode" in FreeBSD for?
The link below is a good read.
Thank you.
http://hub.opensolaris.org/bin/view/Community+Group+advocacy/solaris-linux-freebsd
All three operating systems use a data abstraction layer to hide file
system implementation details from applications. In all three OSes,
you use open, close, read, write, stat, etc. system calls to access
files, regardless of the underlying implementation and organization of
file data. Solaris and FreeBSD call this mechanism VFS ("virtual file
system") and the principle data structure is the vnode, or "virtual
node." Every file being accessed in Solaris or FreeBSD has a vnode
assigned to it. In addition to generic file information, the vnode
contains pointers to file-system-specific information. Linux also uses
a similar mechanism, also called VFS (for "virtual file switch"). In
Linux, the file-system-independent data structure is an inode. This
structure is similar to the vnode on Solaris/FreeBSD. (Note that there
is an inode structure in Solaris/FreeBSD, but this is
file-system-dependent data for UFS file systems). Linux has two
different structures, one for file operations and the other for inode
operations. Solaris and FreeBSD combine these as "vnode operations."
|
An inode is a structure in some file systems that holds a file or directory's metadata (all the information about the file, except its name and data). It holds information about permissions, ownership, creation and modification times, etc.
Systems that offer a virtualised file system access layer (FreeBSD, Solaris, Linux) can support different underlying file systems, which may or may not utilise inodes. ReiserFS, for example, doesn't use them, whereas FreeBSD's ffs2 does. The abstraction layer through which you access the file system provides a single, well-defined interface for file operations, so that applications don't need to know about the differences between file system implementations.
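As a quick illustration (a hedged sketch, not specific to any one filesystem): the stat() call exposes exactly the metadata an inode holds, and the file's name is notably absent from the result.

```python
import os
import stat as statmod
import tempfile

# Create a throwaway file so the example is self-contained.
fd, path = tempfile.mkstemp()
os.close(fd)

st = os.stat(path)                     # reads the inode's metadata
perms = statmod.S_IMODE(st.st_mode)    # permission bits
owner = (st.st_uid, st.st_gid)         # ownership
mtime = st.st_mtime                    # modification time
inode_number = st.st_ino               # the inode number itself
links = st.st_nlink                    # how many names point at this inode

# Note: the file's *name* is nowhere in `st`; it lives in the directory entry.
os.unlink(path)
```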
| what is inode for, in FreeBSD or Solaris |
1,401,431,048,000 |
I do not really understand where the tables which contain i-nodes are located. My teacher said that each physical disk has a table of i-nodes, after which there is the files' data. But, on the Internet, I found that each directory has its own table of the inodes and the names associated with the files inside it.
Are these two different tables (concepts), or is one of them wrong?
Thank you.
|
My teacher said that each physical disk has a table of i-nodes, after which there is the files' data.
This is broadly correct. More precisely, there's a table of inodes on each filesystem, and there's a separate filesystem on each partition. (Things can get more complicated but we don't need to get into these complications here.)
A filesystem's inode table maps inode numbers to file metadata. It's typically a large array of fixed-size structures; for example, element number 1234 of this array is inode number 1234. The inode contains information such as the file's permissions, its modification time, its file type, etc., as well as an indication of where the file's contents are located.
But, on the Internet, I found that each directory has its own table of the inodes and names associated to the files inside it.
That's a table that maps file names to inode numbers. That is, the directory is a list of entries (or some more sophisticated data structure), and each element of the list contains a file name and an inode number. To find the file's metadata and contents, the system reads the inode number from the directory, then reads the designated entry in the inode table. To find a file given its path, the system starts with the root inode, finds that it's a directory, finds the directory entry for the first element, reads its inode, and so on.
Note that this is a typical design for a filesystem, but not the only possible one. Most Unix-oriented filesystems follow this design, but other designs exist.
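To make the two tables concrete, here is a toy model in Python (all numbers invented, purely illustrative): directories map names to inode numbers, and the per-filesystem inode table maps numbers to metadata.

```python
# Toy inode table: inode number -> metadata (no file names in here).
inode_table = {
    2:    {"type": "dir",  "mode": 0o755},                      # root directory
    50:   {"type": "dir",  "mode": 0o755},                      # /home
    1234: {"type": "file", "mode": 0o644, "blocks": [4096, 4097]},
}

# Toy directory contents: a directory's inode number -> {name: inode number}.
directories = {
    2:  {"home": 50},
    50: {"notes.txt": 1234},
}

def resolve(path):
    """Walk a path the way the kernel does: directory entry -> inode, repeat."""
    ino = 2                                   # start at the root inode
    for name in path.strip("/").split("/"):
        entries = directories[ino]            # read the directory's contents
        ino = entries[name]                   # name -> inode number
    return ino, inode_table[ino]

ino, meta = resolve("/home/notes.txt")
```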
| Where are i-node tables stored? |
1,401,431,048,000 |
I am using:
debugfs -R 'stat <7473635>' /dev/sda7
to get the file creation time (crtime).
Inode: 7473635 Type: regular Mode: 0664 Flags: 0x80000
Generation: 1874934325 Version: 0x00000000:00000001
User: 1000 Group: 1000 Size: 34
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 8
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x55b65ebc:98040bc4 -- Mon Jul 27 22:09:24 2015
atime: 0x55da0168:60b33f74 -- Sun Aug 23 22:52:48 2015
mtime: 0x55b65ebc:98040bc4 -- Mon Jul 27 22:09:24 2015
crtime: 0x55b65ebc:970fe7cc -- Mon Jul 27 22:09:24 2015
Size of extra inode fields: 28
EXTENTS:
(0):29919781
Why am I not getting crtime in nanoseconds even though ext4 supports nanosecond resolution?
|
It does show the timestamp (with nanosecond precision), but in hex; it's the field after crtime:, e.g. in your output 0x55b65ebc:970fe7cc. The part after the colon holds the nanoseconds.
This article gives more details and explains how to calculate the timestamp/nanoseconds. So, e.g. to convert the hex values to a timestamp a la stat you could run:
date -d @$(printf %d 0x55b65ebc).$(( $(printf %d 0x970fe7cc) / 4 )) +'%F %T.%N %z'
2015-07-27 19:39:24.633600499 +0300
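The same conversion in Python, using the values from the output above (in ext4's extra timestamp field, the lower 2 bits extend the 32-bit seconds and the upper 30 bits are the nanoseconds, hence the division by 4):

```python
seconds_hex, extra_hex = 0x55b65ebc, 0x970fe7cc

seconds = seconds_hex                 # plain 32-bit Unix time
nanoseconds = extra_hex >> 2          # upper 30 bits; same as dividing by 4
epoch_bits = extra_hex & 0b11         # lower 2 bits extend the epoch (0 here)

print(f"{seconds}.{nanoseconds:09d}")   # 1438015164.633600499
```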
| Why debugfs doesn't show crtime in nanoseconds? |
1,401,431,048,000 |
I am investigating Tripwire and have stumbled upon something I am unsure about. In a Tripwire report generated after I modified hosts.deny to add an extra #, I noticed the inode number changed from 6969 to 6915. I would like to know why this happened. I know inodes are records which store information about where a file's data is stored on the file system, but I would like to know why this number changed for a simple # being inserted.
|
Standard behavior for text editors is to rename the original file to a temporary name before writing out changes, so if there is a problem (such as out of disk space) you don't lose the file entirely. Thus the file gets a new inode number. If the editor is configured to leave the original as a backup file, you'll find the backup file has the original inode number; if not, then the backup will have been deleted after the new file was successfully written.
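You can watch this happen with a small Python sketch (file names invented): write the new contents to a temporary file and rename it over the original, the way most editors save, and the inode number changes.

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "hosts.deny")

with open(original, "w") as f:
    f.write("ALL: ALL\n")
ino_before = os.stat(original).st_ino

# Editor-style save: write the edited copy, then rename it into place.
edited = original + ".tmp"
with open(edited, "w") as f:
    f.write("# ALL: ALL\n")
os.rename(edited, original)               # atomic replace

ino_after = os.stat(original).st_ino
# Both files existed at once, so they necessarily had different inodes.
```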
| tripwire report - inode number |
1,401,431,048,000 |
Formatting a disk for purely large video files, I calculated what I thought was an appropriate bytes-per-inode value, in order to maximise usable disk space.
I was greeted, however, with:
mkfs.ext4: invalid inode ratio [RATIO] (min 1024/max 67108864)
I assume the minimum is derived from what could even theoretically be used - no point having more inodes than could ever be utilised.
But where does the maximum come from? mkfs doesn't know the size of files I'll put on the filesystem it creates - so unless it was to be {disk size} - {1 inode size} I don't understand why we have a maximum at all, much less one as low as 67MB.
|
Because of the way the filesystem is built. It's a bit messy, and by default you can't even get the ratio anywhere near as low as 1 inode per 64 MB.
From the Ext4 Disk Layout document on kernel.org, we see that the file system internals are tied to the block size (4 kB by default), which controls both the size of a block group, and the amount of inodes in a block group. A block group has a one-block sized bitmap of the blocks in the group, and a minimum of one block of inodes.
Because of the bitmap, the maximum block group size is 8 × (block size in bytes) blocks, so on an FS with 4 kB blocks, the block groups are 32768 blocks, or 128 MB, in size. The inodes take one block at minimum, so for 4 kB blocks you get at least (4096 B/block) / (256 B/inode) = 16 inodes/block
or 16 inodes per 128 MB, or 1 inode per 8 MB.
At 256 B/inode, that's 256 B / 8 MB, or 1 byte per 32 kB, or about 0.003 % of the total size, for the inodes.
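The arithmetic above, spelled out (default ext4 parameters assumed):

```python
block_size = 4096                        # bytes, ext4 default
inode_size = 256                         # bytes, ext4 default

# A one-block bitmap has 8 * block_size bits, one per block in the group.
group_blocks = 8 * block_size            # 32768 blocks per group
group_bytes = group_blocks * block_size  # 128 MiB per group

inodes_per_block = block_size // inode_size             # 16 inodes in one block
min_bytes_per_inode = group_bytes // inodes_per_block   # 8 MiB per inode, minimum
```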
Decreasing the number of inodes would not help, you'd just get a partially-filled inode block. Also, the size of an inode doesn't really matter either, since the allocation is done by block. It's the block group size that's the real limit for the metadata.
Increasing the block size would help, and in theory, the maximum block group size increases in the square of the block size (except that it seems to cap at a bit less than 64k blocks/group). But you can't use a block size greater than the page size of the system, so on x86, you're stuck with 4 kB blocks.
However, there's the bigalloc feature that's exactly what you want:
for a filesystem of mostly huge files, it is desirable to be able to allocate disk blocks in units of multiple blocks to reduce both fragmentation and metadata overhead. The bigalloc feature provides exactly this ability.
The administrator can set a block cluster size at mkfs time (which is stored in the s_log_cluster_size field in the superblock); from then on, the block bitmaps track clusters, not individual blocks. This means that block groups can be several gigabytes in size (instead of just 128MiB); however, the minimum allocation unit becomes a cluster, not a block, even for directories.
You can enable that with mkfs.ext4 -Obigalloc, and set the cluster size with -C<bytes>, but mkfs does note that:
Warning: the bigalloc feature is still under development
See https://ext4.wiki.kernel.org/index.php/Bigalloc for more information
There are mentions of issues in combination with delayed allocation on that page and the ext4 man page, and the words "huge risk" also appear on the Bigalloc wiki page.
None of that has anything to do with that 64 MB / inode limit set by the -i option. It appears to just be an arbitrary limit set at the interface level. The number of inodes can also be set directly with the -N option, and when that's used, there are no checks. Also, the upper limit is based on the maximum block size of the file system, not the block size actually chosen as the structural limits are.
Because of the 64k blocks/group limit, without bigalloc there's no way to get as few inodes as a ratio of 64 MB / inode would imply, and with bigalloc, the number of inodes can be set much lower than that.
| Why is 67108864 the maximum bytes-per-inode ratio? Why is there a max? |
1,432,127,475,000 |
I have a large number of files that contain backslashes \ that I would like to manipulate, but whenever I try something like:
$ ls -li
2036553851 -rw-rw-r-- 1 user user 6757 May 20 00:10 Simplex_config\\B1:B3\\_1.csv
2036553766 -rw-rw-r-- 1 user user 6756 May 20 00:07 Simplex_config\\B1:B3\\_2.csv
2036554099 -rw-rw-r-- 1 user user 6785 May 20 00:20 Simplex_config\\B1:B3\\_3.csv
2036553974 -rw-rw-r-- 1 user user 6785 May 20 00:15 Simplex_config\\B1:B3\\_4.csv
$ find . -type f -name 'Simplex*.csv' | xargs cat > looksee.txt
I receive a No such file or directory error. I have considered changing the filenames and then manipulating, but I am curious to see if there was an easier solution with the inode.
I came up with:
#!/bin/sh
if [ -f looksee.txt ]; then
rm -rf looksee.txt
fi
ls -i Simplex_config*.csv | awk '{ print $1 }' > inode_list.txt
while IFS= read -r inode;
do
find . -inum $inode -exec cat {} \; >> looksee.txt
done < inode_list.txt
But this is very cumbersome and I would like to try to find a way to parse the output from ls -i Simplex_config*.csv and pipe it to another command in a one-liner -- is there such an option available?
|
1.
find . -type f -name 'Simplex*.csv' -print0 | xargs -0 cat > looksee.txt
From man xargs
--null
-0
Input items are terminated by a null character instead of by whitespace, and the quotes and backslash are not special (every character is taken literally). Disables the end of file string, which is treated like any other argument. Useful when input items might contain white space, quote marks, or backslashes. The GNU find -print0 option produces input suitable for this mode.
2.
find . -type f -name 'Simplex*.csv' -exec cat {} + > looksee.txt
From man find
-exec command ;
Execute command; true if 0 status is returned. All following arguments to find are taken to be arguments to the command until an argument consisting of ; is encountered. The string {} is replaced by the current file name being processed everywhere it occurs in the arguments to the command, not just in arguments where it is alone, as in some versions of find. Both of these constructions might need to be escaped (with a \) or quoted to protect them from expansion by the shell. The specified command is run once for each matched file. The command is executed in the starting directory. There are unavoidable security problems surrounding use of the -exec action; you should use the -execdir option instead.
-exec command {} +
This variant of the -exec action runs the specified command on the selected files, but the command line is built by appending each selected file name at the end; the total number of invocations of the command will be much less than the number of matched files. The command line is built in much the same way that xargs builds its command lines. Only one instance of {} is allowed within the command. The command is executed in the starting directory.
3.
cat Simplex_config* > looksee.txt
if all the files are at a single directory level.
| Manipulate multiple files by inode |
1,432,127,475,000 |
Setup
The following sequence of commands is setup for my question.
root@cd330f76096d:/# cd
root@cd330f76096d:~# ls
root@cd330f76096d:~# mkdir -p my_dir/my_subdir
root@cd330f76096d:~# ls -hAil
total 12K
6175969 -rw-r--r-- 1 root root 3.1K Oct 15 2021 .bashrc
6175970 -rw-r--r-- 1 root root 161 Jul 9 2019 .profile
7382820 drwxr-xr-x 3 root root 4.0K Sep 6 19:34 my_dir
Context
Notice that my_dir has three hard links, as per the output. Presumably they are:
./my_dir
my_dir/.
my_dir/my_subdir/..
However. . .
root@cd330f76096d:~# find . -xdev -inum 7382820
./my_dir
And that's it. Only one line.
Questions
What am I missing and/or how does ls -l work?
I'm half expecting that the reason I can't locate any more files with find is that they refer to . and .., in which case I ask how exactly ls -l works, ideally with references to the source code.
Pre setup
The example above was created in a docker container, which for convenience I'm sharing below:
$ docker pull ubuntu:jammy
jammy: Pulling from library/ubuntu
Digest: sha256:aabed3296a3d45cede1dc866a24476c4d7e093aa806263c27ddaadbdce3c1054
Status: Downloaded newer image for ubuntu:jammy
docker.io/library/ubuntu:jammy
$ docker run -it ubuntu:jammy bash
|
A pathname that find encounters (i.e., apart from the search paths given on the command line) cannot contain a . or .. component, so your command will never show these.
Why? Because the POSIX standard says so (my emphasis):
Each path operand shall be evaluated unaltered as it was provided, including all trailing <slash> characters; all pathnames for other files encountered in the hierarchy shall consist of the concatenation of the current path operand, a <slash> if the current path operand did not end in one, and the filename relative to the path operand. The relative portion shall contain no dot or dot-dot components, no trailing <slash> characters, and only single <slash> characters between pathname components.
("The current path operand" mentioned above is one of the search paths on the command line.)
The ls command can work out the link count of the directory because it makes a stat() call, which returns a stat structure, which contains the number of hard links. It strictly speaking does not know where the other hard links are located though.
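A small Python demonstration of that stat() call: the link count comes back as a plain number, with no record of where the other links live.

```python
import os
import tempfile

workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "a")
open(path, "w").close()

nlink_single = os.stat(path).st_nlink      # 1: just the one directory entry

os.link(path, os.path.join(workdir, "b"))  # add a hard link under another name
nlink_double = os.stat(path).st_nlink      # 2: stat knows the count,
                                           # but not that "b" is the other name
```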
| How does `ls` find hard links? |
1,432,127,475,000 |
In order to test an analysis tool, I need a file where the depth (eh.eh_depth) is greater than 1.
I've tried a couple of things:
A large file (1GiB)
Creating hundreds of smaller files (1MiB), deleting every other one, and then filling the disk with one file (hoping for massive fragmentation).
In both cases I only got a depth of 1!
I even tried manually modifying the inodes in a hex editor, but I ended up corrupting the file system.
I wondered if it could be done with debugfs, but I can't see how?
PS: I have seen the 'increasing depth of extent tree in ext4' question on stackoverflow, but I don't really want to create a 174GiB file.
|
If you want a file with a lot of extents, just do:
$ perl -we 'for ($i=0;$i<100000;$i++) {seek STDOUT,$i*8192,0; print "."}' > a
$ ll a
-rw-r--r-- 1 stephane stephane 819191809 Dec 15 23:50 a
$ filefrag a
a: 100000 extents found
That's a sparse file where every other block is sparse, so it forces the extents to be 4KiB large.
debugfs: dump_extents a
Level Entries Logical Physical Length Flags
0/ 2 1/ 1 0 - 199998 33413 199999
1/ 2 1/295 0 - 679 33409 680
2/ 2 1/340 0 - 0 34816 - 34816 1
2/ 2 2/340 2 - 2 34818 - 34818 1
[...]
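The same trick works from any language that can seek past the end of written data; a Python sketch at a smaller scale, writing to a temporary file:

```python
import os
import tempfile

fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    for i in range(1000):
        f.seek(i * 8192)      # leave a hole between each 1-byte write
        f.write(b".")

size = os.path.getsize(path)  # apparent size: 999 * 8192 + 1 bytes
```

On filesystems with sparse-file support, the holes consume no disk blocks, so filefrag would report one extent per written block, just as with the perl version.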
| Can I create a file on ext4 with a depth > 1 for testing purposes? |
1,432,127,475,000 |
My root filesystem is running out of inodes. If this were an issue of disk space, I'd use du -s to get a top-level overview of where the space is going, then head down the directory tree to find particular offenders. Is there an equivalent option for inodes?
The answers in this question will point out individual directories with high usage, but in my case, that's no good: the Linux source directory, for example, gets scattered across 3000+ directories with low inode counts, rather than showing up as /usr/src/linux-4.0.5 52183.
|
With GNU coreutils (Linux, Cygwin) since version 8.22, you can use du --inodes, as pointed out by lcd047.
If you don't have recent GNU coreutils, and there are no hard links in the tree or you don't care if they're counted once per link, you can get the same numbers by filtering the output of find. If you want the equivalent of du -s, i.e. only toplevel directories, then all you need is to count the number of lines with each toplevel directory name. Assuming that there are no newlines in file names and that you only want non-dot directories in the current directory:
find */ | sed 's!/.*!!' | uniq -c
If you want to show output for all directories, with the count for each directory including its subdirectories, you need to perform some arithmetic.
find . -depth | awk '{
# Count the current depth in the directory tree
slashes = $0; gsub("[^/]", "", slashes); current_depth = length(slashes);
# Zero out counts for directories we exited
for (i = previous_depth; i <= current_depth; i++) counts[i] = 0;
# Count 1 for us and all our parents
for (i = 0; i <= current_depth; i++) ++counts[i];
# We don´t know which are regular files and which are directories.
# Non-directories will have a count of 1, and directories with a
# count of 1 are boring, so print only counts above 1.
if (counts[current_depth] > 1) printf "%d\t%s\n", counts[current_depth], $0;
# Get ready for the next round
previous_depth = current_depth;
}'
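If the awk feels opaque, here is a rough Python equivalent of the per-top-level-directory count (like the find version, hard links are counted once per name):

```python
import os

def entry_counts(root="."):
    """Count directory entries (roughly, inodes used) under each
    top-level directory of `root`."""
    counts = {}
    for top in sorted(os.listdir(root)):
        path = os.path.join(root, top)
        if not os.path.isdir(path):
            continue
        n = 0
        for _dirpath, _dirnames, filenames in os.walk(path):
            n += 1 + len(filenames)   # the directory itself, plus its files
        counts[top] = n
    return counts
```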
| Where are my inodes going? |
1,432,127,475,000 |
Is it possible to create a "file" that, essentially, is symlinked to multiple other files.
Let's say we have a /tmp/dir/ with 100 files in it. What I want is to be able to do is "cat /tmp/dir_allfiles" which would, in essence, be the same as cat /tmp/*
The real use case is more complicated where files may be at different directory levels, etc. so please don't suggest I just use find or cat */*/* or something similar.
I'm fine if I have to use C to do ridiculous / dangerous things. I'm mostly interested in if it's possible.
Here's some of my uname -a if you are curious 3.13.0-32-generic #57-Ubuntu SMP Tue Jul 15 03:51:08 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
|
That is definitely possible to do at the VFS level, e.g. using FUSE.
In particular, concat-fuse, looks pretty much like what you need:
concat-fuse is a FUSE based virtual filesystem for Linux that allows handling a collection of files as if they were a single one. It is essentially doing:
cat *.txt > large_file
But instead of creating large_file, it creates a virtual file and accesses the original *.txt files instead.
| Giving one file multiple inode values |
1,432,127,475,000 |
I have this command this shows me when a file has been modified under a concrete directory (excluding some paths):
inotifywait -m -q -r --format '%T % e %w%f' --excludei '/trash/' --timefmt '%d/%m/%Y-%H:%M:%S%z' /my/monitored/folder
Is there a way to combine this (or a similar) command with tail, so I can retrieve the last line of each modified file? It is important that this combination outputs the file's path and the last line added.
|
In your question you say that you want to scan if a file has been modified, but in your command there's no event specified.
So my answer will use the modify event:
inotifywait -m -q -r \
--format '%T % e %w%f' \
--excludei '/trash/' \
--timefmt '%d/%m/%Y-%H:%M:%S%z' /my/monitored/folder | \
while IFS=' ' read -r time event file; do
echo "file: $file"
echo "modified: $time"
last_line=$(tail -1 "$file")
echo "last line: $last_line"
echo
done
Which will output something like this:
file: /path/file.txt
modified: 17/02/2021-09:17:02-0300
last line: foo
| How to combine inotify with tail command to print last line of every modified file |
1,432,127,475,000 |
I have a pretty basic system running Ubuntu 16.04 (this question is not specific to Ubuntu, but rather ext4 partitions), 1 HDD, running a few partitions:
sda1 - EXT4 - 100G - /
sda2 - EXT4 - 723.5G - /home
sda3 - NTFS - 100G - (windows)
sda5 - SWAP - 8G
Whenever I try to access one of 3-4 files in a specific directory in the /home partition, (the specific folder causing the issues is /home/path/to/broken/folder), the /home partition will error and remount read-only. dmesg shows the following errors:
EXT4-fs error (device sda2): ext4_ext_check_inode:497: inode #1415: comm rm: pblk 0 bad header/extent: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)
Aborting journal on device sda2-8.
EXT4-fs (sda2): Remounting filesystem read-only
EXT4-fs error (device sda2): ext4_ext_check_inode:497: inode #1417: comm rm: pblk 0 bad header/extent: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)
EXT4-fs error (device sda2): ext4_ext_check_inode:497: inode #1416: comm rm: pblk 0 bad header/extent: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)
So I understand what is going on...some bad block is causing an error and is remounting the drive read-only to prevent further corruption. I know it is these specific files because I can undo the error by
Logging in as root
Running sync
Stopping lightdm (and all sub-processes)
Stop all remaining open files on /home by finding them with lsof | grep /home
Unmounting /home
Running fsck /home (fixing the errors)
Remount /home
Everything is fine again, read and write, until I try to access the same files again, then this entire process is repeated to fix it again.
The way I've tried to access the files is by running ls /home/path/to/broken/folder and rm -r /home/path/to/broken/folder, so it seems any kind of HDD operation on that part of the drive errors it and throws it into read-only again.
I honestly don't care about the files, I just want them gone. I am willing to remove the entire /home/path/to/broken/folder folder, but every time I try this, it fails and throws into read-only.
I ran badblocks -v /dev/sda2 on my hard drive, but it came out clean, no bad blocks. Any help would still be greatly appreciated.
Still looking for a solution to this. Some information that might be useful below:
$ debugfs -R 'stat <1415>' /dev/sda2
debugfs 1.42.13 (17-May-2015)
Inode: 1415 Type: regular Mode: 0644 Flags: 0x80000
Generation: 0 Version: 0x00000000
User: 0 Group: 0 Size: 0
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 0
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x5639ad86 -- Wed Nov 4 01:02:30 2015
atime: 0x5639ad86 -- Wed Nov 4 01:02:30 2015
mtime: 0x5639ad86 -- Wed Nov 4 01:02:30 2015
Size of extra inode fields: 0
EXTENTS:
Now I looked at this myself and compared it to what I suspect to be a non-corrupted inode:
$ debugfs -R 'stat <1410>' /dev/sda2
debugfs 1.42.13 (17-May-2015)
Inode: 1410 Type: regular Mode: 0644 Flags: 0x80000
Generation: 0 Version: 0x00000000
User: 0 Group: 0 Size: 996
File ACL: 0 Directory ACL: 0
Links: 1 Blockcount: 0
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x5639ad31 -- Wed Nov 4 01:01:05 2015
atime: 0x5639ad31 -- Wed Nov 4 01:01:05 2015
mtime: 0x5639ad31 -- Wed Nov 4 01:01:05 2015
Size of extra inode fields: 0
EXTENTS:
(0):46679378
The key differences (which I had bolded) are the Size field (0 versus 996) and the empty EXTENTS list. I looked at other non-corrupted inodes and they display something similar to 1410: a non-zero size and at least one extent.
Bad header/extent makes sense here...it has no extent....how do I fix this without reformatting my entire /home partition?
I really feel like I've handed this question to someone smarter than me on a silver platter, I just don't know what the meal (answer) is!
|
Finally found the answer from somebody else on another site, just zeroed the inodes and rechecked the system, that was all!
debugfs -w /dev/sda2
:clri <1415>
:clri <1416>
:clri <1417>
:q
fsck -y /dev/sda2
To anybody else with this issue, I found my bad inodes using find on the bad mount, then checked dmesg for errors on the bad inodes.
| Partition Errors and Remounts Read-Only when Accessing Specific File |
1,432,127,475,000 |
I understand the size reported by ls corresponds with number of inodes inside the directory, not their actual size.
I have noticed peculiar behavior, when displaying directory size with ls. Here is how to quickly reproduce it:
first create empty directory, the size reported by ls is 4096 (as expected)
mkdir test
ll -d test/
drwx------ 2 root root 4,096 2015-Dec-29 22:22:36 test/
create 10,000 files inside. Size reported is now 167,936
touch test/{1..9999}
ll -d test/
drwx------ 2 root root 167,936 2015-Dec-29 22:23:24 test/
remove all files. Size should decrease back to 4096
rm test/*
ll -d test/
drwx------ 2 root root 167,936 2015-Dec-29 22:23:59 test/
But the size is still reported as 167,936.
why?
can somebody explain this?
|
Generally, directory files are not cleaned up: the space they use is usually small enough (compared to their contents) that shrinking them isn't worth the effort, particularly when they might grow again. Finding an authoritative answer for this might be hard... Forum comments are easy:
Shrink/reset directory size?
Linux directories do not shrink automatically also gives some insight.
Why directory with large amounts of entries does not shrink in size after entries are removed? provides a Linux-specific authoritative answer
| size of directory reported by ls [duplicate] |
1,432,127,475,000 |
Are there any tools, or some other way in Linux, to view the internals of filesystems?
How do I view the inode-related structures and the journal? And the cached pages of files (the page cache)?
|
This will of course depend on what filesystem you are using.
e2fsprogs contains debugfs, which works with ext2, ext3 and ext4, and is used to manually view or modify the internal structures of the file system.
The man page for debugfs is here.
| Filesystem and journal layout |
1,432,127,475,000 |
After reading this post: https://stackoverflow.com/questions/14189944/unix-system-file-tables, I've basically understood how Linux manages the files.
But I don't know how to manage the offsets of the files.
As I understand it, each element (one line) in the Open File Table maintains its own offset. For example, I have two processes A and B, which are reading the same file. So I think the case should be as below:
Open File Table
____________ ______________
| processA | | offset: 12 | ------\
| fdA | ---------> |------------| \ INode Table
|----------| \______ ___________
/ | file |
____________ ______________ / |---------|
| processB | | offset: 15 | ------/
| fdB | ---------> |------------|
|----------|
So, process A has its own offset in Open File Table, so does process B. In the case above, process A is reading the file at the offset 12, process B is reading the file at the offset 15.
If I'm right, now I'm confused.
Now, suppose I have a process that opened a file named myfile and keeps writing strings into it. At some moment, I execute the command > myfile to empty the file. As I understand it, the writing process has its own offset, and the shell performing > myfile has another offset. > myfile only changed its own offset, so why does the writing process now start to write strings at the beginning of the file (offset now equal to 0) after > myfile is executed?
In a word, how does the writing process know that it should change its offset after > myfile is executed? Is there some offset-synchronisation mechanism?
|
In a word, how does the writing process knows that it should change the offset after executing > myfile?
It doesn’t. The file offset isn’t changed as a result of > myfile.
What happens to subsequent file operations depends on the circumstances. read returns 0 if the offset is past the end of the file. write adjusts the file offset to the end of the file if it was opened with O_APPEND; otherwise, the write happens at the requested offset, even if that results in adding missing data to the file.
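A Python sketch of the non-O_APPEND case described above (file name invented): truncating the file through one descriptor leaves the other descriptor's offset alone, and the next write lands at the old offset, leaving a hole of NUL bytes behind it.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "myfile")
with open(path, "w") as f:
    f.write("hello world")              # 11 bytes

fd = os.open(path, os.O_WRONLY)         # no O_APPEND
os.lseek(fd, 11, os.SEEK_SET)           # a writer that is 11 bytes in

open(path, "w").close()                 # the effect of "> myfile": truncate to 0

os.write(fd, b"X")                      # still lands at offset 11
os.close(fd)

with open(path, "rb") as f:
    data = f.read()                     # 11 NUL bytes, then b"X"
```

With O_APPEND instead, the kernel would move the offset to the (new) end of file on every write, which is why appending log writers appear to "follow" the truncation.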
| How does linux manage the offsets of files |
1,432,127,475,000 |
What are functions to manipulate sparse files under Linux? (let's say in C, notes about other systems highly welcome)
e.g.:
make hole inside of file by removing part of its inside
investigate structure , e.g. generate sequence of pairs denoting beginnings and ends of separated continuous blocks of data
split file into two at some point, by reassigning range of blocks (i.e. without moving actual data)
investigate inodes, and other relevant aspects? (maybe possible to assign some blocks to multiple files in copy-on-write manner?)
Context:
Original question that come to my mind and I arrived from was after man rsync of --sparse option:
Why does rsync's --sparse option conflict with --inplace?
Is it a limitation of the filesystem call API?
From a data-structure point of view, if the source sparse file is seen as a sequence of non-contiguous blocks of data, then I would expect "r"syncing to deallocate on the destination those ranges that do not exist at the source, allocate the missing ones, and update the rest accordingly (even with the standard rsync rolling-hash algorithm, treating all remaining sequences as one, or running separately on each).
Reference:
man rsync
-S, --sparse
Try to handle sparse files efficiently so they take up less space on the destination. Conflicts with --inplace because it's not possible to overwrite data in a sparse fashion.
|
Sparse files are designed to be transparent to userspace: holes are created by seeking past unused areas, and are read as blocks of zeroes. They can’t be detected using standard userspace APIs, at least not yet — as pointed out by Stéphane Chazelas, at least Solaris and Linux support the SEEK_DATA and SEEK_HOLE lseek(2) flags which allow userspace programs to find holes, and these flags might be added to POSIX at some point.
This explains the incompatibility between rsync’ --sparse and --inplace options: when writing to an existing file portably, holes can’t be created in existing data. --sparse works by rewriting the whole file, skipping over (long) sequences of zeroes, which results in sparse files on OSs and file systems which support them.
On Linux, you can retrieve details of files’ sparseness using the fiemap ioctl, and e2fsprogs’ filefrag(8); see Detailed sparse file information on Linux. On the writing side, you can use fallocate(2) (and the handy fallocate(1) utility) to punch holes in an existing file, making it sparse if the holes cover entire blocks. Support is file system dependent — only XFS, btrfs, ext4, and tmpfs currently support these operations. Recent kernels (since 4.1) and very recent versions of util-linux support inserting holes in files, shifting the content after the hole (fallocate -i, introduced in util-linux 2.30 which should be released soon).
Your last two questions are file system surgery, and I’m not sure there’s any generic system call or ioctl available to perform such operations. reflink-compatible file systems allow files to share their contents; this can be achieved using the FICLONE and FICLONERANGE ioctls.
| What are functions to manipulate sparse files under Linux? |
1,432,127,475,000 |
I created a hard link for the shadow file. For removing the passwd of the user I opened the shadow file in vi editor and removed the encrypted passwd and then saved. The inode value of the shadow file was changed. Then I updated the passwd of the user and again the inode value of the shadow file changed.
Why the inode of the shadow file changes when it is edited/updated?
|
The usual implementation of password changing involves hardlinking /etc/shadow to /etc/stmp (or some similar name; link() being atomic on local filesystems, this constitutes a kind of lock file mechanism), writing out a new one to a temporary file, then renaming the original /etc/shadow to /etc/shadow- or similar and renaming the temporary to /etc/shadow. This is done for robustness: at all times the original shadow file, unmodified, still exists and can be recovered easily even if the power fails at just the wrong time or something equally bad (unless it destroys the entire disk).
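The write-to-temp-then-rename dance described above, which is exactly why the inode number changes, can be sketched in Python (the helper name `atomic_update` is hypothetical):

```python
import os
import tempfile

def atomic_update(path, new_bytes):
    # Write the new contents to a temporary file first...
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(new_bytes)
        f.flush()
        os.fsync(f.fileno())   # make sure the data hits the disk before the rename
    os.rename(tmp, path)       # ...then atomically rename it over the original

d = tempfile.mkdtemp()
p = os.path.join(d, "shadow")
with open(p, "wb") as f:
    f.write(b"old")
before = os.stat(p).st_ino
atomic_update(p, b"new")
after = os.stat(p).st_ino
print(before != after)   # True: the path now points at a different inode
```

At no point does the path lack a complete file: readers see either the old inode or the new one, never a half-written mixture.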
| Why the inode value of shadow file changes? |
1,432,127,475,000 |
Using dumpe2fs on some ext4 partition, I see in the initial data that the first inode is #11. However, if I ls -i this partition's root directory, I get that its inode number is #2 (as expected). So... what is this “first inode” reported by dumpe2fs?
|
#11 is the first "non-special" inode, that can be used for the first regularly created file or directory (usually used for lost+found). The number of that inode is saved in the filesystem superblock (s_first_ino), so technically it doesn't need to be #11, but mke2fs always sets it that way.
Most of the inodes from #0 to #10 have special purposes (e.g. #2 is the root directory) but some are reserved or used in non-upstream versions of the ext filesystem family. The usages are documented on kernel.org.
Inode number   Purpose
0              n/a
1              List of defective blocks
2              Root directory
3              User quota
4              Group quota
5              Reserved for boot loaders
6              Undelete directory (reserved)
7              "resize inode"
8              Journal
9              "exclude" inode (reserved)
10             Replica inode (reserved)
| what is this “first inode” reported by dumpe2fs? |
1,432,127,475,000 |
I am using Ubuntu Linux and, just for fun, I want to create a hardlink to a directory (as seen here). Because I'm just doing this for fun, I'm not looking for any sort of pre-developed directory-hardlinking software that someone else wrote, I want to know how to do it myself. So, how do I directly, manually, modify an inode?
Ideally I would like the answer as a Linux command that I can run from the Bash command line, but if there is no way to do it from there I would also accept information on how to do it in C or (as a last resort) assembly.
|
That depends on the filesystem. For ext4, you can do this with debugfs as follows:
dennis@lightning:/tmp$ dd if=/dev/zero of=ext4.img bs=1M count=100
104857600 bytes (105 MB) copied, 0.645009 s, 163 MB/s
dennis@lightning:/tmp$ mkfs.ext4 ext4.img
mke2fs 1.42.5 (29-Jul-2012)
ext4.img is not a block special device.
Proceed anyway? (y,n) y
...
Writing superblocks and filesystem accounting information: done
dennis@lightning:/tmp$ mkdir ext4
dennis@lightning:/tmp$ sudo mount ext4.img ext4
dennis@lightning:/tmp$ mkdir -p ext4/test/sub/
dennis@lightning:/tmp$ sudo umount ext4
dennis@lightning:/tmp$ debugfs -w ext4.img
debugfs 1.42.5 (29-Jul-2012)
debugfs: link test test/sub/loop
^D
dennis@lightning:/tmp$ ls -l ext4/test/sub/loop/sub/loop/sub/loop/sub/loop/sub/loop/
total 1
drwxrwxr-x 2 dennis dennis 1024 mrt 26 12:15 sub
Notes:
You cannot link a directory directly to its parent, so foo/bar can't be a link to foo; hence the extra subdirectory.
You should not run debugfs on mounted filesystems. If you do, you will need to unmount/mount after making changes.
Tools like find and ls still won't loop:
dennis@lightning:/tmp$ find ext4
ext4
ext4/lost+found
find: `ext4/lost+found': Permission denied
ext4/test
ext4/test/sub
find: File system loop detected; `ext4/test/sub/loop' is part of the same file system loop as `ext4/test'.
| How do I manually modify an inode? |
1,432,127,475,000 |
I'm working on an assignment for my college course, and one of the questions asks for the command used to create a hard link from one file to another so that they point to the same inode. We were linked a .pdf file to refer to, but it doesn't explain said process. Is it any different from creating a standard hard link?
|
Hard links are not "between" the files: there's one inode, with more than one entry in various directories all pointing to that one inode. ls -i should show the inodes; then experiment with ln (hard link) and ln -s (soft or symbolic):
$ touch afile
$ ln -s afile symbolic
$ ln afile bfile
$ ls -1 -i afile symbolic bfile
7602191 afile
7602191 bfile
7602204 symbolic
$ readlink symbolic
afile
$
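The same experiment can be run from Python, using a throwaway directory; `os.link` and `os.symlink` are the syscall-level equivalents of `ln` and `ln -s`:

```python
import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "afile")
open(a, "w").close()

os.link(a, os.path.join(d, "bfile"))        # hard link: a second name for the inode
os.symlink(a, os.path.join(d, "symbolic"))  # symlink: its own inode storing a path

ino = lambda p: os.lstat(p).st_ino          # lstat: don't follow the symlink
print(ino(a) == ino(os.path.join(d, "bfile")))     # True
print(ino(a) == ino(os.path.join(d, "symbolic")))  # False
print(os.stat(a).st_nlink)                         # 2: two names, one inode
```

Note that `st_nlink` went to 2 the moment the hard link was created; the symlink leaves it untouched because it is a separate inode.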
| Two hard linked files share inode |
1,432,127,475,000 |
I'm reading about file systems and storage media, and I can't understand why, if I create a file one block in size, I can't have a smaller inode than that of a bigger file. Can't the OS dynamically choose the inode size according to the file size?
|
One reason for inodes to be fixed-size is that in the traditional Unix filesystem format (which e.g. ext4 still follows pretty closely), the inodes are stored in what is essentially a single table. With fixed-size items, locating an item based on its index number is trivial. With any other data structure it would require more work, and perhaps more importantly, more random-access reads across the data structure.
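The "trivial lookup" point can be made concrete. A simplified sketch of the ext2/3/4-style arithmetic (ignoring the on-disk group descriptors; `inodes_per_group` and `inode_size` come from the superblock):

```python
def inode_location(ino, inodes_per_group, inode_size):
    """With fixed-size inodes, finding one is pure arithmetic:
    pick the block group, then index into that group's inode table."""
    group = (ino - 1) // inodes_per_group      # inode numbers start at 1
    index = (ino - 1) % inodes_per_group
    return group, index * inode_size           # byte offset inside the table

# e.g. the root directory, inode #2, with 8192 inodes/group and 256-byte inodes:
print(inode_location(2, 8192, 256))   # (0, 256)
```

With variable-sized inodes, this constant-time computation would become a search through an index structure, costing extra disk reads on every lookup.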
The size of the file doesn't influence the inode itself, which traditionally stores the locations of only the first couple of data blocks. For longer files, the system allocates extra blocks (indirect blocks) to hold the locations of the rest. (See the Wikipedia page for a pretty picture.)
I said "traditionally" because e.g. ext4 actually does things differently: it can store the contents of symlinks in the inode itself (they're often short, so it's useful not to allocate a full block), and the traditional tree of single-block pointers is replaced by a tree of extents, i.e. spans of several blocks.
As far as I can tell, on ext4 the reason for supporting inodes larger than the minimum size is to allow new fields to be stored, and to use the extra space for extended attributes. Looking at the table in the first link, the extra fields beyond the original 128 bytes of an ext4 inode mostly store higher-precision timestamps. On SELinux systems, the security labels are implemented as extended attributes, so being able to store them directly in the inode can be very useful.
Newer filesystems like btrfs, XFS and ZFS with less orthodox formats are likely to do things differently, and I don't know much about the file systems used on, say BSD systems.
| Why is inode size fixed? |
1,432,127,475,000 |
Can I safely conclude that the sticky bit is not used in current file systems, and reuse the bit for my own purpose?
|
No, you cannot assume that. It's not true for directories. You can make the narrower assumption that it's true for non-directory files.
| Is the sticky bit not used in current file systems |
1,432,127,475,000 |
When I try to resize the disk, I get this:
resize2fs /dev/sdb
resize2fs 1.42.9 (28-Dec-2013)
Please run 'e2fsck -f /dev/sdb' first.
So when I run e2fsck, I get the following:
e2fsck -f /dev/sdb
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Deleted inode 142682 has zero dtime. Fix<y>?
Is it OK to continue by answering yes, or is this something that can delete the data on the disk?
|
It’s OK to let fsck fix this, it refers to a deleted inode — the data has already been deleted, nothing more will be deleted.
| rhel + efsck + Deleted inode xxxxx has zero dtime |
1,432,127,475,000 |
I am a Unix wanderer. I just noticed that symlinks don't have data blocks allocated to them. I think the inode of the symlink stores the filename which the symlink refers to; is this actually the case?
$ stat sdb
File: sdb -> /dev/sdb
Size: 8 Blocks: 0 IO Block: 4096 symbolic link
Device: 803h/2051d Inode: 26348139 Links: 1
....
I can only imagine one possibility for now: the inode of the sdb symlink contains, among other things (i.e. owner, permissions, ...), the /dev/sdb path.
|
ext4 stores the target of a symbolic link inside the inode, if the target is less than 60 bytes long. Longer targets will be stored in a data block.
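One part of this is observable portably: POSIX defines a symlink's st_size as the length of the target path, regardless of whether the file system allocated a data block for it. A small Python sketch:

```python
import os
import tempfile

d = tempfile.mkdtemp()
link = os.path.join(d, "sdb")
os.symlink("/dev/sdb", link)

st = os.lstat(link)          # lstat examines the link itself, not the target
print(st.st_size)            # 8, the length of the string "/dev/sdb"
print(os.readlink(link))     # /dev/sdb
```

Whether `st_blocks` is 0 (target stored inside the inode) or one block (target in a data block) depends on the file system and the target length, so it is not asserted here.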
| Why symbolic links have no data blocks allocated to them in ext4fs? |
1,432,127,475,000 |
Say I have a directory with these permissions:
drwxrwx---
Inside this directory, a file with these permissions:
-rw-rw-rw-
Is the file readable/writable by everyone or not ?
If not, how secure is this access restriction?
What if a random user makes a link to my file inside his home directory? Could he access the file then?
Or could he access the file by guessing its inode number and using some system calls on inodes?
|
Yes, a file in a directory is only accessible to users who have the execute permission on the directory. It's like leaving jewelry in an unlocked drawer inside a locked house: the jewelry is under lock.
A random user cannot create a hard link to a file; only the owner of the file can. If the file has multiple hard links, some of which are in a publicly accessible directory, then the file will be publicly accessible. But that has to be set up by the owner of the file.
Anyone can create symbolic links that happen to point to a file, but that doesn't allow them to access the file. Symbolic links do not bypass permissions.
If the directory is world-executable at some point and there are processes that have the file or a parent directory opened at the time you restrict the permissions on the directory, then those processes still have the file open afterwards. However if they close it (or move out to another directory) they won't be able to reopen it (or change directory back in). Similarly, a setuid or setgid process may open the file or change to the directory, then drop its permissions. All of this requires the cooperation of the file or directory owner.
There is no way to open a file via its inode. The fact that this would allow to bypass restrictive permissions such as this case is the main reason why this feature doesn't exist.
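The core claim, that reaching a file requires search (execute) permission on every directory on the path, can be demonstrated with a small sketch. Note the root guard: root bypasses DAC checks entirely, so the denial is only observable as an ordinary user.

```python
import os
import tempfile

d = tempfile.mkdtemp()
inner = os.path.join(d, "locked")
os.mkdir(inner)
secret = os.path.join(inner, "file")
with open(secret, "w") as fh:
    fh.write("secret")
os.chmod(secret, 0o666)   # the file itself: -rw-rw-rw-
os.chmod(inner, 0o600)    # the directory: no execute bit, not even for the owner

denied = False
if os.geteuid() != 0:     # root bypasses permission checks entirely
    try:
        open(secret).read()
    except PermissionError:
        denied = True
print(denied)

os.chmod(inner, 0o700)    # restore search permission so cleanup can proceed
```

Even though the file is world-readable, opening it fails with EACCES because the path cannot be traversed, which is the "locked house" from the analogy above.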
| Is a -rw-rw-rw- file really inaccessible inside a drwxrwx--- directory? |
1,432,127,475,000 |
I know how hard links and symlinks work and I know why hard links can't be used for directories but in this case, is it some kind of exception?
For example I do:
ls -al Documents
total 8
drwxr-xr-x 2 piotr piotr 4096 cze 28 11:19 .
drwxrwx--- 17 piotr piotr 4096 lip 2 16:41 ..
. is a hard link to Documents itself, and .. is a hard link to my home directory, so hey, that's illegal!
|
As someone said in a comment on the question, just because hard links to directories aren't permitted (i.e., by the ln command), does not mean they are not possible. The superuser can actually use the "-d" or "-F" option to the ln command to force the creation of a hard link to a directory (though the man page says it will "probably" fail due to filesystem restrictions - not sure what that's about, and I'm not going to try it on one of my own systems to see...).
Hard links to directories are not permitted because they can create loops for programs that try to traverse the directory structure. In any directory, . and .. are hard links to that directory, and its parent, respectively - these are "well known" special cases and anything that tries to traverse the filesystem knows to account for that. But it is certainly technically possible to create a hard link to a directory if you're persistent - it's just not advisable.
| Why . and .. are hard links to directories while in *nix systems hard links are not allowed for directories? |
1,432,127,475,000 |
Is there a command line tool that can be used to rewrite all regular files in a directory tree either in-place or by creating new inodes?
With rewriting a file in-place, I mean opening the file for reading and writing, reading blocks of a reasonable size and writing those blocks at the same location, doing this for the whole file. Basically what this command line does:
find dir -type f -print0 | xargs -0 -n1 bash -c 'dd if="$1" of="$1" conv=notrunc bs=64M' -
If instead a new inode is created, file attributes should be preserved as well as possible, e.g. what this command does:
find dir -type f -print0 | xargs -0 -n1 bash -c 'echo "$1"; cp -a "$1" "$1~" && mv "$1~" "$1"' -
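For illustration, rough Python equivalents of the two approaches (simplified sketches: no attribute preservation beyond what shutil.copy2 does, and the helper names are made up):

```python
import os
import shutil
import tempfile

def rewrite_in_place(path, bs=64 * 1024 * 1024):
    """Read each block and write it back at the same offset; the inode is kept."""
    with open(path, "r+b") as f:
        while True:
            pos = f.tell()
            block = f.read(bs)
            if not block:
                break
            f.seek(pos)
            f.write(block)

def rewrite_new_inode(path):
    """Copy to a temporary name, then rename over the original: a new inode."""
    tmp = path + "~"
    shutil.copy2(path, tmp)        # copies data and most attributes, like cp -a
    os.rename(tmp, path)

d = tempfile.mkdtemp()
p = os.path.join(d, "f")
with open(p, "wb") as f:
    f.write(b"hello" * 1000)
ino_before = os.stat(p).st_ino
rewrite_in_place(p, bs=1024)
same = os.stat(p).st_ino == ino_before      # True: inode unchanged
rewrite_new_inode(p)
changed = os.stat(p).st_ino != ino_before   # True: replaced by a new inode
print(same, changed)
```

The first keeps the inode (and thus the file's original recordsize), the second allocates a fresh one, matching the two shell pipelines.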
Background:
I'm in the process of trying to gain some experience and finding good practices for using ZFS deduplication, where appropriate. ZFS deduplication uses a DDT (deduplication table) and operates on blocks of a size given by the file's recordsize, which has an impact on the effectiveness and memory-usage of deduplication. I'm exploring the possibilities of migrating already-written data to use or stop using the DDT or change the file's recordsize. ZFS does not automatically change these parameters of already-written data, so the data needs to be rewritten.
To change whether the DDT is used, it is sufficient to rewrite the data in place (without creating a new file). But the recordsize of a file is determined when it is created and thus a new file needs to be created to change it.
|
I just created a tool that does just that:
https://github.com/pjd/filerewrite
Alternatively with ZFS you can also use zfs send/recv where the target file system has deduplication turned on. After that you will need to rename the file systems and make sure all the other file system properties are moved over.
| Command line tool to rewrite all files in directory tree |
1,432,127,475,000 |
We have an inode number that we're trying to associate to an actual file name. The filesystem is XFS. Looking around, there are examples that purport to accomplish this with xfs_db and/or xfs_ncheck, but thus far we've been unsuccessful.
Example
We're triaging an issue where we'd like to find the filenames associated to the inode numbers which show up in a process's fdinfo file under /proc.
$ grep inotify /proc/9652/fdinfo/23 | head
inotify wd:58eb9 ino:cfd30c7 sdev:20 mask:3c0 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:c730fd0c00000000
inotify wd:58eb8 ino:cfd1f09 sdev:1e mask:3c0 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:091ffd0c00000000
inotify wd:58eb7 ino:cfd1ee9 sdev:1a mask:3c0 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:e91efd0c00000000
inotify wd:58eb6 ino:cfd1ec8 sdev:1c mask:3c0 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:c81efd0c00000000
inotify wd:58eb5 ino:cfd1eb9 sdev:19 mask:3c0 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:b91efd0c00000000
inotify wd:58eab ino:cfd24cf sdev:20 mask:3c0 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:cf24fd0c00000000
inotify wd:58eaa ino:cfdbc51 sdev:1e mask:3c0 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:51bcfd0c00000000
inotify wd:58ea9 ino:cfdbc31 sdev:1a mask:3c0 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:31bcfd0c00000000
inotify wd:58ea8 ino:cfdbc0f sdev:1c mask:3c0 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:0fbcfd0c00000000
inotify wd:58ea7 ino:cfdb000 sdev:19 mask:3c0 ignored_mask:0 fhandle-bytes:8 fhandle-type:1 f_handle:00b0fd0c00000000
These inodes are in HEX so we need to convert them to DEC:
$ echo $((16#cfd30c7))
217919687
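The same conversion can be done in Python, and the f_handle field shown by fdinfo is the same inode number stored little-endian:

```python
ino = int("cfd30c7", 16)          # same as the shell's $((16#cfd30c7))
print(ino)                        # 217919687

# The f_handle from the first fdinfo line, decoded as a little-endian integer:
handle = bytes.fromhex("c730fd0c00000000")
print(int.from_bytes(handle, "little"))   # 217919687
```

So the inode number can be cross-checked against the handle without any conversion by hand.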
Using xfs_ncheck:
$ xfs_ncheck -i $(echo $((16#cfd30c7))) /dev/mapper/vg0-dockerlv
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed. Mount the filesystem to replay the log, and unmount it before
re-running xfs_ncheck. If you are unable to mount the filesystem, then use
the xfs_repair -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
must run blockget -n first
Questions
How can we do this with XFS?
I've done similar things using debugfs and ext3/4 filesystems but this doesn't seem as easy with XFS?
References
find inode/file using specific block on running xfs filesystem
How to map an inode in an XFS filesystem
xfs_ncheck man page
|
In theory the command should work, but in practice, xfs_ncheck is a shell script around xfs_db and xfs_db very much prefers cleanly unmounted filesystems:
# xfs_db /dev/SSD/root
xfs_db: /dev/SSD/root contains a mounted filesystem
fatal error -- couldn't initialize XFS library
So by default, for mounted filesystems it does not even run at all, additional options are required to ignore mounted state (implied by xfs_ncheck) but even then, on a mounted or otherwise unclean filesystem, xfs_db-related commands often don't work as expected, and then you get a somewhat unclear message about logs that need to be replayed and the like.
So you'd have to umount, or re-mount read-only, or use a copy-on-write snapshot to produce a clean filesystem image to run those commands successfully.
But if it's just the regular inode number, for a mounted filesystem, you can just as well use
find /path/to/mountpoint -xdev -inum X
But this won't find already deleted files and might also miss files hidden under other mountpoints (in that case consider mount --bind instead of -xdev).
Also note that inum-filename correlation can be somewhat arbitrary in case of hardlinks and the like.
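The `find -xdev -inum` approach is portable enough to sketch in Python; this walks a tree, prunes directories on other file systems, and collects every path whose inode number matches (the helper name is made up):

```python
import os
import tempfile

def find_by_inum(root, inum):
    """Like `find root -xdev -inum inum`: stay on one file system,
    return all paths whose inode number matches."""
    dev = os.lstat(root).st_dev
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        # prune directories that live on another file system (mount points)
        dirnames[:] = [n for n in dirnames
                       if os.lstat(os.path.join(dirpath, n)).st_dev == dev]
        for name in filenames:
            p = os.path.join(dirpath, name)
            if os.lstat(p).st_ino == inum:
                hits.append(p)
    return hits

root = tempfile.mkdtemp()
target = os.path.join(root, "needle")
open(target, "w").close()
print(find_by_inum(root, os.lstat(target).st_ino))
```

As the answer notes, hard links mean this may legitimately return several paths for one inode, and it cannot find already deleted files.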
| Find the filename associated with an inode number on XFS filesystem |
1,432,127,475,000 |
In the Unix File system (UFS), the file is represented as an inode structure which has 15 pointers that reference the direct blocks or indirect blocks.
Take the image below as an example. Each block labelled Data on the right-hand side contains the actual file data. The size of a data block is usually 4096 bytes and is decided at file system creation.
A huge file of 40 MB would occupy nearly 10K data blocks. Given this scenario, if we append data to this file, I see it would only affect the last block, or, if there is no space in the last data block, a new data block is created.
But if we add some data (say 200 bytes) at the start of the file, would it have a cascading effect on the following data blocks, moving (or pushing) the last 200 bytes of each data block into the next data block?
Similarly, when we delete the first 200 bytes from the first data block, will it have a cascading effect on the following data blocks?
Or is there an efficient way that UFS, or file systems in general, employ to handle such scenarios? Maybe some buffer space is reserved for each data block?
Thanks in advance.
|
Most filesystems don't support inserting data at the beginning of a file, and Unix doesn't have an API for that. In most operating systems, the only ways to modify a file are to overwrite a segment (e.g. change aaaaaaaaaa to aaabbbaaaa), to append data at the end (e.g. change aaaaaaaaaa to aaaaaaaaaacccc), or to truncate the file (e.g. change aaaaaaaaaa to aaaaa).
If you want to add data at the beginning of a file, create a new file with the additional data, and copy the content of the old file after that.
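That "new file plus copy" recipe looks like this in Python (the helper name `prepend` is made up; rewriting via a temporary file and renaming keeps the operation safe against interruption):

```python
import os
import tempfile

def prepend(path, data):
    """There is no insert-at-front API, so build a new file and rename it
    over the old one: write the new prefix, then copy the old contents after."""
    tmp = path + ".tmp"
    with open(path, "rb") as src, open(tmp, "wb") as dst:
        dst.write(data)
        while chunk := src.read(1 << 20):   # copy in 1 MiB chunks
            dst.write(chunk)
    os.rename(tmp, path)

d = tempfile.mkdtemp()
p = os.path.join(d, "f")
with open(p, "wb") as f:
    f.write(b"aaaa")
prepend(p, b"bbb")
print(open(p, "rb").read())   # b'bbbaaaa'
```

Every data block after the insertion point ends up rewritten, which is exactly the cost the question is asking about.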
This is true both for the original Unix and for most if not all modern ones (and more generally for most operating systems).
| Does adding the content at the start of file result in updating all the data blocks? |
1,432,127,475,000 |
I am reading in man lsof that
+L enables the listing of file link counts. A specification of
the form "+L1" will select open files that have been unlinked.
I don't understand why deleted files should have a link count of 1. Shouldn't the count for deleted files be 0?
|
Well, yes. The manpage on my Debian system says “When +L is followed by a number, only files having a link count less than that number will be listed.” So +L1 selects open files with a link count less than 1, i.e. 0, which is exactly the unlinked files.
| link count of deleted files |
1,432,127,475,000 |
I am trying to understand what an inode is. However, this passage from Wikipedia puzzles me:
Installation of new libraries is simple with inode filesystems. A running process can access a library file while another process replaces that file, creating a new inode, and an all new mapping will exist for the new file so that subsequent attempts to access the library get the new version. This facility eliminates the need to reboot to replace currently mapped libraries. For this reason, when updating programs, best practice is to delete the old executable first and create a new inode for the updated version, so that any processes executing the old version may proceed undisturbed.
|
In Unix-style file systems, everything the system knows about a file (except its name) is stored either in the inode or in a location pointed to by the inode. That includes its contents, ownership, modification dates, and permissions. A Unix directory entry is just a name and a pointer to the inode, and is only used when a process is opening a file. Once the file is open, the directory entry is irrelevant.
What that means is that it's possible to delete a file that's currently open without disturbing the processes that are reading or writing that file. Deleting the file simply removes the directory entry. The inode remains until all processes have closed the file, at which point the inode and all other file data are deleted (or at least marked as no longer in use and available for reclamation). This is handled by a field, called "link count", part of the inode structure.
Therefore, if you want to upgrade a shared library that's in use by a running program, you can just delete the library file. Since the program already has the file open, it won't be affected by this. Then you install the new version of the library as a new file (which gets a new inode).
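The delete-then-install sequence can be observed directly; here an open file handle stands in for the running process, and the file name is made up:

```python
import os
import tempfile

d = tempfile.mkdtemp()
p = os.path.join(d, "libfoo.so")
with open(p, "wb") as f:
    f.write(b"old version")

reader = open(p, "rb")   # a running process keeps the library open
os.unlink(p)             # delete the old file first: only the name goes away

with open(p, "wb") as g: # install the update under the same name: a new inode
    g.write(b"new version")

old = reader.read()      # the unlinked inode is still fully readable
reader.close()
print(old)               # b'old version'
```

The old inode's data is only reclaimed once the last open descriptor is closed, so the "process" above was never disturbed by the upgrade.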
| Why do inode-based file systems NOT need reboot after updating library versions? [duplicate] |
1,432,127,475,000 |
I had a folder deleted from an SMB share from a Windows machine. Thanks to zero confirmation, a whole folder was deleted. I first ran photorec, which pulled most of the files except one: the very last file copied. Further testing with extundelete was able to pull the whole folder minus 4-5 files; however, the single most important file again was not recovered. Looking at the inodes, I can see the recovered files have sequential inode numbers, so I was able to narrow down the exact inode. However, I get the following when trying to recover that specific inode:
Loading filesystem metadata ... 59613 groups loaded.
Loading journal descriptors ... 29932 descriptors loaded.
Unable to restore inode 60596808 (file.60596808): No undeleted copies found in the journal.
However when I search for that inode I do get data:
Loading filesystem metadata ... 59613 groups loaded.
Group: 14794
Contents of inode 60596809:
0000 | e4 81 e8 03 dd df b2 1b 43 2d 08 5d 53 2d 08 5d | ........C-.]S-.]
0010 | fd 97 05 5d 53 2d 08 5d e8 03 00 00 00 00 00 00 | ...]S-.]........
0020 | 00 00 08 00 01 00 00 00 0a f3 00 00 04 00 00 00 | ................
0030 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
0040 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
0050 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
0060 | 00 00 00 00 70 57 ff 3f 00 00 00 00 00 00 00 00 | ....pW.?........
0070 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
0080 | 20 00 00 00 ec e9 88 2a b0 16 cf 0f 1c 76 bb a2 | ......*.....v..
0090 | 3c 2d 08 5d d4 64 6c a9 00 00 00 00 00 00 00 00 | <-.].dl.........
00a0 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
00b0 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
00c0 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
00d0 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
00e0 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
00f0 | 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 | ................
Inode is Unallocated
File mode: 33252
Low 16 bits of Owner Uid: 1000
Size in bytes: 464707549
Access time: 1560816963
Creation time: 1560816979
Modification time: 1560647677
Deletion Time: 1560816979
Low 16 bits of Group Id: 1000
Links count: 0
Blocks count: 0
File flags: 524288
File version (for NFS): 1073698672
File ACL: 0
Directory ACL: 0
Fragment address: 0
Direct blocks: 62218, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Indirect block: 0
Double indirect block: 0
Triple indirect block: 0
Using debugfs I tried to dump the inode; however, all I got was a file of the correct size but zeroed.
Given the size in bytes and the dates, I am 99% sure this is the exact inode I need. Is this data basically a stub, missing the pointers to the exact locations on the disk? Is there any way to use this inode data to recover the actual data?
|
See https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout#The_Contents_of_inode.i_block
The "File flags: 524288" value is 0x80000 in hex, so it is the "extents" flag. So, although your extundelete interpreted the inode's i_block as direct/indirect/double-indirect/triple-indirect block pointers, this is not correct. But we can still decode this ourselves.
The first number in "Direct blocks" field is 62218, which is 0xF30A in hex - the magic number for the extent tree mode (eh_magic), confirming the "File flags" value. Since the old-style block pointers are little-endian 32-bit, but the extent mode magic number is 16-bit, we know that the eh_entries field would have been displayed as part of the first "Direct blocks" number. Since it did not mess up the displayed magic number, eh_entries must be zero.
Likewise, the second number in "Direct blocks" is 4, which decodes to two 16-bit numbers: 4 for eh_max and 0 for eh_depth. The rest of the i_block seems to be all zeroes.
So here are the contents of the i_block interpreted according to extent mode:
eh_magic = 62218, correct.
eh_entries = 0, no valid entries following the header.
eh_max = 4, maximum of 4 entries in the i_block.
eh_depth = 0, this extent node would point directly to data blocks
eh_generation = 0 (not used by standard ext4)
The rest of the i_block is all zeroes, so there are no valid struct ext4_extent nor struct ext4_extent_idx nodes here, confirming the eh_entries value of 0.
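That hand decoding can be reproduced mechanically; the extent header is 12 little-endian bytes, taken here from the hex dump earlier in the question (inode offset 0x28):

```python
import struct

# First 12 bytes of i_block from the dump: 0a f3 00 00 04 00 00 00, then zeroes.
raw = bytes.fromhex("0af3000004000000" + "00" * 4)

# struct ext4_extent_header: eh_magic, eh_entries, eh_max, eh_depth, eh_generation
magic, entries, emax, depth, gen = struct.unpack("<HHHHI", raw)
print(hex(magic), entries, emax, depth)   # 0xf30a 0 4 0
```

The magic matches (0xF30A is 62218, the number extundelete misreported as a block pointer), and zero entries at depth zero means the extent tree is empty.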
So unfortunately it looks like the extent table has been zeroed out as part of the delete operation, and the actual pointers indicating the location of the file's blocks on disk are gone. So you're correct, this is indeed just a stub.
| Restoring via inode that's no longer in the journal? |
1,432,127,475,000 |
A follow-up from this question.
My further reading on Docker storage drivers revealed that the overlay driver merges all the image layers into lower layers using a hard-link implementation, which causes excessive inode utilization. Can someone explain this? As far as I know, creating hard links does not create a new inode.
|
OverlayFS is a union filesystem, and there are two storage drivers at the Docker level that make use of it: the original/older version named overlay and the newer version named overlay2. In OverlayFS, there is a lower-level directory which is exposed as read-only. On top of this directory is the upper-level directory, which allows read-write access. Each of these directories is called a layer. The combined view of both the lower-level and upper-level directories is presented as a single unit, called the 'merged' directory.
The newer overlay2 storage driver natively supports up to 128 such layers. The older overlay driver is only able to work with two layers at a time. Since most Docker images are built using multiple layers, this limitation is fairly significant. To work around this limitation, each layer is implemented as a separate directory that simulates a complete image.
To examine the differences on my test system, I pulled the 'ubuntu' image from Docker Hub and examined the differences in directory structure between the overlay2 and overlay drivers:
[root@testvm1 overlay2]$ ls */diff
4864f14e58c1d6d5e7904449882b9369c0c0d5e1347b8d6faa7f40dafcc9d231/diff:
run
4abcfa714b4de6a7f1dd092070b1e109e8650a7a9f9900b6d4c3a7ca441b8780/diff:
var
a58c4e78232ff36b2903ecaab2ec288a092e6fc55a694e5e2d7822bf98d2c214/diff:
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
c3f1a237c46ed330a2fd05ab2a0b6dcc17ad08686bd8dc49ecfada8d85b93a00/diff:
etc sbin usr var
[root@testvm1 overlay]# ls */root/
001311c618ad7b94d4dc9586f26e421906e7ebf5c28996463a355abcdcd501bf/root/:
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
048f81f400f7d74f969c4fdaff6553c782d12c04890ad869d75313505c868fbc/root/:
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
8060f0c647f24050e1a4bff71096ffdf9665bff26e6187add87ecb8a18532af9/root/:
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
fbdef944657234468ee55b12c7910aa495d13936417f9eb905cdc39a40fb5361/root/:
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
In the overlay representation, each layer simulates a complete image, while the overlay2 layers only contain the exact differences between layers. In the overlay driver's approach, hard links are used as a way to save space between the different layers. However, this method is still not perfect, and new inodes are required when the image data contains special files such as symbolic links and character devices. This can quickly add up to a large number of inodes.
The inode distribution between the overlay2 and overlay drivers on my test system is shown below.
[root@testvm1 overlay2]$ du --inodes -s *
8 4864f14e58c1d6d5e7904449882b9369c0c0d5e1347b8d6faa7f40dafcc9d231
27 4abcfa714b4de6a7f1dd092070b1e109e8650a7a9f9900b6d4c3a7ca441b8780
3311 a58c4e78232ff36b2903ecaab2ec288a092e6fc55a694e5e2d7822bf98d2c214
1 backingFsBlockDev
25 c3f1a237c46ed330a2fd05ab2a0b6dcc17ad08686bd8dc49ecfada8d85b93a00
5 l
[root@testvm1 overlay]# du --inodes -s *
3298 001311c618ad7b94d4dc9586f26e421906e7ebf5c28996463a355abcdcd501bf
783 048f81f400f7d74f969c4fdaff6553c782d12c04890ad869d75313505c868fbc
768 8060f0c647f24050e1a4bff71096ffdf9665bff26e6187add87ecb8a18532af9
765 fbdef944657234468ee55b12c7910aa495d13936417f9eb905cdc39a40fb5361
The total count of inodes on overlay2 comes to 3378 on my system. Using overlay, this count goes up to 5615. This value is considering a single image and with no containers running, so a large system with a number of docker containers and images could quickly hit the inode limit imposed by the backing filesystem (XFS or EXT4, where the /var/lib/docker/overlay directory is located).
For this reason, the newer overlay2 storage driver is currently the recommended option for most new installations. The overlay driver is deprecated as of Docker v18.09 and is expected to be removed in a future release.
| Overlay storage driver internals |
1,543,874,986,000 |
pCloud is a cloud storage service that allows Linux users to mount their cloud storage inside of their home directory, appearing as:
/home/username/pCloudDrive/
As far as I can tell, the pCloudDrive directory is only accessible by the user and not by root.
Running ls -l inside the home directory (as root) displays:
d????????? ? ? ? ? ? pCloudDrive
and in pcmanfm (as root), pCloudDrive is described as "inode/x-corrupted type".
From my experience with Linux, root should be able to access everything, because every other file and directory belongs to it.
What I would like to know is:
How is pCloudDrive's true nature being occluded?
Is there a way to access the pCloudDrive directory and contents as root?
|
I have no direct experience with it, but it looks like pCloud is mounted as a FUSE file system. A FUSE file system is not accessible by root by design. The aim is to prevent mounted file systems from doing nasty things (see an explanation in libfuse's FAQ).
To let root, or other users, access a FUSE file system, you have to mount it with the option -o allow_root or -o allow_other. You also need to uncomment/add user_allow_other in /etc/fuse.conf, otherwise your user will not be able to set the aforementioned options.
Your experience may be the same as that of many other users puzzled by this apparently non-intuitive behavior. See, as an example, this question on Server Fault.
Of course, since pCloud appears not to be open source, there may be no supported or easy way to change how it mounts its volume.
Obviously, root can access a FUSE file system given that it can impersonate other users. For instance:
# sudo -u your_user ls /home/your_user/fuse_mount_point
(executed as root) should just work.
| Is pCloudDrive really inaccessible to root? |
1,543,874,986,000 |
Considering rsync used for incremental OS backup creates hardlink farms for all non-differing files, if I use it to backup a large, slowly-changing system regularly to a dedicated volume, I'm worried I'll run out of inodes for the hardlinks ages before I run out of diskspace.
Would it be better to tinker with mke2fs parameters and increase the number (density) of inodes for such a disk, or is the default sufficient for backing up a typical 'desktop Linux' system with a good multimedia library? Or would a different FS than ext3 be better?
|
A hardlink is by definition a link to an inode. Multiple hardlinks to an inode hence do not need additional inodes...
The only thing that will increase inode usage is that for each "generation" the directory tree itself will be duplicated, so for each directory in each generation an additional inode will be needed, whether files are changed or not. That said, in my experience the default inode allocation is sufficient for an incremental rsync backup system (I use dirvish to automate the backups). Certainly as you're talking about multimedia then the average file size will be larger than what the default inode allocation takes into account.
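As a quick sketch of the point above (shell, throwaway temp directory): a hard link adds a directory entry, but both names share one inode.

```shell
# Two names, one inode: `ln` consumes no extra inode.
d=$(mktemp -d)
echo data > "$d/a"
ln "$d/a" "$d/b"                 # second hard link to the same inode
stat -c '%i links=%h %n' "$d/a" "$d/b"
```

Both lines show the same inode number, each with a link count of 2.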
| Should I increase inode count for a rsync backup volume? |
1,543,874,986,000 |
After posing this question, I'm kind of confused by the behaviour of the Linux kernel.
First of all, I know how a process writes strings into a file: the process obtains a buffer, writes into it, and once the buffer is full or the process flushes it, the content of the buffer is written into the data blocks of the file. For example, in a C program, when we printf a \n, it will flush the buffer.
Now, let's consider the case in the post above: a process has opened a file and is writing to it while the file is deleted by the command rm.
To my understanding, the command rm will unlink the file, meaning that its inode and its data blocks will be marked as UNUSED. So we can't access it through the filename anymore. And if a process opens a file, the kernel will create a file descriptor to access it.
So if I'm right, rm a file, to which a process is writing, won't cause any error of the process, because the process could access the file through the file descriptor. As someone mentioned in the comment of that post, we can still access the file through cat /proc/<pid>/fd/3.
Now I'm confused. If we can still access the file through cat /proc/<pid>/fd/3 while the inode and the data have been marked as UNUSED because of rm, does it mean that the kernel will hold the whole file in RAM? If so, if the file is very huge, such as some log file, does it mean that lots of RAM will be used?
In short: if a file isn't rm'ed, a process can write things into the buffer, and once the buffer is flushed its content is written into the data blocks of the file. But if a file has been rm'ed, its data blocks are marked as UNUSED, yet a process can still write to it. Where is this "it"?
|
To my understanding, the command rm will unlink the file, meaning that its inode and its data blocks will be marked as UNUSED.
This is the key to understanding what’s going on here: rm only asks the kernel to remove a given directory entry. If the directory entry pointed to an inode which is no longer referenced by anything else (other directory entries, open file descriptions, file mappings, loop mounts etc.), the kernel will also free the inode and the associated data.
Thus the kernel doesn’t need to keep the deleted file’s data: it’s still there, wherever the file system keeps it. As long as a process holds a file descriptor pointing at it, it will remain there, and can be recovered through /proc/.../fd/... on Linux.
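A minimal demonstration of this on Linux (temporary file; assumes only that procfs is mounted):

```shell
# The unlinked file's data stays reachable while fd 3 is open.
tmp=$(mktemp)
echo hello > "$tmp"
exec 3< "$tmp"                    # hold an open file description on the inode
rm "$tmp"                         # removes the name; the inode survives
recovered=$(cat /proc/$$/fd/3)    # procfs lets us re-open the same inode
echo "$recovered"
exec 3<&-                         # last reference dropped: inode is freed
```

Only when the last reference (here fd 3) is closed does the kernel actually free the inode and its blocks.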
| Will the kernel hold the whole file to which some process is writing |
1,543,874,986,000 |
The inode structure of some filesystems includes a list of pointers to the blocks used to store the file contents. This list should exist for ext2/3/4, as specified in the first comment to this question.
The addresses of the blocks used by a file can be obtained with istat, one of the Sleuthkit tools: but this is not exactly a list of the pointers inside the inode, which should be 15 at most, while in this example they are more.
How to obtain such a list, for a given inode number?
|
If you have a file entry pointing to the inode, you can use debugfs:
$ debugfs /path/to/filesystem
debugfs: inode_dump -b fileentry
0000 0004 0000 0104 0000 0204 0000 0304 0000 ................
0020 0404 0000 0504 0000 0604 0000 0704 0000 ................
0040 0804 0000 0904 0000 0a04 0000 0b04 0000 ................
0060 2902 0000 2a02 0000 0000 0000 )...*.......
The -b flag causes inode_dump to only output i_block values, so these can be interpreted directly. Here the block numbers are 0x0400 through 0x040B (file blocks), then the indirect block at 0x0229, and the double-indirect block at 0x022A.
| inode, list block pointers |
1,543,874,986,000 |
I am trying to understand what happens to a file when I move it from one directory to another, inside the same File System.
Here is the example I made up.
I have two directories and a file :
~/Documents/dir1
~/Documents/dir2
~/Documents/dir1/fileName.txt
Here I have some details about the file fileName.txt (ls -li):
784088 -rw-r--r-- 1 myUser myUser 0 Oct 25 02:18 fileName.txt
Then, I moved the file fileName.txt from dir1 to dir2 by issuing the following command:
~/Documents/dir1$ mv fileName.txt ../dir2
and here I have the details about the file fileName.txt (ls -li) after having issued mv:
784088 -rw-r--r-- 1 myUser myUser 0 Oct 25 02:22 fileName.txt
What I expected was a change of inode number, but I was wrong. So, from what I have understood so far, when moving a file inside the same File System:
the data block is not touched (that's good to me)
the inode is not touched either (that is strange to me)
Can anyone tell me what changes in the file's properties (apart from the modification time)?
Thank you in advance, really.
|
Within the same filesystem, mv actually uses rename(2). The inode remains intact; the file's name is simply removed from one directory and added to another.
| Moving a file inside the same File System |
1,543,874,986,000 |
I searched the Internet, but I was not able to find a satisfying answer to my problem. The problem I'm currently encountering is that I'm transitioning my data from an NTFS partition to an ext4 partition. What surprised me was that I could store less data on the same hard drive with the ext4 filesystem. After investigating a little, I found out that this might have something to do with the inodes of ext4.
me@server:/media$ LANG=C df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/sda1 3815552 31480 3784072 1% /media/storage
/dev/sdb1 1905792 1452 1904340 1% /mnt
When running the command
me@server:~$ sudo find /mnt -type f | wc -l
1431
it tells me that I have 1431 files on the hard drive, each around 4-8GB. So basically I have far too many inodes for very few files.
My questions are:
How can I change the number of Inodes now?
Is there maybe a better filesystem for just storing files?
|
By default, ext2/ext3/ext4 filesystems have 5% of the space reserved for the root user. This makes sense for the root filesystem in a typical configuration: it means that the system won't grind to a halt if a user fills up the disk, critical functionality will still work and in particular logs can still be written. It doesn't make sense in most other scenarios.
To avoid reserving 5% for the root user, pass -m 0 to mkfs when creating the filesystem, or call tune2fs with the option -m 0 afterwards.
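As a sketch of the effect of -m 0 (run against a scratch image file so no root or real device is needed; the 8 MiB size is arbitrary):

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 status=none
mke2fs -q -F -t ext4 "$img"                       # -F: operate on a regular file
tune2fs -l "$img" | grep 'Reserved block count'   # default: 5% of blocks
tune2fs -m 0 "$img" > /dev/null                   # drop the root reservation
tune2fs -l "$img" | grep 'Reserved block count'   # now 0
```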
Though if your filesystem is 95% full, you should look into expanding it. Most filesystems (including both NTFS and the ext? family) don't operate efficiently when they're very nearly full.
| ext4 file system tuning for storage partition |
1,543,874,986,000 |
Imagine I want to access the blocks of file /hello/file.
How many inodes should I walk through?
I guess two, since I should not go through the root inode, right?
|
I would expect three: /, hello, and file. Changing the permissions of any one of these can limit access to file.
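To illustrate (using a throwaway directory in place of the real /hello, so the paths here are hypothetical):

```shell
# Path resolution touches one inode per component: dir, subdir, file.
root=$(mktemp -d)
mkdir "$root/hello"
echo hi > "$root/hello/file"
for p in "$root" "$root/hello" "$root/hello/file"; do
  stat -c '%10i  %n' "$p"          # each component has its own inode
done
```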
| How many inodes do I need to access to read a file? |
1,543,874,986,000 |
I'm running a CentOS 4.6 final box with a second drive array (raid 1, the mount is /mnt/raid, as listed below in my nagios warning) that uses ntfs-3g as the file system. My nagios warnings just went off saying that I'm running out of inodes but still have 10% of the drive space available (I am aware this is common).
However, I'm possibly misunderstanding the problem, since when I do a df -hi, my output seems to indicate I only have a pending drive space issue. Also, NTFS does not use inodes, so what gives?
Here's my df -hi output
[root@images ~]# df -hi
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/VolGroup00-LogVol00
9.1M 190K 8.9M 3% /
/dev/hda1 26K 39 26K 1% /boot
none 127K 1 127K 1% /dev/shm
/dev/sda1 52M 141K 52M 1% /mnt/raid
here's my df -h output:
[root@images ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
72G 6.2G 62G 10% /
/dev/hda1 99M 15M 80M 16% /boot
none 506M 0 506M 0% /dev/shm
/dev/sda1 466G 415G 52G 90% /mnt/raid
here's my fstab file:
/dev/VolGroup00/LogVol00 / ext3 defaults 1 1
LABEL=/boot /boot ext3 defaults 1 2
none /dev/pts devpts gid=5,mode=620 0 0
none /dev/shm tmpfs defaults 0 0
none /proc proc defaults 0 0
none /sys sysfs defaults 0 0
/dev/VolGroup00/LogVol01 swap swap defaults 0 0
/dev/sda1 /mnt/raid ntfs-3g defaults 0 0
#/dev/md0 /mnt/raid/backup ntfs-3g defaults 0 0
/dev/sdd1 /extraid ntfs-3g rw,umask=0000,defaults 0 0
/dev/hdc /media/cdrom2 auto pamconsole,exec,noauto,managed 0 0
And here's my Nagios warning:
DISK WARNING - free space: /mnt/raid 52452 MB (10% inode=99%):
The warning says free space, but 52GB is plenty at 10% available. What concerns me is the inode=99% part. Doesn't this mean Nagios is reporting that 99% of inodes on /mnt/raid are being used? /mnt/raid is NTFS, so I don't think it even uses inodes? Correct?
|
It is complaining that there is only 10% free space left, which is not good. It is saying 99% of the inodes are free.
| Understanding NTFS-3g Inode Use |
1,543,874,986,000 |
$sudo blkid
/dev/sda1: UUID="F959-61DE" TYPE="vfat" PARTUUID="950b18a0-1501-48b4-92ef-ba1dd15aaf21"
/dev/sda2: UUID="6dfcfc23-b076-4eeb-8fba-a1261b4ea399" TYPE="ext4" PARTUUID="ddc69ee8-40b0-49c9-9dcb-0b9064caca7d"
/dev/sda3: UUID="fec0af18-d28e-4f2a-acb7-6380ddee3dc2" TYPE="ext4" PARTUUID="e19628dc-c04a-4c9d-a3c6-469511e89480"
/dev/sda4: UUID="a6f7669b-6e86-432a-b91c-f39780c849ac" TYPE="swap" PARTUUID="e45cf647-3d78-4fea-a950-022a3ae9b4e0"
/dev/sda5: UUID="5a75937f-8a83-44a9-b5c5-502b7e3884f2" TYPE="ext4" PARTUUID="3e086aff-105f-48b3-a384-1eb1d18c6fb3"
/dev/sda6: UUID="04460cd2-a1bb-4a3e-94df-1ad10080f356" TYPE="ext4" PARTUUID="d37fdea8-a386-4f6f-8016-fa2764a71b60"
$pwd
/home/milad
$touch a
$ls -i a
3935203 a
$sudo /sbin/debugfs -R 'stat 3935203' /dev/sda6
debugfs 1.44.5 (15-Dec-2018)
3935203: File not found by ext2_lookup
How to get birth date my file in ext4 partition drive?
Thanks for helping
|
debugfs’s stat command expects a path name, or an inode number “quoted” using angle brackets; you might as well use stat milad/a instead:
sudo /sbin/debugfs -R 'stat milad/a' /dev/sda6
The file path is relative to the root of the file system; since that is mounted at /home, /home/milad/a becomes milad/a.
If your version of the stat utility is recent enough, you can use that instead of debugfs: run
stat a
from your shell, and you’ll see its birth time (if your kernel is also recent enough to record it and make it available).
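A sketch of the stat route (assuming GNU coreutils ≥ 8.31 and a kernel/filesystem that records birth time via statx):

```shell
f=$(mktemp)
stat -c 'birth: %w' "$f"     # human-readable birth time, or "-" if not recorded
stat -c '%W' "$f"            # birth time as epoch seconds, 0 if not recorded
```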
| debugfs not working | file not found by ext2_lookup |
1,543,874,986,000 |
I'm trying to read the contents of a file using the file's inode.
This works fine:
echo "First line" > data.txt
sync
sudo debugfs -R "cat <$(stat -c %i data.txt)>" /dev/sda3
debugfs tells me the file contents are "First line". This part of the command gets data.txt's inode number: $(stat -c %i data.txt).
Things go awry when adding a second line:
echo "Second line" >> data.txt
sync
sudo debugfs -R "cat <$(stat -c %i data.txt)>" /dev/sda3
I still only get "First line" from debugfs. This doesn't change after adding more lines, running sync again, or retrying a couple of days later.
Why doesn't debugfs show the remainder of the file contents? Am I using debugfs the wrong way?
I can reproduce this behavior reliably with other files.
I noticed that when overwriting existing file contents with echo "New content" > data.txt, debugfs does show the new content. But adding a second line to it, will, as described above, show only the first line.
I'm on Arch Linux 5.12.3 using debugfs 1.46.2. The file system on /dev/sda3 is ext4. Calling debugfs -R "stat ..." produces the following, which seems unsuspicious to me:
Inode: 16515371 Type: regular Mode: 0644 Flags: 0x80000
Generation: 3923658711 Version: 0x00000000:00000001
User: 1000 Group: 1000 Project: 0 Size: 34
File ACL: 0
Links: 1 Blockcount: 8
Fragment: Address: 0 Number: 0 Size: 0
ctime: 0x60b639e5:71315fa0 -- Tue Jun 1 15:45:09 2021
atime: 0x60b63988:b7c456cc -- Tue Jun 1 15:43:36 2021
mtime: 0x60b639e5:71315fa0 -- Tue Jun 1 15:45:09 2021
crtime: 0x60b63988:b7c456cc -- Tue Jun 1 15:43:36 2021
Size of extra inode fields: 32
Inode checksum: 0xbfa4390e
EXTENTS:
(0):66095479
|
It is due to caching. You have at least two options.
Use the -D flag:
-D     Causes debugfs to open the device using Direct I/O, bypassing the
       buffer cache. Note that some Linux devices, notably device mapper as
       of this writing, do not support Direct I/O.
Drop buffer cache:
echo 1 | sudo tee /proc/sys/vm/drop_caches
See for example:
How do you empty the buffers and cache on a Linux system?
https://www.kernel.org/doc/Documentation/sysctl/vm.txt
Documentation for /proc/sys/vm/ HTML (I like the plain text variant better, but hey ;)
If you do not pass the -D flag you can still see some action by piping the result to for example xxd:
sudo debugfs -R "cat <$(stat --printf %i data.txt)>" /dev/sda3 | xxd -a -c 32
You will see that the cat'ed file is filled with zero bytes, and sometimes data (if enough has been written).
For example, after echo A >data.txt
00000000: 410a A.
Then after for i in {1..7}; do echo A >>data.txt; done:
00000000: 410a 0000 0000 0000 0000 0000 0000 0000 A...............
You can also monitor by using something like this:
Usage: sudo ./script file_to_monitor
It launches watch with an awk script that prints statistics for the device from /sys/block, in addition to printing the result of cat <inode> for the file.
#!/bin/sh
if [ "$1" = "-h" ]; then
printf '%s FILE\n' "$0"
exit 1
fi
file="$1"
inode=$(stat --printf %i "$file")
dev_path="$(df -P -- "$file" | awk 'END{print $1}')"
dev_name="$(lsblk -no NAME "$dev_path")"
dev_core="$(lsblk -no PKNAME "$dev_path")"
if [ "$dev_core" = "loop" ]; then
fn_stat=/sys/block/$dev_name/stat
else
fn_stat=/sys/block/$dev_core/$dev_name/stat
fi
printf 'File : %s\n' "$file"
printf 'Inode: %s\n' "$inode"
printf 'Stat : %s\n' "$fn_stat"
printf 'Dev : %s\n' "$dev_path"
printf "Continue? [yN] " >&2
read -r ans
if ! [ "$ans" = "y" ] && ! [ "$ans" = "Y" ]; then
exit
fi
watch -n 0.2 'awk \
-v inode="'$inode'" \
-v dev_path="'$dev_path'" \
"{
rs = \$3 * 512
rsk = rs / 1024
rsm = rsk / 1024
ws = \$7 * 512
wsk = ws / 1024
wsm = wsk / 1024
printf \" 1: Reads Completed : %9d\n\", \$1
printf \" 2: Reads Merged : %9d\n\", \$2
printf \" 3: Read Sectors : %9d %6d MiB %9d KiB %d bytes\n\",
\$3, rsm, rsk, rs
printf \" 4: Read ms : %9d\n\", \$4
printf \" 5: Writes Completed : %9d\n\", \$5
printf \" 6: Writes Merged : %9d\n\", \$6
printf \" 7: Write Sectors : %9d %6d MiB %9d KiB %d bytes\n\",
\$7, wsm, wsk, ws
printf \" 8: Write ms : %9d\n\", \$8
printf \" 9: I/Os in progress : %9d\n\", \$9
printf \"10: I/O ms : %9d\n\", \$10
printf \"11: I/O ms weighted : %9d\n\", \$11
printf \"\n\nFILE <%d> %s:\n\", inode, dev_path
system(\"sudo debugfs -R '\''cat <\"inode\">'\'' \"dev_path\" | xxd -a -c 32\")
}
"' "$fn_stat"
| Reading stale file data with debugfs cat |
1,543,874,986,000 |
Today I noticed that tripwire thinks that some Apache configuration files changed yesterday. I know I did not make any changes to those files.
Looking at the info, it shows that only the Inode number changed:
Property: Expected Observed
------------- ----------- -----------
Object Type Regular File Regular File
Device Number 2305 2305
* Inode Number 5770048 5771399
Mode -rw-r--r-- -rw-r--r--
Num Links 1 1
UID root (0) root (0)
GID root (0) root (0)
Size 1055 1055
Modify Time Mon 09 Oct 2017 04:54:54 PM PDT
Mon 09 Oct 2017 04:54:54 PM PDT
Blocks 8 8
CRC32 BSW2x+ BSW2x+
MD5 CqXESieHTV/33Ye6iuaHjk CqXESieHTV/33Ye6iuaHjk
How could the Inode of a file change and nothing else?
|
One way:
cp -p file file.new && mv file.new file
For example:
$ ls -li file
12289 -rw-r--r-- 1 jeff jeff 0 Jun 13 14:24 file
$ cp -p file file.new && mv file.new file
$ ls -li file
12292 -rw-r--r-- 1 jeff jeff 0 Jun 13 14:24 file
Another possibility would be that the file was restored from a backup system (and that backup system restored timestamps).
Another activity that would update the inode number and not touch the contents would be a sed -i command that made no changes, since sed -i uses a temporary file for the results, which is then renamed over the original at the end.
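This is easy to verify with GNU sed (temporary file; the substitution deliberately matches nothing):

```shell
f=$(mktemp)
echo hello > "$f"
before=$(stat -c %i "$f")
sed -i 's/xyz/abc/' "$f"     # no-op edit: GNU sed still writes a temp file
after=$(stat -c %i "$f")     # ...and renames it over the original
echo "before=$before after=$after"
```

The contents are identical afterwards, but the inode number differs.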
| Why does a file's Inode number change and nothing else? |
1,543,874,986,000 |
What does +0200 mean after the Access/Modify/Change timestamps?
File: task-system.md
Size: 197 Blocks: 24 IO Block: 4096 regular file
Device: 33h/51d Inode: 14155787 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ tom) Gid: ( 1000/ tom)
Access: 2018-08-26 15:19:07.047602175 +0200
Modify: 2018-08-26 15:18:59.531538750 +0200
Change: 2018-08-26 15:18:59.535538783 +0200
Birth: -
|
That’s the timezone. The times are given in a UTC+2 timezone (the timestamps are stored as seconds since the Unix epoch, and translated to whatever the current user’s timezone is for display).
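You can see it's purely a display matter by rendering the same stored timestamp under another TZ (GNU stat assumed):

```shell
f=$(mktemp)
stat -c '%y' "$f"            # shown in the local zone, e.g. ... +0200
TZ=UTC stat -c '%y' "$f"     # the same instant, shown as ... +0000
```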
| Inode Timestamp Plus/Minus Interpretation |
1,543,874,986,000 |
Why are byte offsets for a pipe/FIFO maintained in the inode rather than the file table, like for regular files?
I read this line at page 113 of The Design of The Unix Operating System (1986) by Maurice Bach.
Maintaining the byte offsets in inode allows convenient FIFO access to the pipe data and differs from the regular files where the offset is maintained in the file table.
|
Note that that book describes the AT&T Unix system internals as they were 30 years ago. You can't assume things are done the same in modern Unix and unix-like systems.
In any case, regardless of how pipes are implemented internally: for regular files or other seekable files, the byte offset is something that belongs to the open file description (I suppose that's what your book calls a file table entry). That is, two processes opening the same file independently will each have their own offset within the file. One process reading from the file doesn't affect the offset of the other process.
For pipes, all file descriptors of all processes open on a pipe share the same offset. Or in other words the offset belongs to the pipe. So it makes sense to store it in the inode rather than duplicating it in all the open file descriptions.
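The regular-file side of this is easy to demonstrate in the shell (temporary file; dd with bs=1 reads exactly the requested bytes):

```shell
f=$(mktemp)
printf 'abcdef' > "$f"
exec 3< "$f"
exec 4< "$f"                               # an independent open file description
first=$(dd bs=1 count=3 <&3 2>/dev/null)   # advances fd 3's offset only
second=$(dd bs=1 count=3 <&4 2>/dev/null)  # fd 4 still starts at offset 0
echo "$first $second"                      # abc abc
exec 3<&- 4<&-
```

A pipe offers no such independence: there is only one read position, and whoever reads first consumes the data.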
| Byte Offsets for pipe/FIFO |
1,543,874,986,000 |
I have a btrfs file system, and all the folders at the top level are inode 256.
This is not good for me. The device id is the same, so I assume these are "virtual inode numbers"; Btrfs has its own mechanism for doing this.
Is it possible to get the real unique physical inode of each directory ?
look what happens:
root@ReadyNAS-DEV:/home# find / -xdev -inum 256
/
/home
/data
/apps
root@ReadyNAS-DEV:/home#
That's not good.
|
I'm assuming that the three directories /home, /data and /apps are mount points.
When you mount something on /home, the inode that is reported for /home is the inode of the root directory of the mounted partition, not that of the original /home directory. It is therefore not strange that these inodes are the same as those of other partition's root directories.
On my OpenBSD machine (which doesn't use btrfs):
$ find / -xdev -inum 2
/
/home
/usr
/var
/tmp
I see the same on my Ubuntu VM. This is not a bug.
Another way of saying it: The stat structure returned by the stat() system call for the different directories have the same value of st_ino, but different values of st_dev. See the description of stat() and sys/stat.h in POSIX.
It may be that you misunderstand the -xdev option to find. With it, find will not descend into directories that are on other filesystems, but it will still print their names if they match the other criteria.
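In other words, uniqueness comes from the (st_dev, st_ino) pair, which stat can print directly (the mount points listed are just common examples):

```shell
# Same-looking inode numbers are fine as long as the device ids differ.
for p in / /proc /tmp; do
  stat -c 'dev=%D ino=%i  %n' "$p"
done
```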
| btrfs same inode number |
1,543,874,986,000 |
[~]$ stat -c %i /
2
As you can see above, the inode for / is 2, but the first inode of /dev/sda2 is 11.
[~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 350G 67G 266G 21% /
tmpfs 12G 44M 12G 1% /dev/shm
[~]$ sudo tune2fs -l /dev/sda2 | grep 'First inode'
First inode: 11
Can any one help me to understand this difference?
|
The value in the superblock shown by tune2fs is the first inode number usable for new files, while the root directory must always exist when the file system is created.
The kernel’s Ext4 documentation lists the inode numbers which are used internally by file systems features.
| Why are the first inode of the `/` mounted partition and inode of `/` different? |
1,543,874,986,000 |
How do I approximately calculate bytes-per-inode for ext2?
I have 7.3GB of storage (15320519 sectors of 512B each). I have made an ext2 filesystem with block size 4096:
mke2fs /dev/sda2 -i 524288 -m 0 -L "SSD" -F -b 4096 -U 11111111-2222-3333-4444-555555555555 -O none,filetype,sparse_super,large_file
Filesystem label=SSD
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
15104 inodes, 1915064 blocks
0 blocks (0%) reserved for the super user
First data block=0
Maximum filesystem blocks=4194304
59 block groups
32768 blocks per group, 32768 fragments per group
256 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Usually my files are around 100kB (and about 5 files can be 400MB). I tried to read this and this, but it's still not clear how to approximately calculate bytes-per-inode. The current value of 524288 is not enough: I can't create new files on sda2 now, even though I still have a lot of free space.
P.S. Extra info
# df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
/dev/root ext4 146929 84492 59365 59% /
devtmpfs devtmpfs 249936 0 249936 0% /dev
tmpfs tmpfs 250248 0 250248 0% /dev/shm
tmpfs tmpfs 250248 56 250192 0% /tmp
tmpfs tmpfs 250248 116 250132 0% /run
/dev/sda2 ext2 7655936 653068 7002868 9% /mnt/sda2
# df -h
Filesystem Size Used Available Use% Mounted on
/dev/root 143.5M 82.5M 58.0M 59% /
devtmpfs 244.1M 0 244.1M 0% /dev
tmpfs 244.4M 0 244.4M 0% /dev/shm
tmpfs 244.4M 56.0K 244.3M 0% /tmp
tmpfs 244.4M 116.0K 244.3M 0% /run
/dev/sda2 7.3G 637.8M 6.7G 9% /mnt/sda2
# fdisk -l
Disk /dev/sda: 7.45 GiB, 8001552384 bytes, 15628032 sectors
Disk model: 8GB ATA Flash Di
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x0a19a8af
Device Boot Start End Sectors Size Id Type
/dev/sda1 309 307508 307200 150M 83 Linux
/dev/sda2 307512 15628030 15320519 7.3G 83 Linux
|
Your free space is roughly 7.3*1024*1024*1024 bytes. On average, the size of a file is expected to be 100*1024 bytes. This means you have room for approximately
7.3*1024*1024*1024 / (100*1024) = 7.3*1024*1024/100 ≃ 76,546
distinct files. That implies you need precisely that many inodes.
The mke2fs output indicates you currently have 15,104 inodes; it's no wonder you run out of them -- you need approx. five times as many.
I believe you are missing that the -i option already directly specifies your expected average file size. You need one inode per (distinct) file, so if your avg file size is 100KB, then a new inode should be assigned to every 100KB of storage. Simply re-run the command with -i $((100*1024)).
(Your current option -i 524288 tells mke2fs that your usual file size will be 512KB, which is approx. five times larger than reality -- that's why you get approx. five times fewer inodes than needed.)
In summary, just read "bytes-per-inode" as "bytes-per-distinct-file".
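The arithmetic above, spelled out in shell (numbers taken from the question; the mke2fs line is illustrative, not something to run blindly):

```shell
avg_file=$((100 * 1024))                 # expected average file size: 100 KiB
fs_bytes=$((7 * 1024 * 1024 * 1024))     # ~7.3 GB of space, rounded down
echo "inodes needed: $((fs_bytes / avg_file))"
# mkfs sketch: one inode per expected file, i.e. per 100 KiB:
#   mke2fs -b 4096 -i "$avg_file" /dev/sda2
```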
| ext2 How to choose bytes/inode ratio |
1,543,874,986,000 |
When I run df -H, my network mount reports 100% usage:
user@system:/mnt/backup$ df -H
Filesystem Size Used Avail Use% Mounted on
...
//192.168.71.2/Linux-Database-Backup-Storage 806G 806G 0 100% /mnt/backup
Running stat -f confirms there are no free inodes/blocks:
user@system:/mnt/backup$ stat -f /mnt/backup
File: "/mnt/backup"
ID: 0 Namelen: 255 Type: smb2
Block size: 1024 Fundamental block size: 1024
Blocks: Total: 786432000 Free: 0 Available: 0
Inodes: Total: 0 Free: 0
However, when I try to find what's using up all the disk space, the reported usage (~115G) isn't anywhere near the network mount size (~806G):
user@system:/mnt/backup$ du -h /mnt/backup
13G /mnt/backup/backups-a
99G /mnt/backup/backups-b
3.7G /mnt/backup/backups-c
115G /mnt/backup
I have a relatively small number of files as well; checking the reported inode usage shows only 66 inodes in use:
user@system:/mnt/backup$ du -s --inodes /mnt/backup | sort -rn
66 /mnt/backup
My /etc/fstab configuration:
user@system:/mnt/backup$ cat /etc/fstab
\\192.168.71.2\Linux-Database-Backup-Storage /mnt/backup cifs credentials=/home/user/.smbcredentials2,file_mode=0755,dir_mode=0755,vers=2.0,_netdev,uid=1000,gid=1000 0 0
Why is there a discrepancy between df and du? What's using up all the extra storage?
Thanks!
|
There are various possible reasons for this, mostly hinging on the fact that a network share need not correspond to an entire filesystem. Here, it's possible that although the remote filesystem is 806GB, only part of it is being exported to you over the network. You have used 115GB but another party has used the remaining 690GB or so.
Here's an illustrative example from my home network:
mkdir -p /mnt/net
mount -o 'user=…' //REMOTE/Share /mnt/net
df -h /mnt/net
Filesystem Size Used Avail Use% Mounted on
//REMOTE/Share 984G 915G 59G 94% /mnt/net
du -hsx /mnt/net
8.7G /mnt/net
In my situation there is a 1TB filesystem that is shared out in a number of different ways. (Each of my family members has their own semi-private network share, for example.) The total usage is 915GB of 984GB but the share shown here is using less than 10GB of it.
| Discrepancy between "df" and "du" on a CIFS network drive |
1,686,470,364,000 |
Is there any way to find exactly the blocks allocated to an inode, and view that file? I don't care if I have to be on a live CD, but I need to do this, for example:
cat verylongsentice > a
ls -i a
101010 a
ln a /some/random/path
rm a
inode_find 101010
verylongsentice
Is there any way to do this, maybe as root or from a live CD? I do not care about the file name. Also, would this be possible with deleted files?
|
There's no inode number for a deleted file. Also: inode numbers are not guaranteed to be immutable, or not to be reused immediately.
In the comments below your question you're very insistent that what you want should work. It shouldn't:
To open a file directly by inode number and not through a file name is in direct conflict with how the POSIX idea of a file works. It would also be incompatible with the POSIX permissions model, in which the path through which you access a file decides whether you can or cannot access it.
Therefore, the Linux kernel cannot offer you an API for opening files via inode.
In case the file does still exist, AND your file system actually stores inodes (I would guess many file systems do not store literal inode numbers, but compute them from positions in on-disk tree structures), you could use find /mountpoint -inum {number} to look for a file with that inode number on the whole file system. If it has already been deleted, it no longer exists, and you thus can't find it.
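For the still-existing case, a self-contained sketch (temporary directory; the file content is arbitrary):

```shell
d=$(mktemp -d)
echo verylongsentence > "$d/a"
ln "$d/a" "$d/b"                  # a second hard link to the same inode
ino=$(stat -c %i "$d/a")
find "$d" -xdev -inum "$ino"      # prints every name linked to that inode
```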
| open a file by inode number [duplicate] |
1,686,470,364,000 |
When I use the dumpe2fs command to look at the Block Group of the ext4 filesystem, I see "free inodes" and "unused inodes".
I want to know the difference between them.
Why do they have different values in Group 0?
Group 0: (Blocks 0-32767) [ITABLE_ZEROED]
Checksum 0xd1a1, unused inodes 0
Primary superblock at 0, Group descriptors at 1-3
Reserved GDT blocks at 4-350
Block bitmap at 351 (+351), Inode bitmap at 367 (+367)
Inode table at 383-892 (+383)
12 free blocks, 1 free inodes, 1088 directories
Free blocks: 9564, 12379-12380, 12401-12408, 12411
Free inodes: 168
Group 1: (Blocks 32768-65535) [ITABLE_ZEROED]
Checksum 0x0432, unused inodes 0
Backup superblock at 32768, Group descriptors at 32769-32771
Reserved GDT blocks at 32772-33118
Block bitmap at 352 (+4294934880), Inode bitmap at 368 (+4294934896)
Inode table at 893-1402 (+4294935421)
30 free blocks, 0 free inodes, 420 directories
Free blocks: 37379-37384, 37386-37397, 42822-42823, 42856-42859, 42954-42955, 44946-44947, 45014-45015
Free inodes:
|
The "unused inodes" reported are inodes at the end of the inode table for each group that have never been used in the lifetime of the filesystem, so e2fsck does not need to scan them during repair. This can speed up e2fsck pass-1 scanning significantly.
The "free inodes" are the current unallocated inodes in the group. This number includes the "unused inodes" number, so that they will still be used if there are many (typically very small) inodes allocated in a single group.
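You can inspect both counters yourself on a scratch image (no root required; the 8 MiB size is arbitrary):

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 status=none
mke2fs -q -F -t ext4 "$img"           # fresh fs: most inodes never touched
dumpe2fs "$img" 2>/dev/null | grep -E 'unused inodes|Free inodes:' | head -4
```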
| Ext4 "unused inodes" "free inodes" difference? |
1,686,470,364,000 |
I installed Debian Stretch through the installer in a software RAID 10 configuration. There are 4 drives, each 14TB. The partition was formatted by the installer with ext4. The inode ratio defaults to 16384.
cat /proc/mdstat
Personalities : [raid10] [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4]
md3 : active raid10 sdc4[1] sda4[0] sdb4[2] sdd4[3]
27326918656 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
bitmap: 5/204 pages [20KB], 65536KB chunk
md2 : active raid1 sdd3[3] sdc3[1] sda3[0] sdb3[2]
976320 blocks super 1.2 [4/4] [UUUU]
md1 : active raid10 sdd2[3] sdc2[1] sda2[0] sdb2[2]
15616000 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
unused devices:
mdadm --detail /dev/md3
/dev/md3:
Version : 1.2
Creation Time : Sun Mar 8 16:21:02 2020
Raid Level : raid10
Array Size : 27326918656 (26060.98 GiB 27982.76 GB)
Used Dev Size : 13663459328 (13030.49 GiB 13991.38 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Wed Apr 1 01:00:06 2020
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : aaaaaaa:2 (local to host aaaaaaa)
UUID : xxxxxxxxxxxxxxxxxxxxxxxx
Events : 26835
Number Major Minor RaidDevice State
0 8 4 0 active sync set-A /dev/sda4
1 8 36 1 active sync set-B /dev/sdc4
2 8 20 2 active sync set-A /dev/sdb4
3 8 52 3 active sync set-B /dev/sdd4
cat /etc/mke2fs.conf
[defaults]
base_features = sparse_super,large_file,filetype,resize_inode,dir_index,ext_attr
default_mntopts = acl,user_xattr
enable_periodic_fsck = 0
blocksize = 4096
inode_size = 256
inode_ratio = 16384
Now I run:
tune2fs -l /dev/md3
tune2fs 1.43.4 (31-Jan-2017)
Filesystem volume name:
Last mounted on: /
Filesystem UUID: xxxxxxxxxxxxxxxxxxxxxxxxxxx
Filesystem magic number: 0xEF53
Filesystem revision #: 1 (dynamic)
Filesystem features: has_journal ext_attr dir_index filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file dir_nlink extra_isize metadata_csum
Filesystem flags: signed_directory_hash
Default mount options: user_xattr acl
Filesystem state: clean
Errors behavior: Continue
Filesystem OS type: Linux
Inode count: 426983424
Block count: 6831729664
Reserved block count: 341586483
Free blocks: 6803907222
Free inodes: 426931027
First block: 0
Block size: 4096
Fragment size: 4096
Group descriptor size: 64
Blocks per group: 32768
Fragments per group: 32768
Inodes per group: 2048
Inode blocks per group: 128
RAID stride: 128
RAID stripe width: 256
Flex block group size: 16
Filesystem created: Sun Mar 8 16:24:38 2020
Last mount time: Tue Mar 31 12:06:30 2020
Last write time: Tue Mar 31 12:06:21 2020
Mount count: 17
Maximum mount count: -1
Last checked: Sun Mar 8 16:24:38 2020
Check interval: 0 ()
Lifetime writes: 27 GB
Reserved blocks uid: 0 (user root)
Reserved blocks gid: 0 (group root)
First inode: 11
Inode size: 256
Required extra isize: 32
Desired extra isize: 32
Journal inode: 8
Default directory hash: half_md4
Directory Hash Seed: xxxxxxxxxxxxxxxxxxxxxxxxxxx
Journal backup: inode blocks
Checksum type: crc32c
Checksum: 0x30808089
bytes-per-inode = (blocks/inodes) * block_size
In my case:
bytes-per-inode = (6831729664/426983424) * 4096 = 16 * 4096 = 65536
Why is the ratio showing as 65536 in the tune2fs -l output? It should be 16384.
I have the same Debian Stretch distribution installed on my notebook, and there is no discrepancy between /etc/mke2fs.conf and tune2fs -l.
|
Your file system is over 16 TiB in size, so mke2fs defaulted to the “huge” file system type, with an inode ratio of 65,536 bytes. See the -T option in the linked manpage, and the huge type in mke2fs.conf:
huge = {
inode_ratio = 65536
}
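The reported ratio can be recomputed from the `tune2fs -l` values in the question; note that floating-point rounding matters here, since 6831729664/426983424 is 15.99998..., not exactly 16 (a sketch; the `mkfs.ext4 -i` invocation in the comment is an untested suggestion, not something from the thread):

```shell
# Recompute bytes-per-inode from the tune2fs -l output above
ratio=$(awk -v blocks=6831729664 -v inodes=426983424 -v bs=4096 \
    'BEGIN { printf "%.0f\n", blocks / inodes * bs }')
echo "bytes-per-inode: $ratio"    # matches the "huge" type default of 65536
# To keep the [defaults] ratio on a >16 TiB filesystem, it has to be requested
# explicitly at creation time, e.g.: mkfs.ext4 -i 16384 /dev/md3
```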
| ext4 inode ratio discrepancy between /etc/mke2fs.conf and tune2fs |
1,686,470,364,000 |
I have 3 computers, A, B, and C, and I perform identical operating system and software installations on them. For any specific file on A, can I expect that same file on B and C to have the same inode number for its instance of that file?
Our intrusion detection system is set up by acquiring an initial file system image from A and then using that same file system image to do future comparisons against A, B, and C. I am new to the program, but it seems that this has worked in the past.
I don't know how the inode number sequencing works, so I'm guessing it just starts at 1 and counts up for each file, or something similar. If that's the case, that's probably why file inode numbers have been consistent even across computers for us in the past - the same files were created in the same order. Though I'm not sure if we can always count on that.
However, now I am getting notifications of a few files changed, and for the first file I am looking into it is only the file inode number which has changed. I think someone reinstalled the operating system and software on the computer with the notifications.
Can file inode numbers be counted on to be the same across identical OS/software installs on different computers (or sequential re-installs on the same computer)?
If I were to acquire a new file system image from either A, B, or C, can I expect that to fix my "problem" (not even sure if it's a problem)?
I generally have access to only 1 or 2 of the computers at a time, so I cannot inspect A or C right now, and I do not know what their report would look like. I only know that the inode number of at least 1 file on computer B is not what was expected.
In this case, the operating system is QNX 6. For the file system type, mount tells me that the /dev/hd files are "on / type qnx4"... so file system type qnx4? I guess QNX has its own file system type? I didn't realize that. Or maybe that's not accurate. Other commands for checking the file system type do not seem to exist on the computer.
Update: Apparently I was mistaken about something. Although our reference data on the original state of the file system does include files' inode numbers and I have the option to include that in the test, I was not supposed to include the inode data in this check that I described in the question, and "It worked in the past" is because of this. So I do not actually need what I have asked for here after all, sorry about that. I will leave this question open though since I still find it interesting and a partial answer has been started in the comments.
|
[...] and I perform identical operating system and software installations on them.
For any specific file on A, can I expect that same file on B and C to have the same inode number for its instance of that file?
No, because I/O runs in parallel, and the order of I/O operations is not deterministic and affected by what the hardware does. The OS assigns inode numbers, and if some operations run in a different order on, say, system A and B, the OS could assign different inode numbers for the "same" file.
A similar artifact is that assignments in /dev/ are not guaranteed to be consistent across reboots, even on the same system: What is /dev/sdb now could have been /dev/sda on the last boot.
If I were to acquire a new file system image from either A, B, or C, can I expect that to fix my "problem" (not even sure if it's a problem)?
It's not a problem (unless you make it into one), and yes, if you copy over whole file system images, they'll have the same inode numbers.
Although our reference data on the original state of the file system does include files' inode numbers and I have the option to include that in the test, I was not supposed to include the inode data in this check that I described in the question,
Exactly. The answer is "no, you can't rely on it, and therefore you shouldn't test it, because then it becomes a problem."
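The non-determinism is easy to see even on a single machine: a byte-identical copy of a file gets whatever inode happens to be free at that moment. A minimal sketch using GNU stat on Linux:

```shell
# Two byte-identical files still get independently allocated inode numbers
tmp=$(mktemp -d)
echo "same content" > "$tmp/a"
cp "$tmp/a" "$tmp/b"
ino_a=$(stat -c %i "$tmp/a")
ino_b=$(stat -c %i "$tmp/b")
echo "a: inode $ino_a, b: inode $ino_b"
rm -r "$tmp"
```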
| How does the sequence of inode numbers work? Can I expect consistency across identical installs on different computers? |
1,686,470,364,000 |
The *nix filesystems maintain an inode table at the beginning of the disk (or at some fixed location). It is indexed by the inode number, which is an integer that uniquely identifies an inode. Knowing the inode number, an inode can be found quickly. The inode contains pointers/addresses to other disk blocks, which contains the actual data of the file.
I would like to know whether my approach below to get rid of the inode table and the inode number is efficient:
We still have inodes, but now, the inodes are stored in the data region of disk, and instead of keeping track of the inode number, we just record the disk address or block number of the inode. Whenever we try to access a file or its inode, we just use the disk address to find the inode, instead of indexing into the inode table using the inode number. This will save us from another layer of indirection.
What is missing in my approach? I would like to understand the rationale behind the inode table.
|
If I understand you correctly, you want to replace the inode number with the block address. That means (1) one inode per block, which wastes a lot of space (the inode isn't that large), and (2) it's not that different from using an inode number: An inode has a fixed size, so a block contains a known number n of inodes. So if you divide the inode number by n (which ideally is a power of two, so it's just a shift), the quotient is the block number of the inode (plus the disk address where the inode table starts), and the remainder is the index of the inode inside that block.
To understand the rationale behind the inode table, think about what data is stored in the inode table: It's attributes like owner, group, permissions and timestamps, and indices and indirect indices of the data blocks. You have to store those somewhere, and you can't store them together with the file data.
So if you want to design your own filesystem, the first question you have to answer is "how do I identify the data blocks that belong to a file?" and the second question is "where do I store attributes like ownership, permissions and timestamps?". And yes, you can use different schemes for that than inodes.
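The divide-and-offset lookup described above can be made concrete with sample numbers (block size, inode size, and the inode-table start block are assumptions for the sketch; 893 is borrowed from the dumpe2fs output quoted earlier in this dump):

```shell
block_size=4096
inode_size=256
table_start=893                # first block of this group's inode table (sample)
ino=100                        # an arbitrary inode number
per_block=$(( block_size / inode_size ))              # 16 inodes per block
block=$(( table_start + (ino - 1) / per_block ))      # inode numbers start at 1
offset=$(( ((ino - 1) % per_block) * inode_size ))
echo "inode $ino -> block $block, byte offset $offset"
```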
Edit
As for
why not just use its address, like we do with main memory and objects therein?
As I wrote, basically you have the block address - you'll just have to divide first, and add an offset. If you add the offset to every inode on principle, the "inode number" will be much larger, and you'll have a constant value in the high bits that's repeated in every inode number. This in turn will make each directory entry larger.
Don't forget that the unix filesystem was invented when harddisk sizes were around 20 Mbytes or so. You don't want to waste space, so you pack everything densely, and you avoid redundancy. Adding an offset every time you access an inode is cheap. Storing this offset as part of every "inode number" reference is expensive.
And the interesting thing is that even though the inode scheme was invented for what are, by today's standards, small harddisks, it scales well, and even on harddisks in the terabyte range it "just works".
| Filesystem design: necessity of inode number and table [closed] |
1,686,470,364,000 |
CentOS6
I am logged in as root. It is a virtual machine running on a Windows 10 host in virtualbox as a vagrant machine.
I tried
chmod -R 777 /home/thomas/WWW
chown -R root:root /home/thomas/WWW
when trying to rm with rm -rf /home/thomas/WWW I get
remove `/home/thomas/WWW/': Is a directory
which is weird
ls -la reveals a broken inode
d?????????? ? ? ? ? ? WWW
but I have no idea how to fix it now.
|
If your filesystem is none of /boot, /, /usr or /var, things are easy:
just comment your filesystem in /etc/fstab
#/dev/vgdata/archives /home/archemar/tmp365 ext4 defaults 0 2
reboot
fsck -t ext4 /dev/vgdata/archives (fsck should recognize ext4 fstype)
else
locate your filesystem; df . will tell you
/dev/mapper/vg1-lv1 ... /var
you are in volume group vg1 and logical volume lv1
download an ISO CD of the same version of your OS (CentOS or Red Hat in this case).
use VirtualBox to mount this ISO and boot from it.
from the boot menu, do not install; choose rescue mode.
locate your volume group, logical volume and filesystem, and run a fsck on it.
| Cannot rm corrupt directory |
1,686,470,364,000 |
I get the same confusion in multi-level paging as well. For inodes, we have direct and indirect pointers that point to data blocks. However, for large files we prefer to use indirect pointers since they can store a lot more pointers for our purpose.
However, why is it more data-consuming to store direct pointers in sequence at one level and less so if we use indirect pointers? Surely the pointers all must exist at some place in the filesystem, and incur the same amount of space, don't they? Where does this extra space come from?
Here is an example of what I think: If I have 10 direct pointers and 2 indirect pointers, each of which lead to 128 and 128^2 pointers respectively, will the total size consumed be the same as having 10 + 128 + 128^2 direct pointers? If not, how is the space saving done?
As a side question, what is the typical size of an inode and why do the sizes of inode vary?
|
The original hierarchy of the inodes levels works roughly like this:
You can store one or a few block numbers directly in the inode. This means you use a few bytes more for the inode, but for small files, you don't have to allocate a complete block, which is mostly empty.
The next level is one indirection: You allocate a block to store the block pointers. Only the address of this indirect block is stored in the inode. This doesn't use somehow "less space", and most filesystems, even early ones, worked like that (have a pointer near the inode/filename which points to a block, which stores the block numbers of the file).
But what do you do when the space in this block runs out? You have to allocate another block, but where do you store the reference to this block? You could just add those references to the inode, but to store larger files, the inode would get large. And you want small inodes, so that as many inodes as possible fit into a single block (less disk access to read more inodes).
So you use a two-level indirect block: You just add one pointer to the inode, then you have a whole block to store pointers to indirect blocks, and the indirect blocks store the block address of the file itself.
And so on, you can add higher-level indirect blocks, or stop at some stage, until you reach the maximal size of a file possible with the structure you want.
So the point is not "use up less space in total", but "use a scheme that uses blocks efficiently for the expected distribution of files with respect to size, i.e. many small files, some larger files, and very few huge files".
Page tables on the other hand work very differently.
Edit
To answer the questions in the comment:
Data blocks are of fixed sizes (originally 512 bytes, IIRC), which is a multiple of the block size of the underlying harddisks. So data block size can't "decrease".
As I tried to describe above, the whole point of having the inodes not use up too much space is to make inode access faster (or, alternatively, make caching inodes use up less memory - back then when the unix file system with inodes was invented, computers had a lot less memory than today). It's not about somehow saving space in total. As you say yourself, everything has to be stored somewhere, and if it doesn't use up space at location X, it will use up space at location Y.
Just adding a variable number of block pointers to the inode is not practical, because the inode must take up a fixed amount of space - you want to use the inode number to calculate the block address and the offset inside the block where the inode information is stored. You can't do that if every inode has a different size. So there must be some form of indirection.
Page tables work differently because hardware implements them differently - that's just how it is. The hierarchy has a fixed depth, always the same (though sometimes configurable). And while reading a block from disk is slow, that doesn't matter for page tables. So the design issues are completely different.
| Why does using indirect pointers in inodes not incur the same amount of space? |
1,686,470,364,000 |
When I list an inode with stat command:
File: 'text'
Size: 0 Blocks: 0 IO Block: 4096 regular empty file
Device: 802h/2050d Inode: 8391119 Links: 1
Access: (0664/-rw-rw-r--) Uid: ( 1000/ cagdas) Gid: ( 1000/ cagdas)
Access: 2017-07-31 17:00:00.513753567 +0300
Modify: 2017-07-31 17:00:00.513753567 +0300
Change: 2017-07-31 17:00:00.513753567 +0300
Birth: -
what does Device: 802h/2050d stand for? When I do stat on char or block devices from /dev, it is shown as Device: 6h/6d.
|
802 (hexadecimal) is the combination of the major and minor numbers (8, 2) of /dev/sda2 where the file text resides. The major number is placed in the most significant half of a 16-bit word, the minor number in the least significant half. For historical reasons the value is displayed like this, even though Linux since version 2.6 uses 32 bits for the device number (12 bits major, 20 bits minor). 2050 is the same value in decimal.
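The split can be verified with shell arithmetic (a sketch; `stat -c %D` on the same file would print the identical hex value):

```shell
# Decode the historical 16-bit encoding: major in the high byte, minor in the low
dev=0x802
major=$(( dev >> 8 ))
minor=$(( dev & 0xff ))
echo "major=$major minor=$minor"     # 8 and 2, i.e. /dev/sda2
```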
| What does "Device: 802h/2050d" stand for in an inode? |
1,686,470,364,000 |
I have a Red Hat Linux server, Red Hat Enterprise Linux Server release 5.5 (Tikanga). It's a sensitive production server, and it has now run out of inodes in the /storage2 directory. It has plenty of space, but it has almost exhausted its inodes, and I need to increase the number of inodes ASAP. This link has a solution, but it requires taking a backup, changing the number of inodes in the filesystem, and then restoring. I wonder if there is any online solution so I could increase the inodes without a backup and restore.
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol02
20G 6.5G 12G 35% /
/dev/mapper/VolGroup00-LogVol04
58G 4.4G 51G 8% /home
/dev/mapper/VolGroup00-LogVol01
9.7G 211M 9.0G 3% /tmp
/dev/mapper/VolGroup00-LogVol03
20G 16G 2.6G 87% /var
/dev/mapper/vg_fvnx_stg2-lv_fvnx_stg2
3.0T 2.2T 690G 76% /storage2
$ df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/VolGroup00-LogVol02
5.0M 270K 4.8M 6% /
/dev/mapper/VolGroup00-LogVol04
15M 310 15M 1% /home
/dev/mapper/VolGroup00-LogVol01
2.5M 71 2.5M 1% /tmp
/dev/mapper/VolGroup00-LogVol03
5.0M 7.9K 5.0M 1% /var
/dev/mapper/vg_fvnx_stg2-lv_fvnx_stg2
192M 191M 1.3M 100% /storage2
|
Answer: no, you can't increase the inode count without a backup/restore.
The man page for mkfs.ext4 (which I assume is the filesystem type in play here) is pretty clear on this:
"It is not possible to change this value after the filesystem is created."
You could look into such solutions as creating a /storage2/subdirectoryname filesystem, and effectively place a few thousand files into that new filesystem, thus releasing a pile of inodes from /storage2.
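Before carving out a sub-filesystem, it helps to know which directories are eating the inodes. A sketch of a per-directory count (run against a throwaway tree here; on the real system the starting point would be /storage2):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/many" "$tmp/few"
for i in 1 2 3 4 5; do : > "$tmp/many/f$i"; done
: > "$tmp/few/only"
# One line per inode, keyed by parent directory; biggest consumers first
report=$(find "$tmp" -xdev -printf '%h\n' | sort | uniq -c | sort -rn)
printf '%s\n' "$report"
rm -r "$tmp"
```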
| increase the number of inodes online |
1,686,470,364,000 |
Today, I ran out of inodes on one of my VPSs.
I deleted a bunch of superfluous small files, freeing enough inodes to make the system operational again:
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/simfs 586K 529K 58K 91% /
I also hacked together a short command to give me the number of used inodes (essentially listing all files with their inode numbers, filtering out duplicates and counting the lines of the result):
sudo find / -xdev -type f -printf '%i~%P\n' > files.txt
<files.txt | sort -u -t'~' -k1,1 > inodes.txt
wc -l files.txt inodes.txt
1513608 files.txt
275320 inodes.txt
So it looks like the files on / only account for 275k inodes, but df reports 529k used.
How can that be?
(I even restarted the system to free any inodes that were still locked by processes, but that didn't change the amount of used inodes at all.)
|
Use a bind mount instead of -xdev. Also, directories use inodes too.
mkdir /mnt/somewhere
mount -o bind / /mnt/somewhere
find /mnt/somewhere -printf '%i\n' | sort -u | wc -l
| Where have all my inodes gone? |
1,686,470,364,000 |
A question was given to us by a lecturer:
How many data blocks are needed to collect all the data in an EXT4
file system using inodes if the file size is 54 KB and there is a
block size of 4KB.
Answer: 15
The only explanation I can find is 54/4 = 13.5, which is rounded up to 14 data blocks, and we add 1 inode block, so 15 blocks in total. What confuses me is that the question asks explicitly for data blocks, not inode blocks. Does this mean that an inode block is the same as a data block? Regardless of that, is the statement that each file gets one inode block true, and does that apply only to the EXT4 filesystem?
I have not yet gotten the explanation from a lecturer nor could I find one on the internet, thus I am asking it here. Please let me know, if it is not the right place to ask.
I thank for the answer in advance.
|
It's hard to know what they're thinking exactly (you'd have to ask them), especially since they talk about "all data on the FS" (not just one file), and mention "using inodes" (in plural).
But, one thing they might be referring to, would be the basic block addressing, which addresses the first 12 data blocks directly from the inode, and then allocates an extra block to contain the addresses of the next 1024 data blocks (assuming the usual 4 kB filesystem block size). For 14 data blocks, you'd need that one indirect block in addition to the inode itself, for a total of 15 blocks.
However, that's a bit dated, since AFAIK ext4 usually uses extent-based mappings nowadays, meaning it stores just one entry for each contiguous run of data blocks. That means the amount of metadata needed depends on how fragmented the file is, but I'd assume the common case is that there are only a few extents needed, and they can be stored directly in the inode:
The root node of the extent tree is stored in inode.i_block, which allows for the first four extents to be recorded without the use of extra metadata blocks.
See "The Contents of inode.i_block" in the Ext4 Disk Layout document on wiki.kernel.org.
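Under the indirect-block reading, the arithmetic behind the "15" works out as follows (a sketch of the lecturer's presumed reasoning, not of what ext4 with extents would actually allocate):

```shell
file_kb=54; block_kb=4; direct=12
data_blocks=$(( (file_kb + block_kb - 1) / block_kb ))  # ceil(54/4) = 14
indirect=$(( data_blocks > direct ? 1 : 0 ))            # 14 > 12: one indirect block
total=$(( data_blocks + indirect ))
echo "$data_blocks data + $indirect indirect = $total blocks"
```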
| Each file gets one inode block |
1,686,470,364,000 |
When I trace the function graph when calling write(), I find that within function ext4_file_write_iter() it locks the inode->i_rwsem by calling inode_lock(inode) at the beginning. After that call __generic_file_write_iter() to write data to file. And unlock the inode in the end.
So is inode->i_rwsem used to protect concurrent writes to the same file?
But I wrote a program that concurrently writes data to the same region of a file (pwrite(fd,buf,SIZE,0)), and the result shows that the writes are not serialized. I found that one has to use flock/fcntl to serialize concurrent writes, which works based on inode->i_flctx.
What I want to ask is: what is the purpose of inode->i_rwsem?
What is the difference among inode->i_rwsem, inode->i_flctx and inode->i_lock?
Thanks.
|
inode->i_rwsem is used internally by the kernel to ensure that the kernel itself doesn't read or write from/to a file at the same time, to avoid any corruption or race conditions. It doesn't affect the userspace; you can still have the file opened for read/write by multiple processes at the same time. But if multiple processes try to read/write from/to the file simultaneously, the kernel will actually do it serially behind the scenes.
In your case, if there are two processes trying to write to the same region with pwrite(fd,buf,SIZE,0), then without an internal locking mechanism such as what i_rwsem is used for, the kernel might start writing some of the data from the first process and, at the same time, start writing the data from the second process, before the write operation of the first process has completed. It would impact the integrity of the entire filesystem, and might even lead to the kernel crashing due to a race condition.
The internal locking in the kernel prevents those situations. The first write from the first process will complete, and only then the second write will be performed (and probably override the "write" from the first process, if they both write to exactly the same region in the file).
inode->i_flctx, as you've already found out, is controlled by flock/fcntl calls from userspace, when the process itself wants to limit the number of processes that can have the file open at the same time. For instance, one process can lock the file for writing, and if another one wants to lock the same file before the first one releases it, it will be denied or blocked.
Let's take this case of two processes that write to the same file, and perform different writes. Each process could override the data written by the other process. In order to avoid that in the userspace, the application itself could use flock/fcntl to prevent two processes opening the same file.
Here's another example:
One process writes to a file, and a second process reads from the same file.
The second process could read partial data because the first one hasn't completed the write.
In that case, to prevent this situation:
The first process will have to acquire a lock on the file to prevent other processes from opening it until it finishes the write.
The second process will try to acquire a lock on the same file, and will be blocked (or fail, depending on how it tried to lock the file) because it's already locked by another process.
The first process finishes the write and releases the lock (again, explicitly in userspace by calling one of the system calls mentioned).
Only then can the second process lock the file for reading.
While the second process is reading the file, other processes that try to acquire a lock on the file will again be blocked until the reading process finishes reading.
So with flock/fcntl you can handle those cases programmatically in the application's source code, and the kernel uses i_flctx to know whether a certain process has acquired a lock on the file, and to prevent other processes from acquiring another lock until the first process has released it.
inode->i_lock, just like inode->i_rwsem, is used only by the kernel to protect the kernel from race conditions when dealing with the inode's state in the kernel. i_rwsem is used to protect the writing, i_lock is used to protect changes in the inode state.
In other words, unless you're a kernel developer, you shouldn't worry about inode->i_lock or inode->i_rwsem, which are only parts of the kernel's implementation mechanism of a inode, and also about inode->i_flctx which is part of the kernel's internal implementation mechanism of file locking from userspace.
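The userspace side (the i_flctx path) can be poked at from the shell with the flock(1) utility from util-linux; a minimal sketch showing that a held exclusive lock makes a second, non-blocking attempt fail:

```shell
lockfile=$(mktemp)
exec 9>"$lockfile"          # open the file on fd 9
flock -x 9                  # take an exclusive lock on that open description
if flock -n "$lockfile" -c true; then
    result="unlocked"
else
    result="locked"         # the non-blocking attempt from a child is denied
fi
exec 9>&-                   # closing the descriptor releases the lock
rm -f "$lockfile"
echo "$result"
```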
| What‘s different between inode->i_rwsem and i_flctx? |
1,686,470,364,000 |
I have an example to better illustrate what I'm talking about:
$ touch tas
$ ln -s /etc/leviathan_pass/leviathan3 /tmp/l2/tas
ln: failed to create symbolic link '/tmp/l2/tas': File exists
Basically I can only symlink if the file I want to create doesn't already exist. I understand this issue when talking about hard links - there's no way of linking two different files, as it would lead to an inode conflict (so the file must exist at the time the command runs to ensure, I'm presuming, that both names "point" to the same inode). But when talking about soft links it doesn't make sense to me; symlinks have nothing to do with inodes, so what could be the problem?
Thanks in advance for any help.
|
The command ln won’t clobber existing files by default. You can use ln -sf TARGET LINK_NAME to force overwriting the destination path (LINK_NAME) with a symlink.
You can use ln -f TARGET LINK_NAME to overwrite LINK_NAME with a hard link too. Your explanation about an inode conflict doesn't make sense; it just replaces the file. You are partially right that the target has to exist first for hard links.
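The difference is easy to demonstrate (using /etc/hostname as a stand-in target for the leviathan3 path in the question; note that a symlink's target does not even have to exist):

```shell
tmp=$(mktemp -d)
: > "$tmp/tas"                                        # pre-existing file
ln -s /etc/hostname "$tmp/tas" 2>/dev/null && first=ok || first=refused
ln -sf /etc/hostname "$tmp/tas" && second=ok || second=refused
target=$(readlink "$tmp/tas")
echo "plain ln -s: $first, ln -sf: $second, now points to: $target"
rm -r "$tmp"
```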
| Why can't I symlink a preexisting file to a target file? [duplicate] |
1,686,470,364,000 |
There are two groups of LSM hooks under Security hooks for inode operations: inode_* and path_*.
Many of them look identical. For example, inode_link and path_link.
What is the difference between the inode and path hooks? When each should be used?
|
Path hooks were added by the TOMOYO maintainers, to allow file path calculation in an LSM module.
These hooks receive a pointer to path struct.
inode hooks reside at a lower level and receive a pointer to an inode struct. The file path cannot be retrieved from this struct.
Generally speaking, if you don't need the file path you should use inode hooks since they are called on a lower level. It means that your hook will be called less frequently.
Note that path hooks are compiled only if the kernel is compiled with CONFIG_SECURITY_PATH.
| LSM Hooks - What is the difference between inode hooks and path hooks |
1,686,470,364,000 |
I'm reading some doc about UNIX but I don't understand two things:
Why is important for the kernel to know the current working directory of the running process?
Why not keeping the inode information in the directory?
|
The system needs to keep track of the current directory of all processes because otherwise processes couldn't use relative paths for anything (including for example file open or stat, and changing directories - what does chdir("..") mean if you don't track where the process currently sits?).
There's also the matter that without tracking that info, the kernel wouldn't be able to check if a process is sitting inside a given mount point. So you'd be liable to accidentally unmount a filesystem from under a process, leading to inconsistent state.
For your second question: think about hard links. They would be much harder to implement correctly and safely if the inode data was in the directory "structure" itself. Much easier to have essentially pointers to the inodes in the directory structure, makes adding or removing links to a given inode pretty simple.
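The "pointers to the inodes" view is directly observable: a hard link is just a second directory entry naming the same inode (a sketch using GNU stat):

```shell
tmp=$(mktemp -d)
echo data > "$tmp/orig"
ln "$tmp/orig" "$tmp/link"        # second directory entry, same inode
ino1=$(stat -c %i "$tmp/orig")
ino2=$(stat -c %i "$tmp/link")
links=$(stat -c %h "$tmp/orig")
echo "inode $ino1 == inode $ino2, link count $links"
rm -r "$tmp"
```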
| Kernel current working directory and inode information placement |
1,686,470,364,000 |
background:
Being somewhat of a coward, I have up to now used dd on whole filesystems for backup. The major drawback has been the excessive use of storage for those complete backups (which unfortunately also include free blocks).
question
I would now like to back up only the files inside the filesystem, yet still be able to recreate the filesystem if need be. The data itself can be easily extracted (e.g. via rsync -a),
but I wonder if there are some cases I'm overlooking where, for instance, the inode number assigned to a file would matter.
This is especially relevant when backing up the / root filesystem with the system on it. I am not so much worried about the /home filesystem, but I can imagine that something strange might happen when restoring the / root filesystem if the inodes have suddenly changed.
A good answer would include a comprehensive list of cases in which the inode number might matter and eventually cause trouble.
update
Some experimenting reveals that, for instance, hard links (naturally referencing the same inode) might need some attention. I am unsure whether they necessarily need to be reassigned the same inode number.
Luckily the number of hard links on a plain Ubuntu 12.04 here is only about 10 files (so I can script-record them and repair if needed; rsync -a does not care about the inode number).
Example
One case I think is important is the SELinux security module, as it basically uses inode numbers. So this is already one case, but maybe there are others.
Update2
I just ran a test backing up and restoring a dummy Ubuntu 12.04 system using rsync -aH, reformatting the partition in between to set up a new ext4 with mkfs.ext4 /dev/sdX -U oldfsUUID. Essentially, when the files were restored, most of the inodes were no longer related to the original ones. Luckily, it seems that for this one case of my Ubuntu 12.04 setup the inodes did not matter. I am aware that this does not prove much. I would still appreciate an answer with a list of problematic cases. SELinux I already mentioned, but I think there might be more, and hence the chance for a good answer from somebody who knows.
|
Inode numbers don't matter to normal applications. This is partly because there's little use for inode numbers, and partly because if an application depended on inode numbers, it would stop working after a backup-and-restore cycle. So backup systems don't restore the inode numbers, so applications don't depend on them, so backup systems don't need to restore the inode numbers.
Most approaches to backups wouldn't even be able to restore inode numbers. The filesystem driver in the kernel uses whatever inode is free when creating a file, there's no way to constrain that from applications.
Some filesystems don't even have inode numbers.
The one thing applications use inode numbers for is to test whether two paths designate the same file: compare the device number and inode number, at specific point in time. The device number and inode number don't have to remain constant over time for this. Backup programs themselves do this to detect hard links.
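That device-plus-inode comparison is also how one can list every hard-linked file up front before a backup, as the questioner did for their roughly 10 files; a sketch against a throwaway tree:

```shell
tmp=$(mktemp -d)
echo data > "$tmp/a"
ln "$tmp/a" "$tmp/b"              # hard-linked pair
echo other > "$tmp/c"             # ordinary single-link file
# Files with more than one link, grouped by inode number
multi=$(find "$tmp" -type f -links +1 -printf '%i %p\n' | sort)
printf '%s\n' "$multi"
rm -r "$tmp"
```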
There's no way to open a file given its inode number, or to get a path to a file given its inode number (excluding debugging tools requiring access to the underlying block device). On most filesystems, the path points to the inode, but the inode doesn't contain a pointer to the directory containing the file, so this couldn't be implemented without traversing the whole filesystem. Besides, the file could even be deleted (as in, it might have a hard link count of 0, waiting to be closed before its contents get deleted and its inode freed).
SELinux uses inodes to track contexts, not inode numbers. SELinux contexts are stored using paths, like everything else.
rsync -AHX is a safe and common way of making backups.
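A minimal, self-contained sketch showing the -H part at work: the restored copies still share one inode with each other, but its number is whatever the destination filesystem handed out, not the source's.

```shell
# Throwaway source and destination directories
src=$(mktemp -d); dst=$(mktemp -d)
printf hi > "$src/a"
ln "$src/a" "$src/b"      # a and b share one inode

# -a preserves metadata, -H preserves hard links
rsync -aH "$src/" "$dst/"

# Both copies share one inode (link count 2) on the destination
stat -c '%h %i %n' "$dst/a" "$dst/b"

rm -r "$src" "$dst"
```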
I can think of one application that uses inode numbers: some versions of Rogue, one of the first full-screen terminal-based games, which motivated the Curses library still used today. It stores the inode number in a save file, to prevent casual copying of save files. I've never seen that done in a “serious” application.
| At Backup, when would the filesystems inode numbers matter? |
1,686,470,364,000 |
I know that it isn't possible to change the inode count of an ext filesystem after its creation, but I haven't been able to find any explanation on why it isn't.
Can anyone enlighten me?
|
Why? Because no one has written a tool that does it. And that's probably because it's a not entirely trivial change to the filesystem metadata.
There are other issues like this; for example you can't resize ext4 to >16TB. That needs 64bit structures which aren't used by default.
Same with other filesystems, for example you can't shrink XFS.
None of these things are impossible, but it seems that no tools exist to do it either, at least not directly. Someone would have to develop them... and that usually requires in depth knowledge of the specific filesystem.
| Why is it impossible to change the inode count of an ext filesystem? |
1,686,470,364,000 |
A bit of context that I think is relevant for the appropriate solution:
I have a server that has two folders; one is ingest, the other is sorted. The source of the sorted folder is the ingest folder, all directories are unique, all files are hard links.
The result of this is that when the ingest folder has a file deleted, it stays in the sorted folder, and vice versa. This makes cleanup almost impossible, as there are hundreds of thousands of files totaling about 40 terabytes.
I have a script to add all links to a database, with their inode and path name. I can then use some SQL to find the inodes that only appear once, and decide whether or not I want to delete them.
This solution is very slow (need to refresh the entire database every time I want to manage it) and quite clunky (need to run the query, then manually delete files over CLI).
Is there a solution like ncdu or any dual-pane file browser that can show inodes, and filter specifically on number of links for the inode (as shown by stat)?
|
I have now used find . -type f -links +1 to get all the files with more than one link, then used sed to make all the paths absolute, and then ncdu -X list.txt to scan for any files except those listed.
This solution is still slow and I am looking for a better one, but it already improves my process quite a bit, so I am posting it as an answer.
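The workflow above can be sketched as follows, here demonstrated in a scratch directory (ncdu's -X/--exclude-from reads one pattern per line):

```shell
# Build a tiny stand-in for the ingest/sorted pair
tmpdir=$(mktemp -d)
touch "$tmpdir/only-copy"                # 1 link: still needs review
touch "$tmpdir/ingested"
ln "$tmpdir/ingested" "$tmpdir/sorted"   # 2 links: present in both trees

# All files with more than one hard link; since find was given an
# absolute path, the output is already absolute (no sed pass needed)
find "$tmpdir" -type f -links +1 > "$tmpdir/list.txt"
cat "$tmpdir/list.txt"

# Then scan everything *except* those files interactively:
#   ncdu -X "$tmpdir/list.txt" "$tmpdir"
rm -r "$tmpdir"
```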
| Comparing two directories based on inodes |
1,686,470,364,000 |
What is the difference between FS_IOC_GETFLAGS and FS_IOC_FSGETXATTR ioctl commands? What flags do both return?
|
In the Linux context, FS_IOC_GETFLAGS and FS_IOC_FSGETXATTR both retrieve inode flags.
GETFLAGS is the older ioctl, and comes originally from ext2 (again, in Linux); it manipulates a 32-bit value and has thus limited expansion capabilities — there aren’t many unused bits available.
FSGETXATTR comes from XFS, and was recently (2016) moved from XFS to the shared VFS layer. It uses a data structure, struct fsxattr, which allows for more values and more expansion.
Both of these, and the meanings of the data they return, are defined in linux/fs.h. The GETFLAGS flags are additionally documented in ioctl_iflags(2). Common values between the two correspond mostly to GETFLAGS flags which were historically supported by XFS: “append only”, “no atime updates”, “no dump”, “immutable”, and “synchronous updates”.
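From the shell, the usual front ends are lsattr/chattr (from e2fsprogs) for the GETFLAGS/SETFLAGS pair, and xfs_io for the fsxattr interface. A rough sketch, assuming a file system (such as ext4) that supports these flags:

```shell
touch demo.txt
lsattr demo.txt            # letters map to FS_IOC_GETFLAGS bits, e.g. "e" for extents

# Setting the append-only flag goes through FS_IOC_SETFLAGS
# and requires CAP_LINUX_IMMUTABLE (hence sudo)
sudo chattr +a demo.txt
lsattr demo.txt
sudo chattr -a demo.txt
rm demo.txt
```

Note that lsattr fails with "Inappropriate ioctl for device" on file systems that don't implement FS_IOC_GETFLAGS (e.g. some tmpfs/network mounts).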
Note that in both cases support varies from one file system to another, and some flags aren’t actually supported at all.
| File system Inode flags: difference between FS_IOC_GETFLAGS and FS_IOC_FSGETXATTR |
1,686,470,364,000 |
While copying music to the SD card of my Android phone, the laptop froze, so I had to reboot it using SysRq magic. Now any file manager on my Ubuntu or Android shows a directory 0 bytes big and undeletable; its type is inode/x-corrupted. The ls command run as root on Android doesn't show the directory, however. The Internet tells me that I have to find out the inode of the directory, but when I run ls -i from my Ubuntu, it shows every other directory's inode, and an I/O error on this one.
What do I do to get rid of it?
|
I had a similar issue with my SD card recently. I was not able to fix it under Linux. However, as soon as I plugged the card into a Windows machine, the system came up with a message asking whether I want to repair the card as apparently it was not unmounted correctly. The repair under Windows helped.
| How to delete corrupted directory |
1,532,142,916,000 |
I know that ls lists the names of the files in a given directory and ls -i shows the names and the inode numbers.
But why is it slower?
EDIT: This happens with big directories
The names and the inode numbers are stored together in the directory information block, so why does it take more time to query the inode numbers?
|
strace shows me that ls -i calls lstat() on each filename, which explains the extra work.
Given that readdir() has already returned the inode number, this appears to be sub-optimal.
While this feels like a bug, the behaviour is for consistency with mount points (see Thomas' comment).
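You can check part of this yourself: the inode number ls -i prints is the same one the stat() family returns for the path (on current coreutils, strace may report the per-file call as statx or newfstatat rather than lstat).

```shell
tmpdir=$(mktemp -d)
touch "$tmpdir/a"

# Both report the same inode number for the same path
ls -i "$tmpdir/a"
stat -c '%i' "$tmpdir/a"

# To count the extra per-file system calls (strace must be installed):
#   strace -c -e trace=lstat,statx,newfstatat ls -i "$tmpdir" >/dev/null
rm -r "$tmpdir"
```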
| Why is ls -i slower than ls? |
1,532,142,916,000 |
I have an ext4 formatted disk with thousands of files that are generated automatically and are needed. A few thousand of them are only one byte long, some two bytes. All files in both groups of tiny files are identical.
How much space can I save by locating these, say 1000, files of 1 byte in length, removing each and hard-linking to a single representative file?
Like this:
# ls -l
-rw-r----- 1 john john 1 Feb 25 10:29 a
-rw-r----- 1 john john 1 Feb 25 10:29 b
-rw-r----- 1 john john 1 Feb 25 10:29 c
# du -kcs ?
4 a
4 b
4 c
12 total
Try to consolidate:
# rm b c
# ln a b
# ln a c
# ll
total 12
-rw-r----- 3 john john 1 Feb 25 10:29 a
-rw-r----- 3 john john 1 Feb 25 10:29 b
-rw-r----- 3 john john 1 Feb 25 10:29 c
# du -kcs ?
4 a
4 total
(Please note that du does not even list b and c which I find curious).
Question: Is it really that easy and one can save 999*4 KiB in my 1000 file scenario if an allocation block is 4 KiB in size?
Or, does ext4 have the ability to transparently "merge tails", or store tiny files in the "directory inode" (I vaguely remember some filesystems can do that)?
(I know file allocation blocks can vary and a command like tune2fs -l /dev/sda1 can tell me.)
|
There are three parts to storing files: the blocks used to store the file contents, the inode used to store the file’s metadata, and the directory entry (or entries) pointing to the inode.
When you create multiple separate files, in the most general case you pay this cost as many times as there are files.
With inline data (if your file system was created with the appropriate options), you save the blocks used to store the file contents if the file is small enough, but you still need one inode per file and at least one directory entry per file.
With hard links, you save the blocks used to store the file contents and the inodes: there’s only one inode, one instance of the file data (whether inline in the inode or separate), and as many directory entries as links.
Given that you need to store the directory entries anyway, hard links are effectively free. Anything else will involve more storage; exactly how much depends on your file system’s specific settings.
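A minimal sketch of the consolidation, which also explains the du oddity noted in the question: du charges each inode only once per run, so b and c vanish from its listing once they share a's inode. The link count and shared inode number are visible with stat:

```shell
tmpdir=$(mktemp -d)
printf x > "$tmpdir/a"
ln "$tmpdir/a" "$tmpdir/b"
ln "$tmpdir/a" "$tmpdir/c"

# One inode, link count 3; the single data block is charged once
stat -c '%h %i %n' "$tmpdir/a" "$tmpdir/b" "$tmpdir/c"
du -kcs "$tmpdir"/?

rm -r "$tmpdir"
```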
| How much space can I save on ext4 by replacing 1000 identical 1-byte files with 999 hard-links and 1 file? |
1,532,142,916,000 |
How can I get the number of inodes used by files in a given directory tree?
Important: including hidden directories under it, like .git
|
As found on How do I count all the files recursively through directories
find . -printf '%i\n' | sort -u | wc -l
Or if you don't have GNU find and need a portable version:
find . -exec ls -id '{}' \; | awk '{print $1}' | sort -u | wc -l
| Get the number of inode in a tree |