1,286,221,328,000
I understand the reasoning why nearly every unix version doesn't allow hard-linking of directories (in fact HFS+ on OS X is the only one I know of, and even there it isn't made easy to do yourself). However, all file-systems in theory support hard-linked directories, as every directory contains at least one extra hard-link to itself, plus extra hard-links in sub-directories pointing back to their parent. Now, I realise that hard-linking can be dangerous if misused, as it can create cyclical structures that few programs check for, and thus become stuck in an infinite loop. However, I was hoping to use hard-links to create a Time Machine style backup that can work on any unix. I don't believe that this kind of structure would be dangerous, as the links simply point to previous backups; there should be no risk of cyclical linking. In my case I'm currently just using rsync to create hard-links to existing files, but this is slow and wasteful, particularly with very large backups and especially if I already know which directories are unchanged. With this in mind, is there any way to force the creation of directory hard-links on unix variants? ln is presumably no good, as this is the place where many unix flavours apply their restrictions to prevent hard-linking directories, and ln versions that support hard-linked directories specifically state that the operation is likely to fail. But for someone who knows the risks, and knows that their use-case is safe, is there any way to actually create the link anyway? Ideally from a shell script, but if I need to compile a small program to do it then I suppose I could.
Don't do this. If you want a backup system that uses hard links to save space, it's better to use rsync with --link-dest, which will hard-link files appropriately without the problems this approach causes: hard-linking between directories is a corruption of the filesystem, and will make it report wrong inode counts, fail fsck, and generally have unknown semantics because the directory tree is no longer a DAG.
Forcibly create directory hard link(s)?
I have 2.5 TB of data that I want to put on a 2TB hard drive to mail somewhere. It's not hopeless, as a very large fraction of the data consists of duplicate files. I am considering using jdupes with the -H option, which will replace duplicate files with hardlinks to a single file. Here's the problem: if I tar a directory containing multiple hard links to other files in the directory tree, will tar "reduplicate" them in the archive file?
Probably a duplicate from Dereferencing hard links By default, a single copy of hardlinked data should be included in your archive.
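This is easy to check directly; a small sketch assuming GNU tar and coreutils:

```shell
set -e
work=$(mktemp -d); cd "$work"
mkdir tree
printf 'hello\n' > tree/a
ln tree/a tree/b                 # second name for the same inode
tar -cf tree.tar tree
# tar stores the data once and records the other name as a hard link:
tar -tvf tree.tar | grep -q 'link to'
mkdir out && tar -xf tree.tar -C out
[ "$(stat -c %i out/tree/a)" = "$(stat -c %i out/tree/b)" ]  # links survive extraction
```

So the archive carries one copy of the data plus a link record, and extraction recreates the hard link.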
How does tar deal with hardlinked files? [duplicate]
/path/to/fname#line1242 Is there a way to refer to a specific line number in a file as part of the pathname, or some way to package a link to a line number, which looks/behaves like a pathname? For example, giving the string above to another user of the same filesystem so that they can easily open my file "fname" and instantly be on line 1242 therein. I'm thinking of behavior similar to HTML anchors, which can be included inline with the URI to a page, and then behave just like links. Namely, they can be given to other users of the same filesystem and will indicate a given line in the file to those users, opening the file by default to that line. I realize that in UNIX the only things that can truly be part of a pathname may be directories, files and pseudo-files. But then there are globs, string expansion, etc., which are not strictly part of a pathname but can be interposed in one "comfortably", while the filename is passed around and referenced, without entailing any additional commands. Is there an inline way to simulate anchor/link behavior like this for a UNIX file? I could include command substitution in my "pathname" with sed "1242p" and some kind of self-reference, but then I'm no longer dealing with a pathname, just a command operating on a file. Come to think of it, then I would not be linking, just extracting a line. I can't think of a way to link to a specific line at all (without ignoring the rest of the file). GNU bash, version 3.2.51
You could create a file called +view +1242 fname. Then calling vi or view on that file: view '+view +1242 fname' would open fname in view and put the cursor at the beginning of the 1242nd line (here assuming the vim implementation of vi/view). Or do: ln -s / '+view +1242 ' So you do: view '+view +1242 /etc/passwd' to view /etc/passwd at line 1242. Or: touch '+exe "view +1242 \57etc\57passwd"' And view that with: view '+exe "view +1242 \57etc\57passwd"' You could also make the top line of the target file: #! /usr/bin/less +1242 And make it executable (chmod +x fname) and execute it for it to be open by less at the 1242nd line.
Bash: Path or link to a line in a file?
I plan to keep all my movies in one giant folder, and then create other folders for the genres, while creating links for all the movies in the genre folder. This way I can organize movies into multiple genres without unnecessarily copying them. I've been planning to use hard links in this endeavor in order to create a more robust system in which I may be able to move files around without breaking links. However, I'm wondering if hard links will take more storage space than symbolic links and bloat my machine. I've created soft and hard links to files to test this, but they both show the original file size when I look at file size under preferences in Thunar. Which file takes more hard drive space, Symbolic Links, or Hard Links? On a semi-unrelated side note, does xbmc / kodi recognize links to videos as actual videos?
Symbolic link files take more space. Hard-linked files share the same inode, whereas a symbolic link is a separate pointer to the original location. That said, there are two caveats for hard links: not all file systems support them, and they cannot be applied to directories. I wouldn't worry about the storage difference, since in most cases it is trivial. In addition, there may be tools that help you organize genres virtually (for example by taking advantage of virtual file systems).
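The difference is easy to observe with stat; a minimal sketch assuming GNU coreutils:

```shell
set -e
work=$(mktemp -d); cd "$work"
echo hello > original
ln original hard                 # just another directory entry for the same inode
ln -s original soft              # a separate inode whose data is the target path
[ "$(stat -c %i hard)" = "$(stat -c %i original)" ]
[ "$(stat -c %s soft)" -eq 8 ]   # symlink size = length of the string "original"
```

File managers like Thunar report the target's size for both kinds of link, which is why the test described in the question showed no difference.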
What is the difference in file size between Symbolic and Hard links?
Where does the *nix system store information about the number of hard links to a specific inode? I can't find any information about that. Everywhere explains what a hard link is, but rarely gives any more advanced information about the related inode internals. The inode stores the number of links, but where does it get that number from? Can I locate all the links (both hard and soft) knowing only the inode number?
The hard link count is stored in the inode. It starts at 1 when the file is created, increases by 1 each time the link system call is successful, and decreases by 1 each time the unlink system call is successful. The only way to find all the hard links to the same file, i.e. to find all the pathnames leading to a given inode, is to go through the whole filesystem and compare inode numbers. The inode does not point back to the directory entries. Directories are a special case: their hard links obey strict rules. (Some unix variants allow root to bypass these rules at the administrator's peril.) The hard links to a directory are its . entry, its children's .. entry, and one entry in its parent directory (the parent being the directory reached by the directory's .. entry). There is no way to find all the symbolic links pointing to a file. They could be anywhere, including on a filesystem that isn't mounted. With GNU or FreeBSD find, you can use find /some/dir -samefile /path/to/foo to find all the hard links to the file /path/to/foo that are under /some/dir. With the -L option, you can find all the soft and hard links to that file. You can find an inode by number with the -inum predicate instead of -samefile.
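A short demonstration of the link count and of find -samefile (assumes GNU coreutils and findutils):

```shell
set -e
work=$(mktemp -d); cd "$work"
touch foo
ln foo bar                        # link count in the inode goes from 1 to 2
[ "$(stat -c %h foo)" -eq 2 ]
# Enumerate every name of that inode under the current tree:
find . -samefile foo | sort       # prints ./bar then ./foo
```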
Where is information about hard/soft links stored?
If we have a file on a disk and create a hard link pointing to it, then we have two references to the same data. If one link gets deleted it does not affect the other link, as it points directly to the data. If I have two links (A and B) pointing to file ABC and I move link A to another disk, then I will have two copies of the data: link A will point to the data on the new disk and link B to the data on the old disk. If I want to move both links A and B to a new disk, how can I do this without ending up with two copies of the data on the new disk?
rsync is able to copy hard links for you. Check -H option: -H, --hard-links preserve hard links
How can you move hardlinks to another disk
The normal way to safely, atomically write a file X on Unix is: Write the new file contents to a temporary file Y. rename(2) Y to X. In two steps it appears that we have done nothing but change X "in-place". It is protected against race conditions and unintentional data loss (where X is destroyed but Y is incomplete or destroyed). The drawback (in this case) is that it doesn't write the inode referred to by X in place; rename(2) makes X refer to a new inode number. If X was a file with link count > 1 (an explicit hard link), it no longer refers to the same inode as before: the hard link is broken. The obvious way to eliminate the drawback is to write the file in place, but this is not atomic, can fail, might result in data loss, etc. Is there some way to do it atomically like rename(2) but preserve hard links? Perhaps to change the inode number of Y (the temporary file) to the same as X, and give it X's name? An inode-level "rename". This would effectively write the inode referred to by X with Y's new contents, but would not break its hard-link property, and would keep the old name. If the hypothetical inode "rename" were atomic, then I think this would be atomic and protected against data loss / races.
The issue

You have a (mostly) exhaustive list of system calls here. You will notice that there is no "replace the content of this inode" call. Modifying that content always implies:

1. Opening the file to get a file descriptor.
2. (optional) Seeking to the desired write offset.
3. Writing to the file.
4. (optional) Truncating old data, if the new data is smaller.

Step 4 can be done earlier. There are some shortcuts as well, such as pwrite, which writes directly at a specified offset, combining steps 2 and 3, or scatter writing. An alternate way is to use a memory mapping, but it gets worse, as every byte written may be sent to the underlying file independently (conceptually, as if every write were a 1-byte write call).

→ The point is that the very best scenario you can have is still 2 operations: one write and one truncate. Whatever order you perform them in, you still risk another process messing with the file in between and ending up with a corrupted file.

Solutions

Normal solution: As you have noted, this is why the canonical approach is to create a new file that you know you are the only writer of (you can even guarantee this by combining O_TMPFILE and linkat), then atomically redirect the old name to the new file. There are two other options, however both fail in some way:

Mandatory locking: It enables file access to be denied to other processes by setting a special flag combination. Sounds like the tool for the job, right? However, it must be enabled at the filesystem level (it's a flag when mounting), and: "Warning: the Linux implementation of mandatory locking is unreliable. Since Linux 4.5, mandatory locking has been made an optional feature. This is an initial step toward removing this feature completely." This is only logical, as Unix has always shunned locks. They are error-prone, and it is impossible to cover all edge cases and guarantee no deadlock.

Advisory locking: It is set using the fcntl system call. However, it is only advisory, and most programs simply ignore it.
In fact it is only good for managing locks on a shared file among several cooperating processes. Conclusion: Is there some way to do it atomically like rename(2) but preserve hard links? No. Inodes are low level, almost an implementation detail. Very few APIs acknowledge their existence (I believe the stat family of calls is the only one). Whatever you try to do probably relies on either misusing the design of Unix filesystems or simply asking too much of them. Could this be somewhat of an XY-problem?
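For reference, the canonical rename-based update discussed above looks like this in shell; the rename is atomic for readers, and the inode number changing is exactly why an existing hard link keeps the old content:

```shell
set -e
work=$(mktemp -d); cd "$work"
printf 'old\n' > config
ln config config.hardlink           # second name for the current inode
tmp=$(mktemp config.XXXXXX)         # temp file on the same filesystem as the target
printf 'new\n' > "$tmp"
mv -f "$tmp" config                 # rename(2): atomically repoints the name
[ "$(cat config)" = new ]
[ "$(cat config.hardlink)" = old ]  # the hard link still refers to the old inode
```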
Atomically write a file without changing inodes (preserve hard link)
I am using Mac OS X, working from the command line. I want to make a link from my .profile file to another file on my system, so that updating one updates the other and vice versa. This article makes me think that a hard link is what I need. The command I have been using is: ln .profile ~/Newpath/.profile This kind of works, in that a file is created at Newpath; however, updating one file does not automatically update the other, nor vice versa. I have tried ln with simple files on my desktop, and the links do indeed update each other. I am wondering if anybody has experience with links not working with dot files or with files in their home directory on Mac for some reason. Any idea what could be going on here?
dubiousjim's comment pointed out my issue: I think git will break hard links every time you check out a new copy of the file. EDIT: Yes, I just verified that it will, even if the hard links are in a single repo.
Why are hard links not updated when modified with an editor?
Let's say /A/B/c.sh is symbolically linked to /X/Y/c.sh. If c.sh contains the command "./SOMETHING", does '.' mean /A/B/ or /X/Y/? And what about a hard link?
. is actually the current working directory in either case; it has nothing to do with the directory holding the script: [/tmp] $ echo "realpath ." > test.sh && chmod +x test.sh [/tmp] $ /tmp/test.sh /tmp [/tmp] $ cd /usr/bin [/usr/bin] $ /tmp/test.sh /usr/bin
Symbolic link and hard link questions
I'm writing a function called restore that will copy a file from a backup directory to the current directory. I now need to create a hard link to restore so that it can be called as purge. How would I implement it so that I could use the if statement if [ "$0" = "purge" ] for when restore is called as purge? Here is my code, although I will shorten it since I have tested it (it works):

restore(){
  if [ "$1" = "-p" ] || [ "$0" = "purge" ]; then
    while [ ! ]
    do
      #Purge code, etc...
    done
  elif [ "$1" != "-p" ]; then
    select fname in $(find /$HOME/Backup -name "$1*" | sed 's!.*/!!' | sed 's!$1!!') quit
    do
      #If restore is called with an argument code...
    done
    local newfname=$(echo "$fname"|sed -E 's/[0-9]{11}$//')
    cp -i "/$HOME/Backup/$fname" "$newfname"
    exit 0
  fi
  while [ ! ]
  do
    fname=""
    select fname in $(ls /$HOME/Backup) quit
    do
      #Restore with no arguments code...
    done
    local newfname=$(echo "$fname"|sed -E 's/[0-9]{11}$//')
    cp -i "/$HOME/Backup/$fname" "$newfname"
  done
}

Calling restore with the -p option is the same as invoking restore as purge. So how would I implement the code so that restore can be invoked by using purge? It is supposed to be a script rather than a function. I made a hard link to Restore.sh named Purge.sh, but when I call it using ./Purge.sh it still runs the standard Restore code. How can I determine whether Restore was called via the hard link?
Make the hard link to restore.sh:

ln restore.sh link_to_restore.sh

The content of the restore.sh file:

#!/bin/bash
if [ "$0" = "./link_to_restore.sh" ]; then
  echo foo
elif [ "$0" = "./restore.sh" ]; then
  echo bar
fi

Testing:

$ ./restore.sh
bar
$ ./link_to_restore.sh
foo
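Comparing $0 against a literal path like that only works when the script is invoked exactly that way; a more robust sketch (with illustrative names) dispatches on basename "$0" instead:

```shell
set -e
work=$(mktemp -d); cd "$work"
cat > restore <<'EOF'
#!/bin/sh
# Branch on the name this script was invoked under, however it was called:
case "$(basename "$0")" in
  purge)   echo purging ;;
  restore) echo restoring ;;
esac
EOF
chmod +x restore
ln restore purge                 # one inode, two names
[ "$(./restore)" = restoring ]
[ "$(./purge)" = purging ]
```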
Calling a function by a second name (homework)
Is it possible (in classical ext4, and/or in any other filesystem) to create two files that point to the same content, such that if one file is modified, the content is duplicated and the two files become different? It would be very practical to save space on my hard drive. Context: I have some heavy videos that I share on an ownCloud server that can be modified by lots of people, and therefore it is possible that some people modify/remove these files... I really would like to make sure I have a backup of these files, and for now I need to maintain two directories, the normal nextcloud one and one "backup" directory, which (at least) doubles the size required to store it. I was thinking of creating a git repo on top of the nextcloud directory, which would make the backup process much easier when new videos are added (just git add .), but git still doubles the space, between the blob and the working directory. Ideally, a solution that can be combined with git would be awesome (i.e. one that allows me to create a history of the video changes, with commits, checkouts... without doubling the disk space). Moreover, I'm curious to have a solution for various file systems (especially if you have tricks for filesystems that do not implement snapshots). Note that an LVM snapshot is not really a solution, as I don't want to back up my full volume, only some specific files/folders.
Yes, on copy-on-write file systems (Btrfs, ZFS). On ext4, git-annex is as close as you are likely to get. Note that you can mount --bind an LVM-backed volume or a Btrfs file system over a folder in another file system.
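With GNU cp this is the --reflink option; on a reflink-capable filesystem (Btrfs, or XFS formatted with reflink support) the copy shares data blocks until either side is written, and with --reflink=auto it silently falls back to a normal copy elsewhere, so this sketch runs anywhere with GNU coreutils:

```shell
set -e
work=$(mktemp -d); cd "$work"
printf 'video data\n' > big.mkv
cp --reflink=auto big.mkv big-backup.mkv   # COW clone where the filesystem supports it
printf 'modified\n' > big.mkv              # writing one side never touches the other
[ "$(cat big-backup.mkv)" = 'video data' ]
```

This is exactly the "hardlink that splits on change" semantics asked for, minus the shared space on filesystems without reflink support.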
Hardlink that "split" when a file changes
Does using rm on a symlink or a hard-link remove the source in addition to the link? Is there a good way to delete a hard-link or a symlink without deleting the source? And if there is a good way outside of rm, would it be a good idea to use this more frequently instead of rm?
rm always removes a link. If it's the last one, the space allocated to the file is reclaimed. However, removing a symlink doesn't affect its target, so it seems like you've been misled. If the file no longer has a name, how would you find it?
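A quick demonstration that rm only ever removes a name, never the data while other names remain (assumes GNU coreutils):

```shell
set -e
work=$(mktemp -d); cd "$work"
echo data > file
ln file hardlink
ln -s file symlink
rm symlink                 # removes only the symlink itself
rm hardlink                # drops one name; the link count falls back to 1
[ "$(cat file)" = data ]   # the data survives while any name remains
rm file                    # last link gone: now the space is reclaimed
[ ! -e file ]
```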
Safely remove a symlink or hardlink
For example: I have file a.txt and file b.txt. I want a link from a.txt to b.txt. If I open/read file a.txt, file b.txt should open/read. If I try something like ln -s a.txt b.txt I get an error because file b.txt exists. How can I create a link from a.txt to b.txt?
You need to remove file b.txt first with rm b.txt, then create the symbolic link with your command ln -s a.txt b.txt. Alternatively you could use a hard link from b.txt to a.txt by executing ln a.txt b.txt; then both a.txt and b.txt point to the same file on the hard drive, and removing a.txt does not remove the file, which can still be read through b.txt. With a symbolic link from b.txt to a.txt, removing a.txt removes the file and the b.txt symbolic link becomes broken. More about hard links: https://en.wikipedia.org/wiki/Hard_link
Create link between two existing files
Say I have the following setup : $ cat fileA textA $ cat fileB textB $ ln fileA myLink $ cat myLink # as expected textA I do not understand the following behaviour : $ cp fileB fileA $ cat myLink # expected ? textB I would have expected this outcome if I had written ln -s fileA myLink instead, but not here. I would have expected cp in overwriting mode to do the following : Copy the content of fileB somewhere on the hard drive Link fileA to that hard drive address but instead, I infer it does the following : Follow the link fileA Copy the content of fileB at that address The same does not seem to go for mv, with which it works as I expected. My questions: 1. Is this explained somewhere that I have missed in man cp, man mv or man ln? 2. Is this behaviour just a coincidence (say, if fileB is not much greater in size than fileA), or can it be reliably used as a feature? 3. Does this not defeat the idea of hard links? 4. Is there some way to modify the line cp fileB fileA so that the next cat myLink still shows textA?
There is no "following the link" with hardlinks - creating a hardlink simply gives several different names to the same file (at a low level, files are actually integer numbers - "inodes" - and they have names just for user convenience) - there is no "original" and "copy" - they are the same. So it is completely the same which of the hardlinks you open and write to; they are all the same. So cp by default opens one of the files and writes to it, thus changing the file (and hence all the names it has). So yes, it is expected. Now, if you (instead of rewriting) first removed one of the names (thus reducing the link count) and then recreated a new file with the same name, you would end up with two different files. That is what cp --remove-destination would do. 1. The basics are documented at link(2), pointed to by ln(1). 2. Yes, it is normal behaviour and not a fluke. But see the remark above about cp --remove-destination. 3. No, not really. Hardlinks are simply several names for the same file. What you seem to want are COW (copy-on-write) links, which only exist in special filesystems. 4. Yes: cp --remove-destination fileB fileA
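The cp --remove-destination behaviour described above can be demonstrated directly (assumes GNU coreutils):

```shell
set -e
work=$(mktemp -d); cd "$work"
echo textA > fileA; echo textB > fileB
ln fileA myLink
cp --remove-destination fileB fileA   # unlinks fileA first, then creates a new file
[ "$(cat fileA)" = textB ]
[ "$(cat myLink)" = textA ]           # the old inode lives on under its other name
```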
cp overwriting without overwriting hardlinks to destination
It is easy to convert a symlink into a hardlink with ln -f (example) It would also be easy to convert a hardlink (filenames link and original) back to a symbolic link to link->original in the case where you know both files and define yourself which one is the "original file". You could easily create a simple script convert-known-hardlink-to-symlink that would result in something like: convert-known-hardlink-to-symlink link original $ ls -li 3802465 lrwxrwxrwx 1 14 Dec 6 09:52 link -> original 3802269 -rw-rw-r-- 1 0 Dec 6 09:52 original But it would be really useful if you had a script where you could define a working directory (default ./) and a search-directory where to search (default /) for files with the same inode and then convert all those hard-links to symbolic-links. The result would be that in the defined working directory all files that are hard-links are replaced with symbolic-links to the first found file with the same inode instead. A start would be find . -type f -links +1 -printf "%i: %p (%n)\n"
I created a script that will do this. The script converts all hard-links it finds in a source directory (first argument) that are the same as in the working directory (optional second argument) into symbolic links: https://gist.github.com/rubo77/7a9a83695a28412abbcd It has an option -n for a dry-run, which doesn't do anything but shows what would be done. Main part:

WORKING_DIR=./
# relative source directory from working directory:
SOURCE_DIR=../otherdir/with/hard-links/with-the-same-inodes

# find all files in WORKING_DIR
cd "$WORKING_DIR"
find "." -type f -links +1 -printf "%i %p\n" | \
while read working_inode working_on
do
  find "$SOURCE_DIR" -type f -links +1 -printf "%i %p\n" | sort -nk1 | \
  while read inode file
  do
    if [[ $inode == $working_inode ]]; then
      ln -vsf "$file" "$working_on"
    fi
  done
done

The -links +1 predicate finds all files that have more than one link; hard-linked files have a link count of at least two.
Convert a hardlink into a symbolic link
Additional Info

Firstly, thank you for all the answers. I re-ran the tests to check the answer below that a directory/folder entry takes up 4KB and was skewing my numbers, this time by placing 20,000 files in a single directory and doing the cp -al to another directory. The results were very different: after subtracting the length of the filenames, the hardlinks worked out to about 13 bytes per hardlink, much better than 600.

Then, for completeness, working on the answer given below that this is due to each directory/folder entry taking up 4KB, I did the test again, but this time I created thousands of directories and placed one file in each directory. The result after the maths (increase in space taken on the hd, divided by the number of files, ignoring directories) was almost exactly 4KB per file, showing that a hardlink does only take up a few bytes but that an entry for an actual directory/folder takes 4KB.

So, I was thinking of implementing the rsync / hardlink / snapshot backup strategy and was wondering how much space a hardlink takes up, since it has to put an entry for the extra link in a directory, etc. Anyway, I couldn't find any information on this, and I guess it is file-system dependent. The only info I could find ranged from suggestions that they take no space (probably meaning no space for file contents), to the space being negligible, to them taking only a few bytes to store the hardlink.

So I took a couple of systems (one a VM and one on real hardware) and did the following in the root directory as root:

mkdir link
cp -al usr link

The usr directory had about 54,000 files. The space used on the hd increased by about 34MB. So this works out to around 600 bytes per hardlink, or am I doing something wrong? I am using LVM on both systems, formatted as ext4. The file name size is about 1.5MB altogether (I got that by doing ls -R and redirecting it to a file).
To be honest, the rsync-with-hardlinks approach works so well I was planning on using it for daily backups on a couple of the work servers. I also thought it would be easy to make incremental backups / snapshots like this for a considerable period of time. However, after ten days 30MB becomes 300MB, and so on. In addition, if there have only been a few changes to the actual file data/contents, say a few hundred KB, then storing 30+ MB of hardlinks per day seemed excessive, but I take your point about the size of modern disks. It was simply that I had not seen this hardlink size mentioned anywhere, so I thought I might be doing something wrong. Is 600 bytes normal for a hardlink on a Linux OS? To calculate the space used I did a df before and after the cp -al.
cp -al usr link creates a bunch of hard links, but it also creates some directories. Directories can't be hard linked¹, so they're copied. Each hard link occupies the space of a directory entry, which needs to store at least the file's name and the inode number. Each directory occupies the space of a directory entry, plus an inode for its metadata. Most filesystems, including the ext2 family, count inode space separately. All the hard links are in directories created by the copy operation, so the space you're seeing is in fact the size of the directories under /usr. In most filesystems, each directory occupies at least one block; 4kB is a typical block size on Linux. So you can expect the copy to take 4×(number of directories) in kB, plus some change for the larger directories that require multiple blocks. Assuming 4kB blocks, your copy created about 8500 blocks, which sounds about the right ballpark for a /usr directory containing 54,000 files.

¹ Directories must have exactly one parent directory. They do in fact have hard links (or at least appear to, though modern filesystems tend not to use hard links under the hood): one for their entry in their parent, one for their . entry, and one for the .. entry in every subdirectory. But you can't make other hard links to them. Some Unix variants allow root to make hard links to directories on some filesystems, but at the risk of creating loops that can't be removed, or hidden directory trees that can't be accessed.
hardlinks seem to take several hundred bytes just for the link itself (not file data)
I have a bash script in two places and I can't remember how I created them. They have the same inode, but neither of them seems to link to the other. If there is a hard link, shouldn't the link count for that inode be two?

$ ls -l ~/bin/dropbox-backup
-rwxr-xr-x 1 bak bak 676 Aug 14 09:32 dropbox-backup
$ ls -l ~/Dropbox/linux/scripts/dropbox-backup
-rwxr-xr-x 1 bak bak 676 Aug 14 09:32 ~/Dropbox/linux/scripts/dropbox-backup
$ stat ~/bin/dropbox-backup
  File: `dropbox-backup'
  Size: 676   Blocks: 8   IO Block: 4096   regular file
Device: 806h/2054d   Inode: 528738   Links: 1
Access: (0755/-rwxr-xr-x)  Uid: ( 1001/ bak)   Gid: ( 1001/ bak)
Access: 2013-08-14 20:40:25.599322386 +0100
Modify: 2013-08-14 09:32:47.748546462 +0100
Change: 2013-08-14 20:40:25.591322386 +0100
 Birth: -
$ stat ~/Dropbox/linux/scripts/dropbox-backup
  File: `/home/rag/Dropbox/linux/scripts/dropbox-backup'
  Size: 676   Blocks: 8   IO Block: 4096   regular file
Device: 806h/2054d   Inode: 528738   Links: 1
Access: (0755/-rwxr-xr-x)  Uid: ( 1001/ bak)   Gid: ( 1001/ bak)
Access: 2013-08-14 20:40:25.599322386 +0100
Modify: 2013-08-14 09:32:47.748546462 +0100
Change: 2013-08-14 20:40:25.591322386 +0100
 Birth: -
The files have the same inode and are on the same filesystem. You can see that in the output of stat: it reports Device: 806h/2054d Inode: 528738 for both files. All native unix filesystems report distinct inodes for distinct files (this may not be guaranteed for some remote or foreign filesystems). The two names for the files do “link to each other”, or more properly speaking, they lead to the same file. ~/bin/dropbox-backup and ~/Dropbox/linux/scripts/dropbox-backup are the same file. The most likely explanation is that ~/bin is a symbolic link to ~/Dropbox/linux/scripts or vice versa, so that you're reaching the same file through two different directory and symbolic link chains. You can check that by comparing the canonicalizations of the two paths (i.e. the paths with all symbolic links resolved): readlink -nf ~/bin/dropbox-backup ~/Dropbox/linux/scripts/dropbox-backup
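The symlinked-directory explanation can be reproduced in miniature (hypothetical paths under a temp directory, GNU coreutils assumed):

```shell
set -e
base=$(mktemp -d); cd "$base"
mkdir -p Dropbox/linux/scripts
echo 'echo backup' > Dropbox/linux/scripts/dropbox-backup
ln -s "$base/Dropbox/linux/scripts" bin   # bin is a symlink to the scripts directory
# Two paths, one directory entry: same inode, link count still 1
[ "$(stat -c %i bin/dropbox-backup)" = "$(stat -c %i Dropbox/linux/scripts/dropbox-backup)" ]
[ "$(stat -c %h bin/dropbox-backup)" -eq 1 ]
readlink -f bin/dropbox-backup            # canonicalization exposes the shared path
```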
Files have the same inode but they don't link to each other
Whenever I use mcedit to edit a file that is hard linked somewhere and I want to save the file, the editor asks me if I want to remove the hard links. Is that common behavior on Linux or is mcedit "special" in doing so? Why would regular applications (not fsck or other admin tools) care about hard links?
When you want to modify a file, you have two options, each with its benefits and drawbacks. You can overwrite the file in place. This does not use any extra space, and conserves the hard links, permissions and any other attribute beyond the content of the existing file. The major drawback is that if anything happens while the file is being written (the application crashes, or the power goes out), you end up with a partially written file. Or you can write the new version of the file to a new file with a different name, then move it into place. This uses more space and breaks hard links, and if you have write permission on the file but not on the directory that contains it, you can't do it at all. On the flip side, the old version of the file is atomically replaced by the new version, so at every point in time the file name points to a valid, complete version of the file. Mcedit is asking you which strategy to choose. Strangely though, mcedit's default strategy, for files with a single directory entry, is to truncate the existing file, putting your data at risk. Only when the safe strategy would break a hard link does it give you the opportunity to use it. You can change this in the "Edit save mode" dialog from the Options menu: "quick save" means overwrite, "safe save" means save to a temporary file then rename. When safe mode is chosen, you don't get a choice not to break hard links. (Observations made on mc 4.8.3. If this is still the case in the latest version, consider reporting it as a design bug: "safe mode" should be the default, and you should get an option not to break hard links in that case.) Good editors such as Vim or Emacs let you choose the default strategy.
Why does mcedit recommend removing hardlinks when saving a file?
I use rsync to make backups: rsync -a --link-dest=PATHTO/$PREVIOUSBACKUP $SOURCE $CURRENTBACKUP This way I save space thanks to hard links. The problem appears when I need to back up a huge file which is always changing (a virtual machine image). Is it possible to hardlink not the whole image, but only its changed parts? Is there any kind of tool that can take care of this?
There are a number of things that could be done here. Note that none of them actually use hard links, since hard links can only point to a full file. Using the btrfs filesystem opens up some very useful possibilities here. Note that btrfs is currently (most recent version is v3.13) still experimental. However, its COW (copy-on-write) ability is perfect for this kind of thing (provided of course that it is acceptable to have the backup on the same filesystem). Given a btrfs filesystem mounted on /mnt, you can make an atomic snapshot of the whole filesystem with: btrfs subvolume snapshot /mnt /mnt/snapshot To allow for partial snapshots, you have to put your files to be backed up inside a subvolume rather than a directory. E.g.: btrfs subvolume create /mnt/subvol mv stuff /mnt/subvol btrfs subvolume snapshot /mnt/subvol /mnt/subvol_snapshot Aside from using btrfs, you could also consider mounting the virtual machine image on one or both sides of the backup and using rsync between the two mount points. This blog shows how to mount a VirtualBox .vdi image using qemu-utils. The commands as root (untested): modprobe nbd qemu-nbd -c /dev/nbd0 <vdi-file> mount /dev/nbd0p1 /mnt ... umount /mnt qemu-nbd -d /dev/nbd0 Finally, the simplest approach which may be of some use is the --inplace option for rsync. From the man page: --inplace This option changes how rsync transfers a file when its data needs to be updated: instead of the default method of creating a new copy of the file and moving it into place when it is complete, rsync instead writes the updated data directly to the destination file. ... This option is useful for transferring large files with block-based changes or appended data, and also on systems that are disk bound, not network bound.
The problem here of course is that there isn't any benefit to using this in combination with --link-dest (in rsync versions <2.6.4 the two are incompatible altogether) as a copy of the file will still have to be created at the destination.
“hard-linking” parts of a big file in which only a small part has changed
I wonder if storing the information about files in inodes instead of directly in the directory is worth the additional overhead. It may well be that I'm overestimating the overhead or overlooking some important thing, but that's why I'm asking. I see that something like "inodes" is necessary for hardlinks, but in case the overhead is really as big as I think, I wonder if any of the following reasons justifies it: using hardlinks for backups is clever, but the efficiency of backups is not important enough when compared to the efficiency of normal operations; having neither a speed nor a size penalty for hardlinks can really matter, but this advantage holds only for the few files making use of hardlinks, while access to all files suffers the overhead; saving some space for a couple of equally named binaries like bunzip2 and bzcat is negligible. I'm not saying that inodes/hardlinks are bad or useless, but can it justify the cost of the extra indirection (caching surely helps a lot, but it's no silver bullet)?
Hard links are beside the point. They are not the reason to have inodes. They're a byproduct: basically, any reasonable unix-like filesystem design (and even NTFS is close enough on this point) has hard links for free. The inode is where all the metadata of a file is stored: its modification time, its permissions, and so on. It is also where the location of the file data on the disk is stored. This data has to be stored somewhere. Storing the inode data inside the directory carries its own overhead. It makes the directory larger, so that obtaining a directory listing is slower. You save a seek for each file access, but each directory traversal (of which several are needed to access a file, one per directory on the file path) costs a little more. Most importantly, it makes it a lot more difficult to move a file from one directory to another: instead of moving only a pointer to the inode, you would need to move all the metadata around. Unix systems always allow you to rename or delete a file, even if a process has it open. (On some unix variants, make this “almost always”.) This is a very important property in practice: it means that an application cannot “hijack” a file. Renaming or removing the file doesn't affect the application; it can continue reading and writing to the file. If the file is deleted, the data remains around until no process has the file open anymore. This is facilitated by associating the process with the inode. The process cannot be associated with the file name, since that may change or even disappear at any time. See also What is a Superblock, Inode, Dentry and a File?
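The "no hijacking" property is easy to demonstrate from a shell. A minimal sketch (temporary path, illustrative content), assuming a Linux-like system: the file's data lives in its inode, so deleting the name does not destroy the data while a file descriptor still refers to it.

```shell
#!/bin/sh
# Removing a file's name drops its link count to 0, but the inode (and data)
# survives until the last open file descriptor on it is closed.
tmp=$(mktemp -d)
echo "still here" >"$tmp/f"
exec 4<"$tmp/f"      # keep the inode open on fd 4
rm "$tmp/f"          # the name is gone; the inode lives on
cat <&4              # prints: still here
exec 4<&-            # last reference closed; now the space is reclaimed
rm -rf "$tmp"
```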
What are inodes good for?
Example script: #!/bin/sh -e sudo useradd -m user_a sudo useradd -m user_b -g user_a sudo chmod g+w /home/user_a set +e sudo su user_a <<EOF cd umask 027 >> file_a >> file_b >> file_c ls -l file_* EOF sudo su user_b <<EOF cd umask 000 rm -f file_* ls -l ~user_a/ set -x mv ~user_a/file_a . cp ~user_a/file_b . ln ~user_a/file_c . set +x ls -l ~/ EOF sudo userdel -r user_b sudo userdel -r user_a Output: -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_a -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_b -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_c total 0 -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_a -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_b -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_c + mv /home/user_a/file_a . + cp /home/user_a/file_b . + ln /home/user_a/file_c . ln: failed to create hard link ‘./file_c’ => ‘/home/user_a/file_c’: Operation not permitted + set +x total 0 -rw-r----- 1 user_a user_a 0 Jul 11 12:26 file_a -rw-r----- 1 user_b user_a 0 Jul 11 12:26 file_b userdel: user_b mail spool (/var/mail/user_b) not found userdel: user_a mail spool (/var/mail/user_a) not found
Which system are you running? On Linux, that behaviour is configurable, through /proc/sys/fs/protected_hardlinks (or sysctl fs.protected_hardlinks). The behaviour is described in proc(5): /proc/sys/fs/protected_hardlinks (since Linux 3.6) When the value in this file is 0, no restrictions are placed on the creation of hard links (i.e., this is the historical behavior before Linux 3.6). When the value in this file is 1, a hard link can be created to a target file only if one of the following conditions is true: The calling process has the CAP_FOWNER capability ... The filesystem UID of the process creating the link matches the owner (UID) of the target file ... All of the following conditions are true: the target is a regular file; the target file does not have its set-user-ID mode bit enabled; the target file does not have both its set-group-ID and group-executable mode bits enabled; and the caller has permission to read and write the target file (either via the file's permissions mask or because it has suitable capabilities). And the rationale for that should be clear: The default value in this file is 0. Setting the value to 1 prevents a longstanding class of security issues caused by hard-link-based time-of-check, time-of-use races, most commonly seen in world-writable directories such as /tmp. On Debian systems protected_hardlinks and the similar protected_symlinks default to one, so making a link without write access to the file doesn't work: $ ls -ld . ./foo drwxrwxr-x 2 root itvirta 4096 Jul 11 16:43 ./ -rw-r--r-- 1 root root 4 Jul 11 16:43 ./foo $ mv foo bar $ ln bar bar2 ln: failed to create hard link 'bar2' => 'bar': Operation not permitted Setting protected_hardlinks to zero lifts the restriction: # echo 0 > /proc/sys/fs/protected_hardlinks $ ln bar bar2 $ ls -l bar bar2 -rw-r--r-- 2 root root 4 Jul 11 16:43 bar -rw-r--r-- 2 root root 4 Jul 11 16:43 bar2
Why can I not hardlink to a file I don't own even though I can move it?
I have many files in a folder. I want to concatenate all these files into a single file. For example: cat * > final_file; But this will use extra disk space and also take time. Is there a way I can hardlink/softlink all the files to final_file? For example: ln * final_file.
With links, I'm afraid, this will not be possible. However, you could use a named pipe. Example: # create some dummy files echo alpha >a echo beta >b echo gamma >c # create named pipe mkfifo allfiles # concatenate files into pipe cat a b c >allfiles The last call will block until some process reads from the pipe and then exit. For a continuous operation one can use a loop, which waits for a process to read and starts over again. while true; do cat a b c >allfiles done
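To make the blocking behaviour concrete, here is a hedged end-to-end run of the same idea (temporary directory, example file names): the writer blocks on the pipe until a reader opens it, and the reader then sees the concatenation.

```shell
#!/bin/sh
cd "$(mktemp -d)"
echo alpha >a; echo beta >b; echo gamma >c
mkfifo allfiles
cat a b c >allfiles &    # blocks until someone opens the pipe for reading
cat allfiles             # prints alpha, beta, gamma in order
wait                     # reap the background writer
```

Any program that reads `allfiles` sequentially (a compressor, a checksummer) works the same way; only tools that need to seek within the "file" will not.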
hardlink/softlink multiple file to one file
I use symbolic links quite often, but after moving the original file, I lose track of the symbolic link. I also use symbolic links for keeping track of some files in the same directory, but again, I lose track. Is there any way (tool/method) to keep track of the symbolic link no matter what change I make? Is the hard link the only way to do this? Also, is there any way to make the symbolic link in a relative way, so that when I move the directory that contains both the original and the link, the link still works?
Concerning your second question, if you make the symlink using a relative path and then move the whole directory structure, it should still work. Consider the following terminal session: ~$ mkdir test ~$ cd test/ ~/test$ mkdir test2 ~/test$ cd test2/ ~/test/test2$ touch testfile; echo "hello, world" > testfile ~/test/test2$ cat testfile hello, world ~/test/test2$ cd .. ~/test$ ln -s ./test2/testfile testfileln ~/test$ ls -l total 8 drwxr-xr-x 2 xxxx xxxx 4096 2010-09-09 09:18 test2 lrwxrwxrwx 1 xxxx xxxx 16 2010-09-09 09:18 testfileln -> ./test2/testfile ~/test$ cd .. ~$ mv test/ testfoo ~$ cd testfoo/ ~/testfoo$ ls -l total 8 drwxr-xr-x 2 xxxx xxxx 4096 2010-09-09 09:18 test2 lrwxrwxrwx 1 xxxx xxxx 16 2010-09-09 09:18 testfileln -> ./test2/testfile /testfoo$ cat testfileln hello, world As for your first question, if you really want a link that will refer to the same file no matter what you do with the original location of the file, a hard link is probably what you want. A hard link is basically just another name referring to the same inode. Thus, there is no difference between the hard link and the "original file." However, if you need to link across file systems, hard links do not work, and you usually cannot make hard links to directories. Further, you will notice some differences when performing some file operations. Most notably, removing the original will not remove the file. The hard link will still point to the file and be accessible.
Keep tracking of symbolic links?
I'm trying to manage my dotfiles under version control. My dotfiles repo contains an xfce-base folder, and this folder contains the .config/xfce4/.../xy-setting.xml stuff. I can stow, or better, symlink to the correct place, and everything works as expected. But when I open one of the xfce settings editors (Window Manager, Keyboard Shortcuts), the changes made there overwrite my symlink with a normal file. So, adieu version control. I guess this would not happen if I had hard links, right? Is hard linking possible with GNU Stow (doesn't seem so?), or are there any alternatives? EDIT: I came across this, which does hard links, but doesn't work recursively (complains about an existing .config directory...) EDIT II: I'm still not sure if a hard link is a good solution.
You are correct that GNU Stow doesn't support hard-linking currently. However I think you're also correct in that hard-linking probably isn't any better a solution than symlinking, because if an external application will happily replace a symlink with a normal file then it can certainly also break a hard link (i.e. replace the inode). However, I do have some good news for you :-) I also use GNU Stow to manage my dotfiles, which is why in 2.1.3 I specifically added the --adopt option to help deal with precisely this scenario. After an external program has broken your symlink, you can simply restow with this option, and then the potentially changed version of the file will be adopted into your Stow package directory and the symlink restored, with the net effect of no change to the contents of the file. Since you track your dotfiles via version control, you can then see what has changed (e.g. via git diff) and then if you want, check in some or all of the changes. N.B. For package directories which are not tracked in a version control system, this is a risky option because it can modify the contents of the Stow package with no way of undoing the changes.
dotfiles: can/should(?) gnu stow make hard links, so I can still use xfce settings gui programs
I'm aware that Linux does not allow hard-linking to a directory. I read somewhere that this is to prevent unintentional loops (or graphs, instead of the more desirable tree structure) in the file-system, and that some *nix systems do allow the root user to hard-link to directories. So, if we are on one such system (that does allow hard-linking to a directory) and if we are the root user, then how is the parent directory entry, .., handled following the deletion of the (hard-link's) target and its parent? a (200) \-- . (200) \-- .. (100) \-- b (300) | \-- . (300) | \-- .. (200) | \-- c (400) | \-- . (400) | \-- .. (300) | \-- d (500) <snip> | \-- H (400) (In the above figure, the numbers in the parentheses are the inode addresses.) If a/H is an (attempted) hard-link to the directory a/b/c, then: What should be the reference count stored in the inode 400: 2, 3, or 4? In other words, does hard-linking to a directory increase the reference count of the target directory's inode by 1 or by 2? If we delete a/b/c, the . and .. entries in inode 400 continue to point to valid inodes 400 and 300, respectively. But what happens to the reference count stored in inode 400 if the directory tree a/b is recursively deleted? Even if the inode 400 could be kept intact via a non-zero reference count (of either 1 or 2 - see the preceding question) in it, the inode address corresponding to .. inside inode 400 would still become invalid! Thus, once the directory tree b is deleted, if the user changes into the a/H directory and then does a cd .. from there, what is supposed to happen? Note: If the default file-system on Linux (ext4) does not allow hard-linking to directories even by a root user, then I'd still be interested in knowing the answer to the above question for an inode-based file-system that does allow this feature.
Hard links to directories aren't fundamentally different from hard links to files. In fact, many filesystems do have hard links on directories, but only in a very disciplined way. In a filesystem that doesn't allow users to create hard links to directories, a directory's links are exactly: the . entry in the directory itself; the .. entries in all the directories that have this directory as their parent; one entry in the directory that .. points to. An additional constraint in such filesystems is that from any directory, following .. entries must eventually lead to the root. This ensures that the filesystem is presented as a single tree. This constraint is violated on filesystems that allow hard links to directories. Filesystems that allow hard links to directories allow more cases than the three above. However they maintain the constraint that these cases do exist: a directory's . always exists and points to itself; a directory's .. always points to a directory that has it as an entry. Unlinking a directory entry that is a directory only removes it if it contains no entry other than . and ... Thus a dangling .. cannot happen. What can go wrong is that a part of the filesystem can become detached, with a directory's .. pointing to one of its descendants, so that ../../../.. eventually forms a loop. (As seen above, filesystems that don't allow hard link manipulations prevent this.) If all the paths from the root to such a directory are unlinked, the part of the filesystem containing this directory cannot be reached anymore, unless there are processes that still have their current directory on it. That part can't even be deleted, since there's no way to get at it. GCFS allows directory hard links and runs a garbage collector to delete such detached parts of the filesystem. You should read its specification, which addresses your concerns in detail.
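The disciplined link accounting described above can be seen with stat. A small sketch (made-up directory names), assuming a conventional filesystem such as ext4 or tmpfs (some filesystems, e.g. btrfs, report directory link counts differently): a directory's link count is 2 (its entry in the parent plus its own `.`) plus one `..` per immediate subdirectory.

```shell
#!/bin/sh
tmp=$(mktemp -d)
mkdir -p "$tmp/parent/sub1" "$tmp/parent/sub2"
stat -c '%h' "$tmp/parent"        # prints 4: entry in $tmp, ".", and two ".."s
stat -c '%h' "$tmp/parent/sub1"   # prints 2: entry in parent, plus "."
rm -rf "$tmp"
```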
This is an interesting intellectual exercise, but I don't know of any filesystem that's used in practice that provides garbage collection.
Linux: How does hard-linking to a directory work?
I want to create a backup of a single .tex file. I created a hard link to the file (which is not in the Dropbox directory; let's say it is in A) inside the Dropbox directory. I did this because I do not want to back up the other auxiliary files created (e.g. .aux, .log, .bbl, etc.) when compiling the tex file. I edit and compile the tex file in A. The changes are reflected when I view the file inside the Dropbox directory, but it is not synced with the online folder. However, if I change the file (hard link) inside the Dropbox directory, it gets synced. Please let me know what the problem is here. Please give a solution within what I am using and trying to do, rather than proposing an alternative solution for the same task. I am using Fedora 13.
Dropbox is probably using inotify or a variant thereof to watch for changes in the Dropbox directory. Because the change happens outside of the Dropbox directory, Dropbox doesn't see it. To get the desired effect, you might be able to use symlinks instead of hard-links. I'm not sure if there's any special reason it needs to be a hard-link for your use case (edit and compile).
Hardlinks in Dropbox not updated
I have a backup script which uses rsync -avz --link-dest=$oldbkp $source $newbkp at its core. The problem is that rsync often doesn't recognize that a file in $source hasn't changed and so it plainly copies it to $newbkp instead of hard linking it from $oldbkp. Another perplexing thing is that it is inconsistent - there are some files where the hardlinking works as expected. The backup partition is ntfs. As you can see here, a backup only an hour after the previous one takes a whole 2 GB of new data when the content barely changed at all (this is my home PC). $ du -hsc 20170424-1559 20170424-1724 2.6G 20170424-1559 2.1G 20170424-1724 4.6G total I've run stat on some examples. This one is a failed hardlink (the sha256 is the same for all): $ stat 20170424-1559/Documents/depeche File: 20170424-1559/Documents/depeche Size: 21400 Blocks: 48 IO Block: 4096 regular file Device: 811h/2065d Inode: 140380 Links: 1 Access: (0777/-rwxrwxrwx) Uid: ( 1000/ marek) Gid: ( 1000/ marek) Access: 2017-04-24 17:14:00.271104500 +0200 Modify: 2016-08-01 16:30:38.000000000 +0200 Change: 2017-04-24 15:59:44.407252700 +0200 Birth: - $ stat 20170424-1724/Documents/depeche File: 20170424-1724/Documents/depeche Size: 21400 Blocks: 48 IO Block: 4096 regular file Device: 811h/2065d Inode: 361117 Links: 1 Access: (0777/-rwxrwxrwx) Uid: ( 1000/ marek) Gid: ( 1000/ marek) Access: 2017-04-24 17:24:55.732080500 +0200 Modify: 2016-08-01 16:30:38.000000000 +0200 Change: 2017-04-24 17:24:55.736274500 +0200 Birth: - $ stat ~/Documents/depeche File: /home/marek/Documents/depeche Size: 21400 Blocks: 48 IO Block: 4096 regular file Device: 2ah/42d Inode: 4397 Links: 1 Access: (0644/-rw-r--r--) Uid: ( 1000/ marek) Gid: ( 1000/ marek) Access: 2017-03-07 09:51:07.681090473 +0100 Modify: 2016-08-01 16:30:38.000000000 +0200 Change: 2016-11-06 19:58:14.053859011 +0100 Birth: - This one is a successful hardlink (the sha256 sums are the same): $ stat 20170424-1559/Documents/ios7bkplist.txt File: 
20170424-1559/Documents/ios7bkplist.txt Size: 1983 Blocks: 8 IO Block: 4096 regular file Device: 811h/2065d Inode: 344437 Links: 4 Access: (0777/-rwxrwxrwx) Uid: ( 1000/ marek) Gid: ( 1000/ marek) Access: 2017-04-24 15:59:44.574850700 +0200 Modify: 2016-04-04 22:03:55.000000000 +0200 Change: 2017-04-24 17:24:56.022250400 +0200 Birth: - $ stat 20170424-1724/Documents/ios7bkplist.txt File: 20170424-1724/Documents/ios7bkplist.txt Size: 1983 Blocks: 8 IO Block: 4096 regular file Device: 811h/2065d Inode: 344437 Links: 4 Access: (0777/-rwxrwxrwx) Uid: ( 1000/ marek) Gid: ( 1000/ marek) Access: 2017-04-24 15:59:44.574850700 +0200 Modify: 2016-04-04 22:03:55.000000000 +0200 Change: 2017-04-24 17:24:56.022250400 +0200 Birth: - $ stat ~/Documents/ios7bkplist.txt File: /home/marek/Documents/ios7bkplist.txt Size: 1983 Blocks: 8 IO Block: 4096 regular file Device: 2ah/42d Inode: 4413 Links: 1 Access: (0777/-rwxrwxrwx) Uid: ( 1000/ marek) Gid: ( 1000/ marek) Access: 2017-02-28 20:03:32.858085513 +0100 Modify: 2016-04-04 22:03:55.000000000 +0200 Change: 2016-11-06 19:58:14.550522987 +0100 Birth: - Basically the same thing happens when I use -c with rsync to force full checksum comparison. Is there anything I'm overlooking?
The problem is the following (from man rsync): ... The files must be identical in all preserved attributes (e.g. permissions, possibly ownership) in order for the files to be linked together. In your case, the permissions of the files are (from your examples) Access: (0644/-rw-r--r--) # hardlink failed (original) Access: (0777/-rwxrwxrwx) # hardlink failed (backup) Access: (0777/-rwxrwxrwx) # hardlink created (original) Access: (0777/-rwxrwxrwx) # hardlink created (backup) For instance: $ chmod 777 A/file $ rsync -az A/ B/ $ chmod 644 A/file $ rsync -az --link-dest=$PWD/B/ A/ C/ results in $ du -hsc A B C 965M A 965M B 965M C 2.9G total while, resetting permissions to original, results in $ chmod 777 A/file $ rsync -az --link-dest=$PWD/B/ A/ D/ $ du -hsc A B D 965M A 965M B 4.0K D 1.9G total You could experiment with options to force linking (perhaps --size-only, which treats files of equal size as unchanged), but what you should really do is figure out whether you changed permissions after the last backup or, if not, why the permissions changed in your backup directory.
rsync inconsistently fails to hardlink
I've noticed that if a file is renamed, lsof displays the new name. To test it out, I created a python script: #!/bin/python import time f = open('foo.txt', 'w') while True: time.sleep(1) I saw that lsof follows the rename: $ python test_lsof.py & [1] 19698 $ lsof | grep foo | awk '{ print $2,$9 }' 19698 /home/bfernandez/foo.txt $ mv foo{,1}.txt $ lsof | grep foo | awk '{ print $2,$9 }' 19698 /home/bfernandez/foo1.txt I figured this may be done via the inode number. To test this out, I created a hard link to the file. However, lsof still displays the original name: $ ln foo1.txt foo1.link $ stat -c '%n:%i' foo* foo1.link:8429704 foo1.txt:8429704 $ lsof | grep foo | awk '{ print $2,$9 }' 19698 /home/bfernandez/foo1.txt And, if I delete the original file, lsof just lists the file as deleted even though there's still an existing hard link to it: $ rm foo1.txt rm: remove regular empty file ‘foo1.txt’? y $ lsof | grep foo | awk '{ print $2,$9,$10 }' 19698 /home/bfernandez/foo1.txt (deleted) So finally, my question: what method does lsof use to keep track of open file descriptors such that it tracks filename changes, yet is not aware of existing hard links?
You are right in assuming that lsof uses the inode from the kernel's name cache. Under Linux platforms, the path name is provided by the Linux /proc file system. The handling of hard links is better explained in the FAQ: 3.3.4 Why doesn't lsof report the "correct" hard linked file path name? When lsof reports a rightmost path name component for a file with hard links, the component may come from the kernel's name cache. Since the key which connects an open file to the kernel name cache may be the same for each differently named hard link, lsof may report only one name for all open hard-linked files. Sometimes that will be "correct" in the eye of the beholder; sometimes it will not. Remember, the file identification keys significant to the kernel are the device and node numbers, and they're the same for all the hard linked names. The fact that the deleted node is displayed at all is also specific to Linux (and later builds of Solaris 10, according to the same FAQ).
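On Linux you can watch this name resolution directly through /proc, which is the source lsof reads. A small sketch (temporary paths, illustrative file names): the fd symlink is resolved through the kernel's name cache, so it follows a rename of the still-open file.

```shell
#!/bin/sh
# /proc/<pid>/fd/<n> is a magic symlink resolved by the kernel, so it tracks
# renames of the open file - exactly the behaviour observed with lsof.
tmp=$(mktemp -d)
exec 3>"$tmp/foo.txt"
readlink /proc/$$/fd/3            # .../foo.txt
mv "$tmp/foo.txt" "$tmp/bar.txt"
readlink /proc/$$/fd/3            # .../bar.txt - the rename was tracked
exec 3>&-
rm -rf "$tmp"
```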
How does `lsof` keep track of open file descriptors' filenames?
Before I get 100s of answers that tell me it is impossible to hardlink directories in Linux: yes, I know that. The file in question appeared in lost+found after I checked the filesystem with e2fsck (it's an ext4) and according to stat it /is/ a directory with 2 hardlinks: # stat --format="File \"%n\" is a %F with %h hardlinks" ./#30934353 File "./#30934353" is a directory with 2 hardlinks Doh. How can I safely delete it without touching the files inside? I tried rm and unlink; both told me that the directory is not empty…
Directories on ext4 file systems generally have at least 2 links. The entry in their parent directory and the . entry in themselves. See there for more details. Having said that, if the file system is inconsistent, you may very well have a directory linked in more than one place and the link count not to reflect it. For instance, with debugfs -w (on an unmounted fs), you can do link a/b c/d to force the creation of a link to a directory. You'll notice that the link count is not updated (you'd need sif a/b links_count 3 to update it): $ ls -lid a/b c/d a/b/. c/d/. 12 drwxr-xr-x 2 chazelas chazelas 1024 May 14 08:37 a/b/ 12 drwxr-xr-x 2 chazelas chazelas 1024 May 14 08:37 a/b/./ 12 drwxr-xr-x 2 chazelas chazelas 1024 May 14 08:37 c/d/ 12 drwxr-xr-x 2 chazelas chazelas 1024 May 14 08:37 c/d/./ The link count is 2 above, even though there are 3 links (to a, to c, and the "." to itself). Note that fsck would report a problem with that: Pass 2: Checking directory structure Entry 'd' in /c (1282) is a link to directory /a/b (12). Clear<y>? So it's unlikely to be your case here if the fs has just gone through a fsck. To remove one of those links, you can use debugfs -w again (after having unmounted the filesystem) and do unlink c/d in it, or use fsck. If a directory has more than one link, you can check the .. entry within it (with debugfs ls) to check which is the correct one (.. will have the same inode number as its correct parent).
How can I delete a hardlink to a directory?
1,541,016,132,000
I have a backup containing folders for daily snapshots. To save space, identical files in different snapshots are deduplicated via hard links (generated by rsync). When I'm running out of space, one option is to delete older snapshots. But because of the hard links, it is hard to figure out how much space I would gain by deleting a given snapshot. One option I can think of would be to use du -s first on all snapshot folders, then on all but the one I might delete, and the difference would give me the expected gained space. However, that's quite cumbersome and would have to be repeated when I'm trying to find a suitable snapshot for deletion. Is there an easier way? After trying out and thinking about the answers by Stéphane Chazelas and derobert, I realized that my question was not precise enough. Here's an attempt to be more precise: I have a set of directories ("snapshots") which contain files which are partially storage-identical (hard linked) with files in another snapshot. I'm looking for a solution that gives me a list of the snapshots and for each the amount of used disk storage taken up by the files in it, but without that storage which is also used by a file in another snapshot. I would like to allow for the possibility that there are hard links within each snapshot. The idea is that I can look at that list to decide which of the snapshots I should delete when I run out of space, which is a trade-off between storage space gained by deletion and value of the snapshot (e.g. based on age).
You could do it by hand with GNU find: find snapshot-dir -type d -printf '1 %b\n' -o -printf '%n %b %i\n' | awk '$1 == 1 || ++c[$3] == $1 {t+=$2;delete c[$3]} END{print t*512}' That counts the disk usage of files whose link count would go down to 0 after all the links found in the snapshot directory have been removed. find prints: 1 <disk-usage> for directories <link-count> <disk-usage> <inode-number> for other types of files. We pretend the link count is always one for directories because, when in practice it's not, it's because of the .. entries, and find doesn't list those entries, and directories generally don't have other hardlinks. From that output, awk counts the disk usage of the entries that have a link count of 1 and also of the inodes which it has seen <link-count> times (that is, the ones whose hard links are all in the current directory and which, like the ones with a link count of one, would have their space reclaimed once the directory tree is deleted). You can also use find snapshot-dir1 snapshot-dir2 to find out how much disk space would be reclaimed if both dirs were removed (which may be more than the sum of the space for the two directories taken individually if there are files that are found in both and only in those snapshots). If you want to find out how much space you would save after each snapshot-dir deletion (in a cumulated fashion), you could do: find snapshot-dir* \( -path '*/*' -o -printf "%p:\n" \) \ -type d -printf '1 %b\n' -o -printf '%n %b %i\n' | awk '/:$/ {if (NR>1) print t*512; printf "%s ", $0; next} $1 == 1 || ++c[$3] == $1 {t+=$2;delete c[$3]} END{print t*512}' That processes the list of snapshots in lexical order. If you processed it in a different order, that would likely give you different numbers except for the final one (when all snapshots are removed). See numfmt to make the numbers more readable. That assumes all files are on the same filesystem. 
If not, you can replace %i with %D:%i (if they're not all on the same filesystem, that would mean you'd have a mount point in there which you couldn't remove anyway).
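As a worked example under assumed names (snapA/snapB and their files are made up): only space whose every hard link lives inside the examined snapshot is counted as reclaimable.

```shell
#!/bin/sh
cd "$(mktemp -d)"
mkdir snapA snapB
head -c 8192 /dev/zero >snapA/unique   # reclaimable: its only link is in snapA
head -c 8192 /dev/zero >snapA/shared
ln snapA/shared snapB/shared           # not reclaimable: a second link survives
# Bytes freed by deleting snapA: the directory itself plus "unique",
# but not "shared" (its link count of 2 is only seen once under snapA).
find snapA -type d -printf '1 %b\n' -o -printf '%n %b %i\n' |
  awk '$1 == 1 || ++c[$3] == $1 {t+=$2; delete c[$3]} END{print t*512}'
```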
unique contribution of folder to disk usage
Consider this test case: mkdir a echo 'blah' > a/test mkdir b echo 'blah' > b/test rsync -r --link-dest=/tmp/b /tmp/a/ /tmp/c As expected, rsync creates c/test as a hardlink of b/test (note the link count of 2): # ls -l c/test -rw-r--r-- 2 root root 16 Jan 6 19:43 test Now see this: rm -r c # start over touch b/test rsync -r --link-dest=/tmp/b /tmp/a/ /tmp/c The hardlink is not created: # ls -l c/test -rw-r--r-- 1 root root 16 Jan 6 19:50 test The man page says (emphasis mine): The files must be identical in all preserved attributes (e.g. permissions, possibly ownership) in order for the files to be linked together. However, I think that by default the file times are not preserved and should therefore make no difference here. What is happening? Is this a bug? What can I do? My goal is to save space on a continuous integration server that hosts many branches of a repository by hard-linking all the identical files. So my actual command is: rsync -r --link-dest=/ci/master /ci-runner/build/ /ci/branch-123. This means I don't care about the times, so I thought about touching them all to the current time before the rsync, but that would be a somewhat crude solution and also touch does not seem to work recursively.
You are seeing the results of rsync's "quick check" algorithm which decides to transfer files based on their size and their timestamp. As detailed in man rsync: Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file’s data does not need to be updated. You have choices: If you want timestamp to be ignored and transfer files based just on changed size, you can use the --size-only option. Note that this means that rsync will not transfer a changed file if the change just so happened to leave the file with the same size. If you want rsync instead to check if the files' contents are actually identical, use --checksum. This may cause a substantial slow-down. The "quick check" algorithm is the default because it is a generally good compromise between speed and accuracy.
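A pure-shell sketch of the quick check (the `quick_check_same` function is made up for illustration, not part of rsync) shows why a file whose mtime was bumped but whose content is unchanged is treated as "changed":

```shell
#!/bin/sh
# Compare two files the way rsync's default quick check does: size + mtime only.
quick_check_same() {
  [ "$(stat -c '%s %Y' "$1")" = "$(stat -c '%s %Y' "$2")" ]
}
cd "$(mktemp -d)"
echo data >a
touch -d '2020-01-01 00:00:00' a       # give a a fixed, old mtime
cp -p a b                              # -p preserves the timestamp
quick_check_same a b && echo "quick check: unchanged"
touch b                                # same content, new mtime
quick_check_same a b || echo "quick check: changed"
cmp -s a b && echo "contents: identical"  # what --checksum would notice
```

This is exactly the situation in the question: `touch b/test` changed only the mtime, so the quick check saw a "changed" file and --link-dest declined to link it.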
rsync's --link-dest option does not link because of file time
1,541,016,132,000
I use Vim 8.2 to edit my files on Ubuntu 18.04. When I open a file, make some changes and quit, the inode number of the file changes. My understanding is that this is because Vim's backup mechanism is enabled, so each edit creates a new file (a .swp file) to replace the old one, and a new file has a new inode number.

But I found something weird. As you can see below, after the first vim 11.cpp the inode changed: 409980 became 409978. However, after creating a hard link to the file 11.cpp, no matter how I modify 11.cpp with Vim, its inode number doesn't change anymore. And if I delete the hard link xxx, its inode number changes with each edit again. This really confuses me.

    $ ll -i ./11.cpp
    409980 -rw-rw-r-- 1 zyh zyh 504 Dec 22 17:23 ./11.cpp
    $ vim 11.cpp          # append a string "abc" to the file 11.cpp
    $ ll -i ./11.cpp
    409978 -rw-rw-r-- 1 zyh zyh 508 Dec 22 17:25 ./11.cpp
    $ vim ./11.cpp        # remove the appended "abc"
    $ ll -i ./11.cpp
    409980 -rw-rw-r-- 1 zyh zyh 504 Dec 22 17:26 ./11.cpp
    $ ln ./11.cpp ./xxx   # create a hard link
    $ ll -i ./11.cpp
    409980 -rw-rw-r-- 2 zyh zyh 504 Dec 22 17:26 ./11.cpp
    $ vim 11.cpp          # append a string "abc" to the file 11.cpp
    $ ll -i ./11.cpp
    409980 -rw-rw-r-- 2 zyh zyh 508 Dec 22 17:26 ./11.cpp
    $ vim 11.cpp          # remove the appended "abc"
    $ ll -i ./11.cpp
    409980 -rw-rw-r-- 2 zyh zyh 504 Dec 22 17:26 ./11.cpp
It seems the setting backupcopy is auto (run :set backupcopy? in Vim to confirm). The main values are:

yes: make a copy of the file and overwrite the original one
no: rename the file and write a new one
auto: one of the previous, whichever works best […]

The auto value is the middle way: "When Vim sees that renaming file is possible without side effects (the attributes can be passed on and the file is not a link) that is used. When problems are expected, a copy will be made."

In case it's not clear: yes (copy and overwrite) does not change the inode number; no (rename and write anew) does change it. In your case, at first, auto behaved like no. After ln ./11.cpp ./xxx, Vim noticed there is another link, and auto behaved like yes.
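The two write strategies can be reproduced with plain shell commands, independent of Vim (a sketch; file names are arbitrary):

```shell
d=$(mktemp -d); cd "$d"

# backupcopy=no style: write a new file, then rename it over the old one
echo one > f
ino_before=$(stat -c %i f)
echo two > f.tmp && mv f.tmp f        # rename replaces the directory entry
ino_after_rename=$(stat -c %i f)      # a different inode now

# backupcopy=yes style: truncate and rewrite the existing file in place
ino_before2=$(stat -c %i f)
echo three > f                        # ">" opens with O_TRUNC: same inode
ino_after_overwrite=$(stat -c %i f)   # unchanged
```

The rename strategy is what breaks hard links, which is exactly why Vim's auto falls back to copy-and-overwrite once it detects a second link.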
Why didn't inode change anymore with a hard link
What's the difference between --link and --reflink=always? I use the following command as an mv substitute, and I am wondering if using --reflink would be a better choice:

    command gcp -r --link --archive --verbose "${opts[@]}" "$@"
    # delete the sources manually
--link causes cp to create hard links instead of copying. Once the “copy” is complete, assuming it’s in the same file system (which is required for hard links), a single instance of the file is present on disk, with two or more directory entries pointing to it. This is the desired external state, i.e. the fact that several directory entries point to the same file is visible — they point to the same inode. Changes made through one of the directory entries will be visible through the other as well. --reflink=always requests an optimised copy, if possible. This can take various forms; the most famous is copy-on-write, but it can also be implemented as a server-side copy on networked file systems. Once the copy is complete, it may be the case that a single copy of the data blocks exists on disk, but there are two files, and each directory entry points to a different file. Changes made through one directory entry will not be visible through the other; each file leads a separate life (except for side-effects of the shared data blocks, e.g. disk corruption will affect both files). Put another way, --link explicitly requests the creation of new directory entries pointing to the same file, sharing subsequent changes. --reflink=always requests the creation of new files, with potential optimisations, and isolated subsequent changes. As an mv alternative, --link is more appropriate than --reflink=always — it will result in less work for the operating system.
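The --link half of the comparison can be verified directly (a sketch; --reflink=always is omitted because it needs a filesystem with copy-on-write support, e.g. Btrfs or XFS):

```shell
d=$(mktemp -d); cd "$d"
echo data > orig

cp --link orig hard            # hard link: one file, two directory entries
stat -c %i orig hard           # identical inode numbers
echo more >> hard              # a change made through one name...
cat orig                       # ...is visible through the other

cp orig plain                  # ordinary copy, for contrast: a separate file
stat -c %i orig plain          # different inode numbers
```

With --reflink=always the inode numbers would differ as in the plain-copy case, even though the data blocks may initially be shared.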
GNU cp: What's the difference between `--link` and `--reflink=always`?
Every article I find about web servers suggest creating a sites-available and sites-enabled directory within apache/nginx/etc. Then, using symbolic (soft) links, create a link from the available to the enabled folder. Why use symbolic links rather than hardlinks? With hardlinks, you can move the original file (rename it) as needed without needing to recreate the link. You can still delete the sites-enabled file without ruining anything, and the user/group permissions in any sane setup will be the same for both folders. Can I safely use hardlinks instead of softlinks? Or is there a downside to hardlinks I'm not seeing? The major upside for me is not having to worry about recreating a symlink if I move/rename the original file.
I don't see any advantage to hard links.

"With hardlinks, you can move the original file (rename it) as needed without needing to recreate the link."

That strikes me as a bug rather than a feature. If you want to disable a site (for example because you've just noticed that it has a major security hole), with symbolic links you can just rename the sites-available entry. With hard links, and with potentially differing names, you have to hunt for the corresponding entry in sites-enabled. If you want to rename a site, do it in both directories; otherwise it gets confusing.

"You can still delete the sites-enabled file without ruining anything,"

True with either scheme.

"and the user/group permissions in any sane setup will be the same for both folders."

With symbolic links, you don't have to worry about ownership or permissions in the sites-enabled directory.
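The conventional symlink workflow can be sketched with a throwaway directory (the layout and file names are illustrative, not tied to any real server's configuration):

```shell
d=$(mktemp -d); cd "$d"
mkdir sites-available sites-enabled
printf 'server { }\n' > sites-available/example.conf

# enable the site: a relative symlink, resolved from inside sites-enabled
ln -s ../sites-available/example.conf sites-enabled/example.conf
cat sites-enabled/example.conf        # the config, read through the link

# disable the site: remove only the link; the real file is untouched
rm sites-enabled/example.conf
ls sites-available                    # example.conf is still there
```

A quick ls -l of sites-enabled also shows at a glance which available file each link points to, which is the traceability that hard links lose.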
Should I use hardlinks for my "sites-enabled" folder instead of softlinks?
Setup

The following sequence of commands is setup for my question.

    root@cd330f76096d:/# cd
    root@cd330f76096d:~# ls
    root@cd330f76096d:~# mkdir -p my_dir/my_subdir
    root@cd330f76096d:~# ls -hAil
    total 12K
    6175969 -rw-r--r-- 1 root root 3.1K Oct 15  2021 .bashrc
    6175970 -rw-r--r-- 1 root root  161 Jul  9  2019 .profile
    7382820 drwxr-xr-x 3 root root 4.0K Sep  6 19:34 my_dir

Context

Notice that my_dir has three hard links, as per the output. Presumably they are:

    ./my_dir
    my_dir/.
    my_dir/my_subdir/..

However...

    root@cd330f76096d:~# find . -xdev -inum 7382820
    ./my_dir

And that's it. Only one line.

Questions

What am I missing, and how does ls -l work? I half expect that the reason I can't locate any more files with find is that they refer to . and .., in which case I ask how exactly ls -l works; references to the source code are welcome.

Pre setup

The example above was created in a docker container, which for convenience I'm sharing below:

    $ docker pull ubuntu:jammy
    jammy: Pulling from library/ubuntu
    Digest: sha256:aabed3296a3d45cede1dc866a24476c4d7e093aa806263c27ddaadbdce3c1054
    Status: Downloaded newer image for ubuntu:jammy
    docker.io/library/ubuntu:jammy
    $ docker run -it ubuntu:jammy bash
A pathname that find encounters (i.e., apart from the search paths given on the command line) cannot contain a . or .. component, so your command will never show these. Why? Because the POSIX standard says so (my emphasis): Each path operand shall be evaluated unaltered as it was provided, including all trailing <slash> characters; all pathnames for other files encountered in the hierarchy shall consist of the concatenation of the current path operand, a <slash> if the current path operand did not end in one, and the filename relative to the path operand. The relative portion shall contain no dot or dot-dot components, no trailing <slash> characters, and only single <slash> characters between pathname components. ("The current path operand" mentioned above is one of the search paths on the command line.) The ls command can work out the link count of the directory because it makes a stat() call, which returns a stat structure, which contains the number of hard links. It strictly speaking does not know where the other hard links are located though.
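Both halves of the explanation can be reproduced in a scratch directory (a sketch; directory names mirror the question):

```shell
d=$(mktemp -d); cd "$d"
mkdir -p my_dir/my_subdir

# ls/stat get the count from stat(): 3 on most filesystems
# (my_dir itself, my_dir/., my_dir/my_subdir/..; btrfs always reports 1)
stat -c %h my_dir

# find never emits . or .. components, so only one path can match:
find . -inum "$(stat -c %i my_dir)"    # prints only ./my_dir
```

The stat structure records how many links exist, not where they are; locating them is exactly the kind of tree walk that find performs, minus the dot entries.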
How does `ls` find hard links?
I have a kind of problem. I am trying to hard-link all my dotfiles (files that customize certain apps) into one folder, ~/dotfiles/, for ease of use, but several of my programs have entire directories for that. Some are in .config, some just have a directory in the home folder, so I tried to check whether I could hard-link a directory. After looking into it, I saw all the problems, warnings, etc. about hard-linking directories and why it's a giant no-no. So I'm fully discouraged from hard-linking directories; however, I still need to do it somehow. I found a way around this: creating directories within ~/dotfiles/ and hard-linking the contents of the original directories INTO those created ones. But that immediately hit a brick wall called boredom and repetition: a bunch of programs had multiple directories, some with nested directories, and I didn't feel like spending a lot of time creating directories just to be able to hard-link config files. All I'm wondering is this: is there a way to simulate a hard link? For example, I have a directory called ~/Testconfig/ and I want to "hard-link" it into ~/dotfiles/. Theoretically, a hard link would place a directory in ~/dotfiles/ with all of its contents, including its files and its nested directories. Is there a way to achieve that without actually creating a hard link? My idea is that a bash script could automate all of this, but I know next to nothing about bash, so that would be difficult.
You could use cp -al .??* ~/dotfiles/ and let it worry about all the complexity: directories are created and files are hard-linked. (The glob .??* matches dot entries whose names are at least three characters long, which keeps it from matching "." and "..".)
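What cp -al does can be seen on a small tree (a sketch; the Testconfig name follows the question, everything else is arbitrary):

```shell
d=$(mktemp -d); cd "$d"
mkdir -p Testconfig/nested
echo 'setting=1' > Testconfig/nested/app.conf

cp -al Testconfig dotfiles    # -a: recursive + preserve, -l: hard link the files

# The directories are fresh directories, but the files share one inode:
stat -c %i Testconfig/nested/app.conf dotfiles/nested/app.conf   # same number twice
stat -c %h Testconfig/nested/app.conf                            # link count 2
stat -c %i Testconfig/nested dotfiles/nested                     # two different numbers
```

So you get the effect you wanted (edits through either path hit the same file) without ever hard-linking a directory.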
Simulating a hard-link to a directory [duplicate]
How do relative paths work in ln (with or without -s)? For example, if I type ln -s foo bar/banana.txt, what does this mean? What is foo relative to? It doesn't seem to be relative to the current path. Also, is it different without -s? I've tested it out and the result doesn't make sense to me, and the man page doesn't seem to explain this. Could anyone explain?
It's different with and without -s.

With -s, in:

    ln -s path/to/file some/dir/link

path/to/file is set as the symlink target of some/dir/link (or some/dir/link/file if link was a directory). A symlink is a special type of file which contains a path (it can be any array of non-zero bytes; some systems even allow an empty string) that is the target of the symlink. ln sets it to the first argument. Upon resolving the link (when the link is used later on), the path/to/file path is interpreted relative to the directory the link is (hard-)linked to (some/dir here, when some/dir/link is accessed via its some/dir/link path). Note that path/to/file doesn't need to exist at the time the ln command is run (or ever).

Without -s, in:

    ln path/to/file some/dir/link

it's similar to:

    cp path/to/file some/dir/link

Here path/to/file is relative to the current working directory of the process running ln.

Nothing stops you from creating several (hard) links to a symlink. For example:

    $ mkdir -p a b/c b/a
    $ ln -s ../a b/L    # b/L is a symlink to "a"
    $ ln b/L b/c/L

b/L and b/c/L are the same file: same inode, but linked into two different directories. They are both symlinks with target ../a. But when b/L is resolved, it points to ./a, while when b/c/L is resolved, it points to ./b/a.
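The "relative to the link's directory, not your cwd" rule is easy to check (a sketch; names are arbitrary):

```shell
d=$(mktemp -d); cd "$d"
mkdir -p top/dir
echo hello > top/target

ln -s ../target top/dir/link   # "../target" is stored verbatim in the symlink
readlink top/dir/link          # prints ../target, unresolved

# Resolution happens relative to top/dir, the directory holding the link:
cat top/dir/link               # hello
( cd top/dir && cat link )     # hello: same result from a different cwd
```

Had "../target" been interpreted relative to the caller's working directory, one of the two cat invocations would have failed.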
Paths in ln with hard links and soft links
Where do I find documentation of the behavior of the cp and rsync commands when the destination path shares its inode with another path? In other words, when I do

    $ cp [options] src dest
    $ rsync [options] src dest

and there is a dest2 that is a hard link to the same inode as dest, do these commands modify the content at this inode (let's call it behavior (i)), or do they create a new inode, fill it with the content of src, and link dest to the new inode (behavior (ii))? In the case of behavior (i), I will see the results of cp and rsync by accessing dest2, whereas in the case of behavior (ii) I will not.

I observed that the behavior depends on the options. In particular, with the -a option, both cp and rsync took behavior (ii) as far as I experimented. (cp without any option took behavior (i).) I would like to know if this is guaranteed, and I wish someone would kindly point me to documentation, or suggest some words or phrases to search for.

Below is my experiment. First, I create a test sample:

    $ ls
    work
    $ ls -li work/
    total 12
    23227072 -rw-rw-r-- 1 norio norio 17 Oct 19 00:52 file000.txt
    23227071 -rw-rw-r-- 1 norio norio 17 Oct 19 00:52 file001.txt
    23227073 -rw-rw-r-- 1 norio norio 17 Oct 19 00:52 file002.txt
    $ cat work/file000.txt
    This is file000.
    $ cat work/file001.txt
    This is file001.
    $ cat work/file002.txt
    This is file002.
    $ cp -r work backup
    $ ls
    backup work
    $ ls -li backup/
    total 12
    23227087 -rw-rw-r-- 1 norio norio 17 Oct 19 00:53 file000.txt
    23227065 -rw-rw-r-- 1 norio norio 17 Oct 19 00:53 file001.txt
    23227092 -rw-rw-r-- 1 norio norio 17 Oct 19 00:53 file002.txt
    $ cat backup/file000.txt
    This is file000.
    $ cat backup/file001.txt
    This is file001.
    $ cat backup/file002.txt
    This is file002.
    $ cp -al backup backupOld
    $ ls
    backup backupOld work
    $ ls -li backupOld/
    total 12
    23227087 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file000.txt
    23227065 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file001.txt
    23227092 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file002.txt
    $ cat backupOld/file000.txt
    This is file000.
    $ cat backupOld/file001.txt
    This is file001.
    $ cat backupOld/file002.txt
    This is file002.

Thus far, I have created files under backupOld as hard links to the files of the same name under backup. Now I modify the files under work and copy them to backup with cp, cp -a, and rsync -a.

    $ echo "Hello work 000." >> work/file000.txt
    $ echo "Hello work 001." >> work/file001.txt
    $ echo "Hello work 002." >> work/file002.txt
    $ cat work/file000.txt
    This is file000.
    Hello work 000.
    $ cat work/file001.txt
    This is file001.
    Hello work 001.
    $ cat work/file002.txt
    This is file002.
    Hello work 002.
    $ cat backup/file000.txt
    This is file000.
    $ cat backup/file001.txt
    This is file001.
    $ cat backup/file002.txt
    This is file002.
    $ cat backupOld/file000.txt
    This is file000.
    $ cat backupOld/file001.txt
    This is file001.
    $ cat backupOld/file002.txt
    This is file002.
    $ ls -li work/
    total 12
    23227072 -rw-rw-r-- 1 norio norio 33 Oct 19 00:57 file000.txt
    23227071 -rw-rw-r-- 1 norio norio 33 Oct 19 00:57 file001.txt
    23227073 -rw-rw-r-- 1 norio norio 33 Oct 19 00:57 file002.txt
    $ ls -li backup/
    total 12
    23227087 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file000.txt
    23227065 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file001.txt
    23227092 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file002.txt
    $ ls -li backupOld/
    total 12
    23227087 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file000.txt
    23227065 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file001.txt
    23227092 -rw-rw-r-- 2 norio norio 17 Oct 19 00:53 file002.txt
    $ cp work/file000.txt backup
    $ cp -a work/file001.txt backup
    $ rsync -a work/file002.txt backup
    $ ls -li backup
    total 12
    23227087 -rw-rw-r-- 2 norio norio 33 Oct 19 01:00 file000.txt
    23227094 -rw-rw-r-- 1 norio norio 33 Oct 19 00:57 file001.txt
    23227095 -rw-rw-r-- 1 norio norio 33 Oct 19 00:57 file002.txt
    $ ls -li backupOld
    total 12
    23227087 -rw-rw-r-- 2 norio norio 33 Oct 19 01:00 file000.txt
    23227065 -rw-rw-r-- 1 norio norio 17 Oct 19 00:53 file001.txt
    23227092 -rw-rw-r-- 1 norio norio 17 Oct 19 00:53 file002.txt

cp overwrote the content of inode 23227087, shared by backup/file000.txt and backupOld/file000.txt, whereas cp -a and rsync -a created new inodes for backup/file001.txt and backup/file002.txt, respectively, to hold the new contents copied from work/file001.txt and work/file002.txt.

    $ cat backup/file000.txt
    This is file000.
    Hello work 000.
    $ cat backup/file001.txt
    This is file001.
    Hello work 001.
    $ cat backup/file002.txt
    This is file002.
    Hello work 002.
    $ cat backupOld/file000.txt
    This is file000.
    Hello work 000.
    $ cat backupOld/file001.txt
    This is file001.
    $ cat backupOld/file002.txt
    This is file002.
cp’s behaviour is specified by POSIX. -a isn’t specified by POSIX, but it implies -R, which is. When copying a single file without -R, and the target exists:

"A file descriptor for dest_file shall be obtained by performing actions equivalent to the open() function defined in the System Interfaces volume of POSIX.1-2017 called using dest_file as the path argument, and the bitwise-inclusive OR of O_WRONLY and O_TRUNC as the oflag argument."

Thus the target is opened, truncated, and its contents replaced with those of the source; the inode doesn’t change, and any hard-linked file is affected. With -R:

"The dest_file shall be created with the same file type as source_file."

A new file is created.

rsync’s behaviour is documented in the description of the --inplace option (see man rsync):

"This option changes how rsync transfers a file when its data needs to be updated: instead of the default method of creating a new copy of the file and moving it into place when it is complete, rsync instead writes the updated data directly to the destination file."

Thus by default, rsync creates new files instead of updating existing ones.
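The POSIX-specified truncate-in-place case, behavior (i), can be verified in a few lines (a sketch; the -a and rsync new-file cases are implementation- and version-dependent, so only the plain-cp case is shown):

```shell
d=$(mktemp -d); cd "$d"
echo old > dest
ln dest mirror               # mirror shares dest's inode
echo new > src

cp src dest                  # plain cp: open(dest, O_WRONLY|O_TRUNC)
cat mirror                   # new: the shared inode was rewritten in place
stat -c %h dest              # still 2, the link was never broken
```

Had cp instead created a fresh inode (behavior (ii)), mirror would still read "old" and dest's link count would have dropped to 1.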
Hard link as destination of cp and rsync
I realise that every file is a hard link. This is what I mean precisely: if an inode has more than one file pointing to it, how can I copy the inode so that every file is pointing to a separate inode with the same content? For example: echo "Example" > one ln one two How can I make the file two have the same contents as one, without sharing an inode? I would like to "reduplicate", if you like, the files.
This can be achieved with the command

    find -samefile filename -exec sed -i ';;' {} \;

or, if you know the inode number of the file,

    find -inum inode -exec sed -i ';;' {} \;

Note that both of these commands only find files with matching inodes in the current working directory and below. If you need to search all files on your file system, you will need to run the command from the root directory.

The first part, find -samefile filename, finds all the files which share the same inode. For each of them it then executes sed -i ';;', which rewrites the file in place: sed -i writes its output to a temporary file and renames it over the original, which gives the file a new inode and thereby breaks the hard link. (Note that we use the no-op sed script ';;' instead of ';'; otherwise find would interpret an unquoted ; argument as the end of the -exec command.)
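The same effect can be spelled out explicitly, without the sed trick, using a copy followed by a rename (a sketch, mirroring the question's example):

```shell
d=$(mktemp -d); cd "$d"
echo Example > one
ln one two                    # one and two now share an inode

cp -p two two.tmp             # copy the data (and attributes) to a new inode
mv two.tmp two                # rename the copy over the old directory entry

stat -c %h one two            # both back to 1
cmp one two                   # identical content, now on separate inodes
```

This is exactly what sed -i does internally for each file that find hands it.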
How can I convert a hard link to a normal file? [duplicate]
I've checked the manpages, the mount, the permissions ...

(Edit: combined history into one sequence as requested. Starting to seem a not-simple problem. Nothing new since last edit, just bundled up all pretty.)

    ~/sandbox/6$ editfunc doit
    ~/sandbox/6$ -x doit
    + doit
    + find .
    + cp /bin/ln /bin/id .
    + sudo chown jthill:jthill id ln
    + chmod g+s id ln
    + mkdir protected
    + chmod 770 protected
    + touch data
    + set +xv
    ~/sandbox/6$ ls -A
    data id ln protected
    ~/sandbox/6$ ls -Al
    total 92
    -rw-r--r-- 1 jthill jthill     0 Nov  8 02:39 data
    -rwxr-sr-x 1 jthill jthill 31432 Nov  8 02:39 id
    -rwxr-sr-x 1 jthill jthill 56112 Nov  8 02:39 ln
    drwxrwx--- 2 jthill jthill  4096 Nov  8 02:39 protected
    ~/sandbox/6$ sudo su nobody
    [nobody@home 6]$ ./id
    uid=619(nobody) gid=617(nobody) egid=1000(jthill) groups=617(nobody)
    [nobody@home 6]$ ./ln ln protected
    ./ln: failed to create hard link ‘protected/ln’ => ‘ln’: Operation not permitted
    [nobody@home 6]$ ./ln data protected
    ./ln: failed to create hard link ‘protected/data’ => ‘data’: Operation not permitted
    [nobody@home 6]$ ln ln protected
    ln: failed to create hard link ‘protected/ln’ => ‘ln’: Permission denied
    [nobody@home 6]$ ln data protected
    ln: failed to create hard link ‘protected/data’ => ‘data’: Permission denied
    [nobody@home 6]$ exit
    ~/sandbox/6$
Found it: if the sysctl fs.protected_hardlinks is set, a hard link made by someone who is not the owner (and lacks CAP_FOWNER) must be to a file that is, according to fs/namei.c:

- not special
- not setuid
- not setgid and executable
- both readable and writable by the linker

Some guy on SO wanted to have a dropbox folder people could add to but not see into (I think that's a Windows feature). I figured this was one of the few places a setgid binary would be good, and the smoke test drove me here. Thanks to all, and especially Anthon, who suggested checking the source.
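You can check the knob without root; changing it is possible but not recommended (a sketch):

```shell
# Current setting: 1 = restrictions active (the default on modern distros)
cat /proc/sys/fs/protected_hardlinks 2>/dev/null || echo 'knob not present (old kernel)'

# The same knob via the sysctl tool; writing requires root:
#   sysctl fs.protected_hardlinks
#   sudo sysctl -w fs.protected_hardlinks=0   # restores the old permissive semantics
```

With the knob at 1, the "Operation not permitted" in the transcript is exactly the kernel rejecting a link to a setgid-executable file by a non-owner.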
setgid binary doesn't have permission, mount's right, I'm missing something, but what, please?
Consider the following transcript of a user-namespaced shell running with root privileges (UID 0 within the namespace, unprivileged outside):

    # cat /proc/$$/status | grep CapEff
    CapEff: 0000003cfdfeffff
    # ls -al
    total 8
    drwxrwxrwx  2 root   root   4096 Sep 16 22:09 .
    drwxr-xr-x 21 root   root   4096 Sep 16 22:08 ..
    -rwSr--r--  1 nobody nobody    0 Sep 16 22:09 file
    # ln file link
    ln: failed to create hard link 'link' => 'file': Operation not permitted
    # su nobody -s /bin/bash -c "ln file link"
    # ls -al
    total 8
    drwxrwxrwx  2 root   root   4096 Sep 16 22:11 .
    drwxr-xr-x 21 root   root   4096 Sep 16 22:08 ..
    -rwSr--r--  2 nobody nobody    0 Sep 16 22:09 file
    -rwSr--r--  2 nobody nobody    0 Sep 16 22:09 link

Apparently the process has the CAP_FOWNER capability (bit 0x8) and thus should be able to hard-link to arbitrary files. However, it fails to link the SUID test file owned by nobody. There is nothing preventing the process from switching to nobody and then linking the file, so the parent namespace does not seem to be the issue. Why can't the namespaced UID 0 process hard-link link to file without switching its UID?
The behavior described in the question was a bug, which has been fixed in the upcoming Linux 4.4.
Why can't a UID 0 process hardlink to SUID files in a user namespace?
Why doesn't a hard link break if we remove the original file? If I remove the original file, the soft link breaks, but the hard link doesn't. Why doesn't it break?
It is because hard links are all equally valid references to the same file; there is no "original" file as far as hard links are concerned. They point to the same data structure on the disk (the inode, which contains nearly all of the file's metadata), whereas soft links point to a filename, not to the data structure describing the file.
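A quick demonstration of the difference (a sketch; file names are arbitrary):

```shell
d=$(mktemp -d); cd "$d"
echo hello > original
ln original hardlink          # second name for the same inode
ln -s original softlink       # a file containing the name "original"

rm original                   # removes one name; the inode survives
cat hardlink                  # hello: the data is still referenced
cat softlink 2>/dev/null || echo 'broken symlink'   # the name it stores is gone
```

The inode's data blocks are only freed when its link count drops to zero (and no process holds it open), so removing one of several names never "corrupts" the others.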
Why hard link doesn't corrupt if we remove the original file? [duplicate]
I created a soft link and a hard link to a file, and when I executed find -H and find -L, both commands gave the same output. Is there a difference between find -H and find -L?
find is not going to treat hard links specially except insofar as the -links test is concerned. Symbolic links to files are going to be treated very similarly, too.

I would read the find man page to you, but I assume that you've already read it. Man pages are written in a cryptic language that is hard for beginners to understand. An example would probably help. Do this:

    $ mkdir dir1 dir2 dir3
    $ touch dir1/file1 dir2/file2 dir3/file3
    $ ln -s dir2 two
    $ cd dir1
    $ ln -s ../dir3 three
    $ cd ..
    $ ls -lR      # I have deleted my user name from the below.
    .:
    total 1
    drwxr-xr-x 1 0 Sep 4 13:08 dir1
    drwxr-xr-x 1 0 Sep 4 13:08 dir2
    drwxr-xr-x 1 0 Sep 4 13:08 dir3
    lrwxrwxrwx 1 4 Sep 4 13:08 two -> dir2

    ./dir1:
    total 1
    -rw-r--r-- 1 0 Sep 4 13:08 file1
    lrwxrwxrwx 1 7 Sep 4 13:08 three -> ../dir3

    ./dir2:
    total 0
    -rw-r--r-- 1 0 Sep 4 13:08 file2

    ./dir3:
    total 0
    -rw-r--r-- 1 0 Sep 4 13:08 file3

    $ find dir1 two
    dir1
    dir1/file1
    dir1/three
    two
    $ find -P dir1 two    # This is the default; i.e., same as the above.
    dir1
    dir1/file1
    dir1/three
    two
    $ find -H dir1 two
    dir1
    dir1/file1
    dir1/three
    two
    two/file2
    $ find -L dir1 two
    dir1
    dir1/file1
    dir1/three
    dir1/three/file3
    two
    two/file2

Note that:

In the default behavior (i.e., the -P behavior), find does not follow either symbolic link. two (in the top-level directory) and dir1/three are simply reported as objects.

Under -H, the symbolic link two → dir2 is followed (i.e., we get to see file2, which is in dir2) because two is specified on the find command line. Note that dir1/three is still reported as an object.

Under -L, both symbolic links are followed. We get to see file2, because the two → dir2 link is followed, and we get to see file3, because the dir1/three → ../dir3 link is followed.

If it's not perfectly clear to you now, try running the find commands in my example with -ls at the end (as an alternative to the default -print) and pay particular attention to the ways two and three are listed.
You will notice that symbolic links to files are also reported differently under the different options. Here's another example:

    $ ln -s /bin/sh mysh
    $ find . -size +9
    $ find -H . -size +9
    $ find -L . -size +9
    ./mysh

The symbolic link ./mysh is small. It points to /bin/sh, which is a fairly large file. Testing with -size, ./mysh is treated as being small under -P (default) and -H, but it is treated as being large under -L, because -L means "look at the file that the link points to".

Yet another example:

find . -type f (and find -H . -type f) will find plain files only.

find . "(" -type f -o -type l ")" will find plain files and (all) symbolic links.

find -L . -type f will find plain files and symbolic links that point to plain files. (Also, if the directory tree contains any symbolic links to directories, those directories will also be searched.)
What is the difference between "find -H" and "find -L" command?
I am using Ubuntu Linux and, just for fun, I want to create a hardlink to a directory (as seen here). Because I'm just doing this for fun, I'm not looking for any sort of pre-developed directory-hardlinking software that someone else wrote, I want to know how to do it myself. So, how do I directly, manually, modify an inode? Ideally I would like the answer as a Linux command that I can run from the Bash command line, but if there is no way to do it from there I would also accept information on how to do it in C or (as a last resort) assembly.
That depends on the filesystem. For ext4, you can do this with debugfs as follows:

    dennis@lightning:/tmp$ dd if=/dev/zero of=ext4.img bs=1M count=100
    104857600 bytes (105 MB) copied, 0.645009 s, 163 MB/s
    dennis@lightning:/tmp$ mkfs.ext4 ext4.img
    mke2fs 1.42.5 (29-Jul-2012)
    ext4.img is not a block special device.
    Proceed anyway? (y,n) y
    ...
    Writing superblocks and filesystem accounting information: done
    dennis@lightning:/tmp$ mkdir ext4
    dennis@lightning:/tmp$ sudo mount ext4.img ext4
    dennis@lightning:/tmp$ mkdir -p ext4/test/sub/
    dennis@lightning:/tmp$ sudo umount ext4
    dennis@lightning:/tmp$ debugfs -w ext4.img
    debugfs 1.42.5 (29-Jul-2012)
    debugfs: link test test/sub/loop
    ^D
    dennis@lightning:/tmp$ ls ext4/test/sub/loop/sub/loop/sub/loop/sub/loop/sub/loop/
    total 1
    drwxrwxr-x 2 dennis dennis 1024 mrt 26 12:15 sub

Notes:

- You cannot link directly to the parent, so foo/bar can't be a link to foo, hence the extra directory.
- You should not run debugfs on mounted filesystems. If you do, you will need to unmount/mount after making changes.
- Tools like find and ls still won't loop:

    dennis@lightning:/tmp$ find ext4
    ext4
    ext4/lost+found
    find: `ext4/lost+found': Permission denied
    ext4/test
    ext4/test/sub
    find: File system loop detected; `ext4/test/sub/loop' is part of the same file system loop as `ext4/test'.
How do I manually modify an inode?
I'm working on an assignment for my college course, and one of the questions asks for the command used to create a hard link from one file to another so that they point to the same inode. We were linked a .pdf file to refer to, but it doesn't explain said process. Is it any different from creating a standard hard link?
Hard links are not "between" the files; there is one inode, with more than one entry in various directories all pointing to that one inode. ls -i should show the inodes; then experiment around with ln (hard link) and ln -s (soft or symbolic link):

    $ touch afile
    $ ln -s afile symbolic
    $ ln afile bfile
    $ ls -1 -i afile symbolic bfile
    7602191 afile
    7602191 bfile
    7602204 symbolic
    $ readlink symbolic
    afile
    $
Two hard linked files share inode
Say I have a directory with these permissions: drwxrwx--- Inside this directory, a file with these permissions: -rw-rw-rw- Is the file readable/writable by everyone or not ? If not, how secure is this access restriction? What if a random user makes a link to my file inside his home directory? Could he access the file then? Or could he access the file by guessing its inode number and using some system calls on inodes?
Yes, a file in a directory is only accessible to users who have the execute permission on the directory. It's like leaving jewelry in an unlocked drawer inside a locked house: the jewelry is under lock.

A random user cannot create a hard link to the file, because making the link requires reaching the file through a path, and the directory permissions prevent that. If the file has multiple hard links, some of which are in a publicly accessible directory, then the file will be publicly accessible, but that has to be set up by the owner of the file. Anyone can create symbolic links that happen to point to the file, but that doesn't allow them to access it: symbolic links do not bypass permissions.

If the directory was world-executable at some point and there are processes that had the file or a parent directory open at the time you restricted the permissions on the directory, then those processes still have the file open afterwards. However, if they close it (or move out of the directory), they won't be able to reopen it (or change directory back in). Similarly, a setuid or setgid process may open the file or change to the directory and then drop its privileges. All of this requires the cooperation of the file or directory owner.

There is no way to open a file via its inode. The fact that this would allow bypassing restrictive permissions such as these is the main reason why such a feature doesn't exist.
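The locked-house effect can be seen by removing the execute bit on a directory you own (a sketch; as root, permission checks are bypassed, so run this as a normal user):

```shell
d=$(mktemp -d); cd "$d"
mkdir drawer
echo jewelry > drawer/file
chmod 666 drawer/file     # the file itself is readable/writable by everyone
chmod 600 drawer          # ...but without x on the drawer, no one can reach it

cat drawer/file 2>/dev/null || echo 'access denied'   # denied (unless run as root)

chmod 700 drawer          # restore traversal for the owner
cat drawer/file           # jewelry
```

The open() fails with EACCES on the directory component of the path, before the file's own generous mode bits are ever consulted.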
Is a -rw-rw-rw- file really inaccessible inside a drwxrwx--- directory?
I know how hard links and symlinks work, and I know why hard links can't be used for directories, but in this case, is it some kind of exception? For example:

    $ ls -al Documents
    total 8
    drwxr-xr-x  2 piotr piotr 4096 cze 28 11:19 .
    drwxrwx--- 17 piotr piotr 4096 lip  2 16:41 ..

. is a hard link to Documents itself and .. is a hard link to my home directory, so hey, it's illegal!
As someone said in a comment on the question, just because hard links to directories aren't permitted (i.e., by the ln command), does not mean they are not possible. The superuser can actually use the "-d" or "-F" option to the ln command to force the creation of a hard link to a directory (though the man page says it will "probably" fail due to filesystem restrictions - not sure what that's about, and I'm not going to try it on one of my own systems to see...). Hard links to directories are not permitted because they can create loops for programs that try to traverse the directory structure. In any directory, . and .. are hard links to that directory, and its parent, respectively - these are "well known" special cases and anything that tries to traverse the filesystem knows to account for that. But it is certainly technically possible to create a hard link to a directory if you're persistent - it's just not advisable.
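The special-case accounting is visible in the link count itself (a sketch; the counts shown hold on ext4/tmpfs, while btrfs always reports 1 for directories):

```shell
d=$(mktemp -d); cd "$d"
mkdir parent
stat -c %h parent    # 2: the entry "parent" plus "parent/."
mkdir parent/child
stat -c %h parent    # 3: "parent/child/.." now also points to it
mkdir parent/child2
stat -c %h parent    # 4: one more for each subdirectory's ".."
```

So an empty directory starts at 2, and every mkdir inside it adds exactly one link: the new child's "..". These are the only directory hard links the filesystem will create for you.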
Why . and .. are hard links to directories while in *nix systems hard links are not allowed for directories?
I am trying to write a shell script to link files from my dotfiles repo into my home folder. I want to use a hard link if possible, because it cannot be broken by moving the file somewhere else within the same filesystem as HOME. But if I clone the dotfiles onto another filesystem, I have to use a symlink instead. So, how do I create a hard link to a file if possible, and otherwise fall back to a symlink, in a shell script?
    ln source target 2>/dev/null || ln -s source target 2>/dev/null || exit 1

or, slightly more "interactively" (chattier),

    if ! ln source target 2>/dev/null; then
        echo 'failed to create hard link, trying symbolic link instead' >&2
        if ! ln -s source target 2>/dev/null; then
            echo 'that failed too, bailing out' >&2
            exit 1
        fi
    fi

Remove the redirections to /dev/null to see the error messages displayed by ln (if any).
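Wrapped up as a reusable function (the function name is mine, not from any standard tool), with one caveat worth noting for dotfiles scripts:

```shell
# Hard link if possible, otherwise fall back to a symlink (e.g. when the
# source is on another filesystem). Caveat: if "$1" is a relative path, the
# symlink fallback only resolves correctly when "$2" lands in the same
# directory, because symlink targets are interpreted relative to the link.
link_or_symlink() {
    ln -- "$1" "$2" 2>/dev/null || ln -s -- "$1" "$2"
}

d=$(mktemp -d); cd "$d"
echo hi > src
link_or_symlink src dst    # same filesystem here, so this makes a hard link
cat dst                    # hi
```

Passing absolute source paths (e.g. "$HOME/dotfiles/vimrc") sidesteps the relative-target caveat entirely.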
Create hard link if possible, else use symlink
1,541,016,132,000
I used cp -uav to update a copy of a git repo, including uncommitted files. Why did it say it's removing files? It looks like this:

    $ cp -uav repos copy
    removed 'copy/repos/h/.git/objects/e6/9de29bb2d1d6434b8b29ae775ad8c2e48c5391'
    removed 'copy/repos/h/.git/objects/3b/b3f834dd037db9298b10d71e0cd7383000fa1c'
    removed 'copy/repos/h/.git/objects/49/6d6428b9cf92981dc9495211e6e1120fb6f2ba'
    removed 'copy/repos/h/.git/objects/2b/bf350cea1fb4fd036235d7e6c36eb600e68885'
    $ rpm -q --whatprovides `which cp`
    coreutils-8.25-17.fc25.x86_64
I can reproduce the above messages as follows:

    mkdir test; cd test
    mkdir repos; cd repos
    mkdir g; cd g
    git init
    touch a
    git add a
    git commit -m test
    cd ..
    git clone g h
    cd ..
    mkdir copy
    cp -ua repos copy
    cp -uav repos copy

Running the cp -ua command under strace will show that it is indeed removing (unlink) the files it says. What's happened is that the objects in repos/h/.git/objects are hard links of the ones in repos/g/.git/objects. (In my original case, I was copying a repo which contained sub-repos, which were originally created as clones of the main repo.)

cp -a implies cp --preserve, which is documented as

    --preserve[=ATTR_LIST]
        preserve the specified attributes (default: mode,ownership,timestamps),
        if possible additional attributes: context, links, xattr, all

The unlink happens as part of hard-link preservation:

    linkat(AT_FDCWD, "copy/repos/g/.git/objects/2b/bf350cea1fb4fd036235d7e6c36eb600e68885", AT_FDCWD, "copy/repos/h/.git/objects/2b/bf350cea1fb4fd036235d7e6c36eb600e68885", 0) = -1 EEXIST (File exists)
    unlink("copy/repos/h/.git/objects/2b/bf350cea1fb4fd036235d7e6c36eb600e68885") = 0
    linkat(AT_FDCWD, "copy/repos/g/.git/objects/2b/bf350cea1fb4fd036235d7e6c36eb600e68885", AT_FDCWD, "copy/repos/h/.git/objects/2b/bf350cea1fb4fd036235d7e6c36eb600e68885", 0) = 0

As to exactly why it generates messages which confused me so? It seems like -u (--update) is not quite implemented in this code. It's mainly a performance optimization to avoid re-copying data unnecessarily; making hard links doesn't require copying any data.

We can see other scenarios in the documentation where cp must remove files as well:

    -f, --force
        if an existing destination file cannot be opened, remove it and try
        again (this option is ignored when the -n option is also used)

In the case of -f, I can understand that it might want to show the specific files that it has to "force". I suppose it might also be useful to show deletion in case cp is interrupted. Otherwise, users would be unlikely to realize that a file could have been deleted from the destination (as an intermediate step). The ultimate question is why it did not also show a message when it re-created the links, which would be less confusing. I suspect this is a quirk of the -u option.
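The hard-link preservation itself is easy to observe in a sandbox. This sketch (GNU stat assumed; the file names are invented) shows that after cp -a the two names in the copy still share one inode:

```shell
# Demonstrate that cp -a (--preserve=links) keeps hard links hard-linked.
d=$(mktemp -d)
cd "$d"
mkdir src
echo data > src/a
ln src/a src/b              # a and b: two names, one inode
cp -a src dst
links=$(stat -c %h dst/a)   # GNU stat: hard-link count of the copied file
echo "$links"
cd /
rm -rf "$d"
```

This prints 2: dst/a and dst/b are hard links to each other, just like the originals.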
Why did `cp -uav` of a git repo show "removed" for some files?
1,541,016,132,000
It is relatively common to use a "hard link tree" to create a second backup of a folder that is effectively just a copy of the files that have changed since the original backup. For example, rsync has a command line option --link-dest to achieve this. My question is whether there is an easy way to see the extra disk space used by such a "hard link tree". Because the hard links are the same files, simply running du on the new tree will show the total size of all the files, including those that are hard links and thus share the disk space of the original files.
After some more experimentation, it seems du is more "clever" than I expected. If you give it the two trees as arguments, it displays the size of the second tree relative to the first:

    du -sh backup-Jan backup-Feb
    242G    backup-Jan
    24G     backup-Feb

Whereas if you just give it the second tree, it shows the whole size:

    du -sh backup-Feb
    245G    backup-Feb

And if you give the arguments in reverse order, it does the expected thing and shows the full size for the newer backup and the relative size for the older backup:

    du -sh backup-Feb backup-Jan
    245G    backup-Feb
    21G     backup-Jan

I assumed this was going to be a much harder thing to find the answer for!
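A toy reproduction of this behaviour (GNU du assumed; the jan/feb directory names are invented): du charges hard-linked data only to the first tree that contains it when both trees appear in the same invocation.

```shell
# Show that du counts hard-linked data once across arguments.
d=$(mktemp -d)
cd "$d"
mkdir jan feb
dd if=/dev/zero of=jan/big bs=1024 count=1024 2>/dev/null   # ~1 MiB payload
ln jan/big feb/big                  # feb "backs up" jan via a hard link
sizes=$(du -s jan feb | cut -f1)
jan_kb=$(echo "$sizes" | sed -n 1p)
feb_kb=$(echo "$sizes" | sed -n 2p)
[ "$feb_kb" -lt "$jan_kb" ] && verdict="shared data counted once"
echo "$verdict"
cd /
rm -rf "$d"
```

feb's reported size is only its directory overhead, since the shared file was already charged to jan.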
Determine extra size of hard link tree
1,541,016,132,000
I noticed that the rule

    /usr/sbin/shutdown -- gen_context(system_u:object_r:shutdown_exec_t,s0)

labels /usr/sbin/shutdown shutdown_exec_t when /usr/sbin is a directory. But it doesn't restore the same label when /usr/sbin is a symbolic link to bin and shutdown is in /usr/bin. Why? If /usr/sbin is a symbolic link to bin, is there an easy way to give the files that are supposed to be in /usr/sbin the correct contexts? It looks like if an inode has 2 hard links to it and the paths have different default file contexts, the resulting file context will differ depending on which path is given to restorecon. What context will it have if I relabel the whole filesystem? Is it deterministic? Is it OK or good to do so?
restorecon doesn't handle symbolic links the same way it handles regular files. According to the manual page (a little old, so useful as a starting point):

    Note restorecon does not follow symbolic links.

This was observed in a bug report, Bug 825221: restorecon disregards custom rules for sym links, with these pertinent comments:

    Daniel Walsh 2012-05-29 13:54:13 EDT
    We just fixed this in F17.

    Daniel Walsh 2012-08-15 14:01:52 EDT
    restorecon is doing a realpath on the file, so it is translating the file
    via realpath. We are doing this so that a symbolic link attack will not
    cause the file to get mislabeled.

    Karel Srot 2012-08-23 06:14:05 EDT
    Just tested this on RHEL6.3 and Fedora 17. On RHEL6 restorecon doesn't
    restore the context of the symlink (when symlink is the actual item). On
    Fedora 17 the context is restored. I belive this is the problem mentioned
    in #c6. Dan, could you please confirm that this is what's is going to be
    fixed? Thank you.

    Miroslav Grepl 2012-08-23 09:59:41 EDT
    Ok, we discovered bug in the policy. We do not have the following rule on
    RHEL6

        allow setfiles_t file_type : lnk_file { read getattr relabelfrom relabelto } ;

    We have just

        allow setfiles_t file_type : lnk_file { getattr relabelfrom relabelto } ;

    So I am switching this bug to selinux-policy component.

Somewhat later (2015), in Some questions about using restorecon on symlink, Stephen Smalley commented:

    Yes, they resolve to different inodes (a symbolic link is its own
    file/inode in Linux, separate and independent of the target). The
    symbolic link SELinux label only controls access to the symlink (i.e.
    the ability to unlink, rename, or read it), not access to its target
    (which is controlled based on the target's label).

So... barring some further rule, the symbolic links are (mostly) irrelevant to the permissions on the target that you're concerned about.
How does restorecon handle links?
1,541,016,132,000
I am currently using Xamarin Studio, and this version has a bug: it adds 2 parameters to an executable, which floods the output with error messages, slowing down the build time from a minute to at least 10 minutes. Is there a way I can move the original executable and put a bash script or a link in its place which removes the 2 offending parameters? That way Xamarin would run the command as usual, but the 2 offending parameters wouldn't be passed to the original command.

Say the command is

    /usr/bin/ibtool --errors --warnings --notices --output-format xml1 --minimum-deployment-target 7.0 --target-device iphone --target-device ipad --auto-activate-custom-fonts --sdk iPhoneSimulator9.0.sdk --compilation-directory Main.storyboard

I'd like to:

Move ibtool to ibtool_orig.

Put a link or script in place of ibtool which removes the offending parameters and passes the rest along to ibtool_orig, giving me the following command:

    /usr/bin/ibtool_orig --errors --output-format xml1 --minimum-deployment-target 7.0 --target-device iphone --target-device ipad --auto-activate-custom-fonts --sdk iPhoneSimulator9.0.sdk --compilation-directory Main.storyboard

(notice that ibtool is now ibtool_orig and --warnings --notices are gone)

Any ideas?
The canonical way is a loop shaped like:

    #! /bin/sh -
    for i do                    # loop over the positional parameters
        case $i in
            --notices|--warnings) ;;
            *) set -- "$@" "$i" # append to the end of the positional parameter
                                # list if neither --notices nor --warnings
        esac
        shift                   # remove from the head of the positional parameter list
    done
    exec "${0}_orig" "$@"

You can also replace #! /bin/sh - with the ksh, zsh, yash or bash path and replace exec with exec -a "$0" so that ibtool_orig is passed /path/to/ibtool as argv[0] (which it might use in its error messages or to reexecute itself).
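To see the filtering in isolation, here is the same loop run standalone, with echo output standing in for the hypothetical ibtool_orig invocation:

```shell
# Filter --notices and --warnings out of a sample argument list.
set -- --errors --warnings --notices --output-format xml1
for i do
    case $i in
        --notices|--warnings) ;;   # drop these
        *) set -- "$@" "$i" ;;     # keep everything else
    esac
    shift
done
filtered="$*"
echo "$filtered"
```

The word list of the for loop is expanded once at loop entry, so appending and shifting inside the body safely rebuilds "$@"; this prints "--errors --output-format xml1".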
link to an executable and remove some parameters
1,541,016,132,000
I want a very simple example of how a hard link can break the file system structure. I saw somewhere that people say it's because of loops; however, I can make a loop with a soft link too, so I still want to know what makes hard links break the file system.
It's hard links to directories that can break the filesystem structure. Hard links to other types of files aren't a problem. For example:

    mkdir foo
    ln foo foo/self
    rmdir foo

rmdir foo doesn't actually remove the directory, since it has a remaining link: the self entry inside foo itself. foo has become detached from the filesystem; it can't be reached anymore, but it still exists. Forbidding hard links to directories prevents this problem. There are garbage-collected filesystems where a detached directory tree is automatically reclaimed, but they've never really taken off (it's a significant effort for hardly any benefit).

Another problem is with tools that traverse a directory tree. If hard links to directories are allowed, then these tools would have to remember all previously-seen directories and skip them, or else they would loop forever when encountering a directory which is a subdirectory of itself (or of a child of itself, etc.).

Loops with symbolic links aren't a problem. If the target of the symbolic link is removed, the existence of the symlink doesn't matter. In recursive traversals, symlinks aren't followed (unless explicitly requested).
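The symlink case from the last paragraph can be demonstrated safely: the traversal terminates because find does not follow the loop (the scratch names here are arbitrary).

```shell
# A symlink loop is harmless to a default recursive traversal.
d=$(mktemp -d)
mkdir "$d/foo"
ln -s . "$d/foo/self"         # foo/self points back at foo
count=$(find "$d" | wc -l)    # terminates: the dir, foo, and foo/self
echo "$count"
rm -rf "$d"
```

find lists exactly three entries and exits; with a hard-linked directory there would be no symlink for it to decline to follow, and the recursion would never end.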
Can a hard link break the file system structure?
1,541,016,132,000
On openSUSE Tumbleweed 20210606 with GNU/Linux kernel 5.12.9-1-default, I tried making a hard link of a file from /cust to ~/backup:

    df /cust && df ~/backup && ln -P /cust/customization.tar ~/backup/

and got this result with an error message:

    Filesystem     1K-blocks      Used Available Use% Mounted on
    /dev/sda3      706523136 158883972 546393196  23% /
    Filesystem     1K-blocks      Used Available Use% Mounted on
    /dev/sda3      706523136 158883972 546393196  23% /home
    ln: failed to create hard link '/home/luli/backup/customization.tar' => '/cust/customization.tar': Invalid cross-device link

Why does it say that going from /dev/sda3 to /dev/sda3 is cross-device, and where can I get more details? Thanks.
ln without options creates a hard link, as documented in the manual page for link, especially the section explaining the error EXDEV, which contains the remark:

    link() does not work across different mount points, even if the same
    filesystem is mounted on both

Although I realize that the paragraph below does not address the problem, I won't remove it from my answer. It might still be useful for some readers.

A hard link points to an inode number in the same filesystem and can therefore not be created across filesystems. You can use a symbolic link instead (-s option).
about ln command : condition of cross-device
1,541,016,132,000
A follow-up from this question. My further reading on Docker storage drivers revealed that the overlay driver merges all the image layers into lower layers using a hard link implementation, which causes excessive inode utilization. Can someone explain this? As far as I know, creating hard links does not create a new inode.
OverlayFS is a union filesystem, and there are two storage drivers at the Docker level that make use of it: the original/older version named overlay and the newer version named overlay2.

In OverlayFS, there is a lower-level directory which is exposed as read-only. On top of this directory is the upper-level directory, which allows read-write access. Each of these directories is called a layer. The combined view of both the lower-level and upper-level directories is presented as a single unit, called the 'merged' directory.

The newer overlay2 storage driver natively supports up to 128 such layers. The older overlay driver is only able to work with two layers at a time. Since most Docker images are built using multiple layers, this limitation is fairly significant. To work around this limitation, each layer is implemented as a separate directory that simulates a complete image.

To examine the differences on my test system, I pulled the 'ubuntu' image from Docker Hub and examined the differences in directory structure between the overlay2 and overlay drivers:

    [root@testvm1 overlay2]$ ls */diff
    4864f14e58c1d6d5e7904449882b9369c0c0d5e1347b8d6faa7f40dafcc9d231/diff:
    run

    4abcfa714b4de6a7f1dd092070b1e109e8650a7a9f9900b6d4c3a7ca441b8780/diff:
    var

    a58c4e78232ff36b2903ecaab2ec288a092e6fc55a694e5e2d7822bf98d2c214/diff:
    bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

    c3f1a237c46ed330a2fd05ab2a0b6dcc17ad08686bd8dc49ecfada8d85b93a00/diff:
    etc  sbin  usr  var

    [root@testvm1 overlay]# ls */root/
    001311c618ad7b94d4dc9586f26e421906e7ebf5c28996463a355abcdcd501bf/root/:
    bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

    048f81f400f7d74f969c4fdaff6553c782d12c04890ad869d75313505c868fbc/root/:
    bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

    8060f0c647f24050e1a4bff71096ffdf9665bff26e6187add87ecb8a18532af9/root/:
    bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

    fbdef944657234468ee55b12c7910aa495d13936417f9eb905cdc39a40fb5361/root/:
    bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var

In the overlay representation, each layer simulates a complete image, while the overlay2 layers only contain the exact differences between layers. In the overlay driver's approach, hard links are used as a way to save space between the different layers. However, this method is still not perfect, and new inodes are required when the image data contains special files such as symbolic links and character devices. This can quickly add up to a large number of inodes. The inode distribution between the overlay2 and overlay drivers on my test system is shown below:

    [root@testvm1 overlay2]$ du --inodes -s *
    8       4864f14e58c1d6d5e7904449882b9369c0c0d5e1347b8d6faa7f40dafcc9d231
    27      4abcfa714b4de6a7f1dd092070b1e109e8650a7a9f9900b6d4c3a7ca441b8780
    3311    a58c4e78232ff36b2903ecaab2ec288a092e6fc55a694e5e2d7822bf98d2c214
    1       backingFsBlockDev
    25      c3f1a237c46ed330a2fd05ab2a0b6dcc17ad08686bd8dc49ecfada8d85b93a00
    5       l

    [root@testvm1 overlay]# du --inodes -s *
    3298    001311c618ad7b94d4dc9586f26e421906e7ebf5c28996463a355abcdcd501bf
    783     048f81f400f7d74f969c4fdaff6553c782d12c04890ad869d75313505c868fbc
    768     8060f0c647f24050e1a4bff71096ffdf9665bff26e6187add87ecb8a18532af9
    765     fbdef944657234468ee55b12c7910aa495d13936417f9eb905cdc39a40fb5361

The total count of inodes with overlay2 comes to 3378 on my system. Using overlay, this count goes up to 5615. This value is for a single image with no containers running, so a large system with a number of Docker containers and images could quickly hit the inode limit imposed by the backing filesystem (XFS or EXT4, where the /var/lib/docker/overlay directory is located). For this reason, the newer overlay2 storage driver is currently the recommended option for most new installations. The overlay driver is deprecated as of Docker v18.09 and is expected to be removed in a future release.
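The inode accounting in the question boils down to this: a hard link reuses the target's inode, while a symlink always needs one of its own. A reduced sketch (GNU stat assumed; file names invented):

```shell
# Hard link shares the inode; symlink allocates a new one.
d=$(mktemp -d)
cd "$d"
echo payload > file
ln file hard
ln -s file soft
[ "$(stat -c %i file)" = "$(stat -c %i hard)" ] && hard_res="same inode"
[ "$(stat -c %i file)" != "$(stat -c %i soft)" ] && soft_res="new inode"
echo "hard: $hard_res, soft: $soft_res"
cd /
rm -rf "$d"
```

This prints "hard: same inode, soft: new inode", which is exactly why an image full of symlinks and device nodes costs the overlay driver extra inodes even though regular file data is deduplicated via hard links.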
Overlay storage driver internals
1,541,016,132,000
I am searching for a simple way to perform a full system scan using clamav on a machine that also has Timeshift-based snapshotting enabled. As suggested by this answer on the Ubuntu site, I was using a command like:

    clamscan -r --bell -i --exclude-dir="^/sys" /

(note: the --exclude-dir="^/sys" param was suggested to me by another user who pointed out that /sys is a virtual directory and probably best excluded from scans to avoid possible read-access errors)

The command works as expected: 'check all files on the computer, but only display infected files and ring a bell when found'. This has an evident problem: "all files on the computer" includes the "/timeshift" directory, which is the directory Timeshift uses to store snapshot data. Now, taken from Timeshift's official page:

    In RSYNC mode, snapshots are taken using rsync and hard-links. Common
    files are shared between snapshots which saves disk space. Each snapshot
    is a full system backup that can be browsed with a file manager.

To put it simply: Timeshift duplicates the changed files, and uses hard links to reference the unchanged ones. As far as my understanding goes, this means that the "first" snapshot is probably a full copy of the filesystem (obviously excluding any file/path that Timeshift is configured to ignore), while any following snapshot only includes the changed files and mere links to the unchanged ones.

The problem: under standard settings, clamscan will also scan EVERY file in the /timeshift folder. While I am fine with scanning the changed files, which are actual real files... scanning the links seems a waste, since it basically means scanning the same file multiple times: once for the snapshot the file was first changed for, and then once for each link to the unchanged file in the following snapshots. I am therefore searching for a simple way to exclude those hard links from the scan.

man clamscan shows the existence of a --follow-file-symlinks option, but even doing

    clamscan -r --bell -i --exclude-dir="^/sys" --follow-file-symlinks=0 /

doesn't seem to work. After all, as far as my understanding goes, that option only affects symlinks, while Timeshift is using hard links.

So, my question is: is there any way to perform a full system scan while skipping the hard-linked files in the /timeshift directory, while at the same time scanning the real ones? (as a bonus side-question: would the same be possible using the clamtk UI too?)
Hard links to files are indistinguishable from what you call "real files": a hard link is actually the same file, just listed in two directories. The best solution in your case would be simply to add one more --exclude-dir="^/timeshift" parameter to the clamscan command.
Using clamav efficiently when timeshift snapshots are present
1,541,016,132,000
How can I achieve this files & folders permissions scenario?

Consider these folders:

    Folder A:   640 root apache /var/www/A/
    Folder www: 640 root apache /var/www/

and this linux user:

    id user1: uid=1000(user1) gid=1000(user1) groups=1000(user1)

I want to allow the linux user user1 read/write access JUST to folder A, BUT without changing folder A's owner or group. I have tried these scenarios, but none of them was desirable:

Add user1 to group apache. Cons: user1 will be able to read other files at /var/www/.

Hard-link a folder (e.g. folder B) in user1's home to /var/www/A/ and set proper permissions on folder B rather than A. Cons: hard links to directories are not possible, and hard links don't work across devices.

Add user1 to sudoers. Cons: complexity for user1; plus, user1 may break the owner/group policy and/or permissions of folder A by human error.

Any idea?
I'm assuming you want to give user1 access to /var/www/A because you visit the content managed by user1 via http://your.domain/A? Why not have apache serve the content from a directory under the user's home directory?

    Alias /A/ /path/to/users/homedir/A

This will achieve the result without having to change group/user ownership/permissions of /path/to/users/homedir/A.

However, if you want to do it by file permissions, you will only be able to achieve it by changing ownership/permissions of /var/www and /var/www/A:

Create a 'new group' (most unixes provide a groupadd command), and add both apache and user1 to it.

Change the group ownership of /var/www/A to this 'new group' (with either chown or chgrp).

Set the permissions to disallow rw access for 'new group' and 'other users' to the /var/www directory (chmod 711 /var/www) and give access to the 'new group' for /var/www/A (chmod g+rwx /var/www/A).

References: Apache Alias Directive
FS permission scenario
1,541,016,132,000
I'm transitioning a large fileset from a filesystem with a high _PC_LINK_MAX (maximum number of hardlinks per inode) to a lower one. In particular, I'm messing about with Amazon EFS, which has a maximum of 175, as stated here. So I'd like to have the input be a set of files with link counts as high as 250 rejiggered, so that the inodes get split, so that the max is 100 links each. Is there a clever invocation of, say, hardlink that can do this? Or perhaps an option to rsync -aH or maybe cp -a which can help? ...otherwise, some hackery is in order...
The situation is tricky. Imagine the maximum links is 5 and you have 12 files a01 to a12 all hard-linked together. You need to split out a01..a05, a06..a10 and a11..a12, where a06 and a07 etc. are still hard-linked together, but not to a01.

Here's a bash script using rsync that runs on an example source directory (src=/usr/libexec/git-core/) on my system, which has 110 hard links. It simulates a maximum number of 50 links (max) in the destination directory realdest by the function sim. In a real case you would just ignore the "too many links" errors, and not use this function.

After the initial normal rsync (with errors), the list of missing files is created by using rsync -ni, extracting the filenames in function calctodo into /tmp/todo.

There is then a loop where we rsync the missing files, again ignoring "too many links" errors (which you would have if you had more than 2*175 links in the original directory). The successfully created files are hard-linked amongst themselves. The new list of missing files is calculated. This is repeated until there are no more files.

    src=/usr/libexec/git-core/
    realdest=/tmp/realdest
    #rm -fr "$realdest"
    max=50

    sim(){
        find ${1?} -links +$max |
        sed "1,${max}d" |
        xargs --no-run-if-empty rm
    }

    calctodo(){
        sed -n '/^hf/{ s/[^ ]* //; s/ =>.*//; p }' >/tmp/todo
    }

    rsync -aHR "$src" "$realdest"; sim "$realdest"
    rsync -niaHR "$src" "$realdest" | calctodo

    while [ -s /tmp/todo ]
    do  mv /tmp/todo /tmp/todo.old
        rsync -aHR --files-from=/tmp/todo.old / "$realdest"; sim "$realdest"
        rsync -niaHR --files-from=/tmp/todo.old / "$realdest" | calctodo
    done

You may need to revise this if you have filenames with " => ", newlines and so on.

Note: you can find the maximum number of links supported by a filesystem with

    getconf LINK_MAX /some/directory
handy script to reduce hardlink count?
1,532,588,432,000
As we all know, the ln command creates a link, with the default being a hard link and the -s option creating a symlink. The general syntax is ln [-s] OLD NEW, where OLD is the file you are linking to and NEW is the new file you are creating. Hard links cannot be created for directories, as a hard link could be created between folders nested inside each other, & I suppose computers do not yet have the resources to check for this without a SERIOUS slowdown.

When creating the link, the paths of both files must be written out, and can be absolute or relative. You can mix relative & absolute filepaths, i.e. have a relative path for the new file/folder & an absolute path for the old one. When creating a hard link with a relative path, the paths of both files are relative to the current folder, while for a symbolic link the path of the linked-to file/folder is relative to the link's parent folder, but the path of the new file/folder is relative to the current folder. Why this is so is "relative" to my question.

For example, say we are in the HOME folder, /home/user, also known as ~, and create 2 folders, new and new2, with the file file in the folder new. If we try

    ln -s new/file new2/file

the result is a broken link from ~/new2/file to the currently nonexistent ~/new2/new/file. However, if we instead run

    ln -s ../new/file new2/file

we get the expected result, which is a link from ~/new2/file to ~/new/file.

So, my question: Why is the file path for the OLD file/folder of a symlink relative to its parent, while the other 3 paths (hard link OLD & NEW files, symlink NEW file/folder) are relative to the current folder?

All this is on Fedora, but I'm sure it applies to most UNIX-based OS's.

EDIT: E Carter Young seems to hit the nail on the head with regard to my 2nd question (as well as my 1st question, which was wrong anyway). It seems that for a symbolic link, the target doesn't have to exist yet, so the system has to make its path relative to the link rather than the current directory. However, why can't the shell parse out that path when you're running the command, rather than forcing the user to figure out what the path is & enter it him/herself? The shell seems to parse pretty well, so is this a case of legacy issues? Performance issues? What?
Read your man page.

Question 1 = 1st form. This is because in linux all items are considered files, even directories. As an example, use your text editor to "open" /etc/, i.e.:

    nano -w /etc/

nano will politely tell you /etc/ is a directory.

It's technically legal to create never-ending symlinks. In the old days, before the bounds check was written, I could have an FHS system with 2 files named /etc, one being a file and one being a directory, and the system knew the difference. (See the haha note in the chromiumos developer guide:

    There is a file system loop because inside ~/trunk you will find the
    chroot again. Don't think about this for too long. If you try to use
    du -s ${HOME}/chromiumos/chroot/home you might get a message about a
    corrupted file system. This is nothing to worry about, and just means
    that your computer doesn't understand this loop either. (If you can
    understand this loop, try something harder.)

I dare you, click on "something harder" :) ) In order to prevent the looping, ln requires the full path.

Question 2 can be answered by reading the man page again. Look at the last sentence:

    DESCRIPTION
        In the 1st form, create a link to TARGET with the name LINK_NAME. In
        the 2nd form, create a link to TARGET in the current directory. In
        the 3rd and 4th forms, create links to each TARGET in DIRECTORY.
        Create hard links by default, symbolic links with --symbolic. By
        default, each destination (name of new link) should not already
        exist. When creating hard links, each TARGET must exist. Symbolic
        links can hold arbitrary text; if later resolved, a relative link is
        interpreted in relation to its parent directory.

Re: Edit: "However, why can't the shell parse out that path when you're running the command, rather than forcing the user to figure out what the path is & enter it him/herself?"

Consider this example: Application A installs Library version 1.0.a. You build applications X, Y, Z that depend on Library A. Application A finds a bug, updates it and saves the library as 1.0.1.2.a. Since applications X, Y, and Z still use library 1.0, if I replace 1.0 directly w/ 1.0.1.2, I'll get breakage; but if I symbolically link version 1.0.1.2 to version 1.0, nothing breaks:

    ln -s /usr/lib64/libfoo-1.0.1.2.a /usr/lib64/libfoo-1.0.a

and applications X, Y, and Z get the new bugfix from the library applied to them too, because the shell follows the link from 1.0 to 1.0.1.2 but calls it 1.0. In cases like this you don't want the shell assuming the path, as that would increase the chance of system-wide breakage.

BTW, on 64-bit systems /usr/lib is linked to /usr/lib64, to remedy the example I just gave on a large scale, i.e. 32-bit applications expect libraries to be installed at /usr/lib, and on a 64-bit system there are no pure 32-bit libraries, so /usr/lib is linked to /usr/lib64 like so:

    ln -s /usr/lib64 /usr/lib
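The library-versioning trick above can be sketched in a scratch directory (libfoo and its contents are invented for illustration):

```shell
# Old name resolves to new file through the symlink.
d=$(mktemp -d)
cd "$d"
echo "bugfixed code" > libfoo-1.0.1.2.a
ln -s libfoo-1.0.1.2.a libfoo-1.0.a    # old name now points at the new file
resolved=$(cat libfoo-1.0.a)           # opening the link opens the target
echo "$resolved"
cd /
rm -rf "$d"
```

Anything that opens libfoo-1.0.a transparently gets the 1.0.1.2 contents.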
Seemingly Inconsistent Behavior for "ln" & "ln -s"
1,532,588,432,000
Imagine I have a file something/a.txt, which I hardlink from b.txt. Now, if I cp b.txt c.txt, is c.txt a hard link to a.txt, or is it a copy of the contents of a.txt?
Hard links are a completely different concept from other kinds of links or references. A hard link is another name for the same inode (a bit simplified: the file contents and metadata). E.g. if you hard-link a.txt from b.txt, both names a.txt and b.txt are equal names for the same file. After hard-linking, you cannot distinguish anymore whether a.txt or b.txt was the original file name. Both names point to the same file. That means cp b.txt c.txt will copy the file contents exactly as if you did cp a.txt c.txt.
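This is easy to verify (GNU stat assumed for the %h link-count format): the copy is a fresh, independent file, and writing to it does not touch the originals.

```shell
# Copying one name of a hard-linked pair yields an independent file.
d=$(mktemp -d)
cd "$d"
echo hello > a.txt
ln a.txt b.txt
cp b.txt c.txt
ab_links=$(stat -c %h a.txt)   # still 2: a.txt and b.txt share an inode
c_links=$(stat -c %h c.txt)    # 1: c.txt is a fresh file with its own inode
echo changed > c.txt
a_content=$(cat a.txt)         # untouched by writing to c.txt
echo "$ab_links $c_links $a_content"
cd /
rm -rf "$d"
```

This prints "2 1 hello".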
What happens when you copy a hardlink?
1,532,588,432,000
I'm using rsync to backup some of my files: rsync -aEN --delete --link-dest="$CURR/" "$SOURCE/" "$NEW/" The --link-dest option works fine with most files, but not with symlinks. When I was writing a clean-up script for old backups, I noticed that unchanged symlinks are not hard-linked, but rather copied. Now I'm wondering: Is there a way to make rsync hard-link unchanged symlinks as well? And if not: Is it intentional or a bug in rsync? I'm using rsync version 3.1.1 on Mac OS 10.11. Edit: It seems to be a problem in Mac OS X. For some reason HFS+ seems not to support hard-links to symlinks.
The filesystem on macOS (HFS+) does not support hard links to symbolic links:

    $ touch file
    $ ls -l file
    -rw-r--r-- 1 kk staff 0 Jun 17 18:35 file
    $ ln -s file slink
    $ ls -l file slink
    -rw-r--r-- 1 kk staff 0 Jun 17 18:35 file
    lrwxr-xr-x 1 kk staff 4 Jun 17 18:36 slink -> file

The following would ordinarily create a hard link to a symbolic link, and is even documented in the ln manual on macOS to do so (EDIT: no it isn't, unless you have GNU coreutils installed and read the wrong manual, doh!):

    $ ln -P slink hlink
    $ ls -l file slink hlink
    -rw-r--r-- 1 kk staff 0 Jun 17 18:35 file
    lrwxr-xr-x 1 kk staff 4 Jun 17 18:38 hlink -> file
    lrwxr-xr-x 1 kk staff 4 Jun 17 18:36 slink -> file

You can see by the ref count (1) that no new name was created for slink (it would have been 2 for both slink and hlink if it had worked). Also, stat tells us that hlink is a symbolic link with 1 inode link (not 2):

    $ stat hlink
      File: 'hlink' -> 'file'
      Size: 4           Blocks: 8          IO Block: 4096   symbolic link
    Device: 1000004h/16777220d      Inode: 83828644    Links: 1
    Access: (0755/lrwxr-xr-x)  Uid: ( 501/  kk)   Gid: (  20/ staff)
    Access: 2016-06-17 18:38:18.000000000 +0200
    Modify: 2016-06-17 18:38:18.000000000 +0200
    Change: 2016-06-17 18:38:18.000000000 +0200
     Birth: 2016-06-17 18:38:18.000000000 +0200

EDIT: Since I was caught using GNU coreutils, here are the tests again with /bin/ln on macOS:

    $ touch file
    $ /bin/ln -s file slink
    $ /bin/ln slink hlink       # there is no option corresponding to GNU's -P
    $ ls -l file slink hlink
    -rw-r--r-- 2 kk staff 0 Jun 17 18:59 file
    -rw-r--r-- 2 kk staff 0 Jun 17 18:59 hlink
    lrwxr-xr-x 1 kk staff 4 Jun 17 18:59 slink -> file

The hard link is pointing to file rather than to slink. On e.g. Linux and OpenBSD (the other OSes I use), it is possible to do this, which results in

    $ ls -l file slink hlink
    -rw-rw-r-- 1 kk kk 0 Jun 17 18:35 file
    lrwxrwxrwx 2 kk kk 4 Jun 17 18:43 hlink -> file
    lrwxrwxrwx 2 kk kk 4 Jun 17 18:43 slink -> file

(notice the "2")
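For comparison, the same experiment runs to completion on Linux with GNU coreutils, where ln -P really does create a second name for the symlink's own inode:

```shell
# On Linux, hard-link the symlink itself (not its target) with GNU ln -P.
d=$(mktemp -d)
cd "$d"
touch file
ln -s file slink
ln -P slink hlink
info=$(stat -c '%h %F' hlink)  # link count and file type of the link itself
echo "$info"
cd /
rm -rf "$d"
```

This prints "2 symbolic link": hlink and slink are two names for one symlink inode, which is exactly what fails on HFS+ and is why rsync --link-dest has to copy symlinks instead of hard-linking them there.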
rsync --link-dest not working as expected with symlinks
1,532,588,432,000
I understand why hard links on directories are dangerous (loops, problems for rmdir because of the parent-directory link) and have read the other questions on that topic. And so I assumed that hard links on directories, apart from . and .., are not used. And yet I see the following on CentOS 5 & 6:

    # ls -id /etc/init.d/
    459259 /etc/init.d/
    # ls -id /etc/rc.d/init.d/
    459259 /etc/rc.d/init.d/
    # ls -id /etc/init.d/../
    458798 /etc/init.d/../
    # ls -id /etc/rc.d/
    458798 /etc/rc.d/
    # ls -id /etc/
    425985 /etc/

In other words, 2 different paths to directories point to the same inode, and the parent of /etc/init.d/ points to /etc/rc.d/ instead of /etc/. Is this really a case of hard-linked directories? If not, what is it? If yes, why does Red Hat do that?

Edit: I'm sorry for asking a stupid question, I should have been able to see that it's a symlink. Not enough coffee today, it seems.
That's a soft link (symlink), not a hard link. Symbolic links point to other files. Opening a symbolic link will open the file that the link points to. Removing a symbolic link with rm will remove the symbolic link itself, but not the actual file. This is indicated by the letter l at the beginning of the permissions:

lrwxrwxrwx. 1 root root 11 Aug 10  2016 init.d -> rc.d/init.d

Also, all the rc0.d to rc6.d directories are symlinks to the corresponding rc.d/rc0.d to rc.d/rc6.d.
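The distinction above is easy to check for yourself. This is a minimal sketch (directory and file names are made up for the demo; GNU coreutils stat/readlink assumed): a hard link shares its target's inode and raises the link count, while a symlink is its own file of type "symbolic link".

```shell
mkdir -p linkdemo
echo hi > linkdemo/target
ln linkdemo/target linkdemo/hard    # hard link: same inode, link count 2
ln -s target linkdemo/soft          # symlink: its own inode, type 'l'
stat -c '%h %F' linkdemo/hard       # prints: 2 regular file
stat -c '%h %F' linkdemo/soft       # prints: 1 symbolic link
readlink linkdemo/soft              # prints: target
```

Note that stat (without -L) reports on the link itself, which is why the symlink shows as its own object here.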
Is /etc/init.d hard-linked on CentOS?
1,532,588,432,000
I want, only using "basic" commands (for maximum portability) (i.e., something that would work on AIX / Linux / etc., not just something using a recent nicety ^^), to find all the files (symlinks, hardlinks and combinations thereof) pointing to a specific file/dir. Be careful to not rush to answer find / -ls | grep .... : it will miss many cases. See my links below, mixing hardlinks, symlinks, relative-and-absolute paths (and also play with symlinks "././././." possibilities) ie hardlinks and symlinks can be "nested", ad-inifinitum... some symlinks could be with the full path, others with a relative path, those paths (relative or absolute) could be very complex (ex: /tmp/././../etc/file ) a symlink could lead to a file, which hardlink to another, which is a symlink to a third, which ends up [after some more iteration] to the final destination... In the end, I "just" need to find out a way to know what is the "final destination" of any file/link (ie, which inode will be accessed in the end?). But it's really tough (unless some magical function will tell me "the final destination inode is : ...". That's what I need!) I thought I could simply use '-H' or '-L' options of find, but (I'm probably dumb...) it didn't work... yet. Any info welcomed (but please, using find/ls/etc, not some "nice utility only available on linux") Try to create some different links to the "/tmp/A" directory, and find a way to find and list them all: mkdir /tmp/A /tmp/B ln -s /tmp/A/ /tmp/B/D #absolute symlink, with a training "/" ln -s ../A /tmp/B/E #relative symlink ln -s /tmp/A/. /home/F #absolute symlink, with a trailing "/." ln -s ../tmp/A/./. /var/L #let's get craaaaaaazy! symlinks are "fun"... ln /tmp/B/D /etc/G #Hard-link to the symlink file /tmp/B/D, which itself symlink to /tmp/A/ witch ends up being the same as "/tmp/A" in most cases. 
ln /etc/G /etc/G2 #and a duplicate of the previous one, just to be sure [so /etc/G, /etc/G2, and /tmp/B/D have same inode, but it's a different inode from the final destination... :( ln -s etc/G /thatsenough #symlink again, for further chaining and then some tries: find -H / -ls # show inodes of the links and symlinks, not of the "final" directory find -L / -ls # same thing (I do try that on an OLD AIX ...) find -H / -inum 123456 -ls #if /tmp/A/. has inode 123456. This lists *only* hardlinks to /tmp/A, ie will only print a subset (and /tmp/A have to be a file in that case) I expected to see the final inode (123456) in front of all the paths, in one of those invocation (I also added '-follow' to both), but I always see the inodes of the links, not of the "final destination" (i.e., /tmp/A) What could I use to find out the "final inode" I end up accessing? [the OS manages it, but can a "simple" command tell me beforehand "through that file, you will open that final inode!"?]
ls has a -L option that will effectively chase symlinks and show the perms, owner, inode, etc. of the ultimate object. [It does this by doing stat(target) instead of lstat(target)]. For best results, run as root or as someone who has read access to the relevant mounted filesystems. So in your case, try the following:

find / -exec ls -iLd {} + | grep inodenum

[copied from my comment, per request of the OP.]
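The "final destination inode" the question asks about can also be read directly with stat -L, which follows the whole symlink chain the way the OS would on open. A small sketch (paths invented for the demo; GNU coreutils assumed):

```shell
mkdir -p inodemo
echo data > inodemo/real
ln -s real inodemo/s1
ln -s s1 inodemo/s2        # chain: s2 -> s1 -> real
stat -c %i inodemo/s2      # the symlink's own inode (lstat)
stat -Lc %i inodemo/s2     # the final destination's inode, same as inodemo/real
readlink -f inodemo/s2     # canonical path of the final target
```

readlink -f gives the resolved path, stat -Lc %i the resolved inode; together they answer "which inode will be accessed in the end" for arbitrarily nested links.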
thoroughly find all links (hard and symlinks, and any combination thereof) leading to a file/dir [duplicate]
1,532,588,432,000
I've seen people on the internet talk about using hardlinks to force files to stay on disk as a backup, even if they're deleted in their original location. Would this work for a directory too? Why or why not? Assume that I'm using an ext4 filesystem, if it matters, but I'd also be interested in answers for other (UNIX/inode-based) filesystems, especially btrfs.
It would not work. A hard link does not preserve the contents of files, just the pointer to those contents. So in the case of files, file modifications are not preserved, and for directories that means changes in the contents of directories would not be preserved either, as (down under) each file is deleted individually. Even if you could hard link a directory, it would just be empty afterwards all the same. Hard links are usually disallowed for directories in the first place. Symlinks for directories are already problematic; there are hacks in place to prevent an infinite symlink loop from being followed too deep. At least symlinks are easily identified and simply ignored; most programs that walk directory trees (such as find) ignore them completely (never follow them) by default. Hard-linked directories would be harder to detect and keep track of, as you cannot tell which one you already visited; you'd have to check for each directory whether it's one of the already visited ones. Most programs don't do this, as they simply expect that by convention this thing does not exist in the first place. If you still need to hard link directories for some reason, there is something that does something very similar, and that's mount --bind olddir newdir. Bind mounts do not have some of the pitfalls, e.g. no infinite structures, as the mount is locked to one place and does not repeat unto itself. In exchange it has others (other submounts do not appear in this tree either, which is a great feature if you're looking for files hidden by other mounts). There is no preservation of contents in either case; for that you always need a real copy.
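For plain files, though, a hard link really does keep the data alive after the original name is deleted, which is the behavior the question was hoping to extend to directories. A quick sketch (names made up for the demo):

```shell
mkdir -p preserve
echo important > preserve/orig
ln preserve/orig preserve/backup   # second name for the same inode
rm preserve/orig                   # removes one name; the data survives
cat preserve/backup                # prints: important
```

The inode (and its data blocks) are only freed once the last name and the last open file descriptor are gone; that is exactly why this trick works per file but cannot be lifted to a directory, whose entries are each deleted individually.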
If I hardlink to a directory, will the contents be "preserved" as if I hard-linked to every file?
1,532,588,432,000
I've tried setting up a script to hardlink my files to my box.com account (as it's a backup of my music library). As I want to run it automatically to sync my music across several devices, I wanted to use rsync as I'm on Mac OS X 10.7.4 (if anyone cares). The script I came up with however only copies the files instead of hardlinking them (the available disk space lowers when I start the script). What I'm trying to achieve is the box.com app syncing something outside its actual folder. This is the script I use:

rsync -azluPhmt --progress --link-dest=./iTunes ./iTunes /Users/admin/Box.com/iTunes --delete-during --exclude="*Album Artwork*"
According to the rsync man page section for --link-dest=DIR:

If DIR is a relative path, it is relative to the destination directory.

I am guessing that you assumed it would be relative to the current working directory. You probably meant to write:

rsync -azluPhmt --progress --link-dest="$PWD/iTunes" …
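A quick way to confirm whether a --link-dest run actually hard-linked (rather than copied) is to compare inode numbers between the two backup trees. This sketch simulates the link rsync would create with ln so it stands alone (directory and file names invented; GNU coreutils stat assumed):

```shell
mkdir -p backup.0 backup.1
echo song > backup.0/track.mp3
# For an unchanged file, rsync --link-dest creates this same kind of link:
ln backup.0/track.mp3 backup.1/track.mp3
[ "$(stat -c %i backup.0/track.mp3)" = "$(stat -c %i backup.1/track.mp3)" ] &&
  echo "hard-linked: no extra space used"
```

If the inode numbers differ after your script runs, the files were copied, which is what a mis-resolved relative --link-dest path silently produces.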
rsync hardlink attempt copies
1,532,588,432,000
This is regarding generic UFS. From what I understand when an absolute path is given (eg: /home/userU/file.txt) the disk is accessed for each directory and the file. Hence in this case the disk is accessed 4 times 1 For /, 1 for home/, 1 for /userU, 1 for file.txt My questions are If a hard link /hL is given, pointing to the inode of the above file, in what order is the disk accessed? If a soft link /sL is given, pointing to the above file, in what order is the disk accessed? Assume that no inodes or any other data are cached initially in all three cases.
Background

Say we have the following directory setup:

$ ll
total 0
-rw-r--r-- 2 root root 0 Jul 29 23:36 afile.txt
-rw-r--r-- 2 root root 0 Jul 29 23:36 hL
lrwxrwxrwx 1 root root 9 Jul 30 01:22 sL -> afile.txt

Now let's look at your 2 questions.

Questions

If a hard link /hL is given, pointing to the inode of the above file, in what order is the disk accessed?

Hard links possess the same inode reference as the original file/directory that they're pointing to. So there is no additional HDD access to read them. For example:

$ stat hL | head -3
  File: ‘hL’
  Size: 0          Blocks: 0          IO Block: 4096   regular empty file
Device: fd00h/64768d    Inode: 667668      Links: 2

vs.

$ stat afile.txt | head -3
  File: ‘afile.txt’
  Size: 0          Blocks: 0          IO Block: 4096   regular empty file
Device: fd00h/64768d    Inode: 667668      Links: 2

The only difference between these 2 is the name. So either path will incur the same number of HDD accesses.

If a soft link /sL is given, pointing to the above file, in what order is the disk accessed?

With soft links, however, there is an additional HDD access. This additional access would be against the directory's metadata where the file sL resides. This would then return details stating that this file is in fact a symbolic link, and that it's pointing to another file/directory. For example:

$ stat sL | head -3
  File: ‘sL’ -> ‘afile.txt’
  Size: 9          Blocks: 0          IO Block: 4096   symbolic link
Device: fd00h/64768d    Inode: 681295      Links: 1

Here we can see it's of type 'symbolic link' and it's pointing to afile.txt. Notice too that it has a different inode (681295 vs. 667668), further proof that it's going to cost an additional read.

So what are the read orders?

If you use strace against the Bash shell itself where you're running commands against these files/directories, you can get an idea of how things work.
Here's output from the command more /tmp/adir/hL:

[pid 18098] stat("/tmp/adir/hL", {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
[pid 18098] open("/tmp/adir/hL", O_RDONLY) = 3
[pid 18098] fstat(3, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0

For /tmp/adir/hL: stat/open (/) → stat/open (tmp) → stat/open (adir) → stat/open (hL)
For /tmp/adir/sL: stat/open (/) → stat/open (tmp) → stat/open (adir) → stat/open (sL) → stat/open (afile.txt)

Further details

The Wikipedia page on symbolic links alludes to all this as well: Although storing the link value inside the inode saves a disk block and a disk read, the operating system still needs to parse the path name in the link, which always requires reading additional inodes and generally requires reading other, and potentially many, directories, processing both the list of files and the inodes of each of them until it finds a match with the link's path components. Only when a link points to a file in the same directory do "fast symlinks" provide significantly better performance than other symlinks.

References

Symbolic Links - Wikipedia
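The hL vs. sL setup from the answer can be rebuilt locally to see the inode difference that causes the extra read. A sketch (built in a local directory rather than /tmp/adir, so the paths here are the demo's own; GNU stat assumed):

```shell
mkdir -p walk/adir
echo x > walk/adir/afile.txt
ln walk/adir/afile.txt walk/adir/hL     # hard link: same inode as the file
ln -s afile.txt walk/adir/sL            # symlink: its own inode
stat -c %i walk/adir/afile.txt walk/adir/hL   # two identical inode numbers
stat -c %i walk/adir/sL                        # a different inode: the extra hop
```

The hard link costs nothing extra to resolve because the directory entry already points at the file's inode; the symlink's own inode must be read first, then resolution restarts on the stored path.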
How many times is the disk accessed?
1,532,588,432,000
I am a new Linux user learning from Arch Linux and recently Linux From Scratch (7.7). I set up a new installation of AL to be my LFS host; I manually (and also with the provided bash script) checked prerequisite packages on my host. In my case, I am confident I resolved all discrepancies except for linking /usr/bin/yacc to /usr/bin/bison. The provided script results in yacc is bison (GNU Bison) 3.0.4, as opposed to /usr/bin/yacc -> /usr/bin/bison. Because the latter was the output format for checked symbolic links, I assumed the script was telling me yacc is prepared but employing a different kind of link. I investigated more about Linux file systems and took away a cursory understanding that actual data is described by inodes (metadata), which are in turn pointed to by the (abstract) files we interact with. Files that simply point to the same inode are considered hard links (although, I think the files are independent of each other). I ran sudo ls -il /usr/bin | less and found yacc and bison had slightly different inode numbers (152077 and 152078, respectively). Does this mean they are not hard linked, or am I misinterpreting the script output and require a fix? Edit: Relevant commands from bash script: bison --version | head -n1 if [ -h /usr/bin/yacc ]; then echo "/usr/bin/yacc -> `readlink -f /usr/bin/yacc`"; elif [ -x /usr/bin/yacc ]; then echo yacc is `/usr/bin/yacc --version | head -n1` else echo "yacc not found" fi
When you did the ls -il /usr/bin, you were listing file names and matching inode numbers. In this context, it's probably best to think of "file name" as separate from "inode", and to think of the inode as the file. The "inode" is typically an on-disk data structure containing metadata (permissons, ownership, creation time, access time, etc) and the disk blocks that contain the file's data. Depending on the file system, inodes can be located strategically around the disk, or live in a database of sorts. Most of the time, there's a fast algorithm to go from inode number to the disk block the inode is in, so that lookup is quite rapid. From this standpoint, every file name is just a "hard link". A directory just matches up file names and inode numbers. No distinction is made between a "real file name" and a "hard link". So your file names /usr/bin/yacc and /usr/bin/bison are matched to different inode numbers means that those two names refer to different metadata, and different file data. Speaking casually, the files are not hard links in the sense that only one file name matches each of the two inodes, but from a technical sense, both file names are hard links, they're each just the single hard link to the inode. As far as your script and the "almost identical inode numbers", yacc and bison are related. On my Arch laptop: 1032 % ls -li /usr/bin/yacc /usr/bin/bison 1215098 -rwxr-xr-x 1 root root 394152 Jan 23 2015 /usr/bin/bison* 1215097 -rwxr-xr-x 1 root root 41 Jan 23 2015 /usr/bin/yacc* 1033 % file /usr/bin/yacc /usr/bin/yacc: POSIX shell script, ASCII text executable 1034 % cat /usr/bin/yacc #! /bin/sh exec '/usr/bin/bison' -y "$@" The file names yacc and bison identify inode numbers only one apart, probably because they got created one after the other. The file name for yacc does not represent a symbolic link or a hard link. A symbolic link would show up differently in the ls -li output, two hard links would each represent the same inode number. 
But yacc is related to bison in that it's a shell script that invokes bison. That's why your script gives output something like:

bison (GNU Bison) 3.0.4
yacc is bison (GNU Bison) 3.0.4

Your script invokes /usr/bin/yacc, which actually just executes /usr/bin/bison.
Can files with different inum be hard linked?
1,532,588,432,000
On my Linux machine, I have the following file: drwxr-xr-x 2 jsgdr Unix_31 4096 Dec 2 17:46 jsgdr How to change the permission 2 to 4 so that I will this: drwxr-xr-x 4 jsgdr Unix_31 4096 Dec 2 17:46 jsgdr
The number you are talking about doesn't refer to the permissions.

mkdir demo
cd demo
ls -ld
drwxr-xr-x 2 root root 4096 Dec  2 10:21 .

So, the number 2 here refers to:
the entry for that directory in its parent directory;
the directory's own entry for .

However, if you want to see 4, you could see it when:

mkdir sub_demo{1,2}
ls -ld
drwxr-xr-x 4 root root 4096 Dec  2 10:23 .

As you can see, we are seeing the number 4 because 2 subdirectories got created. So now 4 represents:
the entry for that directory in its parent directory;
the directory's own entry for .;
the .. entries in the 2 sub-directories inside the directory.

You could find a detailed explanation from my other answer here.
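The arithmetic above (link count = 2 + number of subdirectories) can be checked directly; this sketch uses invented directory names and GNU stat's %h link-count format:

```shell
mkdir -p countdemo
stat -c %h countdemo               # prints 2: parent's entry + the dir's own "."
mkdir -p countdemo/sub1 countdemo/sub2
stat -c %h countdemo               # prints 4: each subdirectory's ".." adds one
```

This is also why the count cannot be set by hand: it is maintained by the filesystem as names referencing the directory are created and removed.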
Change file hard link count
1,532,588,432,000
My question begins with: Do I actually have hard links on my disk at all (except for "." and ".." of course)? I'm not sure how I would find that out? If no, the question is already answered. If yes, I'd like to adjust my backup routine (currently using rsync -a) to preserve my hard links. I've seen the option -H, but the man page worries me with a few statements, e.g.: -a, --archive ... Note that -a does not preserve hardlinks, because finding multiply-linked files is expensive... -H, --hard-links ... It does NOT currently endeavor to break already existing hard links on the destination that do not exist between the source files. ... Note that rsync can only detect hard links between files that are inside the transfer set... So what should I do, if I have hard links on my disk?
If you have a standard distribution you will have few hard links, so you generally don't need to worry too much. /bin, /sbin, /lib/modules, and /usr have a bunch of hard links, so if you are backing them up you may want to use the -H option. Backup directories may also contain hard links. Otherwise, you shouldn't have hard links unless you create them yourself. To find directories with hard-linked files try the command (substitute / with other mount points if needed):

sudo find / -xdev ! -links 1 ! -type d | xargs -n 1 dirname | sort -u
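A variant of that find that groups the link partners together can make the output easier to act on. This sketch sets up its own hard-link pair in a scratch directory (names invented) and uses GNU find's -printf to show inode, link count, and path, sorted so files sharing an inode are adjacent:

```shell
mkdir -p linkscan
echo a > linkscan/one
ln linkscan/one linkscan/two        # a multiply-linked pair find should report
find linkscan -xdev -type f ! -links 1 -printf '%i %n %p\n' | sort -n
```

Files on the same line-leading inode number are the ones rsync -H would need to see within one transfer to preserve the link.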
Backing up hard links (rsync)?
1,532,588,432,000
what is the best way to mirror an entire directory, say original/, to a new directory, say mirror/, which has the structure mirror/data/ and mirror/tree/, such that every file in the directory original/ or in any of its subdirectories is hardlinked to a file in mirror/data whose filename is a unique identifier of its content, say a hash of its content, and which is symlinked to from a point in mirror/tree whose relative path corresponds to the relative path of the original file in original, such that it can be easily restored? is this feature perhaps implemented by some tool in existence? – one that allows to flexibly choose the command for creating a unique identifier for a file by its content. for instance, say there is only one file original/something, which is a textfile containing the word “data”. then i want to run a script or command on original, such that the result is: $ tree original mirror original └── something mirror ├── data │   └── 6667b2d1aab6a00caa5aee5af8… └── tree └── original └── something -> ../../data/6667b2d1aab6a00caa5aee5af8… 5 directories, 3 files here, the file 667b… is a hard link to original/something and its filename is sha256sum hash of that file. note that i have abbreviated the filename for legibility. i want to be able to perfectly restore the original by its mirror. i know i can write a script to do that, but before i do that and maybe make a mistake and lose some data, i want to know if there is any tool out there that already implements this safely (i didn’t find any so far) or if there are any pitfalls. background: i want to keep an archive of a directory that tracks renames, but i don't need versioning. i know that git-annex can do that with a lot of overhead using git repositories, but i only need its way to mirror the contents of a directory using symlinks for the directory structure to files whose file names are hashes of their content. then i could use git-diff to track renames. 
i don't fully understand what git-annex is doing so i don't want to trust it with archiving my data. so i'm looking for a lighter alternative that is less intrusive.
It's not so difficult to implement. On a GNU system (for ln -r and sha1sum -z) and with zsh: mkdir -p mirror/{data,tree} && find original -type f -exec sha1sum -z {} + | while IFS= read -rd '' rec; do sum=$rec[1,40] file=$rec[43,-1] ln -f -- $file mirror/data/$sum && mkdir -p -- mirror/tree/${${file#*/}:h} && ln -rs mirror/data/$sum mirror/tree/${file#*/} done Note that it makes no attempt to preserve metadata (ownership, permission, mtime/atime, ACLs, extended attributes) of directories. And if several files have the same content, which one will end up linked in mirror/data will be more or less random as it will depend on the order in which find reports them which is not deterministic. Also note that empty directories and files that are neither directories nor regular (such as symlinks, fifos, devices...) won't be copied across. Copying the directory structure across including special files and with as much metadata as possible can be done using GNU tar: set -o pipefail mkdir -p mirror/{data,tree} && ( cd original && find . ! -type f -print0 | tar -cf - --xattrs --null --verbatim-files-from --no-recursion -T - ) | ( cd mirror/tree && tar -xpf - --xattrs ) && find original -type f -exec sha1sum -z {} + | while IFS= read -rd '' rec; do sum=$rec[1,40] file=$rec[43,-1] ln -f -- $file mirror/data/$sum && ln -rs mirror/data/$sum mirror/tree/${file#*/} done Though beware that creating those symlinks in those directories will update their last modification time.
mirror a directory tree by hard links for file contents and symlinks for directory structure
1,532,588,432,000
I have an example to better illustrate what I'm talking about: $ touch tas $ ln -s /etc/leviathan_pass/leviathan3 /tmp/l2/tas ln: failed to create symbolic link '/tmp/l2/tas': File exists Basically I can only symlink if the file I want to link doesn't exist. I understand this issue when talking about hard links - there's no way of linking two different files as it would lead to an inode conflict (so the file must be created at the time the command is running, to assure, and I'm presuming, they both "point" to the same inode). Now when talking about soft links it doesn't make sense to me, symlinks have nothing to do with the inodes, so what could be the problem? Thanks in advance for any help.
The command ln won’t clobber existing files by default. You can use ln -sf TARGET LINK_NAME to force overwriting the destination path (LINK_NAME) with a symlink, or ln -f TARGET LINK_NAME to overwrite LINK_NAME with a hard link. Your explanation about an inode conflict doesn’t make sense; ln just replaces the file. You are partially right in that the target has to exist first for hard links.
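The "File exists" failure and the -f escape hatch are easy to reproduce; a sketch with made-up local names (so it does not need the leviathan paths from the question):

```shell
mkdir -p forcedemo
touch forcedemo/tas forcedemo/real
ln -s real forcedemo/tas 2>/dev/null || echo "ln: File exists"  # name is taken
ln -sf real forcedemo/tas        # -f unlinks the existing name first, then links
readlink forcedemo/tas           # prints: real
```

-f is equivalent to doing rm LINK_NAME yourself before the ln, which is why it works for symlinks and hard links alike: the conflict is over the name in the directory, not over inodes.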
Why can't I symlink a preexisting file to a target file? [duplicate]
1,532,588,432,000
So, I have a file, for this instance we shall call it $HOME/Documents/hello.txt. I will write some text inside it: Hello, welcome to my text file And I will hard-link this file here: $HOME/Documents/Backup/hello.txt. Okay, great, this is now hard-linked. If I write to the original file, the hard-link will be updated: echo "Hello again" >> $HOME/Documents/hello.txt cat $HOME/Documents/hello.txt Hello, welcome to my text file Hello again cat $HOME/Documents/Backup/hello.txt Hello, welcome to my text file Hello again Now, my problem is, whenever I open either file (either of the hard-linked counterparts) with lots of programs that create temporary files, it loses its link relationship, and neither file will update the other anymore. So, what can I do in this situation? Note: I can not use symlinks in this situation, because I am using my hard link for Github to backup some files, and Git doesn't follow symlinks.
As mosvy already said in this comment, most editors do the edits in a copy of the original file which they replace (delete) later. While this increases security, it breaks hard links. However, some editors like for example GNU Emacs can be configured to perform file edits in place, which means that they directly alter the original file, like you did in the shell. For example this Question and the corresponding answer discuss exactly your problem with respect to Gnu Emacs. So your editor's configuration would be the first point to look at. Since you need the hard link only (?) for Git—unfortunately you are not very detailed on your workflow—, it is likely that you can use Git hooks to reestablish a correct hard link immediately before committing what you subsequently like to push to GitHub: The pre-commit hook seems to be a promising candidate for that. See man page githooks(5) for details.
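The two saving strategies the answer describes behave observably differently; this sketch (invented file names, GNU stat assumed) contrasts an in-place write, which keeps the link, with the save-to-temp-then-rename pattern most editors use, which breaks it:

```shell
mkdir -p editdemo
echo v1 > editdemo/doc
ln editdemo/doc editdemo/doc.link
# In-place write (like '>>' in the shell, or an editor saving in place):
echo v2 > editdemo/doc
stat -c %h editdemo/doc            # prints 2: the link survived
# Save-to-temp-then-rename (what many editors do for crash safety):
echo v3 > editdemo/doc.tmp && mv editdemo/doc.tmp editdemo/doc
stat -c %h editdemo/doc            # prints 1: doc now names a brand-new inode
cat editdemo/doc.link              # prints v2: still the old inode
```

After the rename, the former link partner silently stops tracking edits, which is exactly the symptom described in the question.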
What to do when hard link is lost because of my text editor
1,532,588,432,000
To save space and time I copied a large project tree on a network drive as hard links, i.e. cp -a -r --link proj proj_B (background: it's huge, needs to be rebuilt from two incompatible environments, and doesn't have good support for specifying intermediate and product locations. So this was a quick hack to get a rebuild in environment "B": after copying clean and rebuild from "proj_B/obj". Both environments are under LinuxMint 16) The problem with this approach is that edits won't be (reliably) shared between these trees, e.g. saving an edit to "proj/foo.cpp" will leave it pointing to a new inode and "proj_B/foo.cpp" will still point to the old one (maybe from the loss-avoidance pattern of "save temp; mv orig temp2; mv temp orig; rm temp2"). For sharing source I guess I need symbolic links for the source directories (but not simply a symlink of the project root, since the binary directories need to be kept apart), e.g. something like: cp -a -r --symbolic-link proj proj_B followed by unlinking the binary directories (except that recursive symlink copying fails with "can make relative symbolic links only in current directory". But something similar could be done with "find -exec", or just capitulating and writing a script) But before doing that I wanted a sanity check: is there a better tool for this all along (e.g. some warlock-grade combination of rsync flags)? Or is this sharing approach doomed to end in tears and lost data and I should resign myself to using two copies (and lots of cursing when I find I forgot to push/pull latest changes between them)?
I wouldn't use hard links. Some editors break hard links when they save files, others don't, and some can be configured. However, preserving hard links when saving a file implies that the file is written in place, which means that if the system crashes during the write, you will be left with an incomplete file. This is why the save-to-new-file-and-move-in-place is preferable — but this breaks hard links. In particular, most version control software breaks hard links. So hard links are out. A forest of symbolic links doesn't have this problem. You need to ensure that you point your editor to the master copy or that the editor follows symlinks. You can create the symlink forest with cp -as. However, if you create new files, cp -as is inconvenient to create the corresponding symlinks (it'll do the job, but drown you in complaints that the target already exists). You can use a simple shell loop. for env in environement1 environment2 environment3; do cd "../$env" find ../src \ -type d -exec mkdir {} + \ -o -exec sh -c 'ln -s "$@" .' _ {} + done
Sharing a project tree between environments
1,532,588,432,000
How do you write a bash one-liner that will find binary files with identical contents, permissions, and owner on the same ext4 file-system, from the current working directory recursively, and replace all files with older access times with hard links to the latest accessed file and report saved disk space in kibibytes? What I achieved until now is not fully sufficient for the requirements of the objective. #! /bin/sh fdupes -r -p -o 'time' . | xargs file -i | grep binary | awk '{print $1}' | awk '{print substr($0,3)}' | sed 's/.\{1\}$//' | xargs rdfind -makehardlinks true
hardlink may not satisfy all requirements for this, but it can be used for what it is, to make the hardlinks. It can accept file arguments, not only directories, and it seems it always links a group of identical files to the first in order. Also it will ignore zero-size files. fdupes selects exactly what is needed, but does not output real file arguments; it produces paragraph-mode output, with groups of identical files, where every group is ended with an empty line. So in order to be sure that the exact selections of fdupes will be hardlinked, we have to call hardlink separately once per paragraph, to avoid the case where two pairs of the same identicals exist for different owners or with different permissions. And of course files have to be filtered for binaries.

#!/bin/bash
unset arr i
while IFS= read -r f; do
    # move file to array if binary
    if file -i "$f" | grep -q "charset=binary"; then
        arr[++i]="$f"
    fi
    # if end of paragraph and array has files, hardlink and unset array
    if [[ "$f" == "" && "${arr[@]}" ]]; then
        printf "\n => Hardlink for %d files:\n" "$i"
        hardlink -n -c -vv "${arr[@]}"
        unset arr i
    fi
done < <(fdupes -rpio time .)

hardlink with the -n parameter simulates and does not write anything, so test the above as is and remove -n later. Also, filenames with newlines are not handled; testing with whitespace seems ok.
Finding duplicate files using bash script
1,532,588,432,000
I had an interview, where the interviewer asked what operations raise the link count of a file, besides ln and its underlying syscall, I didn't know. He stated that opening a file will increase the link count by one to prevent deletion of an opened file. I did not agree that he is correct, why would vi need the temp .swp files then? Is he correct? What operations can raise a file's link count other than ln?
He's wrong.  The only thing that increases a file's link count is the link system call.  (Or editing the raw file system with a hex editor.)
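You can demonstrate this directly: holding a file open on a descriptor leaves the link count untouched, and only link() (via ln) raises it. A sketch with made-up names (GNU stat assumed for %h):

```shell
mkdir -p opendemo
echo x > opendemo/f
exec 3< opendemo/f          # hold the file open on fd 3
stat -c %h opendemo/f       # prints 1: open() never touches the link count
ln opendemo/f opendemo/g
stat -c %h opendemo/f       # prints 2: only link() raises it
exec 3<&-                   # release the descriptor
```

What the interviewer may have been half-remembering is that an open descriptor keeps the inode's data alive even after the last name is unlinked; that is a separate in-kernel reference count, not the on-disk link count.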
What operations raise link count
1,532,588,432,000
I am new to mlbackup/rsync and the concept of hard links, so I am a little confused after creating a backup data set via mlbackup. So here's the scenario: I am backing up "folder A" to "folder B". Inside "folder A", I have file "X", "Y", and "Z"; each file is 5mb therefore "folder A" is 15mb in size. I run mlbackup and the files are backed up to "folder B" for the first time. Now "folder A" and "folder B" are 15mb each. Without any changes to "folder A", I run mlbackup again. A new "folder B" backup is created. It reads 15mb again (in Finder, Mac OSX 10.8). Now I know the new "folder B" is just hard links to the original data, so I go to terminal and did a du -sh folder B and it reads only a couple of kb. This is to be expected, right? So my first "folder B" is 15mb, and second "folder B" is a few kb. Now my question is this -- in Finder, both "folder B" are 15mb each. So say if I wanted my "folder B" backups to be located in an external drive that only has 16mb of free space, what will happen? According to Finder, the total of the two "folder B" will be 30mb. But we all know in reality it is only 15mb (from first "folder B") plus a few more kbs (second "folder B")? I know this is a pretty confusing question, but I really want to understand how it all works. Please let me know if there is anything I can clarify further.
That is the expected behavior. The Finder does not check if files are hardlinks or real files and just adds the sizes. You do get the correct sizes with du as you already discovered. You can copy the backup folder to an external volume that way, but it will grow in size to what the Finder shows you as it is not capable of copying the hardlinks as hardlinks. You can use rsync (and I recommend using the rsync 3.0.9 that comes bundled with mlbackup as it will take care of all the HFS+ metadata, compressed forks and other stuff.) To answer your question in full, yes you can end up with backups on a volume that will give you a total size in the Finder that is larger than the volume itself. This is a known limitation of the Finder. The concept of hard links is simply explained. Think of a file as a dog. Each directory that contains that file has a leash to the dog's collar. A hard link is just another leash to the same dog. As long as at least one leash is connected the file stays. Once all the leashes are detached the dog runs away, meaning the file got actually deleted. When you delete a hard-link the file stays in the file system as long as at least one link is established. FYI: mlbackup uses rsync with the --hard-links option top copy files and does get a full score on the backup bouncer test. MacLemon (Author of mlbackup)
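The du-versus-Finder discrepancy is reproducible from the shell: du counts each inode once per run, regardless of how many leashes (names) it has. A sketch with invented paths, making a file of about 1 MiB:

```shell
mkdir -p dudemo
dd if=/dev/zero of=dudemo/big bs=1024 count=1024 2>/dev/null  # ~1 MiB file
ln dudemo/big dudemo/big.link      # second name for the same data
du -sk dudemo                       # roughly 1024K, not 2048K
```

A Finder-style naive sum of file sizes would report twice that, which is why hard-link-based backup sets can "exceed" the capacity of the volume they fit on.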
mlbackup / rsync / hard links data size
1,520,307,090,000
I've read this link, now I simply want to know why there are lots of hard link in /usr. For example, in my Ubuntu server, installed git, I found the command git here: /usr/bin/git. I execute ls -l /usr/bin/git and get the output as below: -rwxr-xr-x 119 root root 11178080 Mar 6 03:48 /usr/bin/git As you see, there are 119 hard links... Why do we need 119 hard links here? More generally speaking, as we have the environment variable PATH and the executable files have been put into /usr/bin/, also, we can create soft links for some reason of compatibility, we can execute them anytime and anywhere, why are there some hard links in usr? Part of output of find /usr -samefile /usr/bin/git: /usr/libexec/git-core/git-prune /usr/libexec/git-core/git-diff-index /usr/libexec/git-core/git-ls-remote /usr/libexec/git-core/git-merge-recursive /usr/libexec/git-core/git-push /usr/libexec/git-core/git-update-index /usr/libexec/git-core/git-check-mailmap /usr/libexec/git-core/git-interpret-trailers /usr/libexec/git-core/git-archive /usr/libexec/git-core/git-upload-archive /usr/libexec/git-core/git-rev-parse /usr/libexec/git-core/git-ls-files /usr/libexec/git-core/git-am All of hard links of /usr/bin/git are found in /usr/libexec/git-core/.
The git links have nothing to do with the PATH, they’re a space-saving measure. Generally speaking, in most cases for “installed” software, hard links are preferable to symbolic links when possible, because they’re more efficient and resilient. You’ll see quite a few binaries in /usr/bin with hard links, including perl, and that’s fine. git packages do tend to use symbolic links instead, because of the large number of links involved and the problems that can cause. If you install git from source, it will use hard links by default if at all possible; you can disable that by adding NO_INSTALL_HARDLINKS=1 to the make install command’s arguments.
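The link count and the full set of names for one inode can be inspected with stat and find -samefile. A minimal sketch in a scratch directory (the file names here are made up for illustration, mimicking how a package might install one binary under several names):

```shell
# One file, three extra hard links: all four names share a single inode.
cd "$(mktemp -d)"
touch tool
mkdir aliases
for name in frob munge blort; do
    ln tool "aliases/$name"        # each ln adds one more name for the same inode
done
stat -c %h tool                    # prints 4: the original name plus three aliases
find . -samefile tool              # lists all four names sharing the inode
```

The same two commands, pointed at /usr/bin/git, are how the 119-link count in the question can be traced back to the names in /usr/libexec/git-core/.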
Why are there lots of hard links in /usr [closed]
1,520,307,090,000
My error:

ln "99700.fa821246f01ef7f3d86a503e33de5753b50640d69de790fd3db5a5dc31ffa45d1dc64a93f950379ee432aa27cbb0593e6e50ddbb6f8a7e279afaf90cec961233.png" /home/anon/foo.png   # works fine
ln "99700.fa821246f01ef7f3d86a503e33de5753b50640d69de790fd3db5a5dc31ffa45d1dc64a93f950379ee432aa27cbb0593e6e50ddbb6f8a7e279afaf90cec961233.png" /tmp/foo.png
ln: failed to create hard link '/tmp/foo.png' => '99700.fa821246f01ef7f3d86a503e33de5753b50640d69de790fd3db5a5dc31ffa45d1dc64a93f950379ee432aa27cbb0593e6e50ddbb6f8a7e279afaf90cec961233.png': Invalid cross-device link

This answer, https://unix.stackexchange.com/a/108558/79280 , states:

it's most likely that your /home directory isn't on the same partition as the / (root) directory. You can easily check this with cat /etc/fstab. Hardlinks cannot be created between different partitions; only symlinks can.

However, as you can see:

cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a device; this may
# be used with UUID= as a more robust way to name devices that works even if
# disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
UUID=A189-F750 /boot/efi vfat umask=0077 0 2
UUID=11bc9e0c-1727-4df7-b357-0fc11f66444a swap swap defaults,noatime 0 0
UUID=acc6e22f-a4d8-4766-bf57-ae13838bd5a8 / xfs defaults,noatime 0 1

This indicates that /tmp/ and /home/ should both be on the root mount point. Is this perhaps a bug, or does /tmp/ have special behaviour here that I am not aware of? Also: is this specific to xfs? I don't have a machine to test this on for ext4.

Edit: using the recommended commands from the comments:
cat /proc/self/mountinfo
24 29 0:22 / /proc rw,nosuid,nodev,noexec,relatime shared:5 - proc proc rw
25 29 0:23 / /sys rw,nosuid,nodev,noexec,relatime shared:6 - sysfs sys rw
26 29 0:5 / /dev rw,nosuid,relatime shared:2 - devtmpfs dev rw,size=8009848k,nr_inodes=2002462,mode=755,inode64
27 29 0:24 / /run rw,nosuid,nodev,relatime shared:12 - tmpfs run rw,mode=755,inode64
28 25 0:25 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime shared:7 - efivarfs efivarfs rw
29 1 259:3 / / rw,noatime shared:1 - xfs /dev/nvme0n1p3 rw,attr2,inode64,logbufs=8,logbsize=32k,noquota
30 25 0:6 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:8 - securityfs securityfs rw
31 26 0:26 / /dev/shm rw,nosuid,nodev shared:3 - tmpfs tmpfs rw,inode64
32 26 0:27 / /dev/pts rw,nosuid,noexec,relatime shared:4 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
33 25 0:28 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime shared:9 - cgroup2 cgroup2 rw,nsdelegate,memory_recursiveprot
34 25 0:29 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:10 - pstore pstore rw
35 25 0:30 / /sys/fs/bpf rw,nosuid,nodev,noexec,relatime shared:11 - bpf none rw,mode=700
36 24 0:31 / /proc/sys/fs/binfmt_misc rw,relatime shared:13 - autofs systemd-1 rw,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=1148
37 26 0:21 / /dev/mqueue rw,nosuid,nodev,noexec,relatime shared:14 - mqueue mqueue rw
38 26 0:32 / /dev/hugepages rw,relatime shared:15 - hugetlbfs hugetlbfs rw,pagesize=2M
39 25 0:7 / /sys/kernel/debug rw,nosuid,nodev,noexec,relatime shared:16 - debugfs debugfs rw
40 25 0:11 / /sys/kernel/tracing rw,nosuid,nodev,noexec,relatime shared:17 - tracefs tracefs rw
41 25 0:33 / /sys/kernel/config rw,nosuid,nodev,noexec,relatime shared:18 - configfs configfs rw
42 36 0:34 / /proc/sys/fs/binfmt_misc rw,nosuid,nodev,noexec,relatime shared:19 - binfmt_misc binfmt_misc rw
44 25 0:35 / /sys/fs/fuse/connections rw,nosuid,nodev,noexec,relatime shared:20 - fusectl fusectl rw
142 29 0:37 / /tmp rw,nosuid,nodev shared:63 - tmpfs tmpfs rw,size=8019684k,nr_inodes=409600,inode64
149 29 259:1 / /boot/efi rw,relatime shared:77 - vfat /dev/nvme0n1p1 rw,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro
445 27 0:43 / /run/user/1000 rw,nosuid,nodev,relatime shared:242 - tmpfs tmpfs rw,size=1603936k,nr_inodes=400984,mode=700,uid=1000,gid=1001,inode64
514 445 0:45 / /run/user/1000/gvfs rw,nosuid,nodev,relatime shared:278 - fuse.gvfsd-fuse gvfsd-fuse rw,user_id=1000,group_id=1001
531 445 0:46 / /run/user/1000/doc rw,nosuid,nodev,relatime shared:297 - fuse.portal portal rw,user_id=1000,group_id=1001

and

proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
dev on /dev type devtmpfs (rw,nosuid,relatime,size=8009848k,nr_inodes=2002462,mode=755,inode64)
run on /run type tmpfs (rw,nosuid,nodev,relatime,mode=755,inode64)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
/dev/nvme0n1p3 on / type xfs (rw,noatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=1148)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,nosuid,nodev,noexec,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
tmpfs on /tmp type tmpfs (rw,nosuid,nodev,size=8019684k,nr_inodes=409600,inode64)
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
tmpfs on /run/user/1000 type tmpfs (rw,nosuid,nodev,relatime,size=1603936k,nr_inodes=400984,mode=700,uid=1000,gid=1001,inode64)
gvfsd-fuse on /run/user/1000/gvfs type fuse.gvfsd-fuse (rw,nosuid,nodev,relatime,user_id=1000,group_id=1001)
portal on /run/user/1000/doc type fuse.portal (rw,nosuid,nodev,relatime,user_id=1000,group_id=1001)
Your data indicates that /tmp is a separate filesystem (tmpfs):

142 29 0:37 / /tmp rw,nosuid,nodev shared:63 - tmpfs tmpfs rw,size=8019684k,nr_inodes=409600,inode64

You can disable this behaviour with:

sudo systemctl mask tmp.mount

in which case /tmp will belong to your root filesystem. You'll have to reboot.
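Before attempting a hard link, you can check whether the two locations are on the same filesystem by comparing their device numbers. A small sketch using GNU stat; /tmp here is just the destination from the question, and the current directory stands in for the source:

```shell
src="$PWD"
dst=/tmp
# %d is the device number; identical numbers mean one filesystem.
if [ "$(stat -c %d "$src")" = "$(stat -c %d "$dst")" ]; then
    echo "same filesystem: a hard link will work"
else
    echo "different filesystems: expect 'Invalid cross-device link'"
fi
```

fstab only lists filesystems configured there; /proc/self/mountinfo (or the device-number check above) shows what is actually mounted, including units like systemd's tmp.mount.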
On xfs, why can't I hardlink to /tmp/, giving the error "Invalid cross-device link" when my fstab indicates tmp is on the same partition?
1,520,307,090,000
Does anyone know how to specify the file type of a hard link? Is this even possible? For example, I want to link to an HTML file (with content type text/html) in my website directory, so I used ln path/to/html/file.html path/to/file/in/my/website/directory.html, but the file type is detected as XML (via file path/to/file/in/my/website/directory.html). The site is stored on an S3 bucket if it matters.
A hard link simply means you add a second name for exactly the same file. Afterwards you cannot tell which name came first. File names do not have a file type or content type like text/html. The content type is something your web server makes up, usually by looking at the extension of the file name. Have a look at the documentation of your web server.

The file command is something else: it looks at the content of the file and "guesses" what the content is. If you have two file names linking to the same file, the file command sees the same content and therefore gives the same output.
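A quick demonstration in a scratch directory: because file inspects content rather than names, both names of a hard-linked file report the same type, whatever their extensions (the file names and content here are made up):

```shell
cd "$(mktemp -d)"
printf '<html><body>hello</body></html>\n' > page.html
ln page.html page.xml    # second name with a different extension
file -b page.html        # file looks at the bytes, not the name...
file -b page.xml         # ...so both names yield the same description
```

To change what a web server sends as Content-Type, you configure the server's extension-to-MIME mapping; renaming or linking the file only matters insofar as it changes the extension the server sees.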
How to specify the file type in a hard link [closed]
1,520,307,090,000
I'm controlling a Linux-based NAS through SSH from a Windows computer. What I'm trying to do is use the cp -al command to rapidly copy folders from one directory to another, hard-linking all the files inside.

Currently, I list the directory with ls, highlight the name with my mouse, copy it with right click, then paste it back in. For example, if I'm hard-linking a directory called "Madagascar.2005.1080p.BluRay.DTS.x264", I copy only the name of that directory, manually type out cp -al "sourceDir/Movie/, then paste in "Madagascar.2005.1080p.BluRay.DTS.x264". I do the same for the second directory to build the whole command:

cp -al "sourceDir/Madagascar.2005.1080p.BluRay.DTS.x264" "dstDir/Movie/Madagascar.2005.1080p.BluRay.DTS.x264"

This whole process is pretty cumbersome, and feels unnatural when everything else done on a Linux shell is done without the mouse. Is there a better way of building this long command without having to copy and paste things using the mouse, or to type out entire names manually?

Edit: I do not want to delete the source directory. The source directory looks approximately like this:

'Lost S04 720p BluRay DTS x264'
Lost.S03.720p.BluRay.DTS.x264
'Million.Dollar.Extreme.Presents.World.Peace.S01.1080p'
'The Simpsons S01 1080p DSNP WEBRip DDP5.1 x264'
'The Simpsons S02 1080p DSNP WEBRip DDP5.1 x264'

I will be changing the source or destination directories frequently.
So, first of all, you clearly are trying to work as if you had a proper file manager. That's not a crime, and you should use one :) Traditionally, Midnight Commander is the tool you should use, mc; chances are you can directly install it on your NAS, even! In this day and age, ranger (you say you're on debian, so sudo apt install ranger it is.) is probably closer to what you want, it feels a lot more modern. You can select multiple files in any "pane" with space, and then press @ to run a command on them: You'll see :shell %s at the bottom of the screen. The %s will be expanded to all the selected file names when you hit enter, so just type in your command, so that there's :shell cp -al -t /dstDir/Movie %s and hit enter. That's it. However, ranger is a mighty tool (and so is mc, which at least has more prominent menus!), and 10 minutes with a tutorial will probably make a world of a difference. without having to copy and paste things using the mouse, or to type out entire names manually? You need to have heard about tab-completion that all but the most rudimentary shells have! Type the first letters of any file name, hit tab. If there's only one file that starts like that, it gets completed. If there's multiple choices, press tab twice to get a list of candidates. (If you're using zsh with oh-my-zsh, or some of the many other good shell extenders instead of stock bash, you will get to enjoy even more handy completion).
What is the fastest way of copying long paths for the cp command? [duplicate]
1,520,307,090,000
This page on inodes has been an exceptional help in grasping the surface-level concept of file systems. On the same page, the author inserted this snippet demonstrating that each file or directory has at least 2 names (and hard links):

/tmp/junk$ ls -id ..
327681 ..
/tmp/junk$ cd ..
/tmp$ ls -id .
327681 .

We can see that /tmp has 3 hard links:

presumably, an inode entry for the filename "tmp"
the same inode for the name ".." (seen from /tmp/junk)
the same inode for the name "." (seen from within /tmp)

My question: can the "junk" directory in /tmp also have 3 names (and hard links) if it is given a child directory, for example /tmp/junk/paper_balls?

My hypothesis: if "junk" becomes a parent, it can be invoked with .., but only relatively, meaning the current working directory (from which .. is typed) would have to be within the directory path /tmp/. The answer to my question is probably too advanced.
The initial number of hard links is 1 for a file and 2 for a directory (the first link is its name in the parent folder, and the second hard link is .). The link count for a directory goes up by one each time a subdirectory is created in it (due to .. in each subdirectory). This count can be easily viewed with ls -l. It is the second value. Take a look: ~/x$ ls -la total 16 drwxr-xr-x 2 tomasz tomasz 4096 Sep 24 00:08 . drwxr-xr-x 54 tomasz tomasz 4096 Sep 24 00:11 .. -rw-r--r-- 1 tomasz tomasz 19 Sep 23 18:45 1 -rw-r--r-- 1 tomasz tomasz 19 Sep 23 18:45 2 ~/x$ mkdir d ~/x$ ls -la total 20 drwxr-xr-x 3 tomasz tomasz 4096 Sep 24 00:11 . drwxr-xr-x 54 tomasz tomasz 4096 Sep 24 00:11 .. -rw-r--r-- 1 tomasz tomasz 19 Sep 23 18:45 1 -rw-r--r-- 1 tomasz tomasz 19 Sep 23 18:45 2 drwxr-xr-x 2 tomasz tomasz 4096 Sep 24 00:11 d ~/x$ mkdir d/dd ~/x$ ls -la total 20 drwxr-xr-x 3 tomasz tomasz 4096 Sep 24 00:11 . drwxr-xr-x 54 tomasz tomasz 4096 Sep 24 00:11 .. -rw-r--r-- 1 tomasz tomasz 19 Sep 23 18:45 1 -rw-r--r-- 1 tomasz tomasz 19 Sep 23 18:45 2 drwxr-xr-x 3 tomasz tomasz 4096 Sep 24 00:11 d The second value for d went up from 2 to 3 after creating d/dd within it. See mosvy's comments below for a wider view.
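The same growth can be checked directly with stat, whose %h format prints the link count. A minimal sketch in a scratch directory; the counts shown assume a traditional filesystem (ext4, tmpfs, XFS), whereas some filesystems such as btrfs always report 1 for directories:

```shell
cd "$(mktemp -d)"
mkdir d
stat -c %h d    # 2: the entry "d" in the parent, plus "d/."
mkdir d/dd
stat -c %h d    # 3: "d/dd/.." adds one more link to d
```

So yes: as soon as junk gets a subdirectory like paper_balls, the subdirectory's .. becomes a third name for junk's inode, and its link count rises accordingly.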
Can every file really own at least 2 names (and thus 2 hard links)?
1,520,307,090,000
Can one force ln not to follow a soft link in its first argument? For example, in the following I would like hard to be a hard link to the soft link soft: $ mkdir dir $ ln -s dir soft $ ln soft hard ln: soft: Is a directory I know about ln -h, but this only prevents ln from following soft links in the second argument.
By default on your system, ln resolves the source fully if it is a symbolic link. There is a standard option, -P, that prevents it from doing this: $ mkdir dir $ ln -s dir soft $ ls -l total 4 drwxr-xr-x 2 myself wheel 512 Sep 21 22:39 dir lrwxr-xr-x 1 myself wheel 3 Sep 21 22:39 soft -> dir $ ln -P soft hard $ ls -il total 4 129605 drwxr-xr-x 2 myself wheel 512 Sep 21 22:39 dir 129606 lrwxr-xr-x 2 myself wheel 3 Sep 21 22:39 hard -> dir 129606 lrwxr-xr-x 2 myself wheel 3 Sep 21 22:39 soft -> dir The POSIX specification for the ln utility says: If source_file is a symbolic link: If the -P option is in effect, actions shall be performed equivalent to the linkat() function with source_file as the path1 argument, the destination path as the path2 argument, AT_FDCWD as the fd1 and fd2 arguments, and zero as the flag argument. This text is mostly gibberish unless you know what linkat() is. The OpenBSD manual translates this into -P When creating a hard link and the source is a symbolic link, link to the symbolic link itself. The -P option overrides any previous -L options. ... and the GNU manual says -P --physical If -s is not in effect, and the source file is a symbolic link, create the hard link to the symbolic link itself. On platforms where this is not supported by the kernel, this option creates a symbolic link with identical contents; since symbolic link contents cannot be edited, any file name resolution performed through either link will be the same as if a hard link had been created. Interestingly, GNU ln on (Ubuntu) Linux has this in the manual: Using -s ignores -L and -P. Otherwise, the last option specified controls behavior when a TARGET is a symbolic link, defaulting to -P. Whereas on OpenBSD and macOS (and presumably on other systems as well), the same GNU ln manual says Using -s ignores -L and -P. Otherwise, the last option specified controls behavior when a TARGET is a symbolic link, defaulting to -L. 
(Another reason to always read the manual on the system you're using rather than on some random page on the internet, which seems to happen far too often.)
Hard link to soft link
1,520,307,090,000
I have a large directory structure containing many hard links to files whose first link lives in a different directory structure. For example, dir1 has the following structure:

[dir1]$ tree
.
├── dir_inside
│   ├── file1
│   └── file2
└── other_dir
    ├── file1
    └── file2

Now suppose dir2 exists outside dir1, and all files in dir1 (e.g. file1 and file2) are hard links to files that existed first in dir2, i.e. the files in dir2 existed before dir1 was created.

When calculating the size of dir1, I would go for du -sh dir1, since du counts hard links only once. So far so good, but by counting only once, I am actually not counting from the first hard link, which lives in dir2. So let's say du -sh dir2 reports 2G; dir1 will also be 2G in size, since each hard link is counted once within that directory structure.

As far as my knowledge of hard links goes, I believe hard links do not actually occupy additional space beyond the first inode created, right? I would really appreciate some clarification on getting the directory size of hard-linked files that live in different directories, and thus an estimate of the real disk space the hard links are occupying.
I think you misunderstand the concept of a hard link. A filename is a pointer to an inode; a hard link is exactly the same. There is no reference to "the original" file, so du cannot know whether a file was created as a hard link of another file. du can only deduplicate when multiple pointers to the same inode appear within a single du call:

du -sh dir1 dir2
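The single-call deduplication is easy to observe with a small experiment in a scratch directory (sizes are approximate and depend on the filesystem's block size):

```shell
cd "$(mktemp -d)"
mkdir dir1 dir2
dd if=/dev/zero of=dir2/big bs=1M count=1 2>/dev/null  # a 1 MiB file in dir2
ln dir2/big dir1/big                                   # hard link it into dir1
du -sk dir1         # ~1024K: the inode is charged here...
du -sk dir2         # ~1024K: ...and charged again in a separate call
du -sck dir1 dir2   # one call: the shared inode is charged only once in the total
```

So the "real" disk usage of the two trees together is what the combined call reports; summing separate du runs double-counts every shared inode.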
du counts hard links only once but hard links have the same size as the first hard link?
1,520,307,090,000
I have a few questions regarding links in UNIX:

Can I say soft links in UNIX are analogous to shortcuts in Windows?
What is the difference between copying and hard-linking?
Can anyone give me a use-case where I should prefer hard-linking over copying?

I'm so messed up right now. Any help is highly appreciated.
The basic thing is that copying makes a copy of the file, and linking (soft or hard) does not.

As an abstraction model, think of your directory as a table:

filename   where the file is   content of the file
---------------------------------------------------------
a.txt      sector 13456        abcd
b.txt      sector 67679        bcde

When I copy a file, cp a.txt c.txt, I get the following:

filename   where the file is   content of the file
---------------------------------------------------------
a.txt      sector 13456        abcd
b.txt      sector 67679        bcde
c.txt      sector 79774        abcd

When I hard-link a file, ln b.txt d.txt, I get the following:

filename   where the file is   content of the file
---------------------------------------------------------
a.txt      sector 13456        abcd
b.txt      sector 67679        bcde
c.txt      sector 79774        abcd
d.txt      sector 67679        bcde

So now b.txt and d.txt are exactly the same file. If I append a character f to d.txt, it will also appear in b.txt.

The problem with hard linking is that you can only do it within the same filesystem. Therefore, most people use soft links, ln -s a.txt e.txt:

filename   where the file is   content of the file
---------------------------------------------------------
a.txt      sector 13456        abcd
b.txt      sector 67679        bcde
c.txt      sector 79774        abcd
d.txt      sector 67679        bcde
e.txt      sector 81233        "Look at where a.txt is located"

As a first-order approximation, soft links are a bit like shortcuts in Windows. However, soft links are part of the filesystem and therefore work with every program, whereas Windows shortcuts are just files that are interpreted by explorer.exe (and some other programs): a Windows program has to do something itself to interpret the shortcut, while on Linux soft links are handled automatically.

Most uses of links use soft links, because they are more flexible, can point to other filesystems, can be used over NFS, et cetera. The one use-case I have seen for hard links is to make sure that a file is not deleted by a user.
The sysadmin created hard links in a "pointer" directory, and when a user inadvertently rm-ed a file (which apparently happened a lot there) he could restore the file in no time, without the use of tape and without doubling disk space. That works as follows:

filename   where the file is   content of the file
---------------------------------------------------------
a.txt      sector 13456        abcd
b.txt      sector 67679        bcde

When the user types rm a.txt, the table will be:

filename   where the file is   content of the file
---------------------------------------------------------
b.txt      sector 67679        bcde

All reference to a.txt is lost, and the disk space may be reclaimed for other files. However, if the sysadmin keeps a copy of links to important files, the table will be:

filename     where the file is   content of the file
---------------------------------------------------------
a.txt        sector 13456        abcd
b.txt        sector 67679        bcde
link.a.txt   sector 13456        abcd
link.b.txt   sector 67679        bcde

When a user now types rm a.txt, the table becomes:

filename     where the file is   content of the file
---------------------------------------------------------
b.txt        sector 67679        bcde
link.a.txt   sector 13456        abcd
link.b.txt   sector 67679        bcde

Because there is still a reference to the file starting at sector 13456, its disk space will not be marked as free, so the file is still there. When the user asks whether it would somehow be possible to restore a.txt, the sysadmin simply does ln link.a.txt a.txt and the file a.txt re-appears, with its latest edits too. (Of course, link.a.txt lives in another directory on the same filesystem, and this doesn't mean you can forget about backups, but at that time and place it was a useful option.)
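The tables above can be reproduced with real commands: editing through one hard link is visible through the other name, while a copy stays independent. A minimal sketch in a scratch directory:

```shell
cd "$(mktemp -d)"
printf 'abcd' > b.txt
cp b.txt c.txt        # real copy: new inode, same content
ln b.txt d.txt        # hard link: second name for the same inode
printf 'f' >> d.txt   # append through the link...
cat b.txt             # abcdf: ...the change shows up under the other name
cat c.txt             # abcd:  the copy is unaffected
```

The `-ef` test operator (supported by bash and coreutils test) confirms that b.txt and d.txt are the same file, while b.txt and c.txt are not.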
Linking and Copying
1,520,307,090,000
I have a weird "hard link" on a CentOS 6.5 VPS server. It's man-made, I assume, but I'm not the one who did it. df gives some info:

[root@root]# df
Filesystem     1K-blocks      Used Available Use% Mounted on
/dev/simfs     209715200 128660820  81054380  62% /
none             4194304         4   4194300   1% /dev
none             4194304         0   4194304   0% /dev/shm
/dev/simfs     209715200 128660820  81054380  62% /var/www/username/data/www/test.site.biz/photo

ls -li tells nothing useful:

[root@vz65646 test.site.biz]# ls -li
total 7952
79435160 drwxr-xr-x 2 someuser someuser 8130560 Oct 25 20:52 photo

The hard-linked folder is photo. By mistake I ran rm -rf test.site.biz, which led to bad things happening: the photo directory in the other place was wiped too. I assume restoring the data is not possible. Still, I'd like to figure out what happened here so I won't repeat the same mistake twice. Any hints are much appreciated.
You have two mounted filesystems with similar characteristics: same device name, same disk usage. These are very likely to be, in fact, the same device. This can happen if you mount the same network filesystem in different locations, for example. Given that this is a local filesystem, as sourcejedi identified in a comment, this is very likely to be a bind mount, created by a command like mount --bind /origin /var/www/username/data/www/test.site.biz/photo. If your system is recent enough, you can use findmnt to confirm that it's a bind mount. But in any case, most filesystem types can't be mounted at the same time in different locations, so having the same device is proof enough that this is a bind mount.

A bind mount provides a view of a directory tree in a different location. In terms of accessing the files under the bind mount, it's similar to having a symbolic link in the tree, i.e. /var/www/username/data/www/test.site.biz/photo/somefile is the same file as /origin/somefile, as if /var/www/username/data/www/test.site.biz/photo were a symbolic link to /origin. But /var/www/username/data/www/test.site.biz/photo is not a symbolic link; it's a directory. Since it is a directory, a recursive traversal descends into it. So rm -rf deleted the files under /origin, because /origin and /var/www/username/data/www/test.site.biz/photo are the same directory that just happens to be shown in two locations.
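On systems with util-linux, findmnt shows what is mounted at a path; for a bind mount, SOURCE shows the same device at both locations (recent versions also show the bound subtree in brackets). A sketch; the photo path below is the hypothetical one from the question, with a fallback so the command still illustrates the idea elsewhere:

```shell
dir=/var/www/username/data/www/test.site.biz/photo   # the path from the question
[ -d "$dir" ] || dir=/                               # fall back for illustration
# SOURCE identical to the other mount's SOURCE => one filesystem, two views.
findmnt --target "$dir"
```

Comparing the SOURCE column of this output with findmnt output for / (or the other suspected location) is the quick way to confirm a bind mount before running anything destructive.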
Simfs hardlinks whereabouts
1,520,307,090,000
I run grep some-string -r . &. While it is running in the background, I cd to another directory. It seems that grep then interprets the path . differently. What happens before and after I change the current directory? Will both the original and the new directories end up not being searched completely?

I wonder: is . as a command-line argument dereferenced only once, at the start of the command, or every time the program uses it while running?
Each process has its own "current working directory", which can't be changed from outside the process. So when you do grep some-string -r . & your shell starts grep in the background, and grep's current working directory is initialised to the same value as the shell's at that moment. grep's definition of . here is its own current directory, not anything else's; the shell has no part in the argument's interpretation. Subsequently changing the shell's directory using cd has no impact on grep...
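This is easy to verify: a background job keeps the working directory it started with, no matter where the parent shell cd-s afterwards. A minimal sketch in a scratch directory:

```shell
cd "$(mktemp -d)"
mkdir a b
touch a/only-in-a
( cd a && ls ) &   # the job's cwd is fixed to "a" the moment it starts
cd b               # the parent shell moves on...
wait               # ...but the job still prints "only-in-a"
```

So your grep will search the directory that was current when it was launched, completely and only that one; the later cd affects neither which tree it walks nor how far it gets.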
. as a command line argument to a command running in background
1,520,307,090,000
I have a work laptop that I will soon have to return to my employer. Having foreseen this, I ordered a second internal disk and mounted it at /home/<user>, so that I can just pull it out and mount it in the next machine without having to go through the whole ritual of copying files, etc.

However, I've created a few projects with hard links; all the files are on this secondary mounted disk, so it's like:

~/project-one/orig-file.txt => ~/project-two/linked-file.txt

I know that such links don't work across file systems, i.e. you can't link between the root filesystem and a mounted disk (Google gets me a lot of articles about this), but the question is: will pulling this disk and mounting it in another system break these hard links? I will potentially use the same distro and the same /home/<user> directory, if that makes a difference.

Of course, I will really find out when it comes time to swap out the disk, but it will be good to mentally prepare for what to expect.
So, you have a hard disk, formatted with a filesystem, in which there are hard links (confined within the filesystem, of course). If you remove this hard disk from a system and mount it in another system, it will continue working exactly as before, provided that both systems recognize correctly the filesystem. If it's the same distro, or even two different Linux distros, it's sure to be okay. As added by @Hans-Martin_Mosner, it does not even need to be mounted on the same mountpoint as the old system, as all hard links are inside the inode structure of the filesystem. Note that hard links aren't anything obscure or bizarre -- a normal filesystem is full of them, for instance the .. in every subdirectory that links to the parent dir.
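After remounting, you can confirm that the two names are still the same file with the `-ef` test operator, which compares device and inode numbers. A sketch in a scratch directory that mirrors the hypothetical project layout from the question:

```shell
cd "$(mktemp -d)"    # stand-in for the mounted /home/<user>
mkdir project-one project-two
echo data > project-one/orig-file.txt
ln project-one/orig-file.txt project-two/linked-file.txt
if [ project-one/orig-file.txt -ef project-two/linked-file.txt ]; then
    echo "still the same file"
fi
```

Because the link relationship lives entirely in the filesystem's own inode structure, this check will succeed on the new machine exactly as it does on the old one.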
Hard links on a mounted disk
1,520,307,090,000
This is something I imagine I might have to submit a patch or feature request for, but I'd like to know: is it possible to create a hard link to a file such that, when that hard link (not the original file) is edited, the file is copied first before it is actually edited? Which major filesystems would this apply to? Thanks.
After you create a hard link to a file, there are just two links to one file. While you may remember which link was first and which was second, the filesystem doesn't. So the most an editor can determine is whether there is more than one link to a file, not which came first. An editor may or may not preserve the links when it saves the new file.

What you may want is a filesystem that supports cp --reflink. That way you get a space-efficient copy, but when you change the copy, your original file is not modified.
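A sketch of the reflink approach; with --reflink=auto, GNU cp makes a cheap copy-on-write clone where the filesystem supports it (e.g. btrfs, or XFS with reflinks enabled) and silently falls back to a normal copy elsewhere, so the behaviour below holds either way:

```shell
cd "$(mktemp -d)"
printf 'original contents' > master.txt
cp --reflink=auto master.txt working.txt   # CoW clone where supported
printf 'edited' > working.txt              # changing the clone...
cat master.txt                             # ...leaves the original untouched
```

This gives exactly the copy-before-edit semantics asked for, except that the "copy" step happens at clone time rather than at first write; on reflink-capable filesystems the clone costs almost no space until blocks diverge.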
How can I make it so that hard links which are not the original are first copied, then edited, when someone edits them? [closed]
1,520,307,090,000
I have a daily rsync script backing up my data to an external hard drive at /mnt/X (the root of the drive). I am using --link-dest to use hard links and avoid duplicating data.

I need to move all my daily backups from /mnt/X to /mnt/X/backups without losing the hard links. Later I will change the script to back up into the new destination directory, /mnt/X/backups, and look for the previous day's backup in that same directory. How would you suggest I do the move?
You don't have to do anything special. Simply, mv /mnt/X/* to /mnt/X/backups/ (You will get an error about not being able to move backups to itself). A hard link is basically an inode number. Files that are hard-linked have the same inode number. However you move them around within the same file-system, the inode number does not change. So there is no special action needed. Try it for yourself with some simple files in /tmp first: /tmp $ mkdir aa /tmp $ touch aa/f /tmp $ ln aa/f aa/g /tmp $ mkdir aa/new /tmp $ mv aa/* aa/new mv: cannot move 'aa/new' to a subdirectory of itself, 'aa/new/new' /tmp $ ls -il aa/new/ 13185910 -rw-r--r-- 2 0 Apr 11 13:32 f 13185910 -rw-r--r-- 2 0 Apr 11 13:32 g
Move daily backup directories (made by rsync) to another directory in the same partition
1,520,307,090,000
We have the following folders/links/files under the /usr/hdp/2.6.0.3-8/zookeeper folder:

-rw-r--r--. 1 root root 794542 Apr  1  2017 zookeeper-3.4.6.2.6.0.3-8.jar
drwxr-xr-x. 6 root root   4096 Mar 28  2018 doc
drwxr-xr-x. 3 root root     17 Mar 28  2018 etc
drwxr-xr-x. 2 root root   4096 Mar 28  2018 lib
drwxr-xr-x. 3 root root     17 Mar 28  2018 man
lrwxrwxrwx. 1 root root     29 Mar 28  2018 zookeeper.jar -> zookeeper-3.4.6.2.6.0.3-8.jar
lrwxrwxrwx. 1 root root     26 Mar 28  2018 conf -> /etc/zookeeper/2.6.0.3-8/0
drwxr-xr-x. 2 root root   4096 Oct 16 17:07 bin

[root@master01 zookeeper]# pwd
/usr/hdp/2.6.0.3-8/zookeeper

We want to copy all content under /usr/hdp/2.6.0.3-8/zookeeper to another machine, say master02. What is the right command to copy the content under /usr/hdp/2.6.0.3-8/zookeeper from the current machine to the target machine, preserving all links and permissions?
You are probably looking for the much-used -a option of rsync:

-a, --archive    archive mode; equals -rlptgoD (no -H,-A,-X)

which provides what you need:

-r, --recursive  recurse into directories
-l, --links      copy symlinks as symlinks
-p, --perms      preserve permissions
-t, --times      preserve modification times
-g, --group      preserve group
-o, --owner      preserve owner (super-user only)
-D               same as --devices --specials
    --devices    preserve device files (super-user only)
    --specials   preserve special files

Note that -a does not imply -H, so add -H if hard links also need to be preserved; for your listing (symlinks only) -a is enough. Add the -v option for verbosity and you get:

rsync -av /usr/hdp/2.6.0.3-8/zookeeper/ master02:/usr/hdp/2.6.0.3-8/zookeeper

You may want to add the --delete option to clean up the destination directory:

--delete         delete extraneous files from dest dirs
What is the right approach to copy folder content that includes links?
1,520,307,090,000
I have a NAS device, and I've mounted two shared folders from this one device (QNAP; client is Ubuntu 18.04).

fstab config:

//192.168.0.10/Media /media/QNAP_Media cifs credentials=/home/user/.smbcredentials,iocharset=utf8,vers=3.0 0 0
//192.168.0.10/Rdownload /media/QNAP_Rdownload cifs credentials=/home/user/.smbcredentials,iocharset=utf8,vers=3.0 0 0

mount | column -t

//192.168.0.10/Media on /media/QNAP_Media type cifs (rw,relatime,vers=3.0,cache=strict,username=admin,domain=,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.0.10,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)
//192.168.0.10/Rdownload on /media/QNAP_Rdownload type cifs (rw,relatime,vers=3.0,cache=strict,username=admin,domain=,uid=0,noforceuid,gid=0,noforcegid,addr=192.168.0.10,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1)

When I try to hard-link from one share to the other, I get the "Invalid cross-device link" error. As far as I can tell, these shares exist on the same partition, which is what one user suggested was the requirement. Any ideas why else I may not be able to create hard links from one share to another?
You cannot hard-link between different underlying devices. Even if both shares live on the same partition on the NAS, each CIFS share is a separate mount, and therefore a separate device, from the client's point of view, so the kernel rejects the hard link with EXDEV ("Invalid cross-device link"). See if symbolic links work instead.
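To illustrate the difference, here is a minimal sketch using temporary files as stand-ins for the two shares (on a real system the paths would be under /media/QNAP_Media and /media/QNAP_Rdownload; the file names here are made up):

```shell
# Stand-ins for a file on the first share and a directory on the second:
src=$(mktemp)
dst_dir=$(mktemp -d)
echo demo > "$src"

# A hard link needs the same filesystem, but a symlink stores only the
# target *path* and is resolved at open time, so mount boundaries do
# not matter:
ln -s "$src" "$dst_dir/demo-link"
cat "$dst_dir/demo-link"   # prints: demo
```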
Invalid cross-device link: multiple shares from the same NAS
I have a user account at my local library (they use openslx), in which I can store files. My actual home folder is "mounted" (I'm not sure how) in /home/[my_userID]/PERSISTENT instead of /home/[my_userID]. After logging in, an xterm window is started, the window manager is openbox. With logging out, everything not stored in PERSISTENT is deleted in /home/[my_userID]. (The complete message is shown below.) When I change the configs of mousepad, e. g., the config file is stored in /home/[my_userID]/PERSISTENT/.config/Mousepad and I have to copy the file(s) manually for storing my configuration for the next session. To solve this, I've copied all the dotfolders from PERSISTENT to $HOME right after every login, but I'm sure there's a faster and way more elegant way to deal with this issue. Is there a way to link to the dirs with a single entry in .bash_history ("command")? ATTENTION: This is the non-persistent home directory! Files saved here will be lost on shutdown. Your real home is under /home/sj126/PERSISTENT Please save your files there.
After some time, I've figured out that I could use symbolic links to speed up the configuration at least. rm -dfr .cache .config .local&&ln -fs PERSISTENT/.bash_aliases PERSISTENT/.bash_history PERSISTENT/.bash_logout PERSISTENT/.cache PERSISTENT/.config PERSISTENT/.ICEauthority PERSISTENT/.local PERSISTENT/.ssh PERSISTENT/.vim PERSISTENT/.viminfo PERSISTENT/.vimrc PERSISTENT/.xinputrc PERSISTENT/.xsession-errors ~&&openbox --reconfigure&&gnome-terminal&exit The command first deletes the default dirs in /home/[my_userID] to avoid an error caused by creating a link whose path (/home/[my_userID]/.config, e. g.) collides with an existing directory or file. Second, the symlinks are created. This changes neither the behaviour nor the appearance of the session, nor that of programs (mousepad, e. g.). Third, the window manager gets the new configuration, which is also stored in /home/[my_userID]/PERSISTENT/.config. Fourth, a terminal with tabbing is started for a more comfortable session, and the no-longer-needed instance of xterm is terminated. Mind the single & in front of exit. A double one would not exit your xterm until your gnome-terminal is terminated. To use this command, just store it in your /home/[my_userID]/PERSISTENT/.bash_history as your first command. Make sure to have an unlimited HISTFILESIZE in your /home/[my_userID]/PERSISTENT/.bashrc, or at least one far larger than you will need. Otherwise, the command will be lost when your maximum history size is reached. Having a time stamp for this command is optional. I'm not sure right now whether you can easily change your configs with ln -s or whether you need ln for getting write permissions to your config files.
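The hand-written list above can also be generalized with a loop. This is only a sketch, under the assumption that everything you want linked lives directly under ~/PERSISTENT as a dotfile or dot-directory:

```shell
# Link every dotfile/dotdir from PERSISTENT into $HOME, replacing any
# non-persistent default copy that is in the way.
for f in "$HOME"/PERSISTENT/.[!.]*; do
  name=$(basename "$f")
  rm -rf "$HOME/$name"       # drop the throwaway default
  ln -s "$f" "$HOME/$name"   # link to the persistent copy
done
```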
Link files and directories, in target directory, with cross-device links
If someone uses the command "cp -alr" on a directory located in a non-path preserving merged pool will it always create a directory on the same disk with all of its contents as hard links? For example, if we created a merged pool with category.create=mfs and on the following directory tree we run "cp -alr /romance/TitanicMovie/ /comedy/" what would happen? Would it create a directory on disk1 called comedy and a hard link to Titanic.mkv or because disk2 has more free space would it copy Titanic.mkv to the directory on disk2 called comedy? /mnt/ ├── disk1/ (Physical Drive mounted, 1TiB in size) | ├── romance/ | | ├── TitanicMovie | | ├──Titanic.mkv ├── disk2/ (Physical Drive mounted, 1TiB in size) | ├── comedy/ ├── storage/ (mergerFS mounted using: /mnt/disk1:/mnt/disk2 /mnt/storage fuse.mergerfs dropcacheonclose=true,defaults,allow_other,minfreespace=50G,fsname=mergerFS,func.mkdir=all,category.create=mfs 0 0 | ├── romance/ | | ├── TitanicMovie | | ├──Titanic.mkv | ├── comedy/
The specific details on how rename and link work are in the docs. If you're not using path preservation then it clones the path on the same branch (if needed) and performs the rename or link.
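One way to see what actually happened on your system is to compare device and inode numbers on the underlying branches: a hard link shares both with its source. A small helper sketch (the check_hardlink name is made up; the commented-out example paths are the ones from the question):

```shell
# Print whether two paths are hard links to the same data.
check_hardlink() {
  if [ "$(stat -c '%d %i' "$1")" = "$(stat -c '%d %i' "$2")" ]; then
    echo "hard link (same device and inode)"
  else
    echo "separate copies"
  fi
}

# e.g., run against the branch mounts rather than the pool:
# check_hardlink /mnt/disk1/romance/TitanicMovie/Titanic.mkv \
#                /mnt/disk1/comedy/TitanicMovie/Titanic.mkv
```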
Hard links with MergerFS
I discovered the pax command recently, and was amazed at how efficient it is when copying locally from one SSD to another, for instance. For local backup, I'm contemplating replacing an rsync solution with pax; however, pax seems to be missing the --link-dest option that is so convenient for incremental backups (creating hard links to the previously backed-up version for identical files). Questions: 1. Is there a workaround to have pax + ? give a behavior similar to rsync --link-dest... (maybe better than: 1. make hard links for identical files, 2. run pax -k)? 2. Is there any other efficient command / solution to reach a similar result?
The pax program is not useful for incremental backups. The new POSIX.1-2001 extended headers for the tar format (called "pax" format), which were taken from the Solaris tar enhancements from 1997, however, are a really good base for incremental backups, since this concept can be extended to archive all possible metadata. It seems that you would like to do cumulative incremental backups/restores locally in order to mirror a filesystem. star is perfectly suited for this, see http://schilytools.sourceforge.net/man/man1/star.1.html The instructions on how to do cumulative incrementals are currently on page 53 under the section SYNCHRONIZING FILESYSTEMS. Note that if you are on an operating system with a slow filesystem cache (like Linux) or using a transactional filesystem like ZFS, it is recommended to use the option -no-fsync, or extraction will be extremely slow, since star by default extracts files in the secure mode that allows it to detect filesystem write errors while flushing cached data. Also use the option -pax-o binary on the left (create) side to avoid path-name conversion problems in case there are files in the filesystem that use different locales than your current shell. The method that star uses is the same as with ufsdump/ufsrestore: star manages a file /etc/tardumps with time stamps, levels and filesystem names for the create side of the incremental backups. For the extract side of the incremental restores, star manages a file star-symtable in the root directory of the extracted filesystem. This database contains a list of old inode numbers and the related new inode numbers in order to be able to detect renamed and removed files. Star has been massively tested with incremental dumps and restores for more than 10 years and never caused any problem.
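For comparison, the classic two-step workaround the question alludes to (hard-link the previous snapshot, then let rsync replace only what changed) looks roughly like this; the backup paths and dates are hypothetical:

```shell
# 1. Hard-link the previous snapshot into a new directory (cheap, no data copied):
cp -al /backup/2024-01-01 /backup/2024-01-02

# 2. Overwrite only the changed files. rsync writes a changed file to a
#    temporary name and renames it into place, which breaks the shared
#    hard link instead of clobbering the old snapshot:
rsync -a --delete /source/ /backup/2024-01-02/
```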
pax command for incremental backup with hard links similar to rsync
1,390,858,111,000
I've got my Toshiba laptop (Satellite A300) connected to my TV via HDMI. Using VLC 2.2.6, video works just fine. Currently, I'm trying to output sound to the TV's speakers. aplay -l shows the HDMI playback device as the third one: **** List of PLAYBACK Hardware Devices **** card 0: Intel [HDA Intel], device 0: ALC268 Analog [ALC268 Analog] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 1: ALC268 Digital [ALC268 Digital] Subdevices: 1/1 Subdevice #0: subdevice #0 card 0: Intel [HDA Intel], device 3: HDMI 0 [HDMI 0] Subdevices: 1/1 Subdevice #0: subdevice #0 I was able to play a test sound on the TV using speaker-test -D plughw:0,3 -c 2 after I've unmuted S/PDIF in alsamixer. Yet, when playing a file with VLC, the only option in Audio → Audio Device is "Built-in Audio Analog Stereo". At the moment, sound is played using the laptop's speakers. How can I have VLC output the sound to the TV's speakers?
With pavucontrol (GUI) Turns out, I had to switch the profile of "Built-in Audio" to HDMI. I can do that with pavucontrol, install it with pacman -S pavucontrol. Now, sound works perfectly on the TV speakers. Since pavucontrol uses PulseAudio, this has to be installed as well: pacman -S pulseaudio. After restarting (PulseAudio's systemd job probably needed to be up), pavucontrol can connect to PulseAudio. With pulsemixer (TUI) F3 to go to Cards mode, Enter and use the arrow keys or j/k to select the adequate output, probably Digital Stereo (HDMI) Output. If it does not work, double-check in F1 Output mode if the card is not muted (m to toggle mute state). Thanks a lot to user Quasímodo for this solution! With pactl (command line) As described here, you can set the profile also from the command line with pactl set-card-profile 0 output:hdmi-stereo
Sound via HDMI on Arch Linux
I am trying to use the HDMI output on a PC (HP ZBook) with Debian (stretch). I have configured Bumblebee, it works well (glxinfo and optirun glxinfo report the expected information, and I tested complicated GLSL shaders that also work as expected). Now I would like to be able to plug a videoprojector on the HDMI. I have read here [1] that intel-virtual-output can be used to configure it when the HDMI is connected on the NVidia board (using a VIRTUAL output that can be manipulated by xrandr). However, intel-virtual-output says: no VIRTUAL outputs on ":0" When I do xrandr -q, there is no VIRTUAL output listed, I only have: Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 8192 x 8192 eDP-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 345mm x 194mm 1920x1080 60.02*+ 59.93 1680x1050 59.95 59.88 1600x1024 60.17 ... other video modes ... 400x300 60.32 56.34 320x240 60.05 DP-1 disconnected (normal left inverted right x axis y axis) HDMI-1 disconnected (normal left inverted right x axis y axis) DP-2 disconnected (normal left inverted right x axis y axis) HDMI-2 disconnected (normal left inverted right x axis y axis) My installed version of xserver-xorg-video-intel is: xserver-xorg-video-intel_2.99.917+git20160706-1_amd64.deb Update (Sat. Dec. 09 2016) I have updated Debian, and now X crashes when second monitor is active when I starting some applications (for instance xemacs). Sat. Dec. 17 2016: Yes, found out ! (updated the answer). Update (Wed Sep 27 2017) The method works in 99% of the cases, but last week I tried a beamer that only accepts 50Hz modes, and could not get anything else than 60Hz (so it did not work). Anybody knows how to force 50Hz modes ? Update (Tue 01 Oct 2019) Argh! Broken again: After updating X and the NVidia driver, optirun now crashes (/var/log/Xorg.8.log says crash in Xorg, OsLookupColor+0x139). Update (07 Oct 2019) Found a temporary fix (updated answer). 
[1] https://github.com/Bumblebee-Project/Bumblebee/wiki/Multi-monitor-setup
Yes, found out ! To activate VIRTUAL output of the intel driver, you need to create a 20-intel.conf file in the Xorg configuration directory (/usr/share/X11/xorg.conf.d under Debian stretch, found out by reading /var/log/Xorg.0.log)

Section "Device"
    Identifier "intelgpu0"
    Driver "intel"
    Option "VirtualHeads" "2"
EndSection

My /etc/bumblebee/xorg.conf.nvidia is as follows:

Section "ServerLayout"
    Identifier "Layout0"
    Option "AutoAddDevices" "true"
    Option "AutoAddGPU" "false"
EndSection

Section "Device"
    Identifier "DiscreteNvidia"
    Driver "nvidia"
    VendorName "NVIDIA Corporation"
    Option "ProbeAllGpus" "false"
    Option "NoLogo" "true"
    Option "AllowEmptyInitialConfiguration"
EndSection

Section "Screen"
    Identifier "Screen0"
    Device "DiscreteNVidia"
EndSection

Some explanations: it needs a "Screen" section, else it tries to use the Intel device declared in 20-intel.conf (that we just added before, oh my...). It also needs "AllowEmptyInitialConfiguration" to remain able to start with optirun when no external monitor is attached. With this configuration and starting intel-virtual-output, I was able to access my HDMI port. Yeehaa !!! Troubleshooting: if optirun or intel-virtual-output do not work, take a look at /var/log/Xorg.8.log (bumblebee creates an X server with display :8 used internally). Notes I read at several places that KeepUnusedXServer should be set to true and PMMethod to none in /etc/bumblebee/bumblebee.conf; I did not do that and it works fine. If I do that, it works, but then the discrete GPU remains on even after exiting an optirun-ed application or killing intel-virtual-output, which I did not want. More notes Something else that made me bang my head on the wall was deactivating Nouveau and starting the Intel X server: it needs to be done by flags passed to the kernel, specified in GRUB parameters.
In /etc/default/grub, I have the following line:

GRUB_CMDLINE_LINUX_DEFAULT="quiet blacklist.nouveau=1 i915.modeset=1 gfxpayload=640x480 acpi_backlight=vendor acpi_osi=! acpi_osi=\"Windows 2009\""

(beware the quotes and escaped quotes). Some explanations: it avoids loading nouveau (which is incompatible with the Nvidia X server), and tells the Intel driver to go to graphics mode right at boot time. If you do not do that, then the Intel X server cannot start, and it falls back to a plain old VESA server with CPU-side 3D rendering. The acpi_xxx flags are required on this specific machine to overcome a BIOS bug that makes it crash when going into graphics mode with the discrete GPU off. Note that this is specific to this particular notebook (HP ZBook portable workstation); it may be unnecessary or differ for other laptops. Update (Dec 6 2017) With the latest Debian distro (Buster), "i915.modeset=1 gfxpayload=640x480" is unnecessary. To remove nouveau, I also needed to create a nouveau.conf file in /etc/modprobe.d with "blacklist nouveau" in it, then recreate the ramdisk with "update-initramfs -u". Reboot and make sure "nouveau" is not loaded anymore with "lsmod | grep nouveau". Update (Dec 17 2016) With the latest xorg-server (1.19), there seems to be a problem in a RandR function that manages gamma when used with intel-virtual-output. Here is the procedure to patch the X server and get it to work:

sudo apt-get build-dep xserver-xorg-core
apt-get source xorg-server
edit hw/xfree86/modes/xf86RandR12.c, line 1260, and insert "return" (so that the function xf86RandR12CrtcComputeGamma() does nothing)
dpkg-buildpackage -rfakeroot -us -uc
cd ..
sudo dpkg -i xserver-xorg-core_n.nn.n-n_amd64.deb

(replace the n.nn.n-n with the correct version), reboot and Yehaa !! works again !
(but it's a quick and dirty fix) Update: filed a bug report (it was already known, and was just fixed): https://bugs.freedesktop.org/show_bug.cgi?id=99129 How I figured out: installed xserver-xorg-core-dbg and did gdb /usr/lib/xorg/Xorg <xorg pid> from another machine through ssh. Update (Jan 11 17) It seems that the bug is now fixed in the latest Debian packages. Update (Jan 24 18) When you want to plug in a beamer for doing a presentation and need to configure everything right before starting (intel-virtual-output + xrandr), it can be stressful. Here is a little script that does the job (disclaimer: a lot of room for improvement, regarding style etc...):

# beamer.sh: sets Linux display for doing a presentation,
# for bumblebee configured on a laptop that has the HDMI
# plugged on the NVidia board.
#
# Bruno Levy, Wed Jan 24 08:45:45 CET 2018
#
# Usage:
#   beamer.sh widthxheight
#   (default is 1024x768)

# Note: output1 and output2 are hardcoded below,
# change according to your configuration.
output1=eDP1
output2=VIRTUAL1

# Note: I think that the following command should have done
# the job, but it does not work.
# xrandr --output eDP1 --size 1024x768 --output VIRTUAL1 --size 1024x768 --same-as eDP1
# My guess: --size is not implemented with VIRTUAL devices.
# Thus I try to find a --mode that fits my needs in the list of supported modes.

wxh=$1
if [ -z "$wxh" ]; then
   wxh=1024x768
fi

# Test whether intel-virtual-output is running and start it.
ivo_process=`ps axu |grep 'intel-virtual-output' |egrep -v 'grep'`
if [ -z "$ivo_process" ]; then
   intel-virtual-output
   sleep 3
fi

# Mode names on the primary output are simply wxh (at least on
# my configuration...)
output1_mode=$wxh
echo Using mode for $output1: $output1_mode

# Mode names on the virtual output are like: VIRTUAL1.ID-wxh
# Try to find one in the list that matches what we want.
output2_mode=`xrandr |grep $output2\\\. |grep $wxh |awk '{print $1}'`

# There can be several modes, take the first one.
output2_mode=`echo $output2_mode |awk '{print $1}'`
echo Using mode for $output2: $output2_mode

# Showtime !
xrandr --output $output1 --mode $output1_mode --output $output2 --mode $output2_mode --same-as $output1

Update (10/07/2019) A "fix" for the new crash: write what follows in a script (call it bumblebee-startx.sh for instance):

optirun ls # to load kernel driver
/usr/lib/xorg/Xorg :8 -config /etc/bumblebee/xorg.conf.nvidia \
   -configdir /etc/bumblebee/xorg.conf.d -sharevts \
   -nolisten -verbose 3 -isolateDevice PCI:01:00:0 \
   -modulepath /usr/lib/nvidia/nvidia,/usr/lib/xorg/modules/

(replace PCI:nn:nn:n with the address of your NVidia card, obtained with lspci) Run this script from a terminal window as root (sudo bumblebee-startx.sh) and keep the terminal open; then optirun and intel-virtual-output work as expected (note: sometimes I need to run xrandr in addition to get the screen/videoprojector detected). Now I do not understand why the very same command started from bumblebee crashes, so many mysteries here ... (but at least it gives a temporary fix). How I figured out: wrote a 'wrapper' script to start the X server, declared it as XorgBinary in bumblebee.conf, made it save the command line ($*) to a file, tried some stuff involving LD_PRELOADing a patch to the X server to fix the crash in osLookupColor (did not work), but when I tried to launch the same command line by hand, it worked, and it continued working without my patch (but I still do not understand why). Update 11/15/2019 After updating, I experienced a lot of flickering, making the system unusable. Fixed by adding the kernel parameter i915.enable_psr=0 (in /etc/default/grub, then sudo update-grub). If you want to know, PSR means 'panel self refresh', a power-saving feature of Intel GPUs (that can cause screen flickering).
Can't manage to activate HDMI on a laptop (that has Optimus / Bumblebee)
I have my machine connected over HDMI to a receiver. But when I try to use more than two channels with PulseAudio, I only get two. pacmd list cards shows the card, but does not show an HDMI profile with more than two channels. I have confirmed that 7.1 sound works via ALSA: pasuspender -- speaker-test -D hdmi -c 8 -m FL,FC,FR,RR,RRC,RLC,RL,LFE Produces static that goes around the room.
In PulseAudio, each sound card has a profile set associated with it. A profile set contains multiple profiles, and those are the profiles that you see when listing the cards (or when looking in the various PulseAudio GUIs). There is a default profile, which primarily contains things useful for analog sound output. There is also an extra-hdmi profile that is automatically applied to some HDMI outputs, and will give options up to 5.1. Both of these profiles are unfortunately in /usr/share/pulseaudio/alsa-mixer/profile-sets, and thus you can't really edit them (I filed Debian bug 736708 about this). According to the documentation, you could disable udev-based autodiscovery and manually configure everything; that lets you specify the full path to a profile. But it turns out that, while it isn't documented, udev can specify a full path, too.

Set up a udev rule to assign a profile set

You assign a profile set in a udev rule by setting the PULSE_PROFILE_SET udev environment variable. It's documented to only take a file in the aforementioned /usr subdirectory, but a full path works as well. In my case, I created this rule:

# cat /etc/udev/rules.d/95-local-pulseaudio.rules
ATTRS{vendor}=="0x8086", ATTRS{device}=="0x1c20", ENV{PULSE_PROFILE_SET}="/etc/pulse/my-hdmi.conf"

You will need to use the appropriate PCI vendor and device numbers, which you can easily obtain from lspci -nn. After creating the udev rule, you can apply it immediately with udevadm trigger -s sound. You will probably want to rebuild your initramfs as well (update-initramfs -u). Confirm that the udev rule took effect with udevadm info --query=all --path /sys/class/sound/card0 (use the appropriate card number, of course). You should see E: PULSE_PROFILE_SET=/etc/pulse/my-hdmi.conf in the output. If not, do not continue. It won't work. Something is wrong with your udev rules (or maybe you didn't trigger them; you could always try rebooting).
Create the /etc/pulse/my-hdmi.conf file

Note: The channel map is apparently system-specific. You'll need to experiment to get it right for your system. I was lucky, my 7.1 layout just involves dropping the final items to build 5.1, 4.0, etc. Instructions are below. This is a lot of copy & paste, mostly. Each section differs in (a) name, (b) description, (c) channel map, (d) [optional] priority.

[General]
auto-profiles = yes

[Mapping hdmi-stereo]
device-strings = hdmi:%f
channel-map = front-left,front-right
description = Digital Stereo (HDMI)
priority = 4
direction = output
paths-output = hdmi-output-0

[Mapping hdmi-surround-40]
device-strings = hdmi:%f
channel-map = front-left,front-right,rear-left,rear-right
description = Digital Quadrophonic (HDMI)
priority = 1
direction = output
paths-output = hdmi-output-0

[Mapping hdmi-surround-51]
device-strings = hdmi:%f
channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe
description = Digital Surround 5.1 (HDMI)
priority = 2
direction = output
paths-output = hdmi-output-0

[Mapping hdmi-surround-71]
description = Digital Surround 7.1 (HDMI)
device-strings = hdmi:%f
channel-map = front-left,front-right,rear-left,rear-right,front-center,lfe,side-left,side-right
priority = 3
direction = output
paths-output = hdmi-output-0

Now, to test this: Restart PulseAudio: pulseaudio -k, as your normal user, assuming you're using per-user daemons (the default). Start it up again; even a simple aplay -l will work. Switch to the 7.1 profile. Personally, I used pactl set-card-profile 0 "output:hdmi-surround-71" to do this, but a GUI will work perfectly well, too. Run speaker-test -c 8 -t wav. It should start announcing speaker names, hopefully the correct name out of each speaker. If the names don't come from the correct speaker, you'll have to change the channel-map to get them right. After each channel-map change, you must restart PulseAudio again. Bonus!
More useful settings

In /etc/pulse/daemon.conf, there are a few settings you may want to change:

enable-remixing — If this is on, a stereo signal will have its left channel played out of all three of your left speakers, and its right channel out of your right speakers. If off, it'll only come out of the front two. Note that you can also change the profile to stereo (to only send stereo sound out the HDMI port, and let your receiver decide how to map it to speakers).

enable-lfe-remixing — Similar, but for remixing to the LFE (subwoofer) channel.

default-sample-format — If your HDMI setup supports greater than 16-bit audio, you may want to increase this to s32le (from the default s16le).

default-sample-rate, alternate-sample-rate — You may want to swap these (and maybe even disable 44.1KHz entirely) if you mostly use DVD-source material, which is typically 48KHz. Or, if your HDMI receiver supports it, you can go all the way up to 192KHz. Note that 176.4KHz is an even multiple of 44.1KHz, while 192KHz is an even multiple of 48KHz. See below for how to determine what your receiver supports.

default-sample-channels — Doesn't really seem to matter. The profile probably overrides it...

Naturally, you'll have to restart PulseAudio after changing this file.

Bonus Again! Seeing What Your Receiver Supports

There are eld.* files in /proc/asound which tell you what the other end of an HDMI link claims to support.
For example: # cat /proc/asound/card0/eld#3.0 monitor_present 1 eld_valid 1 monitor_name TX-SR606 connection_type HDMI eld_version [0x2] CEA-861D or below edid_version [0x3] CEA-861-B, C or D manufacture_id 0xcb3d product_id 0x863 port_id 0x0 support_hdcp 0 support_ai 1 audio_sync_delay 0 speakers [0x4f] FL/FR LFE FC RL/RR RLC/RRC sad_count 8 sad0_coding_type [0x1] LPCM sad0_channels 2 sad0_rates [0x1ee0] 32000 44100 48000 88200 96000 176400 192000 sad0_bits [0xe0000] 16 20 24 sad1_coding_type [0x1] LPCM sad1_channels 8 sad1_rates [0x1ee0] 32000 44100 48000 88200 96000 176400 192000 sad1_bits [0xe0000] 16 20 24 sad2_coding_type [0x2] AC-3 sad2_channels 8 sad2_rates [0xe0] 32000 44100 48000 sad2_max_bitrate 640000 sad3_coding_type [0x7] DTS sad3_channels 8 sad3_rates [0xc0] 44100 48000 sad3_max_bitrate 1536000 sad4_coding_type [0x9] DSD (One Bit Audio) sad4_channels 6 sad4_rates [0x40] 44100 sad5_coding_type [0xa] E-AC-3/DD+ (Dolby Digital Plus) sad5_channels 8 sad5_rates [0xc0] 44100 48000 sad6_coding_type [0xb] DTS-HD sad6_channels 8 sad6_rates [0x1ec0] 44100 48000 88200 96000 176400 192000 sad7_coding_type [0xc] MLP (Dolby TrueHD) sad7_channels 8 sad7_rates [0x1480] 48000 96000 192000 So you can see my receiver supports LPCM (Linear PCM, i.e., uncompressed audio) at up to 8 channels, 192KHz, 24-bit sound. It also supports AC3, DTS, DSD, DD+, DTS-HD, and Dolby TrueHD. So if I have files encoded in those, I can pass-through those formats (if my media player supports it, of course. mpv probably does).
How do I configure PulseAudio for 7.1 Surround Sound over HDMI?
I have a notebook running Kubuntu Precise (12.04) which I occasionally use for watching videos. When I do, I plug in an HDMI cable connected to an A/V receiver with an HDMI monitor attached to it. When I watch videos this way, I still need to use the notebook display when I'm interacting with the system to control playback, etc. The text on the HDMI monitor is hard to read from where I sit. When I plug in the HDMI cable, Kubuntu detects it, but I have to go through a weird dance sequence (that works, but is convoluted) to get it set up correctly every time for both video and audio. To fix this, I'm trying to write a bash script with xrandr to do it right the first time. I got the basic idea from Peoro's answer to this U&L Q&A titled: A tool for automatically applying RandR configuration when external display is plugged in. About my script My script (included below) works, but needs improvement. It sets the video mode correctly for the HDMI monitor, but the LVDS1 monitor (on the notebook) changes to display only the upper left portion of the desktop - which is a problem because it cuts off window scroll bars on the right and the taskbar on the bottom. I tried fixing this with --scale, but my first attempt messed things up sufficiently that I had to reboot to get a working display back. Is there a way to make both displays show the same content, but with each one using its own separate preferred resolution? Or, at least, a way to set the notebook display so that the whole desktop is still accessible when the HDMI display is in use? Since I'm debugging the script, it isn't cleaned up yet. I may want to make it do more later.
My script #!/bin/bash ## hdmi_set ## Copyleft 11/13/2013 JPmicrosystems ## Adapted from ## https://unix.stackexchange.com/questions/4489/a-tool-for-automatically-applying-randr-configuration-when-external-display-is-p ## Answer by peoro # setting up new mode for my VGA ##xrandr --newmode "1920x1080" 148.5 1920 2008 2052 2200 1080 1089 1095 1125 +hsync +vsync ##xrandr --addmode VGA1 1920x1080 ##source $HOME/bin/bash_trace # default monitor is LVDS1 MONITOR=LVDS1 # functions to switch from LVDS1 to HDMI and vice versa function ActivateHDMI { echo "Switching to HDMI" ##xrandr --output HDMI1 --mode 1920x1080 --dpi 160 --output LVDS1 --off ##xrandr --output HDMI1 --same-as LVDS1 xrandr --output HDMI1 --mode 1920x1080 xrandr --output LVDS1 --mode 1366x768 MONITOR=HDMI1 } function DeactivateHDMI { echo "Switching to LVDS1" xrandr --output HDMI1 --off --output LVDS1 --auto MONITOR=LVDS1 } # functions to check if VGA is connected and in use function HDMIActive { [ $MONITOR = "HDMI1" ] } function HDMIConnected { ! xrandr | grep "^HDMI1" | grep disconnected } ## MONITOR doesn't do anything because it's not preserved between script executions # actual script ##while true ##do if HDMIConnected then ActivateHDMI fi if ! HDMIConnected then DeactivateHDMI fi ##sleep 1s ##done Output from xrandr Here's what xrandr sees: bigbird@ramdass:~$ xrandr Screen 0: minimum 320 x 200, current 1366 x 768, maximum 8192 x 8192 LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 194mm 1366x768 60.0*+ 1360x768 59.8 60.0 1024x768 60.0 800x600 60.3 56.2 640x480 59.9 VGA1 disconnected (normal left inverted right x axis y axis) HDMI1 connected (normal left inverted right x axis y axis) 1920x1080 60.0 + 1680x1050 60.0 1280x1024 60.0 1440x900 59.9 1280x720 60.0 1024x768 60.0 800x600 60.3 720x480 59.9 640x480 60.0 720x400 70.1 DP1 disconnected (normal left inverted right x axis y axis)
You should probably simply use kscreen instead, which should solve all your issues. It will remember the settings of a previously connected screen and will restore them, once you connect it again. If you still have such issues while still using kscreen, it should be worth a bug report. As Kubuntu 12.04 is quite old, you probably should have a look at this.
How to write a bash script to configure my displays when HDMI is plugged in or unplugged
I have a Monitor connected to my machine through HDMI. Now if anyone were to switch off the Monitor, through either the Soft Buttons on it, or by removing its Power Cord, I wish to be notified and run a Shell Script. I tried many ways to identify when a monitor is switched on or off (It's always connected). The only technique that comes close is: # ddccontrol -p When the external monitor is connected, this returns all kinds of details about the monitor. I could write a script to parse the output for that. However this seems like an unreliable technique for un-supervised usage. Is there any way through which I could obtain a Yes/No answer to whether the Monitor is Switched On/Off? EDIT: It would be preferable if I can get a message on status change. Since this will be running continuously for days, I do not wish to poll for the status of the monitor. Instead, in case it is switched off, I would like to be informed through a message.
I don't see anything wrong with parsing the output of ddccontrol. DDC is the right way to get the information you want. Unlike with VGA, where DDC was created, the HDMI connector was designed to include DDC from the start. They even went back and modified the DDC standard to add more features for HDMI, calling it E-DDC. On Linux, the userland tool for accessing DDC info is ddccontrol, so the fact that it doesn't have a flag that makes it do what you want out of the box is no reason to avoid using what's currently provided. If anything, it's an invitation to crack the code open and provide a patch. Meanwhile, here's a short Perl script to limp by with:

#!/usr/bin/perl
# monitor-on.pl
open(my $cmd, '-|', 'ddccontrol -p')
    or die "Could not run ddccontrol: $!\n";
local $/ = undef;   # slurp command output
my $out = <$cmd>;
close $cmd;
if ($out =~ m/> Power control/) {
    if ($out =~ m/id=dpms/) {
        print "asleep\n";
    }
    elsif ($out =~ m/id=on/) {
        print "on\n";
    }
    elsif ($out =~ m/id=standby/) {
        print "off\n";
    }
    else {
        print "missing?\n";
    }
}
else {
    # Monitor is either a) not DDC capable; or b) unplugged
    print "missing!\n";
}

This script is untested. I don't have any non-headless ("headed"?) Linux boxes to test with here. If it doesn't work, the fix should be obvious. It could be made smarter. It won't cope with multiple monitors right now, and it's possible its string parsing could be confused, since it doesn't check that the power status strings it searches for are within the > Power control section.
Detect if HDMI Monitor is switched off
OS: GNOME 3.30.2 on Debian GNU/Linux 10 (64-bit) My laptop has no output from the HDMI port. The monitor shows "NO INPUT DETECTED". Previously I had Kubuntu installed, and before that I had Windows 10. Both worked fine, which means this is not a hardware issue. I have tried: Using the package "ARandR" to scan for new displays. Plugging in different monitors and HDMI cords. Booting the machine with the display plugged in. SPECS: LAPTOP: Acer Nitro 7 (AN715-51) GPU: GeForce GTX 1650 CPU: Intel Core i7-9750H Output of xrandr: Screen 0: minimum 320 x 200, current 1920 x 1080, maximum 8192 x 8192 eDP-1 connected primary 1920x1080+0+0 (normal left inverted right x axis y axis) 344mm x 193mm 1920x1080 60.01*+ 60.01 59.97 59.96 59.93 1680x1050 59.95 59.88 1600x1024 60.17 1400x1050 59.98 1600x900 59.99 59.94 59.95 59.82 1280x1024 60.02 1440x900 59.89 1400x900 59.96 59.88 1280x960 60.00 1440x810 60.00 59.97 1368x768 59.88 59.85 1360x768 59.80 59.96 1280x800 59.99 59.97 59.81 59.91 1152x864 60.00 1280x720 60.00 59.99 59.86 59.74 1024x768 60.04 60.00 960x720 60.00 928x696 60.05 896x672 60.01 1024x576 59.95 59.96 59.90 59.82 960x600 59.93 60.00 960x540 59.96 59.99 59.63 59.82 800x600 60.00 60.32 56.25 840x525 60.01 59.88 864x486 59.92 59.57 800x512 60.17 700x525 59.98 800x450 59.95 59.82 640x512 60.02 720x450 59.89 700x450 59.96 59.88 640x480 60.00 59.94 720x405 59.51 58.99 684x384 59.88 59.85 680x384 59.80 59.96 640x400 59.88 59.98 576x432 60.06 640x360 59.86 59.83 59.84 59.32 512x384 60.00 512x288 60.00 59.92 480x270 59.63 59.82 400x300 60.32 56.34 432x243 59.92 59.57 320x240 60.05 360x202 59.51 59.13 320x180 59.84 59.32 Output of xrandr --listproviders: Providers: number : 1 Provider 0: id: 0x43 cap: 0xf, Source Output, Sink Output, Source Offload, Sink Offload crtcs: 3 outputs: 1 associated providers: 0 name:modesetting Output of lspci -nn | grep VGA: 00:02.0 VGA compatible controller [0300]: Intel Corporation UHD Graphics 630 (Mobile) [8086:3e9b] 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:1f91] (rev a1) Output of aplay -l: card 0: PCH [HDA Intel PCH], device 0: ALC255 Analog [ALC255 Analog] Subdevices: 0/1 Subdevice #0: subdevice #0 Output of lshw -c video: *-display description: VGA compatible controller product: NVIDIA Corporation vendor: NVIDIA Corporation physical id: 0 bus info: pci@0000:01:00.0 version: a1 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vga_controller bus_master cap_list rom configuration: driver=nvidia latency=0 resources: irq:154 memory:a3000000-a3ffffff memory:90000000-9fffffff memory:a0000000-a1ffffff ioport:5000(size=128) memory:a4000000-a407ffff *-display description: VGA compatible controller product: Intel Corporation vendor: Intel Corporation physical id: 2 bus info: pci@0000:00:02.0 version: 00 width: 64 bits clock: 33MHz capabilities: pciexpress msi pm vga_controller bus_master cap_list rom configuration: driver=i915 latency=0 resources: irq:128 memory:a2000000-a2ffffff memory:b0000000-bfffffff ioport:6000(size=64) memory:c0000-dffff
You have a laptop with two GPUs, using Nvidia's "Optimus" technology. The low-power CPU-integrated Intel iGPU is physically wired to the laptop's internal display, while the HDMI output is wired to the more powerful Nvidia discrete GPU. The PCI device ID 10de:1f91 identifies the Nvidia GPU as a GeForce GTX 1650 Mobile / Max-Q; Nvidia's codename for that GPU is TU117M.

The laptop may or may not have the capability of switching the outputs between GPUs; if such a capability exists, vga_switcheroo is the name of the kernel feature that controls it. You would then need to have a driver for the Nvidia GPU installed (either the free nouveau driver or Nvidia's proprietary one; since this Nvidia GPU model is quite new, support for it in nouveau is still very much work in progress), and then trigger the switch to the Nvidia GPU before starting up the X server.

If there is no output-switching capability (known as "muxless Optimus"), you instead need to pass the rendered image from the active GPU to the other one in order to use all the outputs. With the drivers (and any required firmware) for both GPUs installed, the output of xrandr --listproviders should list two providers instead of one, and you can then use xrandr --setprovideroutputsource <other GPU> <active GPU> to make the outputs of the other GPU available to the active GPU. Unfortunately, the Nvidia proprietary driver seems to be able to participate in this sharing only in the role of the active GPU, so when using that driver, you might want to keep two different X server configurations to be used as appropriate.
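As a concrete illustration, the provider setup typically looks like the following (a sketch only: the provider names "modesetting" and "NVIDIA-0" are the usual ones for the Intel modesetting driver and the Nvidia proprietary driver, but you should substitute whatever names your own xrandr --listproviders reports):

```shell
# List the providers known to the running X server.
# With drivers for both GPUs loaded, two entries should appear instead of one.
xrandr --listproviders

# With the Nvidia GPU as the active (rendering) provider, let the Intel
# iGPU's outputs display images rendered by the Nvidia GPU.
# Syntax: xrandr --setprovideroutputsource <other GPU> <active GPU>
xrandr --setprovideroutputsource modesetting NVIDIA-0

# Re-detect and enable the newly available outputs.
xrandr --auto
```

These commands must be run inside the X session (for example from an xinitrc or display-manager setup script), since the provider configuration is a property of the running X server, not a persistent setting.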
One configuration would be for use with external displays (and probably with the power adapter plugged in too), with the Nvidia GPU as the active one, feeding data through the iGPU for the laptop's internal display.

The other configuration would be appropriate when you are on battery power and don't need maximum GPU performance: here you would use the Intel iGPU as the active one, and might want to shut down the Nvidia GPU entirely to save power (achievable with the bumblebee package). If you want select programs to have more GPU performance, you could use the primus package to render on the Nvidia GPU with no physical screen attached, and then pass the results to the Intel iGPU for display.

With Kubuntu, you were probably asked about using proprietary drivers at installation time and answered "yes", so it likely set up one of the configurations described above for you. Debian, however, tends to be stricter about the principles of Open Source software, so using proprietary drivers is not quite as seamless.

Generally, the combination of the stable release of Debian (currently Buster) and the latest-and-greatest Nvidia GPU tends not to be an easy path to happy results, because the Debian-packaged versions of Nvidia's proprietary drivers lag behind Nvidia's own releases: the driver version currently in the non-free section of Debian 10 is 418.116, while the minimum version required to support the GeForce GTX 1650 Mobile seems to be 430. However, the buster-backports repository has version 440 available.

To use it, you'll need to add the backports repository to your APT configuration. In short, add this line to the /etc/apt/sources.list file:

    deb http://deb.debian.org/debian buster-backports non-free

Then run apt-get update as root.
Now your regular package management tools should have the backports repository available, and you could use apt-get -t buster-backports install nvidia-driver to install a new enough version of the Nvidia proprietary driver to support your GPU.
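For the configuration with the Nvidia GPU as the active one, an X server configuration along these lines is typically needed (a sketch only; the BusID values must match your lspci output, here 01:00.0 for the Nvidia GPU and 00:02.0 for the Intel iGPU, and would go in a file such as /etc/X11/xorg.conf):

```
Section "ServerLayout"
    Identifier "layout"
    Screen 0 "nvidia"
    Inactive "intel"
EndSection

Section "Device"
    Identifier "nvidia"
    Driver "nvidia"
    BusID "PCI:1:0:0"
EndSection

Section "Screen"
    Identifier "nvidia"
    Device "nvidia"
    # Allow X to start even though the Nvidia GPU has no display
    # attached at startup (the internal panel is on the iGPU).
    Option "AllowEmptyInitialConfiguration"
EndSection

Section "Device"
    Identifier "intel"
    Driver "modesetting"
    BusID "PCI:0:2:0"
EndSection

Section "Screen"
    Identifier "intel"
    Device "intel"
EndSection
```

With this in place, the xrandr --setprovideroutputsource step described above still needs to run once the X server is up, so that the iGPU-driven internal panel shows what the Nvidia GPU renders.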
Debian 10 [Buster]: HDMI Input Not detected