How can I create GPT labels on a working system?

  NAME        STATE     READ WRITE CKSUM
  zroot       ONLINE       0     0     0
    mirror-0  ONLINE       0     0     0
      ad4p3   ONLINE       0     0     0
      ad6p3   ONLINE       0     0     0

I want ad4p3 to become disk0 and ad6p3 to become disk1:

  NAME        STATE     READ WRITE CKSUM
  zroot       ONLINE       0     0     0
    mirror-0  ONLINE       0     0     0
      disk0   ONLINE       0     0     0
      disk1   ONLINE       0     0     0
You can either use gpart or glabel. As you've already created the partitions, gpart modify -i <index> -l diskX is probably the best way to do it. Be aware that with ZFS on FreeBSD you'll have to refer to these as gpt/disk0 and gpt/disk1, not just disk0 and disk1.

I'd suggest:

1. Remove one half of the mirror from the zpool.
2. Apply the label.
3. Re-add it to the mirror and wait for the mirror to resilver.

Then repeat for the other half.
FreeBSD ZFS: create GPT labels on a working system
I am trying to create a swap partition in my script using parted, based on the Arch Linux guidance: https://wiki.archlinux.org/title/Parted#Partition_schemes

Somehow it always skips the file system type and instead uses it as the partition label. Running parted manually creates the correct filesystem type, linux-swap(v1):

  root@NAS[~]# parted /dev/sdb
  GNU Parted 3.4
  Using /dev/sdb
  Welcome to GNU Parted! Type 'help' to view a list of commands.
  (parted) mklabel gpt
  Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
  Yes/No? y
  (parted) mkpart
  Partition name?  []?
  File system type?  [ext2]? linux-swap
  Start? 0%
  End? 100%

Running parted using the command below creates an "empty" filesystem and gives the partition the label "linux-swap":

  root@NAS[~]# parted /dev/sdb -s mklabel gpt -- mkpart linux-swap 0% 100%

Comparison:

parted (manually) - correct filesystem:

  root@NAS[~]# parted /dev/sdb print
  Model: QEMU QEMU HARDDISK (scsi)
  Disk /dev/sdb: 4295MB
  Sector size (logical/physical): 512B/512B
  Partition Table: gpt
  Disk Flags:

  Number  Start   End     Size    File system     Name  Flags
   1      1049kB  4294MB  4293MB  linux-swap(v1)

parted (script command) - wrong filesystem:

  root@NAS[~]# parted /dev/sdb print
  Model: QEMU QEMU HARDDISK (scsi)
  Disk /dev/sdb: 4295MB
  Sector size (logical/physical): 512B/512B
  Partition Table: gpt
  Disk Flags:

  Number  Start   End     Size    File system  Name        Flags
   1      1049kB  4294MB  4293MB               linux-swap

What am I missing?
You cannot create swap space within the parted command. You can set up the partition label to indicate that it is swap, but it isn't really:

  dd bs=1M count=100 if=/dev/zero >/tmp/100m.img
  100+0 records in
  100+0 records out
  104857600 bytes (105 MB, 100 MiB) copied, 0.687057 s, 153 MB/s

  lo=$(losetup --show --find /tmp/100m.img); echo $lo
  /dev/loop0

  parted $lo --script --align optimal unit MiB mklabel gpt mkpart primary linux-swap 1 100% print
  Model: Loopback device (loopback)
  Disk /dev/loop0: 200MiB
  Sector size (logical/physical): 512B/512B
  Partition Table: gpt
  Disk Flags:

  Number  Start    End     Size     File system     Name     Flags
   1      1.00MiB  100MiB  99.0MiB  linux-swap(v1)  primary

Now if the swap partition had been prepared, mkswap would warn that it has overwritten it, but it doesn't:

  mkswap ${lo}p1
  Setting up swapspace version 1, size = 99 MiB (103784448 bytes)
  no label, UUID=1abc5a9d-9c2e-452f-be16-d63f7e8e6af1

Just to evidence an overwrite ("wiping old swap signature"), let's repeat the swap space preparation:

  mkswap ${lo}p1
  mkswap: /dev/loop0p1: warning: wiping old swap signature.
  Setting up swapspace version 1, size = 99 MiB (103784448 bytes)
  no label, UUID=2af1524b-101b-4e30-bdc0-2dfcadc1cde8

Finally, tear down the loopback device and delete the temporary image file:

  losetup -d $lo
  rm -f /tmp/100m.img

The conclusion here is that parted does not (and indeed cannot) prepare a swap partition for immediate use.
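As for why the one-liner labelled the partition "linux-swap" in the first place: on a GPT disk, parted's script-mode mkpart takes the arguments in the order "name fs-type start end", so in the one-liner "linux-swap" was consumed as the partition *name* and no fs-type was given at all. A sketch of the corrected invocation, run against a scratch image file rather than a real disk (assumes GNU parted is installed; the name "swap1" and path are illustrative):

```shell
# Script-mode mkpart on GPT is "mkpart NAME FS-TYPE START END", so supply
# BOTH a name and the fs-type. Rehearsed on a sparse image file (no root).
img=/tmp/swapdemo.img
truncate -s 100M "$img"

# Problematic form: "linux-swap" becomes the partition NAME, no fs-type:
#   parted -s "$img" mklabel gpt mkpart linux-swap 0% 100%

# Corrected form: name "swap1" plus fs-type "linux-swap":
parted -s "$img" mklabel gpt mkpart swap1 linux-swap 1MiB 100%
parted -s "$img" print
```

Even with the corrected one-liner, run mkswap on the resulting partition afterwards: as shown above, parted only sets the type, it does not write a swap signature.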
parted: creating a swap partition on a GPT disk (one-liner) not working
In the last 2 months, I have installed Linux Mint 19.1 on two laptops. On the older of the two, a Samsung RV511 (ca. 2011) using MBR partitioning, there have been no problems at all; Linux Mint has worked very well. With a 2018 HP Pavilion with UEFI and GPT partitioning, the reverse has been true.

Linux Mint crashed after 2 weeks of use. Thereafter, the laptop would boot, show the small Mint icon, then go to the GRUB 2 menu. At the root prompt, I changed the boot order, placing the USB stick first. This enabled me to boot from the USB stick and reinstall Mint. Yesterday, Mint crashed again. This was preceded by a warning message that the update manager was not working (coincidental?). Legacy support and Secure Boot were disabled on both occasions.

The HDD was partitioned with primary partitions for root (/), efi, boot/grub, home and swap. From what I've checked out online, it seems that the boot/grub partition is not necessary on a UEFI machine. However, not having a boot/grub partition leads to the installation process hanging immediately after the partitions have been defined. I'm at a loss as to what to do next.

EDIT: The laptop is an HP 15-cs0057tx.

Sorry for my inexact terms; by "crashed" I meant not booting. The boot/grub folder was not included among the partition definitions. The result was that the installation process froze immediately after the continue button was clicked.

I kept an edited copy of the log file at the time of the first booting failure. Below is a heavily edited version. The starred lines are error messages (highlighted in red in the log file); the other lines are comment lines highlighted in white.

  Linux version 4.15.0-48-generic (buildd@lgw01-amd64-036) (gcc version 7.3.0 (Ubuntu 7.3.0-16ubuntu3)) #51-Ubuntu SMP Apr 3 08:28:49 UTC 2019 (Ubuntu 4.15.0-48.51-generic 4.15.18)
  ...
  Secureboot could not be determined (mode 0)
  ...
  Kernel command line: Boot_Image=/boot/vmlinuz-4.15.0-generic root=UUID=60980aba-8d360-4i43-ba01-56b7fa029850 ro quiet splash
  ...
  ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
  ENERGY_PERF_BIAS: View and update with x86_energy_perf_policy(8)
  ...
  VFS: Disk quotas dquote_6.6.0
  ...
  Initialise system trusted keyrings
  Assymetric key parser '509' registered
  ...
  Key type dns_resolver registered
  ...
  Loaded compiled-in X.509 certificates
  Loaded X.509 cert 'Build time autogenerated kernel key: e70.....707
  Loaded UEFI:db cert 'Microsoft Windows Production PCA 2011: a92.....f53' linked to secondary sys keyring
  Loaded UEFI:db cert 'Microsoft Corporation UEFI CA 2011: 13a.....bd4' linked to secondary sys keyring
  Loaded UEFI:db cert 'Hewlett-Packard Company: HP UEFI Secure Boot 2013 DB key: 1d7.....bec' linked to secondary sys keyring
  *Couldn't get size: 0x800000000000000e
  ...
  *sd 1:0:0:0 [sdb] No Caching Mode page found
  *sd 1:0:0:0 [sdb] Assuming drive cache: write through
  ...
  *PKCS#7 signature not signed with trusted key
  nvidia: loading out-of-tree module taints kernel.
  nvidia: module license 'NVIDIA' taints kernel.
  Disabling lock debugging due to kernel taint
  nvidia: module verification failed: signature and/or required key missing - tainting kernel
  ...
  *fsck failed with exit status 4.
  ...
  *Failed to start File System Check on /dev/disk/by-uuid/2ad686b0-e77b-47da-bb44-5934b5fa6541.

Thank you for your interest.

EDIT: I overrode the Mint 19.1 default swap file (from ignorance, actually; I was not aware that Mint 19.1 created its own swap file. I've been using Ubuntu for years). The partitions were not encrypted.
  *fsck failed with exit status 4.

Exit status 4 from the fsck command means "filesystem contains errors that could not be corrected".

  *Failed to start File System Check on /dev/disk/by-uuid/2ad686b0-e77b-47da-bb44-5934b5fa6541.

And on another filesystem, the filesystem check did not even start successfully.

The above messages suggest the system disk might be failing. You might want to boot the system from some external media, perhaps a Linux Live DVD/USB, and check the SMART health information of the disk with e.g. smartctl -a /dev/sda or similar (adjust /dev/sda to refer to your actual system disk).

The rest of the messages don't seem critical to me. The signature & kernel tainting messages are simply caused by you using the proprietary NVidia GPU driver. Since the driver module isn't signed with the same key as the rest of the kernel modules, and the NVidia module signing key hasn't been explicitly whitelisted, the system warns you about it but lets it happen. If Secure Boot was enabled, the module would be prevented from loading. "Tainting kernel" just means any kernel crash messages will be flagged as "non-open-source modules in use; this will be hopeless to debug at kernel level unless the problem is reproduced without them."
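For reference, fsck's exit status is a bitmask (per fsck(8): 1 = errors corrected, 2 = system should be rebooted, 4 = errors left uncorrected, 8 = operational error), so a status can be decoded with simple shell arithmetic:

```shell
# Decode an fsck exit status using the bitmask documented in fsck(8).
status=4   # the value reported in the log above
if [ $(( status & 1 )) -ne 0 ]; then echo "filesystem errors corrected"; fi
if [ $(( status & 2 )) -ne 0 ]; then echo "system should be rebooted"; fi
if [ $(( status & 4 )) -ne 0 ]; then echo "filesystem errors left uncorrected"; fi
if [ $(( status & 8 )) -ne 0 ]; then echo "operational error"; fi
```

With status=4 only the "errors left uncorrected" bit is set, which is why the filesystem needs manual repair (or the disk is dying).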
On UEFI laptop, Linux Mint has crashed twice
On my Debian 8 server, HDD /dev/sda crashed. mdadm informed me via email and I had the disk replaced. After the server was back up, I copied over my GPT using sgdisk -R /dev/sdb /dev/sda. The second I hit "Enter" on my keyboard, I realized my mistake. So now I have an empty GPT on both disks.

My question is whether it is possible to re-create the GPT on /dev/sdb while the server is still running, as I have not rebooted since copying the wrong GPT.

I did a backup with sfdisk -d /dev/sdb > sdb.partition.table before the faulty HDD was replaced. But as I did not do a backup with sgdisk, the backup is completely useless, if I am correct?

Additionally I have this output from fdisk -l from before copying the GPT:

  Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 4096 bytes
  I/O size (minimum/optimal): 4096 bytes / 4096 bytes
  Disklabel type: gpt
  Disk identifier: 454774BD-960F-45C6-8C82-AE5C156444E0

  Device          Start        End    Sectors  Size Type
  /dev/sdb1        4096   33558527   33554432   16G Linux RAID
  /dev/sdb2    33558528   34607103    1048576  512M Linux RAID
  /dev/sdb3    34607104 5860533134 5825926031  2.7T Linux RAID
  /dev/sdb4        2048       4095       2048    1M BIOS boot

  Partition table entries are not in disk order.

  Disk /dev/md0: 16 GiB, 17171349504 bytes, 33537792 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 4096 bytes
  I/O size (minimum/optimal): 4096 bytes / 4096 bytes

  Disk /dev/md1: 511.7 MiB, 536543232 bytes, 1047936 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 4096 bytes
  I/O size (minimum/optimal): 4096 bytes / 4096 bytes

  Disk /dev/md2: 2.7 TiB, 2982739705856 bytes, 5825663488 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 4096 bytes
  I/O size (minimum/optimal): 4096 bytes / 4096 bytes
After researching a bit and trying out tools such as testdisk, I did not find a definitive way to restore my original GPT on /dev/sdb. So I tried using cgdisk, and it was successful, as I still had the original "sector layout" of /dev/sdb noted down:

  Disk identifier: 9F95A04D-3ECB-144D-B2A0-55CDD986072B

  Device          Start        End    Sectors  Size Type
  /dev/sdb1        4096   33558527   33554432   16G Linux RAID
  /dev/sdb2    33558528   34607103    1048576  512M Linux RAID
  /dev/sdb3    34607104 5860533134 5825926031  2.7T Linux RAID
  /dev/sdb4        2048       4095       2048    1M BIOS boot

With this information, I created partitions with the same starting sectors, the same number of sectors and the same file system types as stated above. After writing the GPT on /dev/sdb, fdisk -l /dev/sdb gave me the same output as above.

I then copied over the GPT using sgdisk -R /dev/sda /dev/sdb (this time in the correct order), and fdisk -l /dev/sda showed me the exact same "sector layout" for /dev/sda as for /dev/sdb:

  Disk identifier: 4CB38488-8B72-44AA-8449-4E4692165893

  Device          Start        End    Sectors  Size Type
  /dev/sdb1        4096   33558527   33554432   16G Linux RAID
  /dev/sdb2    33558528   34607103    1048576  512M Linux RAID
  /dev/sdb3    34607104 5860533134 5825926031  2.7T Linux RAID
  /dev/sdb4        2048       4095       2048    1M BIOS boot

All that was left to do was to resync the RAID volumes using mdadm and re-install grub2. After resyncing was done, as mentioned, I re-installed grub2 on /dev/sda (I re-installed it on /dev/sdb too, just to be sure) and generated a new device map. (I had to flush the HDD buffers to avoid the grub2 core image warnings, though.)

I rebooted the server and it came up again just fine.

IMPORTANT: I only did this GPT tinkering because I have a complete backup of my server, as I was not 100% sure this would work and I could have destroyed my partitions.
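One note on the "useless" sfdisk backup: on Debian 8's old util-linux, sfdisk indeed predated GPT support, so the dump could not be replayed there. With a current util-linux, however, an sfdisk -d dump of a GPT disk can be restored with sfdisk itself. A sketch of that round trip on a sparse image file instead of a real disk (paths are illustrative; assumes a modern sfdisk):

```shell
# Round-trip a GPT through an "sfdisk -d" dump, rehearsed on an image file.
img=/tmp/gptdemo.img
truncate -s 64M "$img"

# Stand-in for the original disk: a GPT with one 32 MiB Linux partition.
printf 'label: gpt\n,32M,L\n' | sfdisk -q "$img"

sfdisk -d "$img" > /tmp/gptdemo.dump    # the kind of backup taken with sfdisk -d

truncate -s 0 "$img"                    # simulate a blank replacement disk
truncate -s 64M "$img"

sfdisk -q "$img" < /tmp/gptdemo.dump    # replay the dump onto the blank disk
sfdisk -d "$img" | grep 'label: gpt'    # the GPT (and partition) are back
```

This restores partition boundaries and type GUIDs only, of course; the data inside the partitions has to come from the RAID resync, as in the answer above.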
(Re-)Create GPT from existing partitions on Debian 8
We have a CentOS server as a virtual machine where the / path is getting full, so we want to resize the partition from 50 GB to 70 GB. I followed this guide: https://www.thomas-krenn.com/de/wiki/LVM_vergr%C3%B6%C3%9Fern

The first step was to increase the size in the VM preferences. After this I used cfdisk to create a new partition.

Before:

  sda1            NC   Primary  GPT              53687.10 *
                       Pri/Log  Free Space       21474.84 *

After writing:

  sda1            NC   Primary  GPT              53687.10 *
  sda2                 Primary  Linux            21474.84 *

As the guide said, I first didn't do a restart and used the command partprobe:

  partprobe
  Error: The backup GPT table is not at the end of the disk, as it should be. This might mean that another operating system believes the disk is smaller. Fix, by moving the backup to the end (and removing the old backup)?
  Warning: Not all of the space available to /dev/sda appears to be used, you can fix the GPT to use all of the space (an extra 41943040 blocks) or continue with the current setting?
  Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.

After this message I was not sure, and finally did a restart. Then I tried to initialize the new partition as a PV:

  [root]# pvs
    PV         VG            Fmt  Attr PSize  PFree
    /dev/sda3  vg_atcrushftp lvm2 a--  49.31g 10.00g

and got this error message:

  [root]# pvcreate /dev/sda2
    Can't open /dev/sda2 exclusively.  Mounted filesystem?
Now I was not sure and did a df to look for it:

  [root]# df -h
  Filesystem                         Size  Used Avail Use% Mounted on
  /dev/mapper/vg_atcrushftp-lv_root   35G  8.4G   25G  26% /
  tmpfs                              1.9G     0  1.9G   0% /dev/shm
  /dev/sda2                          477M  121M  331M  27% /boot
  /dev/sda1                          200M  260K  200M   1% /boot/efi
  //192.168.0.53/pictures

  [root]# df -T
  Filesystem                        Type   1K-blocks    Used Available Use% Mounted on
  /dev/mapper/vg_atcrushftp-lv_root ext4    36380264 8720856  25804740  26% /
  tmpfs                             tmpfs    1962068       0   1962068   0% /dev/shm
  /dev/sda2                         ext4      487652  123566    338486  27% /boot
  /dev/sda1                         vfat      204580     260    204320   1% /boot/efi

A mount command shows this:

  [root]# mount
  /dev/mapper/vg_atcrushftp-lv_root on / type ext4 (rw)
  proc on /proc type proc (rw)
  sysfs on /sys type sysfs (rw)
  devpts on /dev/pts type devpts (rw,gid=5,mode=620)
  tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
  /dev/sda2 on /boot type ext4 (rw)
  /dev/sda1 on /boot/efi type vfat (rw,umask=0077,shortname=winnt)
  none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

It looks like I can't use the created partition because it is /boot. Does this mean I destroyed the information on /boot? Before I started, I created a snapshot, so I can recover the old state. What should I do next? I want to resize the partition without losing data.
Resize an LVM partition on a GPT drive

Commands:

  pvs        Shows physical volumes
  lvs        Shows logical volumes
  vgs        Shows volume groups
  vgdisplay  Shows volume groups including mount points
  lsblk      Shows block hierarchy (disk, partition, LVM)

The basic flow of necessary steps is essentially:

1. Resize the LVM partition to use the new space.
2. Resize the Physical Volume in the LVM partition to use the newly resized space.
3. Resize the Logical Volume(s) inside the Volume Group to their new sizes.
4. Resize the filesystems in each Logical Volume to match their sizes.

WARNING: MAKE A BACKUP BEFORE.

Reboot into Recovery Mode or reboot into a Live CD/USB environment, as it is not possible to resize a partition while it is online. (Since this is a GPT-partitioned disk, we have to use the parted tool instead of fdisk.)

In your Recovery Mode or Live environment, open a terminal if you haven't already got one and launch parted to examine your array by typing in:

  $ sudo parted /dev/sda
  GNU Parted 2.3
  Using /dev/sda
  Welcome to GNU Parted! Type 'help' to view a list of commands.

Now we have a "(parted)" prompt. First up, we need to switch the units of measurement we're using to sectors. Do that by issuing the following command:

  (parted) u s

Now list the existing partitions using the "print" command. You will see something similar to the following:

  (parted) print
  Model: INTEL SRCSATAWB (scsi)
  Disk /dev/sda: 19521474560s
  Sector size (logical/physical): 512B/512B
  Partition Table: gpt

  Number  Start     End           Size          File system  Name          Flags
   1      2048s     1953791s      1951744s      ext4         Boot          boot
   2      1953792s  19521474526s  19519520735s               MYSERVER_HDD  lvm

NOTE: You may be shown a warning message advising that the GPT table is not at the end of the disk, saying that the disk size is smaller than the system expects it to be (because you resized your array, remember?). It will ask you if you wish to fix this. Type in "F" and hit enter. You may then be warned that the kernel is not yet aware of changes being made. Respond to this with Ignore by typing in "I" and hitting enter. You may be prompted with the latter message several times whilst using parted; respond "Ignore" to it each time. In this environment, the current kernel does not need to be aware of the changes because we're going to reboot at the end of it anyway.

Make a note of two items from the output above: the total sectors of the device (19521474560s here, which represents the total size of your newly expanded array) and the start sector of the second partition (1953792s here). Please double-check your figures and make sure they are right. Any mistakes here can DESTROY YOUR DATA.

Now we're going to resize the second partition to use all of the newly created space in the array. Unfortunately parted has no ability to resize a GPT partition; instead you must remove the partition and recreate it. Don't worry, as scary as it sounds, this process will NOT change any of the data on the drive. It simply rewrites the geometry data relating to the start and end of the partition on the drive only.

Remove the second partition with the following command:

  (parted) rm 2

Now let's create a new partition to replace it. Type in the following:

  (parted) mkpart

You will be asked for a name for the partition. Give it the same name you had for it before, or specify a new name if you like:

  Partition name?  []? MYSERVER_HDD

You will then be asked about the file system type. You can't specify LVM here, so just hit enter to accept "ext2" - we'll change it later:

  File system type?  [ext2]?

You will then be asked for the start sector. Specify the value of the start of the second partition that you recorded earlier (don't write the letter "s" on the end):

  Start? 1953792

You will then be asked for the end sector. Specify the value of the total size of the drive that you recorded earlier minus one. If you specify the actual value, you will get an error saying that the value is "outside of the device", which is why you specify a value just inside that limit:

  End? 19521474559

You will then be told that the system cannot actually make a partition up to that location (because there's another partition on the disk taking up space), so the system will offer the next closest value, which will just happen to be the maximum space remaining on the array. Simply respond "Y" for Yes:

  Warning: You requested a partition from 1953792s to 19521474559s.
  The closest location we can manage is 1953792s to 19521165533s.
  Is this still acceptable to you?
  Yes/No?

Now we need to change the partition type to LVM as follows:

  (parted) toggle
  Partition number? 2
  Flag to Invert? lvm

We're now done with our partitioning, so quit parted with the quit command:

  (parted) quit

Reboot your server and boot up as normal. If you check your drive using parted or fdisk, it should now show that the total partition size includes the newly added space in your array, but nothing is using it yet.

Now it's time to tell LVM to use the new space by resizing the Physical Volume with the following command:

  $ sudo pvresize /dev/sda2

Once completed, you can check out the new free space (shown as free extents) in the LVM Physical Group by issuing the command:

  $ sudo pvdisplay

Now we can start allocating this newly acquired free space to our LVM Logical Volumes. First up, let's get a list of all our defined Logical Volumes:

  $ sudo lvdisplay

Note down the "LV Name" of each Logical Volume you wish to add space to.

Now let's resize the Logical Volume. There are two ways you can do this: specify an absolute value that defines the new size of that Logical Volume, or specify a value that will add to its existing size. In this first example, I'm going to change the size of my Logical Volume called /dev/myserver/mylogicalvolume to an absolute size of 20 gigabytes:

  $ sudo lvextend -L20G /dev/myserver/mylogicalvolume

...which will make the /dev/myserver/mylogicalvolume Logical Volume 20 gigabytes in size regardless of its previous size. It does NOT add to the existing size.

Alternatively, to add space to the existing size, use the following command instead (note the plus sign between the -L and the 20G):

  $ sudo lvextend -L+20G /dev/myserver/mylogicalvolume

...which will add 20 gigabytes of space to the /dev/myserver/mylogicalvolume Logical Volume. If it was 10 gigabytes in size before, it will now be 30 gigabytes in size.

Alternatively, if you wish to allocate all remaining free space to a Logical Volume, issue the following command (notice that the parameter is a lowercase L instead of a capital L):

  $ sudo lvextend -l +100%FREE /dev/myserver/mylogicalvolume

Repeat for all Logical Volumes you are extending. There are other ways to allocate space as well, but the above are the most common methods. See the man page of the lvextend command for more information.

You can confirm the new sizes for each Logical Volume by issuing the following command:

  $ sudo lvdisplay

We're nearly there! All that is left to do now is to resize the filesystems contained within our Logical Volumes to use the newly allocated space. Again, using the LV Names you recorded earlier, run the following command for each Logical Volume you have modified:

  $ sudo resize2fs /dev/myserver/mylogicalvolume

Once you have expanded the filesystems on all your Logical Volumes, you can check the free space on each of your filesystems by issuing the following command:

  $ df -h

And that's it! You have successfully expanded your LVM partition on your GPT-partitioned array.

Original post: After a long search, this guide helped me out: http://www.serenux.com/2013/11/howto-resize-an-lvm-partition-on-a-gpt-drive-after-expanding-the-underlying-array/ (I tried my best to format it as nicely as possible.)
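The "minus one" step in the walkthrough is plain sector arithmetic: LBAs are numbered from 0, so the last addressable sector is (total sectors - 1). A quick sketch with the guide's own numbers:

```shell
# Sector arithmetic from the walkthrough: sectors are numbered from 0, so
# the last addressable sector of the disk is (total sectors - 1).
total_sectors=19521474560      # from "Disk /dev/sda: 19521474560s"
start=1953792                  # start of partition 2, recorded earlier
end=$(( total_sectors - 1 ))   # the value to type at the "End?" prompt
span=$(( end - start + 1 ))    # sectors the requested partition would span
echo "End? $end  (requesting $span sectors)"
```

parted then clamps the end down (to 19521165533s in the transcript) to leave room for the backup GPT at the end of the disk.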
Resize an LVM partition on a GPT drive
I know the limit is 128 on Windows. Does the same limit apply to Linux? Is this limit actually a limit of GPT itself?
The Windows FAQ on this says:

  The specification allows an almost unlimited number of partitions. However, the Windows implementation restricts this to 128 partitions. The number of partitions is limited by the amount of space reserved for partition entries in the GPT.

so the 128 is Windows-specific. For Linux, as explained here, the limitation usually comes from DISK_MAX_PARTS, which is 256, so 255 is the maximum number of partitions. I'd assume that this applies to all partition schemata, not only GPT. I do not know if anything else would break if you just increased this number and recompiled the kernel with it.
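The "space reserved for partition entries" is concrete: the UEFI specification requires at least 16,384 bytes for the partition entry array, and a standard entry is 128 bytes, which is exactly where the 128 comes from:

```shell
# Where Windows' 128 comes from: UEFI reserves a minimum of 16 KiB for the
# GPT partition entry array, and each partition entry is 128 bytes.
entry_array_bytes=16384   # minimum array size per the UEFI specification
entry_size=128            # standard size of one partition entry
echo $(( entry_array_bytes / entry_size ))   # -> 128
```

A larger entry array (and thus more than 128 entries) is allowed by the specification; it is the common tools and the Windows implementation that stick to the 16 KiB minimum.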
How many partitions can be created for Linux on GPT?
I want to dual-boot Arch Linux alongside Windows 10, which is already installed. I am using UEFI/GPT. On installation, Windows 10 creates an EFI system partition as required by UEFI; this partition's capacity is 100 megabytes. The Arch Linux installation guide on the Arch Wiki says that I need to create an EFI system partition of 260-512 megabytes, which would mean the 100-megabyte EFI system partition that Windows 10 created on installation is not enough. However, the ArchWiki page "EFI system partition: Check for an existing partition" says that if I already have an EFI system partition, I can simply continue mounting it. Will I have any problems with only a 100 MB EFI partition, or do I need to extend it somehow by moving partitions around, or create another one? The boot loader I will be using is GRUB2.
If Arch's filesystem layout only places grubx64.efi (and possibly the GRUB2 configuration file) on the EFI partition, 100 MB is fine. But if your layout mounts the EFI partition as /boot (rather than /boot/efi) or otherwise causes the entire kernel + initramfs files to be placed in there, you might run out of space with more than just one or two kernel versions installed.

That is going to make kernel updates unnecessarily risky. You'll always want to have at least two kernels installed: the one you're currently using, and the previous one as a known-good backup. When you are installing a new kernel, that means you'll end up temporarily with three kernels installed: the old, the current and the new one. If you are brave, you can always delete the old kernel (+ its initramfs file) just before installing a new kernel, but in a production system I would not like to do that. (Disclaimer: on my main home system, I used to have precisely such a layout before I replaced the system disk with a bigger one.)

Note that the EFI system partition is often formatted as FAT32, and that filesystem type has a minimum-number-of-blocks requirement. If your disk uses classic 512-byte blocks, 100 MB works out fine. But if you later migrate your system to a new disk that happens to use the newer 4096-byte blocks, the minimum size of a FAT32 filesystem works out to a little less than 260 MB. As a result, 260 MB is a good forward-compatible minimum size for the EFI system partition for new installations, and 100 MB can be a bit too small if you dual-boot. (Windows 10 uses that size too if it detects the disk is using 4096-byte blocks.)

And yes, with a tool like gparted you could resize or move the following partitions further along the disk, and then resize the EFI system partition. Such an operation would be best done by booting the system from external media, such as a Linux Live DVD/USB, so that the filesystems you'll need to move won't be mounted and in use at the time.
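The block-size arithmetic behind that minimum can be sanity-checked: FAT32 requires roughly 65,525 clusters at minimum, so with one sector per cluster the data area alone scales with the sector size. A rough sketch (ignoring reserved sectors and the FATs themselves, which push the true minimum slightly higher):

```shell
# Rough FAT32 minimum-size check: FAT32 needs at least 65525 clusters.
# With 1 sector per cluster, the minimum grows 8x when sectors grow 8x.
min_clusters=65525
bytes_512=$((  min_clusters * 512  ))   # 512-byte sectors
bytes_4096=$(( min_clusters * 4096 ))   # 4096-byte sectors
echo "$(( bytes_512  / 1024 / 1024 )) MiB-ish minimum with 512B sectors"
echo "$(( bytes_4096 / 1024 / 1024 )) MiB-ish minimum with 4096B sectors"
```

With 512-byte sectors the minimum lands around 32 MiB, so 100 MB is comfortable; with 4096-byte sectors it lands just under 256 MiB, which is why ~260 MB is the safe forward-compatible choice.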
Am I able to install Arch Linux with the 100MB EFI partition that Windows 10 had already created on installation?
Are timestamps created with new GPT partition tables? If so, where are they (i.e., in headers, entries, GUIDs, or labels)? How are they accessed?
There aren’t necessarily any timestamps created with new GPT partition tables. The GPT layout doesn’t include any timestamp fields. If a partition is given a version 1 GUID (see RFC-4122 sections 4.1.3 and 4.1.4), the generated GUID will include a timestamp; but any other version won’t. Most partition GUIDs I’ve seen use version 4 and therefore don’t contain a timestamp.
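For the version-1 case, the embedded timestamp can be recovered by hand: the three time fields of the GUID hold a 60-bit count of 100-nanosecond intervals since 1582-10-15. A sketch using the example version-1 UUID from RFC 9562 (the successor to RFC 4122):

```shell
# Extract the timestamp from a version-1 UUID (example value from RFC 9562).
uuid=c232ab00-9414-11ec-b3c8-9f6bdeced846
time_low=$(echo "$uuid" | cut -d- -f1)   # c232ab00
time_mid=$(echo "$uuid" | cut -d- -f2)   # 9414
time_hi=$(echo "$uuid"  | cut -d- -f3)   # 11ec (top nibble = version)

version=$(( 0x$time_hi >> 12 ))
# 60-bit timestamp: 100 ns intervals since 1582-10-15 00:00:00 UTC
ts=$(( ((0x$time_hi & 0xFFF) << 48) | (0x$time_mid << 32) | 0x$time_low ))
# Convert to Unix epoch seconds (constant = 1582..1970 offset in 100 ns units)
epoch=$(( (ts - 122192928000000000) / 10000000 ))
echo "version=$version epoch=$epoch"   # 2022-02-22 19:22:22 UTC
```

If the version nibble is anything other than 1 (as with the version-4 GUIDs most partitioning tools generate), those bits are random or name-derived and no timestamp can be recovered.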
Are timestamps created with new GPT partition tables?
I have an issue very similar to the question here: https://askubuntu.com/questions/1370421/restore-ext4-hd-after-creating-gpt-partition-table

My problem seems to be that I had an ext4 filesystem which sat directly on a block device, and installing Windows to an entirely different drive decided to mess with that device's partition table (or, seemingly, its lack of a partition table). When I booted, this drive had a GPT partition table which looks like so:

  λ sudo fdisk -l /dev/nvme1n1
  Disk /dev/nvme1n1: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
  Disk model: Samsung SSD 970 EVO 1TB
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disklabel type: gpt
  Disk identifier: 73405727-65E8-485F-99F8-C2D65E99D767

  Device          Start        End    Sectors   Size Type
  /dev/nvme1n1p1   2048 1953525127 1953523080 931.5G Linux filesystem

But this partition is unmountable and appears to have an invalid filesystem. I can, however, get all my data back by running fsck.ext4 /dev/nvme1n1 - but seemingly, since this is the whole device rather than the partition, doing this then blows up the GPT table:

  λ sudo fdisk -l /dev/nvme1n1
  The primary GPT table is corrupt, but the backup appears OK, so that will be used.
  ...

I can re-write the table with gdisk, but then I'm back to having a broken filesystem. I can toggle back and forth like this, but I can't figure out how to do what I actually want: create a valid GPT partition table and recover my existing filesystem onto it.

I have tried passing explicit superblocks, without good results:

  λ sudo fsck.ext4 -p -b 32765 -B 4096 /dev/nvme1n1p1
  fsck.ext4: Bad magic number in super-block while trying to open /dev/nvme1n1p1
It's not possible (that way). You can't have a partition table and a filesystem on the same block device. When you create a partition on /dev/nvme1n1, it gives you a new block device /dev/nvme1n1p1 - you have to use the new block device for the filesystem. And that means shifting all data by the partition offset; keeping the filesystem data at the old offset won't work, and fsck won't fix that for you. So it can't be done (the way you're trying to do it).

Your options are:

1. Keep using the bare drive as-is and remove the msdos/gpt partition table headers entirely (use wipefs to remove only the msdos/gpt partition headers).
2. Shrink the filesystem by 2 MiB, then move it by 1 MiB (or whatever your partition offset is). Shrinking is necessary to make room for the GPT headers at the start and end of the drive.
3. Back up all files, set the drive up properly from scratch with new partitions and filesystems, then restore the files to it.

I recommend the last option. While shifting data offsets can be done in theory (and tools like gparted might help you), it's actually very risky to do so, and when anything goes wrong, you're left with a device that is unusable and has no trivial fix.

Using bare drives directly is possible in theory, but in practice you run into this exact case: something else "helpfully" creates a partition table for you, damaging your data in the process. Thus having a partition table is not optional; it's mandatory.
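The wipefs step from option 1 can be rehearsed safely on a scratch image file before touching the real drive (assumes util-linux's wipefs and sfdisk; paths are illustrative):

```shell
# Rehearse "remove only the partition-table signatures" on a scratch image.
img=/tmp/wipedemo.img
truncate -s 16M "$img"
echo 'label: gpt' | sfdisk -q "$img"   # give it a GPT, as Windows did

wipefs "$img"         # list the signatures found (gpt, PMBR)
wipefs --all "$img"   # erase only those signatures, nothing else
```

Crucially, wipefs erases just the magic bytes of the partition table; the filesystem data elsewhere on the device is left untouched, which is exactly what option 1 relies on.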
Fixing an ext4 whole-device filesystem and corrupt GPT partition table
I use an msdos partition table, so PARTUUID should not be supported (it's only on GPT partition tables):

  root@xenial:~# fdisk -l
  Disk /dev/sda: 200 GiB, 214748364800 bytes, 419430400 sectors
  Units: sectors of 1 * 512 = 512 bytes
  Sector size (logical/physical): 512 bytes / 512 bytes
  I/O size (minimum/optimal): 512 bytes / 512 bytes
  Disklabel type: dos
  Disk identifier: 0xa9ff83af

  Device     Boot     Start       End   Sectors  Size Id Type
  /dev/sda1  *         2048 314574847 314572800  150G  7 HPFS/NTFS/exFAT
  /dev/sda2       314574848 315598847   1024000  500M 83 Linux
  /dev/sda3       315598848 399484927  83886080   40G 83 Linux
  /dev/sda4       399486974 407875583   8388610    4G  5 Extended
  /dev/sda5       399486976 407875583   8388608    4G 82 Linux swap / Solaris

So what is the PARTUUID displayed by blkid?

  root@xenial:~# blkid
  /dev/sda1: LABEL="windows" UUID="3364EC1A72AE6339" TYPE="ntfs" PARTUUID="a9ff83af-01"
  /dev/sda2: LABEL="/boot" UUID="9de57715-5090-4fe1-bb45-68674e5fd32c" TYPE="ext4" PARTUUID="a9ff83af-02"
  /dev/sda3: LABEL="/" UUID="553912bf-82f3-450a-b559-89caf1b8a145" TYPE="ext4" PARTUUID="a9ff83af-03"
  /dev/sda5: LABEL="SWAP-sda5" UUID="12e4fe69-c8c2-4c93-86b6-7d6a86fdcb2b" TYPE="swap" PARTUUID="a9ff83af-05"

I need to change it to debug an Ubuntu kickstart multiboot installation. Where can I set this PARTUUID?
It looks like the PARTUUID on an MBR-partitioned disk is the Windows Disk Signature from the MBR block (8 hex digits) + a dash + a two-digit partition number. The Windows Disk Signature is stored in bytes 0x1B8..0x1BB of the first block of the disk (the MBR block), in little-endian byte order.

This command will display the Windows Disk Signature straight out of the MBR:

  # dd if=/dev/sda bs=1 count=4 skip=440 2>/dev/null | od -t x4 -An
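The layout can be verified without touching a real disk: write the question's signature (0xa9ff83af) little-endian at offset 440 of a fake 512-byte "MBR", then read it back with the same dd | od pipeline (the od x4 decoding assumes a little-endian host such as x86, just like the command above):

```shell
# Demonstrate the PARTUUID derivation on a fake 512-byte MBR image.
# Disk signature 0xa9ff83af is stored little-endian (af 83 ff a9) at 440.
img=/tmp/mbrdemo.img
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
printf '\257\203\377\251' | dd of="$img" bs=1 seek=440 conv=notrunc 2>/dev/null

# Same extraction as above, pointed at the image (little-endian host assumed):
sig=$(dd if="$img" bs=1 count=4 skip=440 2>/dev/null | od -t x4 -An | tr -d ' \n')
echo "PARTUUID of the first partition: ${sig}-01"   # a9ff83af-01
```

To *change* the PARTUUID, then, it is enough to rewrite those four bytes (fdisk's expert menu can set the disk identifier) and the "-NN" suffix follows from the partition number.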
What is PARTUUID from blkid when using msdos partition table?
1,580,829,756,000
I have a 4-way mirrored rpool on a set of GPT disks running Solaris 11.3 on x86 hardware (GPT being used as the disks are more than 2TB in size). Disks 1 and 2 are the "main" operating system. Disks 3 and 4 are intermittently present, as they are offsite (offline) backup disks (which automatically update their data when present, thanks to the magic of Solaris and ZFS), swapped in and out so all the data is never in one place. I'm trying to wipe the "MBR" of disks 3 and 4 so they can't be accidentally booted (as the data may be stale), but just can't seem to find a decent explanation of:

- where the "MBR" resides on a GPT disk, and/or what format the initial bytes would be (my reading seems to suggest wiping the initial bytes would also wipe the GUID Partition Table)
- how a GPT disk actually boots / what is on the initial bios_grub partition (which I created using GParted on Ubuntu, but which now seems to be in Solaris format from what I can see)
- how this could be done in Solaris 11.3 without wiping key information, as I may have to restore the ability to boot from these disks at a later date (I've read about bootadm and /sbin/installgrub but seem no closer to an answer or working solution)

(I also need to be able to restore boot ability should disk 1 or 2 break, which means I need to replace the disk and make it bootable.) Any ideas? :-/
Managed to sort it in the end, though not quite the way intended; this is probably easier in the long run anyway. I first ran:

bootadm install-bootloader

which ensures all the disks can boot. The BIOS then has a disks section, which looks like a list of disks it allows to be used but is actually the list of disks it boots from, so I removed disks 3 and 4 from there and voilà! (I did notice the listed disks were not necessarily in slot order, so this step needs care.) If issues do develop, then rather than installing a bootloader on disks 3/4, I can simply amend the BIOS, as a bootloader already exists on them.
Wiping the boot partition of a GPT disk in Solaris 11.3
1,580,829,756,000
So I'm having a problem with a Kimsufi server. I was installing Windows using this command:

wget -O- ...url.../server.gz | gunzip | dd of=/dev/sda

I messed up and accidentally ran that command on an already existing Windows installation, and now I can't use RDP anymore. I guess it's all gone; it somehow wrote over the existing installation, even though the image download was only at 3% progress. All my important files were on a different partition, not on the primary one where the OS was stored.

Is there a way to transfer all the files to another server using rescue mode? Can I somehow get an FTP server running in the Kimsufi Linux rescue mode? I am thinking of connecting to it from another (Windows) server to browse and download/back up the files. I have tried WinSCP, but it shows only Linux directories. How can I browse Windows partitions through WinSCP?

Could it be that running that command overwrote my main partition and corrupted the other partitions? I ran lsblk and it shows only 2 partitions:

NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0  1.8T  0 disk
├─sda1   8:1    0  500M  0 part
└─sda2   8:2    0 14.5G  0 part

Or does it just show Linux partitions?
I recovered the partly overwritten partition with testdisk. In case someone has the same problem, here's the solution (using testdisk):

1. Intel/PC Partition > Analyse > Quick search
2. There I found the deleted partition [1.8 TB] > Enter to continue
3. [Write] (write the partition structure to disk)
4. Now the partition shows up when I run fdisk -l

After that I tried to mount it, but it showed an error, "Metadata kept in Windows cache, refused to mount":

root@rescue:/dev# sudo mount /dev/sda3 /mnt
The disk contains an unclean file system (0, 0).
Metadata kept in Windows cache, refused to mount.
Failed to mount '/dev/sda3': Operation not permitted
The NTFS partition is in an unsafe state. Please resume and shutdown
Windows fully (no hibernation or fast restarting), or mount the volume
read-only with the 'ro' mount option.

I read another thread on this site on how to fix this:

sudo ntfsfix /dev/sda3
sudo mount -o rw /dev/sda3 /mnt

Now the mounted NTFS partition shows up in the WinSCP (SFTP) /mnt folder. sda3 is the recovered partition's name; it may get a different number depending on how many other partitions you have.
How to recover overwritten partition?
1,580,829,756,000
I have a 480 GB SSD. It currently has the following partitions: a 256MB EFI partition, 16GB swap, and 40GB CentOS 7 (see details from lshw below). I want to use the remaining ~400GB of unused space on the drive as an iSCSI target. The system only has /dev/sda1, /dev/sda2, and /dev/sda3; there is no /dev/sda4 mapped to the 400GB of free disk space on the SSD. How do I add /dev/sda4 and map it to the unused 400GB on my disk so that it can be used as an iSCSI target? I am on CentOS 7.

*-scsi
     physical id: 1
     logical name: scsi3
     capabilities: emulated
   *-disk
        description: ATA Disk
        product: Crucial_CT480M50
        physical id: 0.0.0
        bus info: scsi@3:0.0.0
        logical name: /dev/sda
        version: MU03
        serial: 13440956E89D
        size: 447GiB (480GB)
        capabilities: gpt-1.00 partitioned partitioned:gpt
        configuration: ansiversion=5 guid=ab9704e2-9162-4c08-a759-956ad6a2f8f1 logicalsectorsize=512 sectorsize=4096
      *-volume:0 UNCLAIMED
           description: Windows FAT volume
           vendor: mkfs.fat
           physical id: 1
           bus info: scsi@3:0.0.0,1
           version: FAT16
           serial: fa26-fbee
           size: 255MiB
           capacity: 255MiB
           capabilities: boot fat initialized
           configuration: FATs=2 filesystem=fat name=EFI System Partition
      *-volume:1
           description: Linux swap volume
           vendor: Linux
           physical id: 2
           bus info: scsi@3:0.0.0,2
           logical name: /dev/sda2
           version: 1
           serial: c2b0907a-8337-4f32-b1e9-9affe6927264
           size: 15GiB
           capacity: 15GiB
           capabilities: nofs swap initialized
           configuration: filesystem=swap pagesize=4095
      *-volume:2
           description: data partition
           vendor: Windows
           physical id: 3
           bus info: scsi@3:0.0.0,3
           logical name: /dev/sda3
           logical name: /
           serial: f7efca38-7631-4a20-ae0a-04942971d5ba
           capacity: 39GiB
           configuration: mount.fstype=xfs mount.options=rw,seclabel,relatime,attr2,inode64,noquota state=mounted

fdisk -l output below:

fdisk -l /dev/sda
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Disk /dev/sda: 480.1 GB, 480103981056 bytes, 937703088 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: gpt

#         Start          End    Size  Type            Name
 1         2048       526335    256M  EFI System      EFI System Partition
 2       526336     33294335   15.6G  Linux swap
 3     33294336    115214335   39.1G  Microsoft basic
The first step is to create a partition. There's no entry in /dev for the free space because it's free space, not a partition. You can use fdisk to create a partition. Run fdisk /dev/sda, then enter the n command and create a partition covering the free space. Once you're satisfied with the new partition table, enter the command w to write it to disk. You may need to run partprobe /dev/sda to get the kernel to re-read the partition table. Now you can add /dev/sda4 to your iSCSI configuration.
Create a GPT partition covering the free space
1,580,829,756,000
I have a laptop with a 500GB hard disk and an MBR partition table. I installed Windows 8.1 on it, but left ~75GB unused, which I would like to use for OpenSUSE 13.1. I have 2 USB sticks: one is the GNOME Live 13.1 image, the other the standard DVD install ISO. I tried the GNOME Live 13.1 stick with no problems; it recognized my unused 75GB and recommended Linux partitions as below:

/dev/sda6 --> swap
/dev/sda7 --> /
/dev/sda8 --> /home

I cancelled this installation and tried the DVD ISO, and now I get an error when I try to use my 75GB of unused space:

Your system states that it requires an EFI boot setup, since the selected disk does not contain a GPT disk label YaST will create a GPT label on this disk. You need to mark all partitions on this disk for removal.

I would like to keep my Windows and create a dual boot system, but I am stuck here. Can anyone please give me some suggestions?
It sounds like you're either booting Windows 8.1 in legacy/BIOS/MBR mode (as opposed to EFI/GPT mode), or YaST is buggy and thinks you have EFI booting enabled even though you don't. Another possibility is that your laptop's firmware boots optical drives in EFI mode by default, causing YaST to load in EFI/GPT-only mode. Therefore, if there's a BIOS option on your laptop to turn off EFI booting, I suggest you set it. Another thing to try: when you bring up the laptop's boot menu to select "boot from DVD-ROM", if there are two options, "Boot from DVD - EFI" and "Boot from DVD - BIOS/legacy", pick "BIOS/legacy". If you're really stuck, why not Google your laptop's exact model number with terms like "boot optical drive in BIOS/legacy mode"? That should hopefully point you in the right direction.
How to install after windows 8.1 with MBR partition
1,580,829,756,000
I'm trying to install GRUB but I'm getting the error:

Warning: This GPT partition label contains no BIOS boot partition: embedding won't be possible

I'm using GPT to partition, and the file system is ext3. When I ran gdisk -l, it showed that the first partition's start sector on the SSD (which is /dev/sda) is 2040. Am I getting this error because the partition doesn't start in the first 512 bytes? If not, what else could be causing this error? I'm trying to get this to work to complete an Arch installation.

parted -l:

Warning: /dev/sda contains GPT signatures, indicating that it has a GPT table. However, it does not have a valid fake msdos partition table, as it should. Perhaps it was corrupted -- possibly by a program that doesn't understand GPT partition tables. Or perhaps you deleted the GPT table, and are now using an msdos partition table. Is this a GPT partition table?
Yes/No? yes
Error: The backup GPT table is not at the end of the disk, as it should be. This might mean that another operating system believes the disk is smaller. Fix, by moving the backup to the end (and removing the old backup)?
Fix/Ignore/Cancel? Fix
Warning: Not all of the space available to /dev/sda appears to be used. You can fix the GPT to use all of the space (an extra 6576128 blocks) or continue with the current setting.
Fix/Ignore/Cancel? Fix
Error: Unable to satisfy all constraints on the partition.

Model: Verbatim (scsi)
Disk /dev/sda: 3932MB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

Model: ATA TOSHIBA THNSNH12 (scsi)
Disk /dev/sdb: 120GB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:

Number  Start  End    Size   File system  Flags
 1      0.00B  120GB  120GB  ext3

My drive is now /dev/sdb. I've just had to wipe it all and start the install again.
The first partition can start at 2040, but it must have the bios_grub flag, and that is what your grub install is complaining about. If you run parted -l /dev/sda you should get something like:

Model: ATA Samsung SSD 840 (scsi)
Disk /dev/sda: 250GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  2150MB  2147MB  ext2
 3      2150MB  36,5GB  34,4GB  btrfs
Error installing Grub
1,580,829,756,000
I wanted to try out the new GPT system, and used it to partition my new HDD, with the partitions themselves using ext4. For some reason, about 1-2% of the space in each partition is already shown as used, both in df and gparted. Currently the only content of the partitions is the lost+found folder which occupies all of 16K. Is there a reason for this? Can this be fixed? Or is this just the space used by the file table (or equivalent)? Edit: Is this more related ext4 than GPT? I found this just now, ext4: How to account for the filesystem space?
1-5% is reserved for root and as overhead for the filesystem. It is NORMAL. This is done to leave root that 1-5%, so that if the users on the machine fill the disk up, critical system processes and the root user still have a small chunk to play with. As jordanm pointed out, the reserved space is also used to reduce filesystem fragmentation. You can use tune2fs -m 1.0 /dev/sda2 to lower the default 5% to 1%. Please note that using -m 0 is not recommended, but it can still be done.
GPT Partition - Used Space immediately after creating partition [duplicate]
1,580,829,756,000
I'm having difficulties installing an image I've built with Yocto. In the past I've always used u-boot, MBR, and legacy boot. Installing Yocto meant creating boot and rootfs partitions, installing the first stage u-boot boot loader, and copying the files in /boot to the boot partition (a FAT32 partition). Now I'm trying to do something very different for an Intel machine that doesn't seem to support legacy boot. I'm using systemd-boot, GPT, and UEFI. If I directly write my .wic image that's produced by Yocto, it correctly boots. But if I instead try and follow a process as above where I manually partition and copy files over, it will run systemd-boot, but once it tries to load my boot entry, nothing happens. One thing I did notice is that the /boot directory that's in the rootfs.tar.gz produced by Yocto is different from the /boot directory that's on the .wic file. The kernels are different (different sizes) and the .wic file includes a microcode.cpio file. I tried copying the boot files from the .wic file and installing them manually when installing, but that got me to a point where it says EFI stub: Loaded initrd from LINUX_EFI_INITRD_MEDIA_GUID device path, but then nothing happens after that. Is there any guide to installing Yocto images by manually partitioning on UEFI systems? I'm not doing anything unusual other than maybe the installation method. I'm building nanbield, core-image-base, and have added the meta-intel layer. 
This is my local.conf:

MACHINE ?= "intel-corei7-64"
MACHINE ??= "qemux86-64"
DISTRO ?= "poky"
EXTRA_IMAGE_FEATURES ?= "debug-tweaks"
USER_CLASSES ?= "buildstats"
PATCHRESOLVE = "noop"
BB_DISKMON_DIRS ??= "\
    STOPTASKS,${TMPDIR},1G,100K \
    STOPTASKS,${DL_DIR},1G,100K \
    STOPTASKS,${SSTATE_DIR},1G,100K \
    STOPTASKS,/tmp,100M,100K \
    HALT,${TMPDIR},100M,1K \
    HALT,${DL_DIR},100M,1K \
    HALT,${SSTATE_DIR},100M,1K \
    HALT,/tmp,10M,1K"
PACKAGECONFIG:append:pn-qemu-system-native = " sdl"
IMAGE_FEATURES += "read-only-rootfs"
IMAGE_FSTYPES = "tar.xz"
CORE_IMAGE_EXTRA_INSTALL += "kernel-modules"
# OS packages
CORE_IMAGE_EXTRA_INSTALL += "openssh"
CORE_IMAGE_EXTRA_INSTALL += "nginx"
CORE_IMAGE_EXTRA_INSTALL += "openssl"
CORE_IMAGE_EXTRA_INSTALL += "gnupg"
CORE_IMAGE_EXTRA_INSTALL += "iptables"
CORE_IMAGE_EXTRA_INSTALL += "logrotate"
CORE_IMAGE_EXTRA_INSTALL += "mongodb"
CORE_IMAGE_EXTRA_INSTALL += "sudo"
CORE_IMAGE_EXTRA_INSTALL += "rsync"
CORE_IMAGE_EXTRA_INSTALL += "procps"
# Python packages
CORE_IMAGE_EXTRA_INSTALL += "python3"
CORE_IMAGE_EXTRA_INSTALL += "python3-flask"
CORE_IMAGE_EXTRA_INSTALL += "python3-setuptools"
CORE_IMAGE_EXTRA_INSTALL += "python3-pymongo"
CORE_IMAGE_EXTRA_INSTALL += "python3-cryptography"
CORE_IMAGE_EXTRA_INSTALL += "python3-scrypt"
CORE_IMAGE_EXTRA_INSTALL += "python3-pip"
CORE_IMAGE_EXTRA_INSTALL += "python3-pyserial"
CORE_IMAGE_EXTRA_INSTALL += "python3-pyudev"
# Feature services
CORE_IMAGE_EXTRA_INSTALL += "dnsmasq"
CORE_IMAGE_EXTRA_INSTALL += "rsyslog"
CORE_IMAGE_EXTRA_INSTALL += "ntp"
CORE_IMAGE_EXTRA_INSTALL += "ntpq"
CORE_IMAGE_EXTRA_INSTALL += "ntp-utils"
CORE_IMAGE_EXTRA_INSTALL += "freeradius"
CORE_IMAGE_EXTRA_INSTALL += "net-snmp"
# Remove the following packages before 1.0 release
CORE_IMAGE_EXTRA_INSTALL += "coreutils"
CORE_IMAGE_EXTRA_INSTALL += "vim"

This is my bblayers.conf:

# POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
POKY_BBLAYERS_CONF_VERSION = "2"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \ /data/opis-current/meta \ /data/opis-current/meta-poky \ /data/opis-current/meta-yocto-bsp \ /data/opis-current/meta-openembedded/meta-oe \ /data/opis-current/meta-openembedded/meta-python \ /data/opis-current/meta-openembedded/meta-webserver \ /data/opis-current/meta-openembedded/meta-networking \ /data/opis-current/meta-intel \ "
The problem ended up being an obscure BIOS setting, which I found looking through the documentation of the machine I'm using. I saw that they noted, in order to make Ubuntu boot on that machine, you had to turn on "PinCntrl Driver GPIO Scheme" in the BIOS. I made that change and my Yocto build started working as well. So if others have this problem, I recommending looking in to various BIOS issues.
How can I manually install a Yocto image?
1,698,413,468,000
I have a PC with a mechanical switch that enables different HDDs so I can use different OSs. Windows was installed with legacy BIOS boot, and I am trying to install Arch Linux with UEFI. At the end of the installation I reboot into Arch and everything is OK; the installation completed correctly. I shut down, power on, and it is still OK. But when I switch back to Windows (fortunately still working) and then switch back to Arch again, the UEFI setup starts instead of the OS. My impression is that the GPT has been modified, since regenerating the fstab and grub.cfg files doesn't help, and no message is displayed after power on; the UEFI setup just starts, as if it does not find the GPT. Does anyone have any idea what is going on? Thanks!
UPDATE - it works now. I think it was a combination of fstab and a bad grub-install parameter. With the following commands at installation time, it now works:

genfstab -t PARTLABEL   # generate fstab based on partition labels, which are persistent block device names (not sure this part is necessary)

grub-install --target=x86_64-efi --efi-directory=esp --removable --recheck   # install GRUB as on a "removable-like" device (since the HDD can be detached), as specified in https://wiki.archlinux.org/title/Install_Arch_Linux_on_a_removable_medium

Here esp is the EFI System Partition; I followed the installation guide, so it is mounted at /boot.
archlinux in a different HDD
1,698,413,468,000
I have a CentOS server on VMware that has, among others, a disk of 1.5TB with a single xfs partition using the whole disk. This disk/partition is running out of space, so I need to increase its size to 2.5TB. I increased the size on VMware and tried to delete and re-add the partition, which failed. Of course, the original partition table is MBR and the new one must be GPT, but when trying to remove/add the partition, the conversion fails. I found that the original partition starts at sector 128, while the new one tries to start at sector 2048, which I tried to change, but couldn't (I guess because GPT needs more space than MBR?). Then I came up with the idea of moving the original partition so it starts at sector 2048, converting the partition table to GPT, and then increasing the size of the partition. Does that make sense? Is it possible? Especially the first part, moving the partition. Thanks!

Update: For formatting reasons, here's the output of the suggested command (column headers translated from Spanish):

parted /dev/disk unit s print free

Disk /dev/sdb: 5368709120s
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start        End          Size         Type     File system  Flags
        63s          127s         65s                   Free Space
 1      128s         3259013119s  3259012992s  primary  xfs
        3259013120s  5368709119s  2109696000s           Free Space
So you have one msdos partition that starts at sector 128. This is uncommon since the standard would be MiB alignment, starting at sector 2048 (for 512 byte logical sector size). With GPT, you can still use the start sector 128, that isn't a problem: # parted /dev/loop0 unit s print free Model: Loopback device (loopback) Disk /dev/loop0: 3259017216s Sector size (logical/physical): 512B/512B Partition Table: gpt Disk Flags: Number Start End Size File system Name Flags 34s 127s 94s Free Space 1 128s 3259017182s 3259017055s However, parted will complain to you when you create it: # parted /dev/loop0 (parted) mklabel gpt (parted) mkpart Partition name? []? File system type? [ext2]? Start? 128s End? 100% Warning: The resulting partition is not properly aligned for best performance: 128s % 2048s != 0s Ignore/Cancel? Ignore If you do not care about MiB alignment (and since your data is already there, you have no choice, anyway) you can just ignore this warning. A start sector of 128 would still be 4K aligned (64K aligned), so that would be fine too. GPT also stores a backup at the end of disk, so the end sector can sometimes be the issue. However you're lucky and you have 4096 free sectors at the end of your disk, so there is no issue in your case. Otherwise you'd have to grow the disk first before converting to GPT. If you want to achieve MiB alignment, you'd have to shift all data. The safest way to do so (if you have enough space) would be to just copy it over to a new disk entirely. Relocating data in place can be risky.
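Whether the end of the disk is a problem can be checked with a little arithmetic on the numbers from the question's parted output: GPT reserves 33 sectors at the end of the disk for the backup header and table.

```shell
disk_sectors=5368709120   # from: Disk /dev/sdb: 5368709120s
part_end=3259013119       # last sector of the xfs partition
free_at_end=$(( disk_sectors - 1 - part_end ))
echo "$free_at_end"       # 2109696000 sectors free behind the partition
[ "$free_at_end" -ge 33 ] && echo "enough room for the backup GPT"
```

The result matches the trailing "Free Space" row in the parted output, so the backup GPT fits with plenty of room.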
Increase disk size and change from MBR to GPT
1,698,413,468,000
How is the tag ID_PART_ENTRY_UUID computed? Can I derive ID_PART_TABLE_UUID from ID_PART_ENTRY_UUID? I have a disk with a GPT partition table and some partitions. I need to identify which partitions belong to my disk. All partitions on the disk reference the partition table of that disk. I can find this partition table id with udevadm:

$ sudo udevadm info /dev/loop18p1 | grep ID_PART_TABLE_UUID
E: ID_PART_TABLE_UUID=75e3b937-1ff1-4166-a51f-524b98278e6e

However, unfortunately udevadm (as well as parted and so on) is not suitable for me, and I have to use blkid. I can get the partition table id from the disk:

$ sudo blkid -po udev /dev/loop18 | grep ID_PART_TABLE_UUID
ID_PART_TABLE_UUID=75e3b937-1ff1-4166-a51f-524b98278e6e

But ID_PART_TABLE_UUID is absent for the partition:

$ sudo blkid -po udev /dev/loop18p1
ID_PART_ENTRY_SCHEME=gpt
ID_PART_ENTRY_NAME=primary
ID_PART_ENTRY_UUID=bcf5e461-90db-4625-a471-6c1d61126773
ID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=34
ID_PART_ENTRY_SIZE=195279
ID_PART_ENTRY_DISK=7:18

There is only ID_PART_ENTRY_UUID. With an MBR partition table, ID_PART_ENTRY_UUID is just ID_PART_TABLE_UUID plus the serial number of the partition, so I can handle it easily. But in a GPT table, ID_PART_ENTRY_UUID looks like a tricky hash. I supposed this hash is related to ID_PART_TABLE_UUID and that I could use it to recognize the partitions of a disk. So, how is ID_PART_ENTRY_UUID computed? Can I get ID_PART_TABLE_UUID from it? I supposed it's possible because udevadm can do it.

Update: Actually I use a binding to libblkid instead of the blkid CLI, but I suppose it doesn't matter.
For GPT, ID_PART_TABLE_UUID and ID_PART_ENTRY_UUID are not related; they are just unique UUIDs (or in fact GUIDs, converted to UUIDs in libblkid) taken from the GPT header (for ID_PART_TABLE_UUID) and from the GPT partition entry (for ID_PART_ENTRY_UUID). udev has the information simply because it has basic knowledge of parent-child relationships, and for partitions some basic information from the parent (the disk) is added to the partition data (see the 60-persistent-storage.rules udev rule).
How ID_PART_ENTRY_UUID is computed in GPT?
1,698,413,468,000
I'm using sgdisk in a bash script similar to this:

sgdisk --clear /dev/vda --set-alignment=1 --new 1:34:2047 --typecode 1:EF02 -c 1:"grub" -g /dev/vda
sgdisk --new 2:2048:16779263 --typecode 2:8300 -g /dev/vda
sgdisk --new 3:16779264:20971486 --typecode 3:8200 -g /dev/vda

That works only when the devices are well known in advance and the sectors are hard-coded. I want to drop the hard-coded sector values, so the script works when the disk size is not known until the script runs. After making partition 1, I will set aside a known fixed amount for partition 3 (swap) and give the rest to partition 2. The easy way would be to make the swap partition #2; I know how to do that. However, I want to see if I can do this while keeping swap on partition 3. That means sgdisk has to calculate a size or end-sector value for partition 2, taking into account the size that will be allocated for partition 3 in the next step. Reading the sgdisk man page hasn't given me clues about how to do this (or even whether it can be done).
The following will work:

sgdisk --clear /dev/vda --set-alignment=1 --new 1:34:2047 --typecode 1:EF02 -c 1:"grub" -g /dev/vda
sgdisk --new 2:0:-2G --typecode 2:8300 -g /dev/vda
sgdisk --new 3:0:0 --typecode 3:8200 -g /dev/vda

It's much simpler than I thought: sgdisk does all the calculations. The key is the minus sign, which is explained in the man page (and which I had missed earlier):

"You can specify locations relative to the start or end of the specified default range by preceding the number by a '+' or '-' symbol, as in +2G to specify a point 2GiB after the default start sector, or -200M to specify a point 200MiB before the last available sector. A start or end value of 0 specifies the default value."
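As a sanity check, the numbers from the hard-coded script in the question show that -2G reproduces roughly the swap size the fixed sectors gave:

```shell
# swap partition from the hard-coded version, in 512-byte sectors
orig_swap=$(( 20971486 - 16779264 + 1 ))
two_gib=$(( 2 * 1024 * 1024 * 1024 / 512 ))
echo "$orig_swap"   # 4192223
echo "$two_gib"     # 4194304, i.e. about the same ~2GiB
```

So partition 2 ends 2GiB before the last available sector, and partition 3 (start 0, end 0) takes all the defaults and fills the remaining gap.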
How to determine sizes using sgdisk partitioning in bash script
1,698,413,468,000
I have a UEFI system on which I installed Arch Linux beside Windows, then later removed Windows and made it Linux-only, keeping the EFI boot partition of course. After removing all the Windows and recovery partitions, my Linux root partition was still identified as /dev/sda5. lsblk output:

NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 931.5G  0 disk
├─sda1   8:1    0   260M  0 part /boot
└─sda5   8:5    0 931.3G  0 part /
sr0     11:0    1  1024M  0 rom

Now I'd like to add a btrfs partition, but I want my partitions to be ordered and consecutively numbered to avoid confusion, i.e. boot=/dev/sda1, Linux root=/dev/sda2, and btrfs=/dev/sda3. Poking around in GParted, I can't find any functionality for this. I've tried googling this for quite some time, and either I'm not sure how to phrase it properly, or people ambiguously use "name" to mean "label" when discussing partitions. Is this possible to accomplish without backing up, formatting my drive, and starting fresh?
Run fdisk. Carefully note the characteristics of the existing partition (position, size, type, name, UUID if you care). Delete it, then create a new one with the desired number and the same characteristics. This is a lot of risk for a negligible benefit, so I don't recommend doing it. Partition numbers are pretty arbitrary, they can't always be chosen freely (with MBR partitions, a primary or extended partition has to be in the range 1–4 and a logical partition has to be ≥5). You should avoid using partition numbers anywhere and use UUIDs, GPT names, or better LVM volume names or filesystem labels instead.
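For what it's worth, if you do accept the risk, util-linux's sfdisk can do the renumbering in one step with its --reorder option (renumber partitions by their start offset). A sketch on an image file, reproducing the 1-then-5 numbering gap by deleting a middle partition:

```shell
truncate -s 10M disk.img
printf 'label: gpt\nsize=1MiB\nsize=1MiB\nsize=1MiB\n' | sfdisk disk.img >/dev/null
sfdisk --delete disk.img 2 >/dev/null   # leaves partitions 1 and 3, a gap like sda1/sda5
sfdisk -r disk.img >/dev/null           # renumber by on-disk order: 3 becomes 2
sfdisk -l disk.img | grep -o '^disk\.img[0-9]*'
```

The same caveat applies: anything that referenced the old partition number (fstab, boot entries) must be updated, which is exactly why UUIDs or labels are the better long-term answer.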
Changing partition dev path
1,698,413,468,000
I'm running Debian Wheezy on an SSD, and in addition I have two 500GB hard disks in Intel software RAID 0 (fakeraid). Both the SSD and the RAID array have GPT partition layouts. I have set up my fstab to automatically mount one of the partitions on the RAID array, but the root filesystem is on the SSD. During boot, dmraid finds the array but does not automatically discover the partitions on it. This causes the boot-time fsck to fail and dumps me at a recovery shell. Running kpartx -a /dev/mapper/isw_xxx_Volume0 at the recovery shell discovers the partitions and everything works great, but it's a bit irritating having to type it in every time I boot. Am I doing something wrong? Is there some way to make the partition probing automatic?

Partition layout of /dev/sda (the SSD):

Number  Start (sector)  End (sector)  Size       Code
   1            2048         411647   200.0 MiB  EF00  EFI System Partition
   2          411648      117598207   55.9 GiB   0700  Debian root filesystem
   3       117598208      250068991   63.2 GiB   0700  Not used yet

Partition layout of /dev/mapper/isw_cddhbifacg_Volume0 (the RAID array):

Number  Start (sector)  End (sector)  Size       Code
   1            2048      937502719   447.0 GiB  0700  Debian extra stuff
   2       937502720      976564223   18.6 GiB   8200  Swap
   3       976564224     1953535999   465.9 GiB  0700  Not used yet

Contents of /etc/fstab:

# <file system> <mount point> <type> <options> <dump> <pass>
UUID=7f894df3-49f4-4119-bda9-f4734780eaab / ext4 errors=remount-ro 0 1
UUID=0B6C-A37C /boot/efi vfat defaults 0 1
/dev/mapper/isw_cddhbifacg_Volume0p1 /mnt/data ext4 defaults 0 2
/dev/mapper/isw_cddhbifacg_Volume0p2 none swap sw 0 0
/dev/sr0 /media/cdrom0 udf,iso9660 user,noauto 0 0
/dev/sdd1 /media/usb0 auto rw,user,noauto 0 0
/dev/sde1 /media/usb1 auto rw,user,noauto 0 0
/dev/sde2 /media/usb2 auto rw,user,noauto 0 0
Solution to the original problem:

Install kpartx:

sudo aptitude install kpartx

Change these lines in /lib/udev/rules.d/60-kpartx.rules:

ENV{DM_STATE}=="ACTIVE", ENV{DM_UUID}=="dmraid-*", \
        RUN+="/sbin/kpartx -a -p -part /dev/$name"

to this:

ENV{DM_STATE}=="ACTIVE", ENV{DM_UUID}=="DMRAID-*", \
        RUN+="/sbin/kpartx -a /dev/$name"

Update the initramfs:

sudo update-initramfs -u

Restart, and the partitions should be detected properly.

Alternative solution: use mdadm instead of dmraid. Set up the RAID array using the Intel configuration utility (Ctrl+I during boot), and Debian Installer 7 RC1 will detect and activate it automatically.
Automatically run kpartx during boot
1,698,413,468,000
So, I have Windows 7 and Fedora 16 installed on my old HDD. Everything worked fine until I had my new 3TB drive built in, which I initialized as GPT in Windows. Actually I initialized 1.5TB; the rest remained untouched. After that, Fedora won't boot up anymore. Instead it drops me into maintenance mode, showing something like:

[...]/sbin/blkid -o udev -p /dev/sda[number] [...] terminated by signal 15 (Terminated)

Whenever I press Ctrl+D it shows one or more similar messages. Running parted /dev/sdb print shows that the drive as such is recognized as GPT. It also shows up in /etc/fstab. Using older kernels results in the same problem. What should I do?

Edit: I initialized the remaining ~1.5TB in Windows; nothing changed.
I'd try rebuilding the initial ramdisk: /sbin/new-kernel-pkg --package kernel --mkinitrd --dracut --depmod --update `uname -r` Failing that, I'd probably give up and update to F17, which should sort out the problem as well.
Fedora 16 fails to boot after Win7 installed a GPT Drive
1,698,413,468,000
For context, I have a Fedora KDE installation whose partitions take up half my SSD. The other half I left unallocated when I installed Linux. Although I'm aware that it's dangerous to attempt to extend any of the existing Fedora partitions without booting from a gparted USB environment or similar, I had previously assumed it was safe to create new partitions in the unallocated space from within my Fedora environment (whether the GUI partition manager or fdisk, etc). But I just read in How Linux Works (Brian Ward) that when modifying partition tables, Ensure that no partitions on your target disk are currently in use. This is a concern because most Linux distributions automatically mount any detected filesystems. However, it's unclear to me from the context whether he is talking about MBR or GPT or both. So my question is, is it dangerous to make any changes whatsoever to a GPT partition table on the same disk as the partitions of the currently-running Linux environment, even if you aren't extending/shrinking the existing partitions?
Creating, deleting, and modifying partitions in an unallocated space of the same disk containing running Linux partitions is not an issue. The fdisk command, and other partitioning tools, do that all the time. because most Linux distributions automatically mount any detected filesystems. Even assuming that this is true (filesystems need to have an entry in /etc/fstab for the system to mount them), this is not an issue, because when you're modifying these partitions there is no filesystem on it. First you create the partition, then you create a filesystem (the non-techie term for this is formatting) on it.
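The point that creating a partition in free space leaves the existing partitions' bytes untouched can even be demonstrated on an image file (sfdisk here, but fdisk, parted, and the GUI tools behave the same way for this operation):

```shell
truncate -s 10M disk.img
printf 'label: gpt\nsize=2MiB\n' | sfdisk disk.img >/dev/null
# put recognizable data where the existing partition's filesystem would live
# (sfdisk's default first partition starts at sector 2048)
printf 'precious data' | dd of=disk.img bs=512 seek=2048 conv=notrunc 2>/dev/null
# now create a new partition in the unallocated space
echo ',' | sfdisk --append disk.img >/dev/null
dd if=disk.img bs=512 skip=2048 count=1 2>/dev/null | head -c 13
```

The final dd reads back "precious data" unchanged: only the partition table sectors were rewritten.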
Dangerous to create partitions in unallocated space on the same disk as the running Linux system?
1,698,413,468,000
Is there any command to list all partition type codes recognizable by the currently installed distribution (in my case Ubuntu 18.04.03 LTS)? I know the following website exists: Andries E. Brouwer 1995-2002 - homepages.cwi.nl. Yet there should be some command built into the Linux console. I know that cgdisk shows all partition codes while creating a new partition. I've provided screenshots from my own system, taken while formatting a pendrive to create a bootable Ubuntu 20.04 LTS USB. So again, my question is: is there any command that can show all recognizable partition type codes for MBR and GPT for the current distribution, or are there any man pages that have such a reference? Or maybe this differs between the different tools? Example of MBR partition type codes: thestarman.pcministry.com
OK, finally I found that it mainly depends on the filesystem, and the volume identification hex code is (or should be) present in the filesystem documentation, as seen below for NTFS and ext4.
Conclusion: there is no dedicated command or tool just for listing partition hex codes, besides the listing functions of cgdisk, gdisk, cfdisk, fdisk, etc. while creating a partition:
gdisk - lists partition hex codes prior to creation
cgdisk, cfdisk and fdisk - list partition hex codes during creation only
NTFS partition: $VOLUME_INFORMATION 0x70 attribute, http://dubeyko.com/development/FileSystems/NTFS/ntfsdoc.pdf
EXT4 partition: identifier for MBR (right column), https://en.wikipedia.org/wiki/Ext4
This post partially answers the question too: Why does parted need a filesystem type when creating a partition, and how does its action differ from a utility like mkfs.ext4?
"A partition can have a type. The partition type is a hint as in "this partition is designated to serve a certain function". Many partition types are associated with certain file-systems, though the association is not always strict or unambiguous. You can expect a partition of type 0x07 to have a Microsoft compatible file-system (e.g. FAT, NTFS or exFAT) and 0x83 to have a native Linux file-system (e.g. ext2/3/4)."
So apparently the code is not always strictly associated, as shown in the previous answer. For example:
EXT4: 83h, "Any native Linux file system" (see 93h, corresponds with 43h), https://en.wikipedia.org/wiki/Partition_type#PID_83h
Or Solaris ZFS, for example, as seen in the BFh and 82h sections:
https://en.wikipedia.org/wiki/Partition_type#PID_BFh
https://en.wikipedia.org/wiki/Partition_type#PID_82h
Additional examples of information gathered during the research, ZFS attributes:
BF01: special hex type code, Solaris Partition
BF07: special hex type code, Solaris Reserved 1
EF02: special hex type code, BIOS Boot Partition
https://www.it-swarm-es.tech/es/gdisk/codigos-hex-de-gdisk/961390299/
Command to list partition type codes in deb and rpm distributions for MBR and GPT
1,698,413,468,000
My disk partition scheme, as seen by GRUB, is as follows:
hd0,gpt1: EFI system
hd0,gpt2: Linux swap
hd0,gpt3: Linux filesystem
hd0,gpt4: FreeBSD UFS
The install process of FreeBSD 11.0-RELEASE went fine; I also tried chrooting and updating the system, just in case. I then booted into Arch Linux and edited /etc/grub.d/40_custom, trying various configurations (see FreeBSD menu entry in GRUB on wiki.archlinux.org), and ran grub-mkconfig -o /boot/grub/grub.cfg. (Note: I edited the "mountfrom" parameter specifying the correct dev file for the root fs, which in my case is ada0p4, and omitted the "bsd1" entry, only setting (hd0,gpt4), otherwise it couldn't find the partition.) If I load kfreebsd /boot/loader and boot, I get a black screen. If I load
kfreebsd /boot/kernel/kernel
kfreebsd_loadenv /boot/device.hints
set kFreeBSD.vfs.root.mountfrom=ufs:/dev/ada0p4
set kFreeBSD.vfs.root.mountfrom.options=rw
and boot, I get the error shown in the attached screenshot. My laptop is a Thinkpad X220 (stock BIOS up-to-date). GRUB version: 2.02.beta3-4. Any ideas what's wrong here? Please leave a comment if you need further info.
My grub.cfg (link): # # DO NOT EDIT THIS FILE # # It is automatically generated by grub-mkconfig using templates # from /etc/grub.d and settings from /etc/default/grub # ### BEGIN /etc/grub.d/00_header ### insmod part_gpt insmod part_msdos if [ -s $prefix/grubenv ]; then load_env fi if [ "${next_entry}" ] ; then set default="${next_entry}" set next_entry= save_env next_entry set boot_once=true else set default="0" fi if [ x"${feature_menuentry_id}" = xy ]; then menuentry_id_option="--id" else menuentry_id_option="" fi export menuentry_id_option if [ "${prev_saved_entry}" ]; then set saved_entry="${prev_saved_entry}" save_env saved_entry set prev_saved_entry= save_env prev_saved_entry set boot_once=true fi function savedefault { if [ -z "${boot_once}" ]; then saved_entry="${chosen}" save_env saved_entry fi } function load_video { if [ x$feature_all_video_module = xy ]; then insmod all_video else insmod efi_gop insmod efi_uga insmod ieee1275_fb insmod vbe insmod vga insmod video_bochs insmod video_cirrus fi } if [ x$feature_default_font_path = xy ] ; then font=unicode else insmod part_gpt insmod ext2 set root='hd0,gpt3' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt3 --hint-efi=hd0,gpt3 --hint-baremetal=ahci0,gpt3 ff637c2e-1e42-4533-9a12-6ac2f6d43c9b else search --no-floppy --fs-uuid --set=root ff637c2e-1e42-4533-9a12-6ac2f6d43c9b fi font="/usr/share/grub/unicode.pf2" fi if loadfont $font ; then set gfxmode=1024x768 load_video insmod gfxterm set locale_dir=$prefix/locale set lang=en_US insmod gettext fi terminal_input console terminal_output gfxterm if [ x$feature_timeout_style = xy ] ; then set timeout_style=menu set timeout=5 # Fallback normal timeout code in case the timeout_style feature is # unavailable. 
else set timeout=5 fi ### END /etc/grub.d/00_header ### ### BEGIN /etc/grub.d/10_linux ### menuentry 'Arch Linux' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-ff637c2e-1e42-4533-9a12-6ac2f6d43c9b' { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod fat set root='hd0,gpt1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt1 --hint-efi=hd0,gpt1 --hint-baremetal=ahci0,gpt1 BE35-0EC9 else search --no-floppy --fs-uuid --set=root BE35-0EC9 fi echo 'Loading Linux linux ...' linux /vmlinuz-linux root=UUID=ff637c2e-1e42-4533-9a12-6ac2f6d43c9b rw echo 'Loading initial ramdisk ...' initrd /intel-ucode.img /initramfs-linux.img } submenu 'Advanced options for Arch Linux' $menuentry_id_option 'gnulinux-advanced-ff637c2e-1e42-4533-9a12-6ac2f6d43c9b' { menuentry 'Arch Linux, with Linux linux' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-linux-advanced-ff637c2e-1e42-4533-9a12-6ac2f6d43c9b' { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod fat set root='hd0,gpt1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt1 --hint-efi=hd0,gpt1 --hint-baremetal=ahci0,gpt1 BE35-0EC9 else search --no-floppy --fs-uuid --set=root BE35-0EC9 fi echo 'Loading Linux linux ...' linux /vmlinuz-linux root=UUID=ff637c2e-1e42-4533-9a12-6ac2f6d43c9b rw echo 'Loading initial ramdisk ...' 
initrd /intel-ucode.img /initramfs-linux.img } menuentry 'Arch Linux, with Linux linux (fallback initramfs)' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-linux-fallback-ff637c2e-1e42-4533-9a12-6ac2f6d43c9b' { load_video set gfxpayload=keep insmod gzio insmod part_gpt insmod fat set root='hd0,gpt1' if [ x$feature_platform_search_hint = xy ]; then search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt1 --hint-efi=hd0,gpt1 --hint-baremetal=ahci0,gpt1 BE35-0EC9 else search --no-floppy --fs-uuid --set=root BE35-0EC9 fi echo 'Loading Linux linux ...' linux /vmlinuz-linux root=UUID=ff637c2e-1e42-4533-9a12-6ac2f6d43c9b rw echo 'Loading initial ramdisk ...' initrd /intel-ucode.img /initramfs-linux-fallback.img } } ### END /etc/grub.d/10_linux ### ### BEGIN /etc/grub.d/20_linux_xen ### ### END /etc/grub.d/20_linux_xen ### ### BEGIN /etc/grub.d/30_os-prober ### ### END /etc/grub.d/30_os-prober ### ### BEGIN /etc/grub.d/40_custom ### # This file provides an easy way to add custom menu entries. Simply type the # menu entries you want to add after this comment. Be careful not to change # the 'exec tail' line above. menuentry 'FreeBSD 11.0' { insmod ufs2 set root=(hd0,gpt4) kfreebsd /boot/kernel/kernel kfreebsd_loadenv /boot/device.hints set kFreeBSD.vfs.root.mountfrom=ufs:/dev/ada0p4 set kFreeBSD.vfs.root.mountfrom.options=rw } ### END /etc/grub.d/40_custom ### ### BEGIN /etc/grub.d/41_custom ### if [ -f ${config_directory}/custom.cfg ]; then source ${config_directory}/custom.cfg elif [ -z "${config_directory}" -a -f $prefix/custom.cfg ]; then source $prefix/custom.cfg; fi ### END /etc/grub.d/41_custom ### EDIT: forgot to mention, just before the disk partitioning, I get a warning with the following text Your model of Lenovo is known to have a BIOS bug that prevents it booting from GPT partitions without UEFI. Would you like the installer to apply a workaround for you? 
Since I'm booting with UEFI, I chose not to apply it (I even tried applying it, with no success).
While most guides suggest using chainloader +1 for chainloading, it didn't work for me. The following configuration did the trick:
insmod ufs2
set root=(hd0,gpt4)
chainloader /boot/loader.efi
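Combined with the asker's 40_custom stanza, the complete working entry would look something like this (a sketch; adjust the partition and paths to your own layout):

```
menuentry 'FreeBSD 11.0' {
    insmod part_gpt
    insmod ufs2
    set root=(hd0,gpt4)
    chainloader /boot/loader.efi
}
```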
Unable to dual boot FreeBSD alongside Arch Linux with Grub2
1,698,413,468,000
I just finished installing Arch and my 3TB GPT drive (this is not my boot drive) is not showing in arch. I initialised it as GPT on my windows 7 machine and that machine is still able to read from that drive. Does it have to do with UEFI? My mobo is UEFI and my windows installation is installed as EFI. I wasn't sure how to do EFI on arch and some threads mentioned arch would automatically recognise the UEFI BIOS (I am not too sure about the relation between UEFI and OS installation).
I have no idea why the partition isn't showing up on your Arch desktop (IMO GUIs are strange and unreliable and often have annoying hard-coded assumptions[1]), but if the partition is visible to the operating system, e.g. with sudo fdisk -l or sudo blkid or sudo lsblk, then you can manually mount the partition anywhere you like. For example:
sudo mkdir -p /windows-data
sudo mount -t ntfs -o ro /dev/sda2 /windows-data
You can, as always, tweak the exact mount options (e.g. for uid, gid, perms) to suit your needs. You can also cause the partition to be auto-mounted by adding a line to /etc/fstab, something like:
/dev/sda2 /windows-data ntfs uid=1000,gid=1000,noatime,allow_other 0 2
(that's all on one line, not two lines)
[1] If I had to guess, I'd say that the Arch desktop knows what an "HPFS/NTFS/exFAT" partition is and will display it, but doesn't know what a "Microsoft basic data" partition is, so it ignores it.
Arch not recognising GPT drive
1,698,413,468,000
I have tried installing Arch Linux and what I ended up with was a partition scheme like this:
/dev/sda:
/dev/sda1 NTFS partition (Windows 7)
/dev/sda2 ext4 (Arch)
/dev/sda3 swap
I don't know why, but for some reason I have been unable to mount the NTFS partition under Linux. It's worth mentioning that the first partition is, for some reason, detected as an EFI partition and as being on a GPT-formatted disk (my computer doesn't have an EFI bootloader and the drive has always had an MBR partition table). I deleted sda2 and sda3 using the Windows repair disk and was about to install a second Windows 7 installation alongside the first partition, but an error reported that the entire disk is a GPT drive! The "Used" and "Free space available" sections indicate that the data on the first partition is still there; it's just that I cannot access the actual partition by any means. It seems that the first partition was created under MBR but now sits on a disk seen as GPT. How do I access the data on the partition?
This is utterly strange, but I have solved my problem. As I'm not sure what exactly solved the issue, I'll describe what happened.
First of all, I tried to access the partition from Arch Linux, which was installed on the same drive. This didn't work.
I deleted the Linux partitions.
I unplugged the computer from the electric source and left it overnight (this has helped me a few times, especially while fixing recursive fault errors during boot).
The next morning I created an Ubuntu LiveUSB, booted the computer from it and mounted the malfunctioning partition using the following commands:
sudo mkdir /mnt/disk
sudo mount /dev/sda1 /mnt/disk
After executing the above commands I was able to access the partition and back up all of my files.
Recover Windows partition in a GPT disk (previously MBR)
1,698,413,468,000
I am trying out Funtoo on a new machine. I've been through the installation process, as described in Funtoo Linux Installation. Specifically, the installation is done from within an existing Linux distro via chroot, though onto a new, empty SSD. All went fine up to the point of installing the bootloader, which fails:
grub-install --no-floppy /dev/sda
source_dir doesn't exist. Please specify --target or --directory
The partitions created are:
Number Start (sector) End (sector) Size Code Name
1 2048 1026047 500.0 MiB 8300 Linux filesystem
2 1026048 1091583 32.0 MiB EF02 BIOS boot partition
3 1091584 269527039 128.0 GiB 8200 Linux swap
4 269527040 395356159 60.0 GiB 8300 Linux filesystem
5 395356160 479242239 40.0 GiB 8300 Linux filesystem
6 479242240 500118158 10.0 GiB 8300 Linux filesystem
and the /etc/fstab looks like:
/dev/sda1 /boot ext2 noatime 1 2
/dev/sda3 none swap sw 0 0
/dev/sda4 / ext4 noatime 0 1
/dev/sda5 /osgeo ext4 auto,rw,exec,user 0 2
/dev/sda6 /home ext4 defaults,noatime 0 2
#/dev/cdrom /mnt/cdrom auto noauto,ro 0 0
The /dev/sda1 partition is mounted, as reported by mount:
...
/dev/sda1 on /boot type ext2 (rw,noatime)
...
If it matters, the existing Linux distro has a similar GPT scheme, of course on another disk than the target for Funtoo. Some info about it:
cat /proc/mounts | grep boot
/dev/sdb1 /boot ext4 rw,relatime,data=ordered 0 0
/dev/sdb2 /boot/efi vfat rw,relatime,fmask=0002,dmask=0002,allow_utime=0020,codepage=cp437,iocharset=iso8859-1,shortname=mixed,utf8,errors=remount-ro 0 0
/dev/sda1 /boot ext2 rw,noatime 0 0
What else is there to be done? Do I have to mount /dev/sda2 as well? Thanks for any hints!
I just exited from chroot, unmounted all partitions, rebooted (the other Linux distro), and re-did the same stuff (following http://www.funtoo.org/wiki/Installation_Troubleshooting). It simply worked!
grub-install --no-floppy /dev/sda fails (Funtoo)
1,698,413,468,000
I intend to create a dual-boot persistent USB. I'd like to try creating a USB where I can boot macOS High Sierra and Windows 10. From what I understand of LVM, I can create 2 VGs, 1 APFS and 1 NTFS. This would allow me to boot into Windows 10 on a PC and Windows 10/macOS on a Mac. I know workarounds involve using 2 USBs, or using Boot Camp, but I'd like to give it a try. rEFInd would be able to give me the options I need, with individual /boot files in their own VG. Rather than using a GUI, I'm using this opportunity to learn about the basics. These are the steps I have taken.
Step 1: I wiped my USB with
sudo dd if=/dev/zero of=/dev/sdb bs=4k && sync
Step 2: Add GPT.
sudo gdisk /dev/sdb
o # Create new empty GPT
Step 3: Create EFI partition
n # new partition
1 # 1st partition
<enter> # suggested/default start sector
+512M # Internet wisdom on EFI size
ef00 # EFI system
Step 4: Create LVM partition
n # new partition
4 # 2nd partition
+128M # Internet wisdom on good practice
-128M # Internet wisdom to create buffer space
8e00 # LVM file system
Printing the end result:
Disk /dev/sdd: 242614272 sectors, 115.7 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): FE8B1928-7122-4004-9CF6-D5D47C08999E
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 242614238
Partitions will be aligned on 2048-sector boundaries
Total free space is 526302 sectors (257.0 MiB)
Number Start (sector) End (sector) Size Code Name
1 2048 1050623 512.0 MiB EF00 EFI System
2 1312768 242352094 114.9 GiB 8E00 Linux LVM
Here is where I am lost. I don't know how to install boot loaders into the EFI partition. I have followed Rod Smith's Managing EFI Boot Loaders for Linux: EFI Boot Loader Installation and The rEFInd Boot Manager: Installing rEFInd, but I got lost at the /boot/efi part. My Ubuntu machine does not have /boot/efi, but /boot/grub. Could anyone advise on actual steps to achieve rEFInd on an external USB?
An EFI System Partition is simply a FAT32-formatted partition (with the ESP boot flag set on GPT partition tables). Some UEFI systems will happily load bootloaders from a FAT32 partition on a standard MBR partition. It looks to me like you've created it properly, but it lacks formatting. Once formatted, you'll "install" rEFInd there by copying the rEFInd files.
Format:
mkfs.vfat -F 32 /dev/sdd1
Mount:
mkdir /tmp/usbboot && mount /dev/sdd1 /tmp/usbboot
Prepare the destination folder:
mkdir /tmp/usbboot/EFI
Copy rEFInd's files:
cp -a /path/to/refind /tmp/usbboot/EFI/
(or use the refind-install script: refind-install --root /tmp/usbboot)
Edit rEFInd's configuration to taste (/tmp/usbboot/EFI/refind/refind.conf).
The final step to fully install a bootloader on a UEFI system is to register it with your UEFI firmware, using efibootmgr or similar. This is often skipped with bootable USB drives; it will only affect the current system. To boot on other systems you'd use the firmware's boot menu. (If there are no other drives plugged in, and no other bootloaders on the USB's ESP, the firmware should autodetect rEFInd and load it automatically.)
How to install rEFInd for DIY multiboot USB
1,698,413,468,000
Assume we have a MS-Windows/Linux dual-boot system that works completely fine:
partitioning scheme: GPT
sda1: Windows
sda2: Linux root
sda3: Linux home
Unfortunately this installation is pretty aged, so I would like to perform a Linux reinstallation while maintaining the MS-Windows dual boot. Question: Does it make sense to repartition/reformat the whole drive and also perform a fresh Windows installation in this case? My concern is about possible technical GPT updates that could provide more stability or similar. My assumption is that the GPT created at the previous time of installation could be based on an old GPT version, and several updates could have been implemented in the meanwhile. Or am I overthinking this whole thing, and would it be complete nonsense to install everything anew, when I actually only want to reinstall Linux?
I don't think it's necessary to repartition your drive or recreate its partition table. The GPT is only the table that describes how the disk is divided and how the partitions are identified. Performance depends on the partition's format, not the partition table (for instance, ext4 performs better than ext2). Remember, for example, how trivial it is to convert a partition table: you can convert an MBR disk to GPT with gdisk in an instant, without having to reformat your partitions.
Linux/Windows Dualboot: Does it make sense to repartition a whole drive when reinstalling Linux?
1,698,413,468,000
I have a new computer with UEFI. I formatted the disk as GPT (not MBR), made a few small partitions at the start (as placeholders for UEFI, for boot, for swap, etc.), then a larger partition for the system / (and left the rest of the disk free for future usage) and installed Gentoo. But I cannot figure out how to install grub-legacy to enable booting different kernels with different command-line arguments. It is easy on an MBR disk, and it is easy to manage grub.conf with just nano or so to get changes done. Much easier and more straightforward than configuring and running a bunch of scripts every time I need a small change to the GRUB 2 configuration (and the resulting grub.conf is also much smaller and more readable), so I would like to stick with grub-legacy as long as possible. (I know that GRUB 2 is newer and supports more filesystems, which I don't ever use, but so far grub-legacy has worked just fine for me and did everything I needed in an easy and simple to understand/modify way.) Thanks for all directions.
3 possibilities:
mbr: Use MBR; waste the disk space over 2 TB.
gpt: Use GRUB 2 with GPT. (Stressing yourself over the scripts etc. is not strictly necessary; you can just ignore the suggestion not to edit grub.cfg and edit it like legacy GRUB. Just make sure no updates that point to (this) grub run automatically.)
hybrid: Use a hybrid approach, i.e. 4 MBR partitions (under 2 TB), which GRUB legacy is aware of, and a GPT-aware OS that uses the rest.
Caveat: I've given the third choice since that's what you (seem to) want. However, as the link suggests, it's a great deal of trouble and not worth it.
How install legacy grub to gpt uefi disk?
1,698,413,468,000
I have a Lenovo Thinkpad W550s that already has Windows 7 on it. I would like to install Fedora 29 Workstation alongside Windows 7, but I have run into some problems. The hard drive was formatted with MBR (not GPT) and three partitions. Using the fdisk -l command from a Fedora 29 LiveUSB yields the following information:
Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x7a8dee3d
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 3074047 3072000 1.5G 7 HPFS/NTFS/exFAT
/dev/sda2 3074048 944916479 941842432 449.1G 7 HPFS/NTFS/exFAT
/dev/sda3 944916480 976771071 31854592 15.2G 7 HPFS/NTFS/exFAT
The motherboard has UEFI. However, Legacy BIOS is enabled, and Secure Boot is disabled. In the Fedora 29 Workstation installer, I could shrink the /dev/sda2 partition and use that for root, home, whatever, and delete the /dev/sda3 partition to satisfy the four-partition limit with MBR. But when I try to install the OS, the installer gives an error about requiring a /boot/efi partition. Even when I try deleting /dev/sda1 (still from within the Fedora installer), formatting that and installing the EFI to /dev/sda1, the installer still won't proceed. Is there a way to install Fedora 29 on this laptop without removing Windows 7? I need it for work, and can't do a reinstall of Windows 7.
One of two things is the issue:
You created a UEFI-only installer USB.
You're booting in UEFI mode and need to boot in MBR/legacy mode.
If you can get to the CLI, try this: https://askubuntu.com/questions/162564/how-can-i-tell-if-my-system-was-booted-as-efi-uefi-or-bios
Update: When I have a USB/ISO that is both UEFI/MBR compatible, it usually shows two boot options in the BIOS/boot loader. See if a second option shows up and try that, and/or try messing with BIOS settings to force MBR/legacy mode only. I have also had it where Rufus (a Windows tool for writing ISOs to USB) asks "Do you want to use ISO Mode (Recommended)" or "DD Mode", and I generally use ISO mode. But I remember a case where that created a UEFI-only USB; I then tried DD mode and got a hybrid USB which was both MBR and UEFI compatible. Try using DD mode to create the installer USB and then check for a new boot entry.
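The linked check boils down to a single test; a minimal sketch that works on any modern Linux:

```shell
# The kernel exposes /sys/firmware/efi only when it was booted via UEFI.
if [ -d /sys/firmware/efi ]; then
    echo "Booted in UEFI mode"
else
    echo "Booted in BIOS/legacy mode"
fi
```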
Can't install Fedora 29 on Thinkpad W550s due to GPT
1,698,413,468,000
System: Laptop with Linux Mint 17.3, 1x SSD for the system and 2x HDD intended for RAID1 using mdadm.
Situation: Without knowing how to create RAID1 properly, I created it badly:
GParted showed a warning that the primary GPT partition table is not there and that it is using the backup one (I think it showed this twice)
GParted showed the partition on both HDDs contained an ext4 filesystem, instead of a linux-raid filesystem
GParted did not show the raid flag on either HDD
Rebooting caused the array not to work; not only did it not mount automatically, it could not be mounted without stopping the array and re-assembling it
There were probably other things I did not notice; I don't even know if the array (I mean the mirroring) ever worked properly
In this answer, let it be clear that all of your data will be destroyed on both of the array members (drives), so back it up first!
Open a terminal and become root (su); if you have sudo enabled, you may also do, for example, sudo -i (see man sudo for all options):
sudo -i
Check what number (mdX) the array has:
cat /proc/mdstat
Suppose it is md0 and it is mounted on /mnt/raid1; first we have to unmount and stop the array:
umount /mnt/raid1
mdadm --stop /dev/md0
We need to erase the super-block on both drives, suppose sda and sdb:
mdadm --zero-superblock /dev/sda1
mdadm --zero-superblock /dev/sdb1
Let's get to work; we should erase the drives, if there were any data and filesystems before, that is. Suppose we have 2 members, sda and sdb:
pv < /dev/zero > /dev/sda
pv < /dev/zero > /dev/sdb
If you were to skip the previous step for your own reasons, you need to wipe all filesystems on both of the drives. Then check that there is nothing left behind; you may peek with GParted on both of the drives, and if there is any filesystem other than unknown, wipe it. First, we wipe all existing partitions; suppose sda contains 3 partitions, then:
wipefs --all /dev/sda3
wipefs --all /dev/sda2
wipefs --all /dev/sda1
Use this on both of the drives and do all the partitions there are.
Then, we wipe the partition scheme with:
wipefs --all /dev/sda
wipefs --all /dev/sdb
Then, we initialize both drives with a GUID partition table (GPT):
gdisk /dev/sda
gdisk /dev/sdb
In both cases use the following:
o Enter for new empty GUID partition table (GPT)
y Enter to confirm your decision
w Enter to write changes
y Enter to confirm your decision
Now, we need to partition both of the drives, but don't do this with GParted, because it would create a filesystem in the process, which we don't want; use gdisk again:
gdisk /dev/sda
gdisk /dev/sdb
In both cases use the following:
n Enter for new partition
Enter for first partition
Enter for default of the first sector
Enter for default of the last sector
fd00 Enter for Linux RAID type
w Enter to write changes
y Enter to confirm your decision
To triple-check that there is nothing left behind, you may peek with GParted on both of the newly created partitions, and if they contain any filesystem other than unknown, wipe it:
wipefs --all /dev/sda1
wipefs --all /dev/sdb1
You can examine the drives now:
mdadm --examine /dev/sda /dev/sdb
It should say: (type ee)
If it does, we now examine the partitions:
mdadm --examine /dev/sda1 /dev/sdb1
It should say: No md superblock detected
If it does, we can create the RAID1 array:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
We shall wait until the array is fully created, a process we may watch with:
watch -n 1 cat /proc/mdstat
After creation of the array, we should look at its detail:
mdadm --detail /dev/md0
It should say:
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Now we create a filesystem on the array. If you use ext4, the following is better avoided, because ext4lazyinit would take a noticeable amount of time (hence the name "lazyinit"), therefore I recommend you avoid this one:
mkfs.ext4 /dev/md0
Instead, you should force a full instant initialization with:
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/md0
By specifying these options, the inodes and the journal will be initialized immediately during creation, useful for larger arrays. If you chose to take the shortcut and created the ext4 filesystem with the "better avoided command", note that ext4lazyinit will take a noticeable amount of time to initialize all of the inodes; you may watch it until it is done, e.g. with:
iotop
Either way you choose to do the filesystem initialization, you should mount it after it has finished its initialization. We now create a directory for this RAID1 array:
mkdir --parents /mnt/raid1
And simply mount it:
mount /dev/md0 /mnt/raid1
Since we are essentially done, we may use GParted again to quickly check that it shows a linux-raid filesystem, together with the raid flag, on both of the drives. If it does, we properly created the RAID1 array with GPT partitions and can now copy files onto it.
See what UUID the md0 filesystem has:
blkid /dev/md0
Copy the UUID to the clipboard. Now we need to edit fstab with your favorite text editor:
nano /etc/fstab
And add an entry to it:
UUID=<the UUID you have in the clipboard> /mnt/raid1 ext4 defaults 0 0
You may check whether it is correct after you save the changes:
mount --all --verbose | grep raid1
It should say: already mounted
If it does, we save the array configuration; in case you don't have any md device created yet, you can simply do:
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
In case there are arrays already existent, just run the previous command without the redirection to the conf file:
mdadm --detail --scan
and add the new array to the mdadm.conf file manually.
In the end, don't forget to update your initramfs:
update-initramfs -u
Check that you did everything according to plan, and if so, you may restart:
reboot
How to re-create RAID1 really properly
1,698,413,468,000
I have a 2 TB hard drive with GPT and a single 2 TB partition with an ext4 filesystem. The partition contains one 1.5 TB file. I want to change the filesystem of this partition from ext4 to exFAT without deleting the 1.5 TB file. Can I do that without writing a custom program?
There is a tool which some people have successfully used to convert Ext4 partitions to exFAT in place, fstransform. Note that the tool doesn’t officially support conversions to exFAT, and I haven’t tried it — but there are apparently reports of it working (with the --force-untested-file-systems flag). In any case you should have a backup of your file before attempting this, in which case you might as well reformat and restore your file from backup.
Change the file system of a partition without deleting its content
1,592,331,333,000
If I set up a VirtualBox guest with two 30 GB virtual hard disks and follow the following steps, the result will be a fully functional, booting operating system:
Boot the Ubuntu 14.04 Server install CD.
At the partitioner, select 'Manual'.
Put a single empty partition on each virtual hard disk.
Select 'Configure software RAID' from the 'Manual' menu.
Add the two virtual hard disks, each with empty partitions, to a new RAID 1 pair and select 'Finish'.
Select 'Guided Partitioning' from the 'Manual' menu.
Return to guided partitioning and select 'Guided - use entire disk and set up encrypted LVM'.
Install to the recently created software RAID device and use the entire storage available for LVM.
Finish the installation.
However, take the steps above, but substitute bare metal for the VirtualBox guest and two zeroed 3 TB SATA disks for the two virtual hard disks, and the result is an un-bootable system. No GRUB screen; the BIOS skips the disks. I tried every possible combination of GPT flags, still nothing. Any thoughts on the cause?
-Update-
The bare metal in question is a Lenovo x3100 M5 server with IBM firmware. One hint to the problem: the Ubuntu server installer is dropping a BIOS-compatible boot loader on the VirtualBox install. On the Lenovo, it installs a UEFI GRUB, which the Lenovo can boot as long as it is not on an mdadm RAID. If I follow the above steps on the Lenovo, minus the Ubuntu software RAID, it boots. If I configure the RAID 1 pair in the IBM firmware (c100/LSI fakeRAID), the install fails at the GRUB install. It does not seem to be a GPT vs MBR issue, since the Lenovo does boot the 3 TB GPT LVM volume, as long as it is not on a RAID pair.
Your 3TB disks need GPT boot rather than MBR, so you will need to allocate a 1MB BIOS boot partition for grub to store its data. See http://ubuntuforums.org/showthread.php?t=2248346 for the gory details (which I will try to summarise here when I get back to a decent keyboard).
Ubuntu 14.04 software RAID with LVM install won't boot on bare metal
1,592,331,333,000
I have an ASUS Z170 pro gaming motherboard. Its motherboard uses UEFI, not BIOS. I have two drives: An SSD with Windows 10 installed. A blank HDD. Following the instructions here, I run msinfo32 and get the value of BIOS mode. It says Legacy, which means it boots in BIOS-MBR mode. The wiki has different instructions for BIOS-MBR and UEFI-GPT, but I don't know which one I have because the BIOS and Windows give me different information. Should I use MBR or GPT? Also, I must find my EFI partition but I don't know which it is. In my SSD with Windows 10, I have three partitions: System Reserved 100 MB NTFS Healthy (System, Active, Primary Partition) Which is the EFI partition?
Windows was booting in Legacy mode on a UEFI motherboard, which is incorrect. In order for a Linux bootloader to see Windows, it must boot in the same mode. Reinstalling Windows 10 made it boot in UEFI mode, fixing the bootloader issue. Because Windows 10 is in UEFI mode, it is best to use GPT rather than MBR. None of the listed partitions appear to be the EFI system partition. After reinstalling Windows, a new partition clearly labeled as the EFI partition was created.
Arch Linux and Windows 10 dual boot
1,592,331,333,000
I partitioned an MMC card into multiple partitions (in GPT format), and the very first partition is just padding space so that all other partitions are aligned to an optimal boundary. The problem is, on boot Linux always tries to mount the first partition, which is almost guaranteed to fail; this 1) takes time, and 2) if it should succeed, the behavior is highly undefined. Is there a flag I can set for the partition, or a config file that I can change, to prevent certain partitions of certain block devices from being mounted?
Use the option noauto in /etc/fstab for that mount point to make sure the init process will not mount it at boot. You might have a line like this in /etc/fstab : /dev/sda1 /mnt/your_partition ntfs-3g defaults,noauto 0 0
How to mark a partition as unmountable?
1,592,331,333,000
I have a laptop with an SSD which has Windows 10 installed. I booted the laptop from a USB flash drive into Ubuntu 14.04.3 and tried to find out the file system on partition 4. According to gdisk it has partition code 0x0700, which means that it is 0x07 (0x0700/0x0100) in MBR codes, meaning HPFS/NTFS/exFAT. This is in accord with the gdisk manual, which says that the codes for all varieties of FAT and NTFS partitions correspond to a single GPT code (entered as 0x0700 in sgdisk). According to parted it's msftdata. Parted seems to gather its information by looking at the data on the partition. file -s /dev/sda4, which uses the same principle as parted, finds that the file-system is a PE32 executable. Finally I tried to get any additional information with ntfsinfo, but it looks like ntfsinfo wants the file-system to be mounted; dumpe2fs, for example, can be used on unmounted file-systems. One could assume that this is an NTFS partition, but for some reason the partition is not mounted: In short, how do I determine a Windows file-system on a GPT disk partition? Or is there a way to check from Linux if this partition is encrypted?
It turned out that the file-system on the /dev/sda4 partition was corrupted, not encrypted. I was able to fix the partition with ntfsfix /dev/sda4. The output of file -s /dev/sda4 and ntfsinfo once the file-system is fixed can be seen below: root@ubuntu:~# file -s /dev/sda4 /dev/sda4: x86 boot sector root@ubuntu:~# ntfsinfo -vm /dev/sda4 | head Volume Information Name of device: /dev/sda4 Device state: 11 Volume Name: Volume State: 91 Volume Flags: 0x0000 Volume Version: 3.1 Sector Size: 512 Cluster Size: 4096 Index Block Size: 4096 root@ubuntu:~#
Determine Windows file-system on GPT partition
1,592,331,333,000
I have a NTFS hard drive internally attached to my computer and it's causing problems with the other Windows installation on my dual (or triple?) boot machine. I'm not sure if the partition scheme is GPT or MBR, but how can I create a backup of the partition table using dd and then wipe it from the drive so it isn't recognized by the other Windows as it starts up?
To back up a DOS label (MBR) use this: dd if=/dev/sdX of=mbr bs=512 count=1 To back up a GPT label use this: dummy=$(parted -ms /dev/sdX print | tail -1 | cut -b1) size=$((128 * dummy + 1024)) dd if=/dev/sdX of=gpt bs=1 count=$size To wipe out the labels use this: dd if=/dev/zero of=/dev/sdX bs=Y count=Z partprobe /dev/sdX HTH
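For context on where that size formula comes from: the protective MBR is the first 512-byte sector, the GPT header is the second, and each partition entry takes 128 bytes, so a disk whose highest partition number is N needs 1024 + 128*N bytes copied from the start. A quick sketch of the arithmetic (the partition count here is a made-up example):

```shell
# hypothetical disk whose highest partition number is 3
nparts=3
size=$((128 * nparts + 1024))   # 512 (protective MBR) + 512 (GPT header) + 128 bytes per entry
echo "$size"                    # bytes to copy from the start of the disk
```

Note that this only covers the entries actually in use; the full entry array is typically 128 entries (16 KiB), and the backup GPT at the end of the disk is not included in this backup.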
Backup and then wipe partition table from head of drive
1,592,331,333,000
I've restored backup GPT headers on previously GPT-partitioned disks that are members of a Linux software raid (mdraid). This was done because partprobe reported corrupted headers. Now, software raid should actually manage the entire disks, but the previously used partition information remains from the time when the server was used in a different fashion. Realizing that GPT is probably not relevant in my setup, I removed the GPT information entirely through gdisk expert mode. My worry at this point, however, is that my fiddling around with the GPT table restore/GPT information removal might have corrupted my software raid. The system itself doesn't show any signs that this is the case (it still boots, data is accessible), but I wondered if someone can advise whether the data could still be corrupted by my actions, or how I could check the integrity of the data.
Version 1.2 metadata is stored 4K from the start of the block device. The data itself is a fair bit in, typically. For example, here is (part of) mdadm -E from a disk in one of my arrays: /dev/sda3: Magic : a92b4efc Version : 1.2 ⋮ Data Offset : 262144 sectors Super Offset : 8 sectors Unused Space : before=262064 sectors, after=2224 sectors ⋮ As you can see, 8 sectors in (8 * 512 bytes/logical sector = 4KiB) is the array superblock. The data is much further in, 128MiB in fact. GPT layout is the first sector (#0) is a protective MBR; the next 33 sectors (#1–#33) the GPT partition table and entries. The last 33 sectors on the disk store the backup. So, restoring from the backup GPT partition table may overwrite the first 34 sectors total. But it wouldn't touch the data (because that's well over two hundred thousand sectors away). Depending on how much space is unused after, then even writing to the backup at the end wouldn't cause corruption (there is plenty in my array, yours may differ.) It sounds like your superblocks were not destroyed, though, as you have assembled the array since. I'd confirm by checking mdadm -E on each disk, but other than that, it sounds like no harm was done. You may also want to clear and re-enable the write-intent bitmap if (a) in use and (b) internal, as that's stored in the space between the superblock and the data.
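A little arithmetic shows how much headroom is involved; the 262144-sector data offset is the one from the mdadm -E output above, and 34 sectors is the protective MBR plus the GPT header and its 32 entry sectors:

```shell
data_offset_sectors=262144     # "Data Offset" from the mdadm -E output above
gpt_sectors=34                 # protective MBR (1) + GPT header (1) + entry array (32)
echo $((data_offset_sectors * 512 / 1024 / 1024))   # MiB before the array data begins
echo $((data_offset_sectors - gpt_sectors))         # sectors of slack after the primary GPT
```

Either way, mdadm -E on each member device is the authoritative check.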
Data integrity after GPT restore on Mdraid configured disk
1,592,331,333,000
I wanted to upgrade my 1.5TB HDD to a 4TB SSD, none of the internet resources I found correctly matched my situation from beginning to end, and I ended up spending about 20 hours getting it to work, so I'm compiling my findings here for future travelers.
Base details: I have an old Dell Inspiron 5720 running Ubuntu 20.04.6. I started with a 1.5TB HDD and bought a 4TB SSD. I discovered that MBR, the partition table my old drive used, did not support partitions larger than ~2TB, so I was going to have to switch to GPT. Apparently this almost inevitably entails switching your boot system from BIOS to UEFI, as well, which was really the primary source of problems. (There's evidence there was a broken UEFI installation already present, as well, which may have complicated things.) I copied Kali Linux onto a USB stick to help with the proceedings. You can skip down to where I start listing steps, if you want the tl;dr. I'm leaving the rest, and the lines in parentheses, in case the details help somebody figure out what's wrong with theirs. The following is the first thing I tried, which DID NOT WORK. It seemed like it would be safer to mess with a partition while it was not in use, so I booted the usb. I plugged the SSD in with an adapter. I opened gparted, and if I remember correctly, on the SSD I created a GPT partition table. I then (in gparted) selected the HDD, copied the first partition, selected the SSD, pasted, and likewise for the second partition. Reading UEFI guides, I MAY have created a third partition at the end for the UEFI partition, or I may have added that later when my first attempts failed. I then waited for the copy to finish (like 4 hours), turned off the computer, and swapped hard drives, and tried to boot. No dice. I think the error was "No operation system", or something equally nearly-right. Long story short, I tried a whole bunch of things for many hours, and none of them worked. I then switched to plan B. Plan B, which eventually DID work. Most of the guides expected you to be booted into the system you wanted to convert, so first I swapped drives again and booted the usb and copied my hdd directly onto the ssd with dd if=/dev/sda bs=1M of=/dev/sdc, checking I had the right device. 
(I nearly copied it the wrong way.) The next morning, I swapped hard drives and booted successfully. (The HDD was no longer used at any point past here.) Next I (installed?) ran gdisk /dev/sda. It said that there was a working MBR table, but also a corrupt GPT table. I think it asked me which to proceed with, so I selected (1)MBR; and I don't THINK it made me select anything else before putting me into the main menu, so after poking around the options, did w for write. Rebooted. Got a blinking cursor on a black screen. Eliding a lot of testing and false starts, but restating the first steps so the list is complete: (Suggestion: check that your bios supports UEFI and otherwise update it; doing that earlier might have fixed some of my problems.) Be careful about copying, some of these commands use my specific drive numbers Boot from usb Copy hdd onto ssd, e.g. dd if=/dev/HDD bs=1M of=/dev/SSD (get it right!!) Turn off computer Swap drives, put HDD somewhere safe in case you mess up and need to start over Reboot into ssd gdisk /dev/sda (If it asks you to pick between a working MBR and a broken GPT, pick MBR) w for "write" Reboot into usb (though it MIGHT work and not wreck your data to continue on to the following from your main os, and without the mounting and chroot stuff. Might still need to mount the efi partition.) Connect to the internet sudo bash apt update apt install zfsutils-linux modprobe zfs mount /dev/sda5 /mnt # sda5 was my main installation partition mount /dev/sda1 /mnt/boot/efi # sda1 I think was the partition originally intended to be the efi partition modprobe efivarfs mount -t proc proc /mnt/proc mount -t sysfs sys /mnt/sys mount -o bind /dev /mnt/dev mount --bind /run /mnt/run chroot /mnt bash apt install grub-efi update-grub grub-install --target=x86_64-efi /dev/sda update-grub # not sure whether it's supposed to go before or after install (Hopefully I haven't missed any steps etc.) 
I eventually realized my bios, Phoenix SecureCore Tiano version A03, was too old to support UEFI. So I: Downloaded the newest firmware from https://www.dell.com/support/home/en-us/drivers/driversdetails?driverid=khvck Downloaded the FreeDOS FullUSB image from https://www.freedos.org/download/ dd'd the img file onto a usb Replugged the usb Mounted the usb's partition Copied the bios EXE onto it sync and shut down Booted that USB (must be writable or the exe fails to extract) Ran the exe file, which installed the new bios, version A17 Confirmed the bios now COULD work with UEFI, saw new options in the menus etc. (Boot still failed, though I forget exactly how) (I then used the boot menu to select the hard drive - Ubuntu started to boot, but showed "watchdog: BUG: soft lockup", and after a few minutes I restarted) Selected hard drive from boot menu Got a grub screen, and selected a maintenance mode shell modprobe zfs update-grub grub-install update-grub reboot (Bios now showed "ubuntu" in the uefi boot list, which worked, but the bios wouldn't automatically boot into it) Selected hard drive in boot list, successful boot sudo add-apt-repository ppa:yannubuntu/boot-repair sudo apt install boot-repair boot-repair Ran suggested fixes, noting the sda1/efi/ubuntu/grubx64.efi path it gave in the success dialog (Reboot failed) (I turned off Legacy Boot ROM option in the bios, and it DID boot correctly, but took away my other boot options, so I turned it back on.) In bios, I selected "add boot", gave the boot option a name, left selected the only target present (some long string that I think referred to my hard drive or partitions or something), and the third box let me navigate to "EFI\ubuntu\grubx64.efi", and created the boot option. Made sure that option was the first of the UEFI boot options. (It had all the legacy options above the uefi ones, and wouldn't let me change that.) 
It finally booted unaided I used gparted from my os to resize my main partition Successfully rebooted I don't know how much of that is necessary, and how much could have been avoided by e.g. updating the bios first, and maybe skipping some of the boot-into-usb steps, but this is what ended up working, cobbled together from dozens of different SO posts and reddit posts and forum threads.
Convert MBR/BIOS to GPT/UEFI (infodump)
1,592,331,333,000
I have a (GPT-partitioned) disk, for example /dev/sda. /dev/sda8 is a partition on that disk. I used the cfdisk utility to create a GPT table with a few partitions in /dev/sda8. I expected these partitions to become available via something like /dev/sda8p1, but Linux did not automatically recognize them. How do I make Linux recognize partitions inside a partition, and automate that if possible?
I know of nothing that will automatically scan a partition as if it were a disk, and indeed it can't even be scanned manually: partx --add - /dev/sda8 partx: /dev/sda8: error adding partitions 1-2 However, you can use a loop device to map the partition back to a device - and this device can be scanned as if it were a disk. Example for a device /dev/sda8 containing two partitions: losetup --show --find --partscan /dev/sda8 /dev/loop0 ls -1 /dev/loop0* # The option is the digit 1, not a lowercase L /dev/loop0 /dev/loop0p1 /dev/loop0p2 Remember to delete the loop device when you've finished: losetup -d /dev/loop0
How to make Linux read partition table in a partition?
1,592,331,333,000
I have built a custom image of Armbian with a partition size of 3.1 GB, and I am now finished working with it. It is currently written to a bootable 64 GB SD card which is using a GUID partition table (GPT). My problem is that when I want to make an image of the card using Ubuntu, I get an image file 63 GB in size, but I don't want an image file with 60 GB of empty space. I looked for other ways of shortening the image file, such as using the truncate command and creating an image using dd count=, but it isn't working. When I use dd it creates an image file that when mounted is all "free space" and PMBR, and truncate breaks a working image file. So (unless I'm doing it wrong), how can I create a 3 GB image of my SD card that will contain the boot information?
truncate is a good tool. You need to shrink the image, so it contains every partition defined in the partition table. In other words, if the end sector of the partition closest to the end is N (note it doesn't have to be the partition with the highest number), you need N+1 sectors of the image (+1 because numbering starts at 0). Use gdisk -l image to know the N. Most likely the card uses 512-byte sectors and the partition table is valid when interpreted in terms of 512-byte sectors (for comparison: see what happens when this assumption does not hold). So you need (N+1)*512 bytes (or more, having more is not fatal). truncate accordingly. Reading this number of bytes directly from the card in the first place would give you the same result. An easy way (although non-POSIX, see this) is head -c number-of-bytes-here /dev/sdx > image. Then you need 33 additional logical sectors for a secondary (backup) GPT. Use truncate again and add 33*512 bytes to the file (truncate -s +16896 image). We could have shrunk the image to the desired final size with the first truncate (or read more with head), but doing this in two steps causes these additional 33 sectors to contain zeros instead of garbage that might interfere in a moment. The first truncating (or creating a partial image) discarded the original secondary GPT. Use gdisk image and let it fix the problem. It will tell you that disk size is smaller than the main header indicates and invalid backup GPT header, but valid main header; regenerating backup header from main header. Thanks to the second truncate there is room for the backup GPT. All you need is to "write table to disk and exit"; the tool will rewrite GPT, including the backup one.
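Putting numbers on the procedure (N here is an invented end sector — take the real one from gdisk -l image):

```shell
N=6143999                       # hypothetical end sector of the last partition
main=$(( (N + 1) * 512 ))       # first truncate: enough bytes to cover every partition
backup=$(( 33 * 512 ))          # second truncate: room for the secondary GPT
echo "$backup"                  # 16896, the "+16896" used above
# truncate -s "$main" image && truncate -s "+$backup" image
```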
Problem creating a disk image of an SD card
1,592,331,333,000
From UEFI spec 2.8, GPT partition entry array definition (5.3.3), I understand that Unique Partition GUID is to "uniquely identify every partition that will ever be created". And the language seemingly implies there can be a pool of GUIDs or some default GUID generator. Then where does this pool/generator usually reside? (tool lib like fdisk? media firmware? I don't think it's in BIOS or kernel from what I read.)
The tool that creates the new partition also generates the GUID for it. fdisk uses the uuid_generate_random function from libuuid for that. If you are interested in details, RFC 4122 describes UUIDs in more details and also includes description of algorithms to create them. (UUID and GUID are more or less synonyms, there are some differences, if you are interested in details, I recommend this answer on stackoverflow).
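On Linux you can watch a generator of this kind without writing any code: the kernel exposes a random-UUID source at /proc/sys/kernel/random/uuid, and the uuidgen tool from util-linux does the same via libuuid. A small sketch:

```shell
# each read returns a freshly generated version-4 (random) UUID
guid=$(cat /proc/sys/kernel/random/uuid)
echo "$guid"
echo "${#guid}"       # 36 characters: 32 hex digits plus 4 hyphens
```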
How is the value of the "unique partition GUID" is generated?
1,592,331,333,000
System: Linux Mint 20 x64 Cinnamon, booted on an MBR-formatted disk. I inserted a new HDD containing a GPT and an NTFS partition (next to a small MSR partition, which doesn't matter here I guess). GParted detects the HDD but the mount option is greyed out. Why is that? And what can I do to mount this partition?
As suggested in the comments, the problem was indeed the hibernation flag. The ntfs-3g driver cannot safely mount the NTFS disk while Windows is in a hibernation state. This article explains it perfectly: all-explaining article. Some possible solutions: If you have access to Windows: boot into Windows and do a normal shutdown/restart (fast-boot in Win8/10 must be disabled). Boot into Linux and the NTFS disk will be accessible. If you have no access to Windows: ntfs-3g remove_hiberfile completely deletes hiberfil.sys and will cause you to lose all unsaved information in the hibernated Windows programs, but your NTFS partition can be mounted again as rw.
Can I mount a gpt disk with ntfs partition on an mbr booted linux system
1,286,911,424,000
I'm looking for an easy way (a command or series of commands, probably involving find) to find duplicate files in two directories, and replace the files in one directory with hardlinks of the files in the other directory. Here's the situation: This is a file server which multiple people store audio files on, each user having their own folder. Sometimes multiple people have copies of the exact same audio files. Right now, these are duplicates. I'd like to make it so they're hardlinks, to save hard drive space.
There is a perl script at http://cpansearch.perl.org/src/ANDK/Perl-Repository-APC-2.002/eg/trimtrees.pl which does exactly what you want: traverse all directories named on the command line, compute MD5 checksums and find files with identical MD5. If they are equal, do a real comparison; if they are really equal, replace the second of the two files with a hard link to the first one.
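If you'd rather not pull in a Perl script, the checksum-compare-relink idea can be sketched with standard tools. The filenames below are invented for illustration, and like trimtrees.pl a real run should do a byte-for-byte comparison (cmp) before linking, since checksums alone can collide:

```shell
dir=$(mktemp -d)                         # throwaway playground
echo "same audio bytes" > "$dir/user1_song"
echo "same audio bytes" > "$dir/user2_song"

sum1=$(md5sum < "$dir/user1_song")
sum2=$(md5sum < "$dir/user2_song")
if [ "$sum1" = "$sum2" ] && cmp -s "$dir/user1_song" "$dir/user2_song"; then
    # identical: replace the second copy with a hard link to the first
    ln -f "$dir/user1_song" "$dir/user2_song"
fi

stat -c %i "$dir/user1_song" "$dir/user2_song"   # same inode number printed twice
```

After the ln -f the two names share one inode, so the duplicate's disk space is freed.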
Is there an easy way to replace duplicate files with hardlinks?
1,286,911,424,000
I read in textbooks that Unix/Linux doesn't allow hard links to directories but does allow soft links. Is it because, when we have cycles and we create hard links, then after some time, when we delete the original file, they will point to some garbage value? If cycles were the sole reason behind not allowing hard links, then why are soft links to directories allowed?
This is just a bad idea, as there is no way to tell the difference between a hard link and an original name. Allowing hard links to directories would break the directed acyclic graph structure of the filesystem, possibly creating directory loops and dangling directory subtrees, which would make fsck and any other file tree walkers error prone. First, to understand this, let's talk about inodes. The data in the filesystem is held in blocks on the disk, and those blocks are collected together by an inode. You can think of the inode as THE file.  Inodes lack filenames, though. That's where links come in. A link is just a pointer to an inode. A directory is an inode that holds links. Each filename in a directory is just a link to an inode. Opening a file in Unix also creates a link, but it's a different type of link (it's not a named link). A hard link is just an extra directory entry pointing to that inode. When you ls -l, the number after the permissions is the named link count. Most regular files will have one link. Creating a new hard link to a file will make both filenames point to the same inode. Note: % ls -l test ls: test: No such file or directory % touch test % ls -l test -rw-r--r-- 1 danny staff 0 Oct 13 17:58 test % ln test test2 % ls -l test* -rw-r--r-- 2 danny staff 0 Oct 13 17:58 test -rw-r--r-- 2 danny staff 0 Oct 13 17:58 test2 % touch test3 % ls -l test* -rw-r--r-- 2 danny staff 0 Oct 13 17:58 test -rw-r--r-- 2 danny staff 0 Oct 13 17:58 test2 -rw-r--r-- 1 danny staff 0 Oct 13 17:59 test3 ^ ^ this is the link count Now, you can clearly see that there is no such thing as a hard link. A hard link is the same as a regular name. In the above example, test or test2, which is the original file and which is the hard link? 
By the end, you can't really tell (even by timestamps) because both names point to the same contents, the same inode: % ls -li test* 14445750 -rw-r--r-- 2 danny staff 0 Oct 13 17:58 test 14445750 -rw-r--r-- 2 danny staff 0 Oct 13 17:58 test2 14445892 -rw-r--r-- 1 danny staff 0 Oct 13 17:59 test3 The -i flag to ls shows you inode numbers in the beginning of the line. Note how test and test2 have the same inode number, but test3 has a different one. Now, if you were allowed to do this for directories, two different directories in different points in the filesystem could point to the same thing. In fact, a subdir could point back to its grandparent, creating a loop. Why is this loop a concern? Because when you are traversing, there is no way to detect you are looping (without keeping track of inode numbers as you traverse). Imagine you are writing the du command, which needs to recurse through subdirs to find out about disk usage. How would du know when it hit a loop? It is error prone and a lot of bookkeeping that du would have to do, just to pull off this simple task. Symlinks are a whole different beast, in that they are a special type of "file" that many file filesystem APIs tend to automatically follow. Note, a symlink can point to a nonexistent destination, because they point by name, and not directly to an inode. That concept doesn't make sense with hard links, because the mere existence of a "hard link" means the file exists. So why can du deal with symlinks easily and not hard links? We were able to see above that hard links are indistinguishable from normal directory entries. Symlinks, however, are special, detectable, and skippable!  du notices that the symlink is a symlink, and skips it completely! % ls -l total 4 drwxr-xr-x 3 danny staff 102 Oct 13 18:14 test1/ lrwxr-xr-x 1 danny staff 5 Oct 13 18:13 test2@ -> test1 % du -ah 242M ./test1/bigfile 242M ./test1 4.0K ./test2 242M .
Why are hard links to directories not allowed in UNIX/Linux?
1,286,911,424,000
I know what hard links are, but why would I use them? What is the utility of a hard link?
The main advantage of hard links is that, compared to soft links, there is no size or speed penalty. Soft links are an extra layer of indirection on top of normal file access; the kernel has to dereference the link when you open the file, and this takes a small amount of time. The link also takes a small amount of space on the disk, to hold the text of the link. These penalties do not exist with hard links because they are built into the very structure of the filesystem. The best way I know of to see this is: $ ls -id . 1069765 ./ $ mkdir tmp ; cd tmp $ ls -id .. 1069765 ../ The -i option to ls makes it give you the inode number of the file. On the system where I prepared the example above, I happened to be in a directory with inode number 1069765, but the specific value doesn't matter. It's just a unique value that identifies a particular file/directory. What this says is that when we go into a subdirectory and look at a different filesystem entry called .., it has the same inode number we got before. This isn't happening because the shell is interpreting .. for you, as happens with MS-DOS and Windows. On Unix filesystems .. is a real directory entry; it is a hard link pointing back to the previous directory. Hard links are the tendons that tie the filesystem's directories together. Once upon a time, Unix didn't have hard links. They were added to turn Unix's original flat file system into a hierarchical filesystem. (For more on this, see Why does '/' have an '..' entry?.) It is also somewhat common on Unix systems for several different commands to be implemented by the same executable. It doesn't seem to be the case on Linux any more, but on systems I used in the past, cp, mv and rm were all the same executable. It makes sense if you think about it: when you move a file between volumes, it is effectively a copy followed by a removal, so mv already had to implement the other two commands' functions. 
The executable can figure out which operation to provide because it gets passed the name it was called by. Another example, common in embedded Linuxes, is BusyBox, a single executable that implements dozens of commands. I should point out that on most filesystems, users aren't allowed to make hard links to directories. The . and .. entries are automatically managed by the filesystem code, which is typically part of the kernel. The restriction exists because it is possible to cause serious filesystem problems if you aren't careful with how you create and use directory hard links. This is one of many reasons soft links exist; they don't carry the same risk.
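The multi-call idea is easy to reproduce with a throwaway script: hard-link one file under several names and branch on the name it was invoked by. (This is a toy sketch, not how cp/mv/rm or BusyBox are actually built.)

```shell
dir=$(mktemp -d)
cat > "$dir/multitool" <<'EOF'
#!/bin/sh
# one executable, several behaviors, chosen by the name it was called as
case "$(basename "$0")" in
    shout)   echo "SHOUT" ;;
    whisper) echo "whisper" ;;
esac
EOF
chmod +x "$dir/multitool"
ln "$dir/multitool" "$dir/shout"     # extra names for the same inode
ln "$dir/multitool" "$dir/whisper"
"$dir/shout"      # prints SHOUT
"$dir/whisper"    # prints whisper
```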
Why do hard links exist?
1,286,911,424,000
I'm creating a shell script that would take a filename/path to a file and determine if the file is a symbolic link or a hard link. The only thing is, I don't know how to see if they are a hard link. I created 2 files, one a hard link and one a symbolic link, to use as a test file. But how would I determine if a file is a hard link or symbolic within a shell script? Also, how would I find the destination partition of a symbolic link? So let's say I have a file that links to a different partition, how would I find the path to that original file?
Jim's answer explains how to test for a symlink: by using test's -L test. But testing for a "hard link" is, well, strictly speaking not what you want. Hard links work because of how Unix handles files: each file is represented by a single inode. Then a single inode has zero or more names or directory entries or, technically, hard links (what you're calling a "file"). Thankfully, the stat command, where available, can tell you how many names an inode has. So you're looking for something like this (here assuming the GNU or busybox implementation of stat): if [ "$(stat -c %h -- "$file")" -gt 1 ]; then echo "File has more than one name." fi The -c '%h' bit tells stat to just output the number of hardlinks to the inode, i.e., the number of names the file has. -gt 1 then checks if that is more than 1. Note that symlinks, just like any other files, can also be linked to several directories so you can have several hardlinks to one symlink.
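Both checks together on throwaway files (GNU stat assumed, as in the answer):

```shell
dir=$(mktemp -d)
touch "$dir/original"
ln "$dir/original" "$dir/extra"    # a second name for the same inode
ln -s "$dir/original" "$dir/sym"   # a symlink, for contrast

stat -c %h "$dir/original"         # 2 -> this inode has more than one name
[ -L "$dir/sym" ] && echo "sym is a symlink"
[ -L "$dir/extra" ] || echo "extra is a plain (hard-linked) name"
```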
Determining if a file is a hard link or symbolic link?
1,286,911,424,000
If you do rm myFile where myFile is a hard link, what happens?
In Unix all normal files are hard links. Hard links in Unix (and most (all?)) filesystems are references to what's called an inode. The inode has a reference counter: when you have one "link" to the file (which is the normal modus operandi) the counter is 1. When you create a second, third, fourth, etc. link, the counter is incremented (increased) by one each time. When you delete (rm) a link, the counter is decremented (reduced) by one. If the link counter reaches 0 the filesystem removes the inode and marks the space as available for use. In short, as long as you do not delete the last link, the file will remain. Edit: The file's data will remain even after the last link is removed, as long as some process still has it open; the filesystem reclaims the space only once the metadata shows 0 links and no process is using the file. This IMHO is by far the easiest way to understand hard links (and their difference from softlinks).
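The counter is easy to watch with stat -c %h (GNU stat):

```shell
dir=$(mktemp -d)
echo "payload" > "$dir/a"
ln "$dir/a" "$dir/b"      # second link: counter becomes 2
stat -c %h "$dir/a"       # 2
rm "$dir/a"               # one name gone: counter drops to 1
stat -c %h "$dir/b"       # 1
cat "$dir/b"              # payload -- the data survived the rm
```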
What happens when you delete a hard link?
1,286,911,424,000
How can we find all hard links to a given file? I.e., find all other hard links to the same file, given a hard link? Does the filesystem keep track of the hard links to a file? The inode of a file only stores the number of hard links to the file, not the hard links themselves, right?
If the given file is called /path/to/file and you want to find all hard links to it that exist under the current directory, then use: find . -samefile /path/to/file The above was tested on GNU find. Although -samefile is not POSIX, it is also supported by Mac OSX find and FreeBSD find. Documentation From GNU man find: -samefile name        File refers to the same inode as name. When -L is in effect, this can include symbolic links. Differences between find and ls ls -l lists the number of hard links to a file or directory. For directories, this number is larger than the number of results shown by find . -samefile. The reason for this is explained in the GNU find manual: A directory normally has at least two hard links: the entry named in its parent directory, and the . entry inside of the directory. If a directory has subdirectories, each of those also has a hard link called .. to its parent directory. The . and .. directory entries are not normally searched unless they are mentioned on the find command line. In sum, ls -l counts the . and .. directories as separate hard links but find . -samefile does not.
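A self-contained check of the behavior (GNU find):

```shell
dir=$(mktemp -d)
touch "$dir/file" "$dir/unrelated"
ln "$dir/file" "$dir/alias"            # second hard link to the same inode
find "$dir" -samefile "$dir/file"      # lists file and alias, but not unrelated
```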
How to find all hard links to a given file? [duplicate]
1,286,911,424,000
When would you use one over the other?
The different semantics between hard and soft links make them suitable for different things. Hard links: indistinguishable from other directory entries, because every directory entry is a hard link; the "original" can be moved or deleted without breaking other hard links to the same inode; only possible within the same filesystem; permissions must be the same as those on the "original" (permissions are stored in the inode, not the directory entry); can only be made to files, not directories. Symbolic links (soft links): are simply records that point to another file path (ls -l will show what path a symlink points to); will break if the original is moved or deleted (in some cases it is actually desirable for a link to point to whatever file currently occupies a particular location); can point to a file in a different filesystem; can point to a directory; on some file system formats, it is possible for the symlink to have different permissions than the file it points to (this is uncommon).
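The "breaks if original is moved" point is the one that surprises people most; a throwaway demonstration:

```shell
dir=$(mktemp -d)
echo "hi" > "$dir/orig"
ln "$dir/orig" "$dir/hard"
ln -s "$dir/orig" "$dir/soft"
mv "$dir/orig" "$dir/renamed"    # move the "original"
cat "$dir/hard"                  # hi -- the hard link still reaches the inode
[ -e "$dir/soft" ] || echo "soft link is now dangling"
```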
What is the difference between symbolic and hard links?
1,286,911,424,000
How to move directories that have files in common from one to another partition? Let's assume we have a partition mounted on /mnt/X with directories sharing files with hardlinks. How to move such directories to another partition, let it be /mnt/Y, with preserving those hardlinks?

For better illustration what do I mean by "directories sharing files in common with hardlinks", here is an example:

    # let's create a tree of directories and files
    mkdir -p a/{b,c,d}/{x,y,z}
    touch a/{b,c,d}/{x,y,z}/f{1,2,3,4,5}

    # and copy it with hardlinks
    cp -r -l a hardlinks_of_a

To be more specific, let's assume that the total size of the files is 10G and each file has 10 hardlinks. The question is how to move it to the destination using 10G (someone might say about copying it with 100G and then running deduplication - it is not what I am asking about).
First answer: The GNU Way

GNU cp -a copies recursively preserving as much structure and metadata as possible. Hard links between files in the source directory are included in that. To select hard link preservation specifically without all the other features of -a, use --preserve=links.

    mkdir src
    cd src
    mkdir -p a/{b,c,d}/{x,y,z}
    touch a/{b,c,d}/{x,y,z}/f{1,2,3,4,5}
    cp -r -l a hardlinks_of_a
    cd ..
    cp -a src dst
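One way to check that the hard links survived the copy is to compare inode numbers on both sides; a minimal sketch using GNU stat (path names are illustrative):

```shell
#!/bin/sh
# Sketch: verify that cp -a preserved hard links by comparing inode
# numbers and link counts of the copied files.
set -e
dir=$(mktemp -d)
cd "$dir"
mkdir src
echo data > src/file
ln src/file src/link           # src/file and src/link share one inode
cp -a src dst
# The same inode number within dst means the hard link was preserved:
stat -c '%i %h' dst/file dst/link
```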
How to copy directories with preserving hardlinks?
1,286,911,424,000
I've seen many explanations for why the link count for an empty directory in Unix based OSes is 2 instead of 1. They all say that it's because of the '.' directory, which every directory has pointing back to itself. I understand why having some concept of '.' is useful for specifying relative paths, but what is gained by implementing it at the filesystem level? Why not just have shells or the system calls that take paths know how to interpret it? That '..' is a real link makes much more sense to me -- the filesystem needs to store a pointer back to the parent directory in order to navigate to it. But I don't see why '.' being a real link is necessary. It also seems like it leads to an ugly special case in the implementation -- you would think you could only free the space used by inodes that have a link count less than 1, but if they're directories, you actually need to check for a link count less than 2. Why the inconsistency?
An interesting question, indeed. At first glance I see the following advantages: First of all you state that interpreting "." as the current directory may be done by the Shell or by system calls. But having the dot-entry in the directory actually removes this necessity and forces consistency at even a lower level. But I don't think that this was the basic idea behind this design decision. When a file is being created or removed from a directory, the directory's modification time stamp has to be updated, too. This timestamp is stored in its inode. The inode number is stored in the corresponding directory entry. IF the dot entry would not be there, the routines would have to search for the inode number at the entry for this directory in the parent directory, which would cause a directory search again. BUT luckily there is the dot entry in the current directory. The routine that adds or removes a file in the current directory just has to jump back to the first entry (where the dot-entry usually resides) and immediately has found the inode number for the current directory. There is a third nice thing about the dot entry: When fsck checks a rotten filesystem and has to deal with non-connected blocks that are also not on the free list, it's easy for it to verify if a data block (when interpreted as a directory list) has a dot entry that's pointing to an inode which in turn points back to this data block. If so, this data block may be considered as a lost directory which has to be reconnected.
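The bookkeeping described above can be observed directly: on traditional filesystems like ext4, a fresh directory starts with a link count of 2 (its entry in the parent plus its own .), and each subdirectory's .. adds one more. A small sketch (note that some filesystems, e.g. btrfs, report 1 for all directories instead):

```shell
#!/bin/sh
# Sketch: on ext4-style filesystems a directory's link count is
# 2 (parent entry + its own ".") plus one ".." per subdirectory.
set -e
dir=$(mktemp -d)
cd "$dir"
mkdir top
stat -c %h top                 # typically 2: entry in parent, plus top/.
mkdir top/sub1 top/sub2
stat -c %h top                 # typically 4: each subdirectory adds a ".."
```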
Why is '.' a hard link in Unix?
1,286,911,424,000
In what situations would one want to use a hard-link rather than a soft-link? I personally have never run across a situation where I'd want to use a hard-link over a soft-link, and the only use-case I've come across when searching the web is deduplicating identical files.
Aside from the backup usage mentioned in another comment, which I believe also includes the snapshots on a BTRFS volume, a use-case for hard-links over soft-links is a tag-sorted collection of files. (Not necessarily the best method to create a collection, a database-driven method is potentially better, but for a simple collection that's reasonably stable, it's not too bad.)

A media collection where all files are stored in one, flat, directory and are sorted into other directories based on various criteria, i.e.: year, subject, artist, genre, etc. This could be a personal movie collection, or a commercial studio's collective works. Essentially the file is finished, saved, not likely to be modified, and sorted, possibly into multiple locations, by links.

Bear in mind that the concepts of "original" and "copy" are not applicable to hard-links: every link to the file is an original, there is no "copy" in the normal sense. For the description of the use-case, however, the terms mimic the logic of the behavior. The "original" is saved in the "catalog" directory, and the sorted "copies" are hard-linked to those files.

The file attributes on the sorting directories can be set to r/o, preventing any accidental changes to the file-names and sorted structure, while the attributes on the catalog directory can be r/w, allowing it to be modified as needed. (A case for that would be music files where some players attempt to rename and reorganize files based on tags embedded in the media file, from user input, or internet retrieval.)

Additionally, since the attributes of the "copy" directories can be different than the "original" directory, the sorted structure could be made available to the group, or world, with restricted access while the main "catalog" is only accessible to the principal user, with full access. The files themselves, however, will always have the same attributes on all links to that inode. (ACLs could be explored to enhance that, but not my knowledge area.)

If the original is renamed, or moved (the single "catalog" directory becomes too large to manage, for example) the hard-links remain valid, soft-links are broken. If the "copies" are moved and the soft-links are relative, then the soft-links will, again, be broken, and the hard-links will not be.

Note: there seems to be inconsistency in how different tools report disk usage when soft-links are involved. With hard-links, however, it seems consistent. So with 100 files in a catalog sorted into a collection of "tags", there could easily be 500 linked "copies." (For a photograph collection, say date, photographer, and an average of 3 "subject" tags.) Dolphin, for example, would report that as 100 files for hard-links, and 600 files if soft-links are used. Interestingly, it reports the same disk-space usage either way, so it looks like a large collection of small files for soft-links, and a small collection of large files for hard-links.

A caveat to this type of use-case is that in file-systems that use COW, modifying the "original" could break the hard-links, but not break the soft-links. But, if the intent is to have the master copy, after editing, saved, and sorted, COW doesn't enter the scenario.
Use cases for hardlinks? [closed]
1,286,911,424,000
A hard link is defined as a pointer to an inode. A soft link, also known as a symbolic link, is defined as an independent file pointing to another link without the restrictions of hard links. What is the difference between a file and a hard link? A hard link points to an inode, so what is a file? The inode entry itself? Or an inode with a hard link? Let's say I create a file with touch. Then an inode entry is created in the inode table. And I create a hard link, which has the same inode number as the file. So did I create a new file? Or is the file just defined as an inode?
The very short answer is:

  * a file is an anonymous blob of data
  * a hardlink is a name for a file
  * a symbolic link is a special file whose content is a pathname

Unix files and directories work exactly like files and directories in the real world (and not like folders in the real world); Unix filesystems are (conceptually) structured like this:

  * a file is an anonymous blob of data; it doesn't have a name, only a number (inode)
  * a directory is a special kind of file which contains a mapping of names to files (more specifically inodes); since a directory is just a file, directories can have entries for directories, that's how recursion is implemented (note that when Unix filesystems were introduced, this was not at all obvious, a lot of operating systems didn't allow directories to contain directories back then)
  * these directory entries are called hardlinks
  * a symbolic link is another special kind of file, whose content is a pathname; this pathname is interpreted as the name of another file
  * other kinds of special files are: sockets, fifos, block devices, character devices

Keeping this metaphor in mind, and specifically keeping in mind that Unix directories work like real-world directories and not like real-world folders, explains many of the "oddities" that newcomers often encounter, like: why can I delete a file I don't have write access to? Well, for one, you're not deleting the file, you are deleting one of many possible names for the file, and in order to do that, you only need write access to the directory, not the file. Just like in the real world. Or, why can I have dangling symlinks? Well, the symlink simply contains a pathname. There is nothing that says that there actually has to be a file with that name.

    My question is simply what is the difference of a file and a hard link?

The difference between a file and a hard link is the same as the difference between you and the line with your name in the phone book.

    Hard link is pointing to an inode, so what is a file? Inode entry itself? Or an inode with a hard link?

A file is an anonymous piece of data. That's it. A file is not an inode, a file has an inode, just like you are not a Social Security Number, you have a SSN. A hard link is a name for a file. A file can have many names.

    Let's say, I create a file with touch, then an inode entry is created in the inode table.

Yes.

    And I create a hard link, which has the same inode number with the file.

No. A hard link doesn't have an inode number, since it's not a file. Only files have inode numbers. The hardlink associates a name with an inode number.

    So did I create a new file?

Yes.

    Or is the file just defined as an inode?

No. The file has an inode, it isn't an inode.
What is the difference between a hard link and a file?
1,286,911,424,000
I'm running an application that writes to log.txt. The app was updated to a new version, making the supported plugins no longer compatible. It forces an enormous amount of errors into log.txt and does not seem to support writing to a different log file. How can I write them to a different log? I've considered replacing log.txt with a hard link (application can't tell the difference right?) Or a hard link that points to /dev/null. What are my options?
# cp -a /dev/null log.txt This copies your null device with the right major and minor dev numbers to log.txt so you have another null. Devices are not known by name at all in the kernel but rather by their major and minor numbers. Since I don't know what OS you have I found it convenient to just copy the numbers from where we already know they are. If you make it with the wrong major and minor numbers, you would most likely have made some other device, perhaps a disk or something else you don't want writing to.
Replace file with hard link to /dev/null
1,286,911,424,000
For example, I have a file myold_file. Then I use ln to create a hard link as mylink: ln myold_file mylink Then, even by using ls -a, I cannot tell which is the old one. Is there anyway to tell?
You can't, because they are literally the same file, only reached by different paths. The first one has no special status.
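This is easy to confirm: after ln, both names report identical inode numbers, link counts, sizes, and timestamps, so there is nothing left to tell them apart. A sketch:

```shell
#!/bin/sh
# Sketch: after ln, both names are equally "the" file -- identical
# inode number, link count, size, and timestamps.
set -e
dir=$(mktemp -d)
cd "$dir"
echo content > myold_file
ln myold_file mylink
ls -li myold_file mylink             # same inode number on both lines
stat -c '%i %h %s' myold_file mylink # identical metadata for both names
```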
How to tell which file is original if hard link is created
1,286,911,424,000
Two setuid programs, /usr/bin/bar and /usr/bin/baz, share a single configuration file foo. The configuration file's mode is 0640, for it holds sensitive information. The one program runs as bar:bar (that is, as user bar, group bar); the other as baz:baz. Changing users is not an option, and even changing groups would not be preferable. I wish to hard link the single configuration file as /etc/bar/foo and /etc/baz/foo. However, this fails because the file must, as far as I know, belong either to root:bar or to root:baz. Potential solution: Create a new group barbaz whose members are bar and baz. Let foo belong to root:barbaz. That looks like a pretty heavy-handed solution to me. Is there no neater, simpler way to share the configuration file foo between the two programs? For now, I am maintaining two, identical copies of the file. This works, but is obviously wrong. What would be right? For information: I have little experience with Unix groups and none with setgid(2).
You can use ACLs so the file can be read by people in both groups.

    chgrp bar file
    chmod 640 file
    setfacl -m g:baz:r-- file

Now both bar and baz groups can read the file.

For example, here's a file owned by bin:bin with mode 640.

    $ ls -l foo
    -rw-r-----+ 1 bin bin 5 Aug 17 12:19 foo

The + means there's an ACL set, so let's take a look at it.

    $ getfacl foo
    # file: foo
    # owner: bin
    # group: bin
    user::rw-
    group::r--
    group:sweh:r--
    mask::r--
    other::---

We can see the line group:sweh:r--: that means people in the group sweh can read it.

Hey, that's me!

    $ id
    uid=500(sweh) gid=500(sweh) groups=500(sweh)

And yes, I can read the file.

    $ cat foo
    data
One file wants to belong to two users. How? Hard linking fails
1,286,911,424,000
I am reading this intro to the command line by Mark Bates. In the first chapter, he mentions that hard links cannot span file systems. An important thing to note about hard links is that they only work on the current file system. You can not create a hard link to a file on a different file system. To do that you need to use symbolic links, Section 1.4.3. I only know of one filesystem. The one starting from root (/). This statement that hard links cannot span over file systems doesn't make sense to me. The Wikipedia article on Unix file systems is not helpful either.
Hopefully I can answer this in a way that makes sense for you.

A file system in Linux is generally made up of a partition that is formatted in one of various ways (gotta love choice!) that you store your files on. Be that your system files, or your personal files... they are all stored on a file system. This part you seem to understand.

But what if you partition your hard drive to have more than one partition (think Apple Pie cut up into pieces), or add an additional hard drive (perhaps a USB stick?). For the sake of argument, they all have file systems on them as well.

When you look at the files on your computer, you're seeing a visual representation of data on your partition's file system. Each file name corresponds to what is called an inode, which is where your data, behind the scenes, really lives. A hard link lets you have multiple "file names" (for lack of a better description) that point to the same inode. This only works if those hard links are on the same file system. A symbolic link instead points to the "file name", which then is linked to the inode holding your data.

Forgive my crude artwork but hopefully this explains better.

    image.jpg   image2.jpg
         \       /
        [your data]

Here, image.jpg and image2.jpg both point directly to your data. They are both hardlinks. However...

    image.jpg <----------- image2.jpg
        \
    [your data]

In this (crude) example, image2.jpg doesn't point to your data, it points to image.jpg... which is a link to your data.

Symbolic links can work across file system boundaries (assuming that file system is attached and mounted, like your usb stick). However a hard link cannot. It knows nothing about what is on your other file system, or where your data there is stored.

Hopefully this helps make better sense.
Why are hard links only valid within the same filesystem?
1,286,911,424,000
This answer reveals that one can copy all files - including hidden ones - from directory src into directory dest like so:

    mkdir dest
    cp -r src/. dest

There is no explanation in the answer or its comments as to why this actually works, and nobody seems to find documentation on this either. I tried out a few things.

First, the normal case:

    $ mkdir src src/src_dir dest && touch src/src_file src/.dotfile dest/dest_file
    $ cp -r src dest
    $ ls -A dest
    dest_file  src

Then, with /. at the end:

    $ mkdir src src/src_dir dest && touch src/src_file src/.dotfile dest/dest_file
    $ cp -r src/. dest
    $ ls -A dest
    dest_file  .dotfile  src_dir  src_file

So, this behaves similarly to *, but also copies hidden files.

    $ mkdir src src/src_dir dest && touch src/src_file src/.dotfile dest/dest_file
    $ cp -r src/* dest
    $ ls -A dest
    dest_file  src_dir  src_file

. and .. are proper hard-links as explained here, just like the directory entry itself. Where does this behaviour come from, and where is it documented?
The behaviour is a logical result of the documented algorithm for cp -R. See POSIX, step 2f: The files in the directory source_file shall be copied to the directory dest_file, taking the four steps (1 to 4) listed here with the files as source_files. . and .. are directories, respectively the current directory, and the parent directory. Neither are special as far as the shell is concerned, so neither are concerned by expansion, and the directory will be copied including hidden files. *, on the other hand, will be expanded to a list of files, and this is where hidden files are filtered out. src/. is the current directory inside src, which is src itself; src/src_dir/.. is src_dir’s parent directory, which is again src. So from outside src, if src is a directory, specifying src/. or src/src_dir/.. as the source file for cp are equivalent, and copy the contents of src, including hidden files. The point of specifying src/. is that it will fail if src is not a directory (or symbolic link to a directory), whereas src wouldn’t. It will also copy the contents of src only, without copying src itself; this matches the documentation too: If target exists and names an existing directory, the name of the corresponding destination path for each file in the file hierarchy shall be the concatenation of target, a single slash character if target did not end in a slash, and the pathname of the file relative to the directory containing source_file. So cp -R src/. dest copies the contents of src to dest/. (the source file is . in src), whereas cp -R src dest copies the contents of src to dest/src (the source file is src). Another way to think of this is to compare copying src/src_dir and src/., rather than comparing src/. and src. . behaves just like src_dir in the former case.
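The contrast between src/. and src/* is easy to reproduce; src/. copies the dotfiles that the shell's * expansion skips (a sketch; names are illustrative):

```shell
#!/bin/sh
# Sketch: "cp -R src/. dest" copies hidden files, while
# "cp -R src/* dest" does not, because * is expanded by the shell
# and by default skips names starting with a dot.
set -e
dir=$(mktemp -d)
cd "$dir"
mkdir src dest1 dest2
touch src/visible src/.hidden
cp -R src/. dest1
cp -R src/* dest2
ls -A dest1                    # visible and .hidden
ls -A dest2                    # visible only
```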
cp behaves weirdly when . (dot) or .. (dot dot) are the source directory
1,286,911,424,000
On Linux, when you create a folder, it automatically creates two hard links to the corresponding inode. One is the folder you asked to create, the other is the . special folder inside this folder.

Example:

    $ mkdir folder
    $ ls -li
    total 0
    124596048 drwxr-xr-x 2 fantattitude staff  68 18 oct 16:52 folder
    $ ls -lai folder
    total 0
    124596048 drwxr-xr-x 2 fantattitude staff  68 18 oct 16:52 .
    124593716 drwxr-xr-x 3 fantattitude staff 102 18 oct 16:52 ..

As you can see, both folder and the . inside folder have the same inode number (shown with the -i option).

Is there any way to delete this special . hardlink? It's only for experimentation and curiosity. Also I guess the answer could apply to the .. special file as well.

I tried to look into the rm man page but couldn't find any way to do it. When I try to remove . all I get is:

    rm: "." and ".." may not be removed

I'm really curious about the whole way these things work so don't refrain from being very verbose on the subject.

EDIT: Maybe I wasn't clear with my post, but I want to understand the underlying mechanism which is responsible for . files and the reasons why they can't be deleted. I know the POSIX standard disallows a folder with less than 2 hardlinks, but don't really get why. I want to know if it could be possible to do it anyway.
It is technically possible to delete ., at least on EXT4 filesystems. If you create a filesystem image in test.img, mount it and create a test folder, then unmount it again, you can edit it using debugfs:

    debugfs -w test.img
    cd test
    unlink .

debugfs doesn't complain and dutifully deletes the . directory entry in the filesystem. The test directory is still usable, with one surprise:

    sudo mount test.img /mnt/temp
    cd /mnt/temp/test
    ls

shows only .. so . really is gone. Yet cd ., ls ., pwd still behave as usual!

I'd previously done this test using rmdir ., but that deletes the directory's inode (huge thanks to BowlOfRed for pointing this out), which leaves test a dangling directory entry and is the real reason for the problems encountered. In this scenario, the test folder then becomes unusable; after mounting the image, running ls produces

    ls: cannot access '/mnt/test': Structure needs cleaning

and the kernel log shows

    EXT4-fs error (device loop2): ext4_lookup:1606: inode #2: comm ls: deleted inode referenced: 38913

Running e2fsck in this situation on the image deletes the test directory entirely (the directory inode is gone so there's nothing to restore).

All this shows that . exists as a specific entity in the EXT4 filesystem. I got the impression from the filesystem code in the kernel that it expects . and .. to exist, and warns if they don't (see namei.c), but with the unlink .-based test I didn't see that warning.

e2fsck doesn't like the missing . directory entry, and offers to fix it:

    $ /sbin/e2fsck -f test.img
    e2fsck 1.43.3 (04-Sep-2016)
    Pass 1: Checking inodes, blocks, and sizes
    Pass 2: Checking directory structure
    Missing '.' in directory inode 30721.
    Fix<y>?

This re-creates the . directory entry.
How to unlink (remove) the special hardlink "." created for a folder?
1,286,911,424,000
Is there a limit of number of hardlinks for one file? Is it specified anywhere? What are safe limits for Linux? And what for other POSIX systems?
Posix requires that the operating system understand the concept of hard links but not that hard links can actually be used in any particular circumstance. You can find out how many hard links are permitted at a particular location (this can vary by filesystem type) by calling pathconf(filename, _PC_LINK_MAX). The minimum limit (_POSIX_LINK_MAX) is 8, but this is rather meaningless as link() can report many other errors anyway (permission denied, disk full, …). The stat structure stores the link count in a field of type nlink_t, so the type of this field gives an upper limit on your system. But there's a good chance you'll never be able to reach that far: it's common to have a 32-bit nlink_t but only 16 bits in many filesystems (a quick grep in the Linux source shows that ext[234], NTFS, UFS and XFS use 16-bit link counts in the kernel data structures).
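From the shell, getconf wraps the pathconf() call described above, so you can query the limit for a particular filesystem without writing C:

```shell
#!/bin/sh
# Sketch: ask the OS for the maximum hard link count at a given path.
# The value varies by filesystem (ext4 typically reports 65000).
getconf LINK_MAX /tmp
```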
Is there a limit of hardlinks for one file?
1,286,911,424,000
Which permissions affect hard link creation? Does file ownership itself matter? Suppose user alice wants to create a hard link to the file target.txt in a directory target-dir. Which permissions does alice need on both target.txt and target-dir? If target.txt is owned by user bill and target-dir is owned by user chad, does that change anything? I've tried to simulate this situation by creating the following folder/file structure on an ext4 filesystem: #> ls -lh . * .: drwxr-xr-x 2 bill bill 60 Oct 1 11:29 source-dir drwxrwxrwx 2 chad chad 60 Oct 1 11:40 target-dir source-dir: -r--r--r-- 1 bill bill 0 Oct 1 11:29 target.txt target-dir: -rw-rw-r-- 1 alice alice 0 Oct 1 11:40 dummy While alice can create a soft link to target.txt, she can't create a hard link: #> ln source-dir/target.txt target-dir/ ln: failed to create hard link ‘target-dir/target.txt’ => ‘source-dir/target.txt’: Operation not permitted If alice owns target.txt and no permissions are changed, the hard link succeeds. What am I missing here?
To create the hard link, alice will need write+execute permissions on target-dir in all cases. The permissions needed on target.txt will vary:

  * If fs.protected_hardlinks = 1 then alice needs either ownership of target.txt or at least read+write permissions on it.
  * If fs.protected_hardlinks = 0 then any set of permissions will do; even 000 is okay.

This answer to a similar question had the missing piece of information to answer this question. From this commit message (emphasis mine):

    On systems that have user-writable directories on the same partition as system files, a long-standing class of security issues is the hardlink-based time-of-check-time-of-use race, most commonly seen in world-writable directories like /tmp. The common method of exploitation of this flaw is to cross privilege boundaries when following a given hardlink (i.e. a root process follows a hardlink created by another user). Additionally, an issue exists where users can "pin" a potentially vulnerable setuid/setgid file so that an administrator will not actually upgrade a system fully.

    The solution is to permit hardlinks to only be created when the user is already the existing file's owner, or if they already have read/write access to the existing file.
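The ownership rule can be checked from the shell: owning the file is enough even when its permission bits forbid all access. A sketch (assumes the file is created by the same user running the commands):

```shell
#!/bin/sh
# Sketch: the file's owner may hard-link it even with mode 000,
# regardless of the fs.protected_hardlinks setting.
set -e
dir=$(mktemp -d)
cd "$dir"
touch target.txt
chmod 000 target.txt           # no read/write/execute for anyone
ln target.txt link.txt         # still permitted: we own the file
stat -c %h target.txt          # link count is now 2
```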
Hard link creation - Permissions?
1,286,911,424,000
In the manual page of the tar command, an option for following hard links is listed.

    -h, --dereference
           follow symlinks; archive and dump the files they point to

    --hard-dereference
           follow hard links; archive and dump the files they refer to

How does tar know that a file is a hard link? How does it follow it? What if I don't choose this option? How does it not hard-dereference?
By default, if you tell tar to archive a file with hard links, and more than one such link is included among the files to be archived, it archives the file only once, and records the second (and any additional names) as hard links. This means that when you extract that archive, the hard links will be restored. If you use the --hard-dereference option, then tar does not preserve hard links. Instead, it treats them as independent files that just happen to have the same contents and metadata. When you extract the archive, the files will be independent. Note: It recognizes hard links by first checking the link count of the file. It records the device number and inode of each file with more than one link, and uses that to detect when the same file is being archived again. (When you use --hard-dereference, it does not do this.)
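The default behaviour (the file archived once, the extra names recorded as hard links and restored on extraction) can be seen with GNU tar; a sketch:

```shell
#!/bin/sh
# Sketch: by default tar stores a multiply-linked file once and
# restores the extra names as hard links when extracting.
set -e
dir=$(mktemp -d)
cd "$dir"
mkdir data
echo payload > data/a
ln data/a data/b               # a and b share one inode
tar -cf archive.tar data
mkdir out
tar -xf archive.tar -C out
stat -c '%i %h' out/data/a out/data/b   # same inode, link count 2
```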
Dereferencing hard links
1,286,911,424,000
Thanks to some good Q&A around here and this page, I now understand links. I see hard links refer to the same inode by a different name, and copies are different "nodes", with different names. Plus soft links have the original file name and path as their inode, so if the file is moved, the link breaks.

So, I tested what I've learnt with some file ("saluton_mondo.cpp" below), made a hard and a soft link and a copy.

    jmcf125@VMUbuntu:~$ ls -lh soft hard copy s*.cpp
    -rw-rw-r-- 1 jmcf125 jmcf125 205 Aŭg 27 16:10 copy
    -rw-rw-r-- 2 jmcf125 jmcf125 205 Aŭg 25 13:34 hard
    -rw-rw-r-- 2 jmcf125 jmcf125 205 Aŭg 25 13:34 saluton_mondo.cpp
    lrwxrwxrwx 1 jmcf125 jmcf125  17 Aŭg 27 16:09 soft -> saluton_mondo.cpp

I found it awkward that the hard link, however, has the same size as the original and, logically, the copy. If the hard link and the original share the same inode, which has the data, and only differ by the filename, shouldn't the hard link take only the space of its name, instead of 205 bytes? Or is that the size of the original file that ls -lh returns? But then how can I know what space the filename takes? Here it says hard links have no size. Is their file name kept alongside the original file name? Where is the file name of hard links stored?
A file is an inode with metadata, among which is a list of pointers to where to find the data.

In order to be able to access a file, you have to link it to a directory (think of directories as phone directories, not folders), that is, add one or more entries to one or more directories to associate a name with that file.

All those links, those file names, point to the same file. There's not one that is the original and the other ones that are links. They are all access points to the same file (same inode) in the directory tree. When you get the size of the file (lstat system call), you're retrieving information (that metadata referred to above) stored in the inode; it doesn't matter which file name, which link you're using to refer to that file.

By contrast, symlinks are another file (another inode) whose content is a path to the target file. Like any other file, those symlinks have to be linked to a directory (must have a name) so you can access them. You can also have several links to symlinks, or in other words, symlinks can be given several names (in one or more directories).

    $ touch a
    $ ln a b
    $ ln -s a c
    $ ln c d
    $ ls -li [a-d]
    10486707 -rw-r--r-- 2 stephane stephane 0 Aug 27 17:05 a
    10486707 -rw-r--r-- 2 stephane stephane 0 Aug 27 17:05 b
    10502404 lrwxrwxrwx 2 stephane stephane 1 Aug 27 17:05 c -> a
    10502404 lrwxrwxrwx 2 stephane stephane 1 Aug 27 17:05 d -> a

Above, the file number 10486707 is a regular file. Two entries in the current directory (one with name a, one with name b) link to it. Because the link count is 2, we know there's no other name of that file in the current directory or any other directory.

File number 10502404 is another file, this time of type symlink, linked twice to the current directory. Its content (target) is the relative path "a".

Note that if 10502404 were linked to another directory than the current one, it would typically point to a different file depending on how it was accessed.
    $ mkdir 1 2
    $ echo foo > 1/a
    $ echo bar > 2/a
    $ ln -s a 1/b
    $ ln 1/b 2/b
    $ ls -lia 1 2
    1:
    total 92
    10608644 drwxr-xr-x   2 stephane stephane  4096 Aug 27 17:26 ./
    10485761 drwxrwxr-x 443 stephane stephane 81920 Aug 27 17:26 ../
    10504186 -rw-r--r--   1 stephane stephane     4 Aug 27 17:24 a
    10539259 lrwxrwxrwx   2 stephane stephane     1 Aug 27 17:26 b -> a

    2:
    total 92
    10608674 drwxr-xr-x   2 stephane stephane  4096 Aug 27 17:26 ./
    10485761 drwxrwxr-x 443 stephane stephane 81920 Aug 27 17:26 ../
    10539044 -rw-r--r--   1 stephane stephane     4 Aug 27 17:24 a
    10539259 lrwxrwxrwx   2 stephane stephane     1 Aug 27 17:26 b -> a

    $ cat 1/b
    foo
    $ cat 2/b
    bar

Files have no names associated with them other than in the directories that link them. The space taken by their names is the entries in those directories; it's accounted for in the file size/disk usage of the directories.

You'll notice that the system call to remove a file is unlink. That is, you don't remove files, you unlink them from the directories they're referenced in. Once unlinked from the last directory that had an entry to a given file, that file is then destroyed (as long as no process has it opened).
Why do hard links seem to take the same space as the originals?
1,286,911,424,000
I use rsnapshot for backups, which generates a series of folders containing files of the same name. Some of the files are hard linked, while others are separate. For instance, hourly.1/file1 and hourly.2/file1 might be hard linked to the same file, while hourly.1/file2 and hourly.2/file2 are entirely separate files. I want to find the amount of space used by the folder hourly.2 ignoring any files which are hard links to files in hourly.1. So in the above example, I would want to get the size of file2, but ignore file1. I'm using bash on linux, and I want to do this from the command line as simply as possible, so no big graphical or other-OS-only solutions please.
Total size in bytes of all files in hourly.2 which have only one link: $ find ./hourly.2 -type f -links 1 -printf "%s\n" | awk '{s=s+$1} END {print s}' From find man-page: -links n File has n links. To get the sum in kilobytes instead of bytes, use -printf "%k\n" To list files with different link counts, play around with find -links +1 (more than one link), find -links -5 (less than five links) and so on.
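As a quick sanity check of that pipeline, here is a toy reconstruction of the rsnapshot layout (directory and file names are made up for the demo; -printf is GNU find, stat -c is GNU coreutils):

```shell
# Toy snapshot tree: file1 is hard-linked between snapshots, file2 is unique
tmp=$(mktemp -d)
mkdir "$tmp/hourly.1" "$tmp/hourly.2"
printf 'AAAA' > "$tmp/hourly.1/file1"            # 4 bytes, shared
ln "$tmp/hourly.1/file1" "$tmp/hourly.2/file1"   # link count becomes 2
printf 'BBBBBBBB' > "$tmp/hourly.2/file2"        # 8 bytes, unique

# Only files with exactly one link are summed
size=$(find "$tmp/hourly.2" -type f -links 1 -printf "%s\n" |
       awk '{s=s+$1} END {print s}')
echo "$size"    # 8: file2 is counted, the shared file1 is skipped
```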
How to get folder size ignoring hard links?
1,286,911,424,000
I understand the notion of hardlinks very well, and have read the man pages for basic tools like cp --- and even the recent POSIX specs --- a number of times. Still I was surprised to observe the following behavior: $ echo john > john $ cp -l john paul $ echo george > george At this point john and paul will have the same inode (and content), and george will differ in both respects. Now we do: $ cp george paul At this point I expected george and paul to have different inode numbers but the same content --- this expectation was fulfilled --- but I also expected paul to now have a different inode number from john, and for john to still have the content john. This is where I was surprised. It turns out that copying a file to the destination path paul also has the result of installing that same file (same inode) at all other destination paths that share paul's inode. I was thinking that cp creates a new file and moves it into the place formerly occupied by the old file paul. Instead what it seems to do is to open the existing file paul, truncating it, and write george's content into that existing file. Hence any "other" files with the same inode get "their" content updated at the same time. Ok, this is a systematic behavior and now that I know to expect it I can figure out how to work around it, or take advantage of it, as appropriate. What puzzles me is where I was supposed to see this behavior documented? I'd be surprised if it's not documented somewhere in documents I've already looked at. But apparently I missed it, and can't now find a source that discusses this behavior.
First, why is it done this way? One reason is historical: that's how it was done in Unix First Edition. Files are taken in pairs; the first is opened for reading, the second created mode 17. Then the first is copied into the second. “Created” refers to the creat system call (the one that's famously missing an e), which truncates the existing file by the given name if there is one. And here's the source code of cp in Unix Second Edition (I can't find the source code of First Edition). You can see the calls to open for the source file and creat for the second file; and, as an improvement to First Edition, if the second file is an existing directory, cp creates a file in that directory. But, you may ask, why was it done that way at the time? The answer to “why did Unix originally do it that way” is almost always simplicity. cp opens its source for reading and creates its destination — and the system call to create a file overwrites an existing file by opening it for writing, because that allows the caller to impose the content of a file by the given name whether the file already existed or not. Now, as to where it's documented: in the FreeBSD man page. For each destination file that already exists, its contents are overwritten if permissions allow. Its mode, user ID, and group ID are unchanged unless the -p option was specified. That wording was present at least as far back as 1990 (back when BSD was 4.3BSD). There is similar wording on Solaris 10: If target_file exists, cp overwrites its contents, but the mode (and ACL if applicable), owner, and group associated with it are not changed. Your case is even spelled out in the HP-UX 10 manual: If new_file is a link to an existing file with other links, overwrites the existing file and retains all links. POSIX puts it in standardese. 
Quoting from Single UNIX v2: If dest_file exists, the following steps are taken: (…) A file descriptor for dest_file will be obtained by performing actions equivalent to the XSH specification open() function called using dest_file as the path argument, and the bitwise inclusive OR of O_WRONLY and O_TRUNC as the oflag argument. The man pages and specification that I quoted further specifies that if the -f option is passed and the attempt to open/create the target file fails (typically due to not having permission to write the file), cp tries to remove the target and create a file again. This would break the hard link in your scenario. You may want to report a documentation bug against the GNU coreutils manual, since it doesn't document this behavior. Even the description of --preserve=links, which in your scenario would lead to the paul link being removed and a new file being created, doesn't make it clear what happens without --preserve=links. The description of -f kind of implies what happens without it but doesn't spell it out (“When copying without this option and an existing destination file cannot be opened for writing, the copy fails. However, with --force, …”).
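The open-and-truncate behaviour described above is easy to observe with the question's own files (a sketch assuming GNU stat for reading inode numbers):

```shell
tmp=$(mktemp -d); cd "$tmp"
echo john > john
ln john paul             # john and paul: one inode, two names
echo george > george

before=$(stat -c %i paul)
cp george paul           # opens paul with O_WRONLY|O_TRUNC: same inode survives
after=$(stat -c %i paul)

echo "$before $after"    # the two inode numbers are identical
cat john                 # george: the other hard link sees the new content
```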
Surprised by behavior of cp with hardlinks
1,286,911,424,000
This is a bit of a theoretical question, but it's important to use proper names for things. In UNIX/Linux file systems, .. points to the parent directory. However, we know that hard links cannot point to directories, because that has the potential to break the acyclic graph structure of the filesystem and cause commands to run in an endless loop. So, is .. really a hard link (like .)? That would make it a special type of hard link, not subject to the directory restriction, but which for all purposes behaves like one. Or is that a special inode mapping, hardcoded into the filesystem, which ought not be called a hard link?
It depends on the filesystem. Most filesystems follow the traditional Unix design, where . and .. are hard links, i.e. they're actual directory entries in the filesystem. The hard link count of a directory is 2 + n where n is the number of subdirectories: that's the entry in the directory's parent, the directory's own . entry, and each subdirectory's .. entry. The hard link count is updated each time a subdirectory is created, removed or moved in or out of the directory. See Why does a new directory have a hard link count of 2 before anything is added to it? for a more detailed explanation. A few filesystems deviate from this tradition, in particular btrfs. we know that hard links cannot point to directories This is imprecise wording. More precisely, you can't create a hard link to a directory using the ln utility or the link system call or a similar method, because the kernel prevents you. Calling mkdir does create a hard link to the parent of the new directory. It's the only way to create a new hard link to a directory on a filesystem (and conversely removing a directory is the only way to remove a hard link to a directory). Also, note that it's misleading to think of hard links in terms of “pointing to” a primary file. Hard links are not directional, unlike symbolic links. When a file has multiple hard links, they're equivalent. After the following sequence: mkdir a b touch a/file ln a/file b/file there is nothing in the filesystem that makes b/file secondary to a/file. The two directory entries both refer to the same file. They're both hard links to the file.
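A quick illustration of where those counts come from (stat -c is GNU coreutils; the link counts in the comments assume a traditional filesystem such as ext4, since, as noted, btrfs deviates):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/parent"
stat -c %h "$tmp/parent"     # typically 2: the entry in $tmp, plus parent/.

mkdir "$tmp/parent/child"
stat -c %h "$tmp/parent"     # typically 3: child/.. now links back to parent

# parent, parent/. and parent/child/.. are all the same inode
stat -c %i "$tmp/parent" "$tmp/parent/." "$tmp/parent/child/.."
```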
Is '..' really a hard link?
1,286,911,424,000
When you upgrade or reinstall a package with dpkg (and ultimately anything that uses it, like apt-get etc) it backs up the existing files by creating a hard link to the file before replacing it. That way if the unpack fails it can easily put back the existing files. That's great, since it protects the operating system from Bad Things™ happening. Except... it only works if your filesystem supports hard links. Not all filesystems do - such as FAT filesystems. I am working on a distribution of Debian for a specific embedded ARM platform, and the boot environment requires that certain files (the kernel included) are on a FAT filesystem so the boot code is able to locate and load them. When you go to upgrade the kernel package (or any other package that has files in that FAT partition) the install fails with: dpkg: error processing archive linux-image3.18.11+_3.18.11.2.armadillian_armhf.deb (--install): unable to make backup link of `./boot/vmlinuz-3.18.11+' before installing new version: Operation not permitted And the whole upgrade fails. I have scoured the web, and the only references I can find are specific people with specific problems when doing specific upgrades, the answer to which is usually "Delete /boot/vmlinuz-3.18.11+ and try again", and yes, that fixes that specific problem. But that's not the answer for me. I am an OS distributor, not an OS user, so I need a way to fix this that doesn't involve the end user manually deleting their kernel files before doing an upgrade. I need a way to tell dpkg to "copy, not hard link" for files on /boot (or all files for all I care, though that would slow down the upgrade operation somewhat), or better yet "If a hard link fails, don't complain, just copy it instead". I have tried such things as the --force-unsafe-io and even --force-all flags to dpkg, but nothing has any effect.
The behaviour you're seeing is implemented in archives.c in the dpkg source, line 1030 (for version 1.18.1): debug(dbg_eachfiledetail, "tarobject nondirectory, 'link' backup"); if (link(fnamevb.buf,fnametmpvb.buf)) ohshite(_("unable to make backup link of '%.255s' before installing new version"), ti->name); It seems to me that you could handle the link failure by falling back to the rename behaviour used lines 1003 and following; something like (this is untested): debug(dbg_eachfiledetail, "tarobject nondirectory, 'link' backup"); if (link(fnamevb.buf,fnametmpvb.buf)) { debug(dbg_eachfiledetail,"link failed, nonatomic"); nifd->namenode->flags |= fnnf_no_atomic_overwrite; if (rename(fnamevb.buf,fnametmpvb.buf)) ohshite(_("unable to move aside '%.255s' to install new version"), ti->name); } I'm not a dpkg expert though... (And there's no option already available in dpkg to provide this behaviour.)
dpkg replacing files on a FAT filesystem
1,286,911,424,000
I was wondering if there was a way to register this, but since most modern search engines don't work well with phrases over about 5 words in length, I need some help on this one. I was wondering this because I'm making a bash script that has to register files as certain types and make decisions accordingly. This technically isn't important to my project, but I was curious. Also, if they are considered to be regular files, then is there a way to check if these files are hard linked without having to parse ls -i? And is there a way to check if some arbitrary file, X, is hard linked to some other arbitrary file, Y, without using the find -i command?
In Unix-style systems, the data structure which represents filesystem objects (in other words, the data about a file), is stored in what's called an "inode". A file name is just a link to this inode, and is referred to as a "hard link". There is no difference between the first name a file is given and any subsequent link. So the answer is, "yes": a hard link is a regular file and, indeed, a regular file is a hard link. The ls command will show you how many hard links there are to the file. For example: seumasmac@comp:~$ echo Hello > /tmp/hello.txt seumasmac@comp:~$ ls -l /tmp/hello.txt -rw-rw-r-- 1 seumasmac seumasmac 6 Oct 4 13:05 /tmp/hello.txt Here we've created a file called /tmp/hello.txt. The 1 in the output from ls -l indicates that there is 1 hard link to this file. This hard link is the filename itself /tmp/hello.txt. If we now create another hard link to this file: seumasmac@comp:~$ ln /tmp/hello.txt /tmp/helloagain.txt seumasmac@comp:~$ ls -l /tmp/hello* -rw-rw-r-- 2 seumasmac seumasmac 6 Oct 4 13:05 /tmp/helloagain.txt -rw-rw-r-- 2 seumasmac seumasmac 6 Oct 4 13:05 /tmp/hello.txt you can now see that both filenames indicate there are 2 hard links to the file. Neither of these is the "proper" filename, they're both equally valid. We can see that they both point to the same inode (in this case, 5374043): seumasmac@comp:~$ ls -i /tmp/hello* 5374043 /tmp/helloagain.txt 5374043 /tmp/hello.txt There is a common misconception that this is different for directories. I've heard people say that the number of links returned by ls for a directory is the number of subdirectories, including . and .. which is incorrect. Or, at least, while it will give you the correct number, it's right for the wrong reasons! If we create a directory and do a ls -ld we get: seumasmac@comp:~$ mkdir /tmp/testdir seumasmac@comp:~$ ls -ld /tmp/testdir drwxrwxr-x 2 seumasmac seumasmac 4096 Oct 4 13:20 /tmp/testdir This shows there are 2 hard links to this directory. 
These are: /tmp/testdir /tmp/testdir/. Note that /tmp/testdir/.. is not a link to this directory, it's a link to /tmp. And this tells you why the "number of subdirectories" thing works. When we create a new subdirectory: seumasmac@comp:~$ mkdir /tmp/testdir/dir2 seumasmac@comp:~$ ls -ld /tmp/testdir drwxrwxr-x 3 seumasmac seumasmac 4096 Oct 4 13:24 /tmp/testdir you can now see there are 3 hard links to /tmp/testdir directory. These are: /tmp/testdir /tmp/testdir/. /tmp/testdir/dir2/.. So every new sub-directory will increase the link count by one, because of the .. entry it contains.
Do hard links count as normal files?
1,286,911,424,000
I am keeping my dotfiles under version control and the script deploying them creates hard links. I also use etckeeper to put my /etc under version control. Recently I have gotten warnings like this: warning: hard-linked files could cause problems with bzr A simple copy (cp filename.ext filename.ext) will not work: cp: `filename.ext' and `filename.ext' are the same file Renaming/moving a file - except across volumes - also doesn't break the hard-link. So my question is: is there a way to break a hard-link to a file without actually having to know where the other hard-link/s to that file is/are?
cp -p filename filename.tmp mv -f filename.tmp filename Making it scriptable: dir=$(dirname -- "$filename") tmp=$(TMPDIR=$dir mktemp) cp -p -- "$filename" "$tmp" mv -f -- "$tmp" "$filename" Doing the copy first, then moving it into place, has the advantage that the file atomically changes from being a hard link to being a separate copy (there is no point in time where filename is partial or missing).
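The effect is easy to verify on a throwaway file (hypothetical names; stat -c is GNU coreutils):

```shell
tmp=$(mktemp -d); cd "$tmp"
echo data > filename
ln filename otherlink          # two names, one inode
stat -c %h filename            # 2

cp -p filename filename.tmp    # copy first...
mv -f filename.tmp filename    # ...then atomically swap it into place

stat -c %h filename            # 1: filename is now a separate copy
cat otherlink                  # the other name keeps the same content
```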
Breaking a hard-link in-place?
1,286,911,424,000
From the man pages: ln - make links between files and link - call the link function to create a link to a file These seem to do the same thing however ln takes a lot of options as well. Is link just a very basic ln? Is there any reason to use link over ln?
link is used solely for hard links: it calls the link() system function directly and doesn't perform error checking when attempting to create the link. ln performs error checking and can create both hard and soft links.
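A minimal side-by-side of the two commands (GNU coreutils provides both; file names are made up):

```shell
tmp=$(mktemp -d); cd "$tmp"
echo hi > a

link a b      # bare wrapper around link(2): hard links only, no options
ln -s a c     # ln can also make symlinks; link(1) cannot

stat -c %h a  # 2: a and b share one inode
ls -l c       # c -> a
```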
What is the difference between the link and ln commands?
1,286,911,424,000
I am implementing a backup scheme using rsync and hardlinks. I know I can use link-dest with rsync to do the hardlinks, but I saw mention of using "cp -l" before "link-dest" was implemented in rsync. Another method of hardlinking I know of is "ln". So my question is, out of curiosity: is there a difference in making hardlinks using "cp -l" as compared to using "ln"?
The results of both have to be the same, in that a hard link is created to the original file. The difference is in the intended usage and therefore the options available to each command. For example, cp can use recursion whereas ln cannot: cp -lr <src> <target> will create hard links in <target> to all files in <src> (it creates new directories, not links to them). The result is that the directory tree structure under <target> will look identical to the one under <src>. It differs from cp -r <src> <target> in that the latter copies each file and folder and gives each a new inode, whereas the former just uses hard links on files and therefore simply increases their link count. When used to copy a single file, as in your example, the results will be identical.
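A small demonstration of the cp -lr case (GNU cp; inode numbers read with GNU stat; the tree is made up):

```shell
tmp=$(mktemp -d); cd "$tmp"
mkdir -p src/sub
echo one > src/f1
echo two > src/sub/f2

cp -lr src dst           # new directories, hard-linked files

stat -c %i src/f1 dst/f1 # same inode: the file's data is shared
stat -c %i src dst       # different inodes: the directories are new
```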
Is there a difference between hardlinking with cp -l or ln?
1,286,911,424,000
When I stat a directory I get a listing that tells me there are 5 links to the directory. stat dir My question is: how do I get information (names and locations) on all these 5 links?
You just need ls (or find). When you create a directory, its link count starts at 2: One for the directory itself One for the . link inside itself The other thing that increases the directory's link count is its subdirectories: they all have a .. entry linking back to their parent, adding one to its link count. You can't hardlink directories in Linux, so these are the only things that count towards the link count - two plus number of subdirectories.
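So the "links" can simply be enumerated by name. For a directory with two subdirectories, they are the directory's own entry, its . entry, and each subdirectory's .. entry (a sketch with made-up names):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/dir/sub1" "$tmp/dir/sub2"

# Every name that contributes to dir's link count resolves to the same inode:
ls -id "$tmp/dir" "$tmp/dir/." "$tmp/dir/sub1/.." "$tmp/dir/sub2/.."
```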
How to find all the links to a directory
1,286,911,424,000
When I wanted to create a hard link in my /home directory in root mode, Linux showed the following error message: ln: failed to create hard link ‘my_sdb’ => ‘/dev/sda1’: Invalid cross-device link The above error message is shown below: # cd /home/user/ # ln /dev/sda1 my_sdb But I could only create a hard link in the /dev directory, and it was not possible in other directories. Now, I want to know how to create a hard link from an existing device file (like sdb1) in /home directory (or other directories) ?
But I could only create a hard link in the /dev directory and it was not possible in other directories. As shown by the error message, it is not possible to create a hard link across different filesystems; you can create only soft (symbolic) links. For instance, if your /home is in a different partition than your root partition, you won't be able to hard link /tmp/foo to /home/user/. Now, as @RichardNeumann pointed out, /dev is usually mounted as a devtmpfs filesystem. See this example: [dr01@centos7 ~]$ df Filesystem 1K-blocks Used Available Use% Mounted on /dev/mapper/centos_centos7-root 46110724 3792836 42317888 9% / devtmpfs 4063180 0 4063180 0% /dev tmpfs 4078924 0 4078924 0% /dev/shm tmpfs 4078924 9148 4069776 1% /run tmpfs 4078924 0 4078924 0% /sys/fs/cgroup /dev/sda1 1038336 202684 835652 20% /boot tmpfs 815788 28 815760 1% /run/user/1000 Therefore you can only create hard links to files in /dev within /dev.
Why I can't create a hard link from device file in other than /dev directory?
1,286,911,424,000
I am trying to take snapshots of a massive folder regularly. I have read here: http://www.mikerubel.org/computers/rsync_snapshots/#Incremental that cp -al takes a snapshot of a folder by simply copying the hard links. That is all great, but the problem is that in this snapshot, if I change a file, it changes in all snapshots. What I would like instead is to have the system create a new file on-change and link to that instead. That way each snapshot would not become invalid on an edit of the first file. How can I achieve that? p.s. I tried rsync -a --delete --link-dest=../backup.1 source_directory/ backup.0/, but it has the same problem.
That's how hard links work. But there are ways around it; a couple of options come to mind: Use a filesystem with support for copy-on-write files, like btrfs. Of course, were you using btrfs, you'd just use its native snapshots... If your filesystem supports it, you can use cp --reflink=always. Unfortunately, ext4 doesn't support this. Only share hard links across your snapshots, not with the original. That is, the first time you see a given version of a file, copy it to the snapshot. But the next time, link it to the one in the previous snapshot. (Not sure what program I used to do this, a decade ago, but searching turns up dirvish, obnam, storebackup, and rsnapshot.) Depending on how your files are being changed, you might be able to guarantee that a write-temp/rename is used to change them; that will break the hard link, so the version in the snapshot will remain pristine. This is less safe, though, as bugs could corrupt your snapshot. Take LVM snapshots of the entire filesystem. Of course, there is the other option: use a proper backup system. Almost all of them can manage to only back up changed files.
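The write-temp/rename option is worth seeing in action, since it is the only one that needs no special filesystem support (the file names here are made up):

```shell
tmp=$(mktemp -d); cd "$tmp"
echo v1 > live
mkdir snap
ln live snap/live        # hard-link snapshot, as cp -al would make

# Edit by writing a temp file and renaming it over the original:
printf 'v2\n' > live.tmp
mv -f live.tmp live      # replaces the directory entry, not the old inode

cat live                 # v2
cat snap/live            # v1: the snapshot kept the old inode
```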
`cp -al` snapshot whose hard links get directed to a new file when edited
1,286,911,424,000
I have some complex read-only data in my file system. It contains thousands of snapshots of certain revisions of a svn repository, and the output of regression tests. Identical files between snapshots are already de-duplicated using hard links. This way, the storage capacity doesn't need to be large, but it still consumes a lot of inodes, and this makes fsck painfully long for my main file system. I'd like to move these data to another file system, so that it doesn't affect the main file system too much. Do you have suggestions? Squashfs seems to be a possible choice, but I'll have to check if it can handle hard links efficiently.
If it's about fsck slowness, did you try ext4? They added a few features to it that make fsck really quick by not looking at unused inodes: Fsck is a very slow operation, especially the first step: checking all the inodes in the file system. In Ext4, at the end of each group's inode table will be stored a list of unused inodes (with a checksum, for safety), so fsck will not check those inodes. The result is that total fsck time improves from 2 to 20 times, depending on the number of used inodes (http://kerneltrap.org/Linux/Improving_fsck_Speeds_in_Ext4). It must be noticed that it's fsck, and not Ext4, who will build the list of unused inodes. This means that you must run fsck to get the list of unused inodes built, and only the next fsck run will be faster (you need to pass a fsck in order to convert a Ext3 filesystem to Ext4 anyway). There's also a feature that takes part in this fsck speed up - "flexible block groups" - that also speeds up filesystem operations.
filesystem for archiving
1,286,911,424,000
Let's say I have two hard links pointing at the same picture. /photography/picture_1.jpg /best_pictures/picture_1.jpg What happens if I edit /photography/picture_1.jpg? Is the hard link broken and did I end up with 2 different files? Does it keep the link and therefore edit the "second" file, accessed by the second pointer?
A hard link is simply an alternative name for the same inode (file). Editing the file found at either of those paths will change the picture that both paths point to. A soft/symbolic link is different: it's a pointer to the original file and can be broken. A hard link is not a pointer to the file, it is the same file under a different name. However, some editing tools may use temporary files (as opposed to true, in-place editing) to create and save your edits. So it may end up being dependent on the tool you use. You can experiment with your editor of choice and see if it changes a file's inode number after editing. Find out a file's inode number from the output of ls -i filename (Thanks to Sparhawk's comment for that note). See also: What is the difference between a hard link and a symbolic link? why inode value changes when we edit in the "vi" editor
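Both behaviours can be seen from the shell. GNU sed's default -i is one of the tools that rewrites via a temporary file; the file names below are stand-ins for the pictures:

```shell
tmp=$(mktemp -d); cd "$tmp"
echo pic > photo
ln photo best               # two names for the same "picture"

echo caption >> photo       # true in-place write: both names see it
cat best                    # pic + caption

sed -i 's/pic/PIC/' photo   # GNU sed -i writes a temp file and renames it
stat -c %h photo            # 1: the hard link is broken
cat best                    # still the old content
```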
Editing a file with several hard links
1,286,911,424,000
From the manpage for ln: -d, -F, --directory allow the superuser to attempt to hard link directories (note: will probably fail due to system restrictions, even for the superuser) Are there any filesystem drivers that actually allow this, or is the only option mount --bind <src> <dest>? Or is this kind of behavior blocked by the kernel before it even gets to the filesystem-specific driver? NOTE: I'm not actually planning on doing this on any machines, just curious.
First a note: the ln command does not have options like -d, -F, --directory; this is a non-portable GNUism. The feature you are looking for is implemented by the link(1) command. Back to your original question: On a typical UNIX system the decision, whether hard links on directories are possible, is made in the filesystem driver. The Solaris UFS driver supports hard links on directories, the ZFS driver does not. The reason why UFS on Solaris supports hard links is that AT&T was interested in this feature - UFS from BSD does not support hard linked directories. The reason why ZFS does not support hard-linked directories is that Jeff Bonwick does not like that feature. Regarding Linux, I would guess that Linux blocks attempts to create hard links on directories in the upper kernel layers. The reason for this assumption is that Linus Torvalds wrote code for GIT that did shred directories when git clone was called as root on a platform that supports hard linked directories. Note that a filesystem that supports creating hard-linked directories also needs to support unlink(1) to remove non-empty directories as root. So if we assume that Torvalds knows how Linux works and if Linux did support hard linked directories, Torvalds should have known that calling unlink(2) on a directory while being root will not return with an error but shred that directory. In other words, it is unlikely that Linux permits a file system driver to implement hard linked directories.
Are there any filesystems for which `ln -d` succeeds?
1,286,911,424,000
Is there a way to tell cp to --link (i.e. create hard links), but fall back in the case where I am attempting inter-device hardlinks? Inter-device links aren't possible and would cause cp to fail. The reason I am asking is because I would like to use this in a GNUmakefile and would prefer a readable command line over some convoluted and lengthy one (or a function, for that matter). The question is for GNU coreutils (7.4 and 8.13). Note: right now the workaround would be something like (GNU make recipe syntax): cp -fl $^ $@ || cp -f $^ $@ This will of course give spurious error messages in case of inter-device links, although succeeding on the second cp call then. Also, then this gets expanded (source form looks readable after all) it won't be too readable anymore.
cp doesn't have this option. You could write a wrapper script, but it's pretty simple. ln -f $^ $@ 2>/dev/null || cp -f $^ $@ GNU Coreutils 7.5 introduced the --reflink option. If you pass --reflink=auto and the underlying filesystem supports copy-on-write (e.g. Btrfs or ZFS) and the copy happens to be on the same device, then cp will create a new inode but not copy the content; otherwise cp performs a normal copy. This is still not a hard link (the target will always be a different inode), but it's probably even better for your use case. However, if you're on ext4 (like most people nowadays), which doesn't support copy-on-write, this won't help you.
Is there a way to express: `--link` or fall back to ordinary copy in cp (from GNU coreutils)?
1,286,911,424,000
I've got a directory tree created by rsnapshot, which contains multiple snapshots of the same directory structure with all identical files replaced by hardlinks. I would like to delete all those hardlink duplicates and keep only a single copy of every file (so I can later move all files into a sorted archive without having to touch identical files twice). Is there a tool that does that? So far I've only found tools that find duplicates and create hardlinks to replace them… I guess I could list all files and their inode numbers and implement the deduplicating and deleting myself, but I don't want to reinvent the wheel here.
In the end it wasn't too hard to do this manually, based on Stéphane's and xenoid's hints and some prior experience with find. I had to adapt a few commands to work with FreeBSD's non-GNU tools — GNU find has the -printf option that could have replaced the -exec stat, but FreeBSD's find doesn't have that. # create a list of "<inode number> <tab> <full file path>" find rsnapshots -type f -links +1 -exec stat -f '%i%t%R' {} + > inodes.txt # sort the list by inode number (to have consecutive blocks of duplicate files) sort -n inodes.txt > inodes.sorted.txt # remove the first file from each block (we want to keep one link per inode) awk -F'\t' 'BEGIN {lastinode = 0} {inode = 0+$1; if (inode == lastinode) {print $2}; lastinode = inode}' inodes.sorted.txt > inodes.to-delete.txt # delete duplicates (watch out for special characters in the filename, and possibly adjust the read command and double quotes accordingly) cat inodes.to-delete.txt | while read line; do rm -f "$line"; done
How to delete all duplicate hardlinks to a file?
1,286,911,424,000
I understand the technical difference between symlinks and hardlinks, this is a question about their use in practice, particularly I'm curious to know why both are used in seemingly similar conditions: the /bin directory. Here's a fragment its listing on my system: ~$ ls -lai /bin total 10508 32770 drwxr-xr-x 2 root root 4096 Jun 14 11:47 . 2 drwxr-xr-x 28 root root 4096 Sep 6 13:15 .. 119 -rwxr-xr-x 1 root root 959120 Mar 28 22:02 bash 2820 -rwxr-xr-x 3 root root 31112 Dec 15 2011 bunzip2 127 -rwxr-xr-x 1 root root 1832016 Nov 16 2012 busybox 2820 -rwxr-xr-x 3 root root 31112 Dec 15 2011 bzcat 6191 lrwxrwxrwx 1 root root 6 Dec 15 2011 bzcmp -> bzdiff 5640 -rwxr-xr-x 1 root root 2140 Dec 15 2011 bzdiff 5872 lrwxrwxrwx 1 root root 6 Dec 15 2011 bzegrep -> bzgrep 3520 -rwxr-xr-x 1 root root 4877 Dec 15 2011 bzexe 6184 lrwxrwxrwx 1 root root 6 Dec 15 2011 bzfgrep -> bzgrep 5397 -rwxr-xr-x 1 root root 3642 Dec 15 2011 bzgrep 2820 -rwxr-xr-x 3 root root 31112 Dec 15 2011 bzip2 2851 -rwxr-xr-x 1 root root 10336 Dec 15 2011 bzip2recover 6189 lrwxrwxrwx 1 root root 6 Dec 15 2011 bzless -> bzmore 5606 -rwxr-xr-x 1 root root 1297 Dec 15 2011 bzmore I indented the hardlinks to the same inode for better visibility. So are symlinks used in case of bzcmp, bzegrep, bzfgrep, bzless and hardlinks in case of bzip2, bzcat, bunzip2? They are all regular files (not directories), reside inside one filesystem, are system utilities and are even made for working with the same thing: bzip archives. Are the reasons for use of hardlinks/symlinks in this particular case purely historical or am I missing something? Clarification of my question: I'm not asking about: The technical differences between symlinks and hardlinks The theoretical advantages and disadvantages each of them These questions have been addressed in other threads on SO. I'm trying to understand why different decisions were made in a specific case: for a group of related system utilities. 
Technically, they all could've been symlinks or they all could've been hardlinks; both options would work (and in both cases a program can still figure out how it's been invoked via argv[0]). I want to understand the intent here if there is any. Related: Why do hard links exist?
Why use hardlinks vs. Symbolic links There are primarily 3 advantages of using hardlinks over symbolic links in this scenario. Hard links With a hard link, the link points to the inode directly. Hard links are like having multiple copies of the executable but only using the disk space of one. You can rename either branch of the hard link without breaking anything. Symbolic links The link points to the object (which then in-turn points to the inode). They can span filesystems, whereas hardlinks cannot. Advantages of linking in general These links exist because many executables behave differently based on how they were called. For example the 2 commands bzless and bzmore are actually a single executable, bzmore. The executable will behave differently depending on which names was used to invoke it. This is done for a variety of reasons. Here are some of the more obvious ones: Easier to develop a single executable rather than many Saves disk space Easier to deploy Why are both being used? The choice of either, in this particular application, is moot. Either can facilitate the feature of acting as an alias so that a single executable can be overloaded. That's really the key feature that is getting exploited by the developers of the various programs here. In looking at the FHS (Filesystem Hierarchy Standard) even specifies it this way, that it can be either. excerpt If /bin/sh is not a true Bourne shell, it must be a hard or symbolic link to the real shell command. The rationale behind this is because sh and bash mightn't necessarily behave in the same manner. The use of a symbolic link also allows users to easily see that /bin/sh is not a true Bourne shell. ... ... If the gunzip and zcat programs exist, they must be symbolic or hard links to gzip. /bin/csh may be a symbolic link to /bin/tcsh or /usr/bin/tcsh. References Why are reboot, shutdown and poweroff symlinks to systemctl?
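The name-based dispatch that makes this overloading work can be sketched with a toy multi-call script (the names mimic bzmore/bzless, but everything here is hypothetical):

```shell
tmp=$(mktemp -d)
cat > "$tmp/bzmore" <<'EOF'
#!/bin/sh
# Behave according to the name we were invoked under ($0)
case "$(basename "$0")" in
    bzless) echo "acting as bzless" ;;
    *)      echo "acting as bzmore" ;;
esac
EOF
chmod +x "$tmp/bzmore"
ln "$tmp/bzmore" "$tmp/bzless"   # one inode, two command names

"$tmp/bzmore"    # prints: acting as bzmore
"$tmp/bzless"    # prints: acting as bzless
```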
Why is there a mix of symlinks and hardlinks in /bin?
1,286,911,424,000
I have a script that goes like this:

ln /myfile /dev/${uniquename}/myfile

I want to remove the link of /dev/somename/myfile to decrease the link count. How do I do this?
TL;DR: just delete the file name you don't want (with rm).

If you create a hard link (which is what your command above is doing), you have two names pointing to the same area of storage. You can delete either name without affecting the other name or the storage; it's only when the last name is removed that the area of storage is released.

Compare this to soft links, created with ln -s: there the link is different, a pointer to the original name rather than a pointer to the storage. If you delete the original named file, the soft link remains but is broken, pointing to something that has been deleted.
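A quick demonstration of the point above (GNU coreutils stat assumed for the link-count query):

```shell
echo hello > original.txt
ln original.txt other-name.txt   # second name, same storage
rm original.txt                  # delete the first name
cat other-name.txt               # prints: hello  (storage still there)
stat -c %h other-name.txt        # prints: 1  (link count back down to one)
```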
Properly unlinking hard links
1,286,911,424,000
I'm getting a permissions error in CentOS 7 when I try to create a hard link. With the same permissions set in CentOS 6 I do not get the error. The issue centers on group permissions. I'm not sure which OS version is right and which is wrong. Let me illustrate what's happening.

In my current working directory, I have two directories: source and destination. At the start, destination is empty; source contains a text file.

[root@tc-dlx-nba cwd]# ls -l
total 0
drwxrwxrwx. 2 root root  6 Jun 12 14:33 destination
drwxrwxrwx. 2 root root 21 Jun 12 14:33 source
[root@tc-dlx-nba cwd]# ls -l destination/
total 0
[root@tc-dlx-nba cwd]# ls -l source/
total 4
-rw-r--r--. 1 root root 8 Jun 12 14:20 test.txt
[root@tc-dlx-nba cwd]#

As you can see, regarding the permissions the two directories are 777, with both the owner and group set to root. The text file's owner and group are also both set to root. However, the text file's permissions are read-write for the owner but read-only for the group.

When I'm logged in as root, I have no problem creating a hard link in the destination directory pointing to the text file (in the source directory).

[root@tc-dlx-nba cwd]# ln source/test.txt destination/
[root@tc-dlx-nba cwd]# ls destination/
test.txt

However, if I log in as another user, in this case admin, I cannot create the link. I get: "Operation not permitted."

[root@tc-dlx-nba cwd]# rm -f destination/test.txt
[root@tc-dlx-nba cwd]# su admin
bash-4.2$ pwd
/root/cwd
bash-4.2$ ln source/test.txt destination/
ln: failed to create hard link ‘destination/test.txt’ => ‘source/test.txt’: Operation not permitted

What happens actually makes sense to me, but since the above is allowed in CentOS 6, I wanted to check to see if I was misunderstanding something. To me, it seems like a bug in CentOS 6 that has been fixed in CentOS 7. Anyone know what gives? Am I right in believing that the above behavior is the correct behavior? Is it CentOS 6 that is correct?
Or, are both right and perhaps there is some subtle group permissions issue that I'm missing? Thanks.

Edit: I tried the same test just now on a Debian 7 VM that I have. Debian agrees with CentOS 7: "Operation not permitted."

Edit #2: I just tried the same thing on Mac OS X (Yosemite). That worked the way CentOS 6 did. In other words, it allowed the link to be created. (Note: On OS X, the root group is called "wheel." That's the only difference, as far as I can tell.)
I spun up some fresh CentOS 6 and 7 VMs and was able to recreate the exact behavior you showed. After doing some digging, it turns out that this is actually a change in the kernel regarding default behavior with respect to hard and soft links, for the sake of security. The following pages pointed me in the right direction:

http://kernel.opensuse.org/cgit/kernel/commit/?id=561ec64ae67ef25cac8d72bb9c4bfc955edfd415
http://kernel.opensuse.org/cgit/kernel/commit/?id=800179c9b8a1

If you make the file world-writable, your admin user will be able to create the hard link. To revert to the behavior of CentOS 6 system-wide, new kernel parameters were added. Set the following in /etc/sysctl.conf:

fs.protected_hardlinks = 0
fs.protected_symlinks = 0

then run

sysctl -p

As for why your program opts to use links instead of copying files: why create an exact copy of a file you need to use when you can just create an entry that points to the original blocks? This saves disk space and the operation is less costly in terms of CPU and I/O. A new hard link is the same file and the same inode; only an additional directory entry (name) is created. If you were to delete the original file after creating a hard link, it won't affect the link. A file is only 'deleted' once all links have been removed.
Hard link permissions behavior different between CentOS 6 and CentOS 7
1,286,911,424,000
Note: Although the question says "vice versa", that really does not have any meaning, since both names point to the same inode and it's not possible to say which is the head and which is the tail.

Say I have a file hlh.txt:

[root@FREL ~]# fallocate -l 100 hlh.txt

Now if I look at the change time for hlh.txt:

[root@FREL ~]# stat hlh.txt
  File: hlh.txt
  Size: 100             Blocks: 8          IO Block: 4096   regular file
Device: fc00h/64512d    Inode: 994         Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:admin_home_t:s0
Access: 2023-01-11 01:43:05.469703330 -0500
Modify: 2023-01-11 01:43:05.469703330 -0500
Change: 2023-01-11 01:43:05.469703330 -0500
 Birth: 2023-01-11 01:43:05.469703330 -0500

Creating a hard link:

[root@FREL ~]# ln hlh.txt hlt.txt

Since both hlh.txt and hlt.txt point to the same inode, the change time of hlt.txt is the ctime of the shared inode, which is understood:

[root@FREL ~]# stat hlt.txt
  File: hlt.txt
  Size: 100             Blocks: 8          IO Block: 4096   regular file
Device: fc00h/64512d    Inode: 994         Links: 2
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:admin_home_t:s0
Access: 2023-01-11 01:43:05.469703330 -0500
Modify: 2023-01-11 01:43:05.469703330 -0500
Change: 2023-01-11 01:44:05.316842644 -0500
 Birth: 2023-01-11 01:43:05.469703330 -0500

But if I unlink the head file, that changes the ctime of the file as well. Why? All we did is delete one name; what significance does the change time have here internally? Why does it need to change?

[root@FREL ~]# unlink hlh.txt
[root@FREL ~]# stat hlt.txt
  File: hlt.txt
  Size: 100             Blocks: 8          IO Block: 4096   regular file
Device: fc00h/64512d    Inode: 994         Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Context: unconfined_u:object_r:admin_home_t:s0
Access: 2023-01-11 01:43:05.469703330 -0500
Modify: 2023-01-11 01:43:05.469703330 -0500
Change: 2023-01-11 01:47:49.588364704 -0500
 Birth: 2023-01-11 01:43:05.469703330 -0500
This is a requirement on the unlink() library function by POSIX: Upon successful completion, unlink() shall mark for update the last data modification and last file status change timestamps of the parent directory. Also, if the file's link count is not 0, the last file status change timestamp of the file shall be marked for update. The standard document does not expand on this requirement. Since the link count is decreased by one, I'm assuming the ctime timestamp (the "last file status change timestamp") is updated to reflect the fact that the file's status changed.
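The required behavior is easy to observe from the shell (a sketch using GNU stat; %Z prints the ctime as seconds since the epoch):

```shell
echo data > f.txt
t1=$(stat -c %Z f.txt)   # ctime after creation
sleep 1
ln f.txt g.txt           # link count 1 -> 2 updates f.txt's ctime
t2=$(stat -c %Z f.txt)
sleep 1
rm g.txt                 # link count 2 -> 1 updates it again
t3=$(stat -c %Z f.txt)
# t1 < t2 < t3: both linking and unlinking touched the ctime
```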
Deleting a hard link's tail file changes the change time of the head or vice versa. Why?
1,286,911,424,000
I'm trying to understand what cp --preserve=links does when used by itself. From my tests it seems to copy a normal file normally and dereference symlinks, apparently with the same effect as cp -L when used on a single file. Is that true, or is there something I'm missing?
The --preserve=links option does not refer to symbolic links, but to hard links. It asks cp to preserve any existing hard link between two or more files that are being copied.

$ date > file1
$ ln file1 file2
$ ls -1i file1 file2
6034008 file1
6034008 file2

You can see that the two original files are hard-linked and their inode number is 6034008.

$ mkdir dir1
$ cp file1 file2 dir1
$ ls -1i dir1
total 8
6035093 file1
6038175 file2

You can see now that without --preserve=links their copies have two different inode numbers: there is no longer a hard link between the two.

$ mkdir dir2
$ cp --preserve=links file1 file2 dir2
$ ls -1i dir2
total 8
6089617 file1
6089617 file2

You can see now that with --preserve=links, the two copies are still hard-linked, but their inode number is 6089617, which is not the same as the inode number of the original files (contrary to what cp --link would have done).
Info on cp --preserve=links
1,286,221,328,000
When displaying directories using ls -l, their number of links (the second field in the output) is at least two: one for the dir name and one for .

$ mkdir foo
$ ls -l
total 2
drwxr-xr-x 2 user wheel 512 4 oct 14:02 foo

Is it safe to always assume that the number of links above 2 corresponds to the number of subdirectories in this dir (.. links)?
It is usually true on Unix systems that the number of links to a directory is the number of subdirectories plus 2. However there are cases where this is not true:

Some unices allow hard links to directories. Then there will be more than 2 links that do not correspond to subdirectories.

There are filesystems where directories do not have entries for . and .. The GNU find manual mentions some examples in the discussion of its -noleaf option (which disables an optimization that assumes that . and .. exist in all directories): "CD-ROM or MS-DOS filesystems or AFS volume mount points".

An almost reliable way to count the number of subdirectories (it may still fail if a file name contains a newline character) is

$(($(LC_ALL=C ls -la /path/to/directory | grep '^d' | wc -l) - 2))

A more reliable way uses the shell globs */ and .*/; as usual, handling the case where the pattern doesn't match is a bit of a pain (except in bash and zsh, where you can turn on the nullglob option).
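A sketch of the glob-based count in portable sh (the helper name count_subdirs is made up; note this version ignores hidden subdirectories, unlike the .*/ variant mentioned above):

```shell
# Count subdirectories of $1 using the */ glob.  If nothing matches,
# the pattern stays literal, so the -d test fails and we report 0.
count_subdirs() {
    set -- "$1"/*/
    if [ -d "$1" ]; then echo "$#"; else echo 0; fi
}

mkdir -p demo/a demo/b
count_subdirs demo    # prints: 2
mkdir -p empty
count_subdirs empty   # prints: 0
```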
Can I determine the number of sub-directories in a directory using `ls -l`?
1,286,221,328,000
Suppose User A and User B have disk quotas of 1 GB. Also suppose User B creates a 900 MB file with permission 0666. This allows User A to access that file temporarily (for some project, etc.). Notice this allows User A to write to the file as well. If User A creates a hard link to that file, and User B then deletes the file, has User A essentially exploited the quota system by "stealing" 900 MB of storage from User B? Assume User B never reports this to the admin, and the admin never finds out. Also assume User B never suspects a thing about User A. In other words, assume User B will not look at User A's directory and corresponding files. If similar question has been asked/answered before, I apologize for not being able to find them.
This is one of the ugly corner cases of the Unix permission model. Granting write access to a file permits hard-linking it. If user A has write permission to the directory containing the file, they can move it to a directory where user B has no access. User B then can't access the file anymore, but it still counts against user B for quota purposes.

Some older Unix systems had a worse hole: a user could call chown to give one of their files to another user; if they'd made the file world-writable before, they could keep using that file but the file would count against the other user's quota. This is one reason modern systems reserve chown to root.

There is no purely technical solution. It's up to user B to notice that their disk usage (du ~) doesn't match their quota usage and complain to the system administrator, who'll investigate and castigate user A.
Hard Linking Other Users' Files
1,286,221,328,000
I got a tarball (let's say t.tar.gz) that contains the following files:

./a/a.txt
./b/b.txt

where ./b/b.txt is a hard link to ./a/a.txt. I want to unpack the tarball on a network file system (AFS) that only supports hard links in the same directory (see here). Therefore, just unpacking it via tar -xzf t.tar.gz raises an error that the hard link ./b/b.txt cannot be created.

So far, my solution to the problem was to unpack ./t.tar.gz on a file system that supports ordinary hard links, then pack it again with the option --hard-dereference as the GNU tar manual proposes, and lastly unpack that new tarball into the AFS. As this is unsatisfactory for me, I'm asking if there is an easier way to get the content of the archive unpacked directly to its final destination? Such as an equivalent option to --hard-dereference for unpacking instead of archiving?
Mount the archive as a directory, for example with AVFS, then use your favorite file copying tool.

mountavfs
cp -a --no-preserve=links ~/.avfs/path/to/t.tar.gz\# target-directory/

or

mountavfs
rsync -a ~/.avfs/path/to/t.tar.gz\#/ target-directory/
unpacking tarball with hard links on a file system that doesn't support hard links
1,286,221,328,000
Softlinks are easily traceable to the original file with readlink etc., but I am having a hard time tracing hardlinks to the original file.

$ ll -i /usr/bin/bash /bin/bash
1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /bin/bash*
1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /usr/bin/bash*

The above is as expected, cool: both files point to the same inode 1310813. (But the number of links shows as 1; from Gilles' answer the reason for this can be understood.)

$ find / -samefile /bin/bash 2>/dev/null
/usr/bin/bash

The above is as expected, so no problems.

$ find / -samefile /usr/bin/bash 2>/dev/null
/usr/bin/bash

The above is NOT cool. How do I trace the original file or every hardlink using the /usr/bin/bash file as reference? Strange: the below did not help either.

$ find / -inum 1310813 2>/dev/null
/usr/bin/bash
First, there is no original file in the case of hard links; all hard links are equal. However, hard links aren't involved here, as indicated by the link count of 1 in ls -l's output:

$ ll -i /usr/bin/bash /bin/bash
1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /bin/bash*
1310813 -rwxr-xr-x 1 root root 1183448 Jun 18 21:14 /usr/bin/bash*

Your problem arises because of a symlink, the bin symlink which points to usr/bin. To find all the paths in which bash is available, you need to tell find to follow symlinks, using the -L option:

$ find -L / -xdev -samefile /usr/bin/bash 2>/dev/null
/usr/bin/rbash
/usr/bin/bash
/bin/rbash
/bin/bash

I'm using -xdev here because I know your system is installed on a single file system; this avoids descending into /dev, /proc, /run, /sys etc.
How to effectively trace hardlink in Linux?
1,286,221,328,000
$ sudo su
# dd if=/dev/zero of=./myext.img bs=1024 count=100
...
# modprobe loop
# losetup --find --show myext.img
/dev/loop0
# mkfs -t myext /dev/loop0
...
# mkdir mnt
# mount /dev/loop0 ./mnt
# cd mnt
# ls -al
total 17
drwxr-xr-x  3 root   root    1024 Jul 21 02:22 .
drwxr-xr-x 11 shisui shisui  4096 Jul 21 02:22 ..
drwx------  2 root   root   12288 Jul 21 02:22 lost+found

(Cut out some of the output of some commands.)

My first question is, why isn't mnt showing up in the ls -al output? All I see is root. I cd'd into ./mnt so I expected to see it in my ls -al output. But then what is the third link? Finally, are all the link numbers in this ls -al output hard links? Or does this link count also include symbolic links?
You don't see mnt in the ls -al output because you're inside mnt; it is represented by . (dot). There's another hard link to ., lost+found/..; this explains the count of 3 links to the directory:

. which points to the directory itself;
.. which also points to the directory, because it's the root directory in the file system (see Why does a new directory have a hard link count of 2 before anything is added to it?);
lost+found/.., which points back to the root directory (again, in the file system, so mnt here).

The link counts shown by ls -l count hard links only; symlinks aren't included.
Why does this new directory have a link count of 3?
1,286,221,328,000
Having a (single, no batch filesystem processing needed) symlink, what a command line to use to turn it into a hard link to the same file?
ln -f "$(readlink <symlink>)" <symlink>
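One caveat: a bare readlink returns the stored target, which may be relative to the symlink's directory rather than to the current directory. A slightly more robust sketch uses GNU readlink -f to fully resolve the target first (the helper name symlink_to_hardlink is made up):

```shell
# Replace a symlink with a hard link to the file it points to.
symlink_to_hardlink() {
    target=$(readlink -f "$1") || return
    ln -f "$target" "$1"       # -f overwrites the symlink itself
}

echo data > real.txt
ln -s real.txt link.txt
symlink_to_hardlink link.txt
stat -c %h real.txt            # prints: 2  (one inode, two names)
```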
How to replace a symbolic link with an equivalent hard link?
1,286,221,328,000
I've found that I need to use hard links with a particular program (Ableton Live) that is unable to see aliases/symlinks, which is of course how I have all my working files organized. But making hard links is creating what appears to be duplicates of the original file. Do they actually take up as much space as the original? Or is the filesystem (OSX in this case) merely showing the size of the actual data on disk, and the fact that it is being referenced in two places does not actually double the amount of data?
The second thing you said is exactly correct. The file contents only exist once on disk. A hard link is an extra reference, which costs very little space: the size of a directory entry, which is the length of the filename plus a few bytes.

I don't know if this applies to OSX, but in the version of GNU coreutils I have handy, du is aware of hard links, so you can use it to get an accurate report of the total size of a set of files. If it finds multiple links to a file, it only adds it to the total once. ls -l, on the other hand, does the wrong thing and adds everything it sees in a directory for its total line.

$ ls -sl
total 296
296 -rw-r--r-- 1 user group 300324 Feb 17 19:08 f1
$ du
296     .
$ ln f1 f2
$ ls -sl
total 592
296 -rw-r--r-- 2 user group 300324 Feb 17 19:08 f1
296 -rw-r--r-- 2 user group 300324 Feb 17 19:08 f2
$ du
296     .
$ cp f1 f3
$ ls -sl
total 888
296 -rw-r--r-- 2 user group 300324 Feb 17 19:08 f1
296 -rw-r--r-- 2 user group 300324 Feb 17 19:08 f2
296 -rw-r--r-- 1 user group 300324 Feb 17 19:08 f3
$ du
592     .
$

The ultimate demonstration would be to create a huge file, more than half the size of the disk. Then see how many hard links you can create to it. Should be quite a lot.
Do hard links really take up so much disk space?
1,286,221,328,000
Can anyone help me understand the logic behind the link value being "2" the first time we create any folder/directory in Linux? I searched a lot, but couldn't find a satisfactory explanation.
The fundamental design of the Unix filesystem goes back to the early days. It is described in the paper The UNIX Time-Sharing System by Dennis M. Ritchie and Ken Thompson. The designers wanted to be able to refer to the current directory, and to have a way to go from a directory to its parent directory. Rather than introduce special shell syntax, they decided to use a feature that already existed anyway: directories can contain entries for other directories, so they decided that an entry with the special name . would always point to the directory itself, and an entry with the special name .. would always point to a directory's parent. For example, if the root directory contains a subdirectory called foo, then foo's .. entry points to the root directory and foo's . entry points to foo itself. Thus a directory's link count (the number of directory entries pointing to it) would always be 2 for a directory with no subdirectory: the expected entry in the parent directory, plus the . directory. Each subdirectory adds 1 to the link count due to the .. entry. The special entries . and .. were originally created by the mkdir command by mucking with the on-disk representation of the filesystem directly. Later systems moved this into the kernel. Today, many filesystems don't include . and .. entries in the on-disk representation anymore. The filesystem driver doesn't need ., and doesn't need .. either if it always remembers the location of a directory's parent (which increases memory consumption a bit, negligible by today's standards but not by the 1970's standards). In filesystems that include on-disk . and .. entries, the filesystem driver ensures that these entries are always present. In filesystems that don't include these entries in the on-disk representation, the filesystem driver pretends that these entries are present.
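The bookkeeping described above is easy to watch on a traditional filesystem such as ext4 or tmpfs (GNU stat's %h prints the link count):

```shell
mkdir top
stat -c %h top     # 2: the entry "top" in its parent, plus top/.
mkdir top/sub1
stat -c %h top     # 3: top/sub1/.. now also points to top
mkdir top/sub2
stat -c %h top     # 4: one more .. entry
```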
Link value is 2 by default for folders
1,286,221,328,000
I know we can do that for files. What about directories? It seems that cpanel uses that a lot.
Most filesystems do not support hard links on directories. However, you can symlink directories. You can also bind mount a directory in Linux, which functions similarly to a hard link from a user's perspective. Here is an example:

mount --bind /usr /home/user/foo

This is commonly used for chroot environments: since a symlink is resolved relative to the chroot's /, a bind mount can provide access to locations outside of the chroot.
Can we use symbolic link and hard link for directories?
1,286,221,328,000
The reason why I am asking is that I'm using iwatch (not to be confused with a gadget device) to watch for filesystem events (in my case, file creation/renaming). What I cannot explain is this log:

/path/to/file.ext.filepart 0 IN_MODIFY
/path/to/file.ext.filepart 0 IN_MODIFY
/path/to/file.ext.filepart 0 IN_MODIFY
/path/to/file.ext.filepart 0 IN_MODIFY
/path/to/file.ext.filepart 0 IN_CLOSE_WRITE
/path/to/file.ext 0 IN_CREATE
/path/to/file.ext.filepart 0 IN_DELETE
/path/to/file.ext 0 IN_ATTRIB

To get it I copied a file.ext from a remote machine using WinSCP with the temporary-file-creation option turned on (so that either there was no file.ext at all, in case the transfer was terminated, or the complete file was at the destination). What confuses me is that /path/to/file.ext is only created (IN_CREATE) and its attributes modified (IN_ATTRIB; not sure which ones, but I think that's where all the magic happens). The strangest thing here is that:

file.ext is not a result of moving file.ext.filepart: there would be a different move event.
file.ext is not a result of copying file.ext.filepart: there would be a bunch of write events followed by IN_CLOSE_WRITE.

So my question is: what is happening here under the hood? How was file.ext created with the contents without an explicit rename or data copy?
$ inotifywait -m /tmp
Setting up watches.
Watches established.
/tmp/ CREATE file.ext.filepart
/tmp/ OPEN file.ext.filepart
/tmp/ MODIFY file.ext.filepart
/tmp/ CLOSE_WRITE,CLOSE file.ext.filepart
/tmp/ CREATE file.ext
/tmp/ DELETE file.ext.filepart

Transcript from running

$ echo hello >/tmp/file.ext.filepart
$ ln /tmp/file.ext.filepart /tmp/file.ext
$ rm /tmp/file.ext.filepart

Moving a file generates a move event, but creating a hard link generates the same create event as creating a new, empty file (as do mkfifo and other ways to create files).

Why does the SCP or SFTP server create a hard link then remove the temporary file rather than moving the temporary file into place? In the source code of OpenSSH (portable 6.0), in sftp-server.c, in the function process_rename, I see the following code (reformatted and simplified to illustrate the part I want to show):

if (S_ISREG(sb.st_mode)) {
    /* Race-free rename of regular files */
    if (link(oldpath, newpath) == -1) {
        if (errno == EOPNOTSUPP || errno == ENOSYS) {
            /* fs doesn't support links, so fall back to
               stat+rename.  This is racy. */
            if (stat(newpath, &st) == -1)
                rename(oldpath, newpath);
        }
    } else {
        unlink(oldpath);
    }
}

That is: try to create a hard link from the temporary file name to the desired file name, then remove the temporary file. If creating the hard link doesn't work because the OS or the filesystem doesn't support that, fall back to a different method: test whether the desired file exists, and if it doesn't, rename the temporary file.

So the point is to rename the temporary file to its final location without risking overwriting a file that may have been created while the copy was in progress. A plain rename wouldn't do, because rename overwrites the target file if it exists.
Is it possible to create a non-empty file without write_close and rename event?
1,286,221,328,000
A question about the ls command.

[root@cqcloud script]# ls /var/www/html -la
total 36
drwxr-xr-x  9 root root 4096 Aug 31 01:12 .
drwxr-xr-x  7 root root 4096 Aug 31 01:10 ..
drwxr-xr-x  2 root root 4096 Aug 26 04:07 cmd
drwxr-xr-x  5 root root 4096 Jul  3 10:07 cn.fnmili.com
drwxr-xr-x  7 root root 4096 Aug 30 11:42 internal
drwxr-xr-x  3 root root 4096 Jul 25 02:03 node
drwxr-xr-x  4 root root 4096 Jul 11 01:26 sandbox
drwxr-xr-x 13 root root 4096 Aug 26 03:45 tpshop
drwxr-xr-x  2 root root 4096 Aug 31 01:12 trash
[root@cqcloud script]# ls /var/www/html/cmd -la
total 16
drwxr-xr-x 2 root root 4096 Aug 26 04:07 .
drwxr-xr-x 9 root root 4096 Aug 31 01:12 ..
-rw-r--r-- 1 root root   52 Aug 26 04:07 .htaccess
-rw-r--r-- 1 root root   73 Aug 26 04:02 df.php

You can see that the cmd folder has a link count of 2, but it actually contains 4 entries: 2 files plus the . and .. folders. Can anyone explain why?
The link count for a directory is the number of names that directory has (this works just as for regular files).

Your cmd directory has two names:

cmd in its parent directory.
. in the directory itself.

The /var/www/html directory has nine names:

html in its parent directory.
. in itself.
.. in each of its (seven) subdirectories.

Under normal circumstances, the link count for a directory should be 2 plus the number of subdirectories that it contains. This is also true for the root directory /, even though it does not have a parent directory and therefore ought to have a link count of 1 plus the number of subdirectories. What it does have is a .. entry, which takes you back to /. So that solves that riddle; it's /.. that provides the "extra" link to /. This is the only directory whose .. entry is a link back to the directory itself.
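Consequently, on filesystems that keep . and .. entries, you can derive the subdirectory count from the link count (a sketch using GNU stat; the directory names are invented):

```shell
mkdir -p html/cmd html/internal html/node
nlink=$(stat -c %h html)
# subdirectory count = link count minus "." and the entry in the parent
echo $((nlink - 2))    # prints: 3
```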
The number of links for a folder doesn't reflect the real status?