date int64 1,220B 1,719B | question_description stringlengths 28 29.9k | accepted_answer stringlengths 12 26.4k | question_title stringlengths 14 159 |
|---|---|---|---|
1,542,760,058,000 |
This is my current partition table:
Here /dev/sda8 is the partition on which I am currently running my primary OS, Trisquel GNU/Linux (you can see its mount point is /). /dev/sda1 is the primary partition containing Windows XP.
I want to resize /dev/sda1 (size: 50GB; used: 27.97GB), i.e. I want to reduce it to 30GB (split it into 30GB + 20GB). So I first unmounted /dev/sda1.
Now when I use the Resize/Move option (from the right-click menu), the following window appears:
The problem is that it doesn't allow me to reduce the partition! Why? (Because it is a primary partition?)
And finally, how can I resize (reduce/split) /dev/sda1?
Note: GParted is running from Trisquel GNU/Linux.
|
Before you can resize any NTFS-based partition, you need to ensure all the files are pushed up to the start of the partition. This is accomplished by running the defragmentation process on the partition from within Windows XP.
It may also be useful to delete any temporary files or any other stuff you don't want from the Windows partition.
In addition, deleting the Windows swap file may also be helpful, as it is normally not moved by the defragmentation tool. You can safely delete the pagefile from Linux before resizing the partition, or you may turn off the swap file within Windows.
| Gparted : Resize (split) Primary Partition? |
1,542,760,058,000 |
I have a Debian server and I would like to increase the "root" partition from 5GB to 17GB and to diminish the "home" partition from 14GB to 2GB.
Here's the filesystem config:
root@APP05:~# df -T
Sys. fich. Type 1K-blocks Util. Disponible Uti% Monté sur
rootfs rootfs 5354080 1388664 3693444 28% /
udev devtmpfs 10240 0 10240 0% /dev
tmpfs tmpfs 205416 168 205248 1% /run
/dev/mapper/APP05-root ext4 5354080 1388664 3693444 28% /
tmpfs tmpfs 5120 0 5120 0% /run/lock
tmpfs tmpfs 410820 0 410820 0% /run/shm
/dev/sda1 ext2 233191 17794 202956 9% /boot
/dev/mapper/APP05-home ext4 14360944 166712 13464736 2% /home
I googled for some answers and read a couple of Q&As on several forums, but I'm not sure what the right commands are to achieve this. From what I understand, /dev/mapper/APP05-root is an LVM volume, so extending its size needs to be done after extending the size of rootfs, which is a filesystem.
Can you please tell me how I should proceed?
|
So, based on @wurtel's answer and the research I've done, here's the script and the steps I came up with.
1) Unmount the "home" partition
umount /dev/mapper/APP05-home
2) Resize the "home" filesystem to a size of 2G
resize2fs -p /dev/mapper/APP05-home 2G
3) Reduce the size of the "home" logical volume to 2.1G (the volume needs to be a little bit bigger than the filesystem due to filesystem overhead)
lvresize --size 2.1G /dev/mapper/APP05-home
4) Resize the filesystem to match the logical volume's size
resize2fs -p /dev/mapper/APP05-home
5) Mount back the "home" partition
mount /dev/mapper/APP05-home /home
6) Increase the size of the "root" logical volume to 17.2G
lvresize --size 17.2G /dev/mapper/APP05-root
7) Increase the "root" filesystem to a size of 17.2G
resize2fs -p /dev/mapper/APP05-root 17.2G
UPDATE: I actually replaced steps 6) and 7) with the following, in order not to have to specify the "root" size exactly but instead extend it over all the free space:
lvextend -l +100%FREE /dev/mapper/APP05-root
resize2fs -p /dev/mapper/APP05-root
This solution was also inspired by the question Repartitioning harddisk and by http://pubmem.wordpress.com/2010/09/16/how-to-resize-lvm-logical-volumes-with-ext4-as-filesystem/
UPDATE: This solution worked and the result is the following
root@APP05:~# df -h
Sys. fich. Taille Util. Dispo Uti% Monté sur
rootfs 17G 1,4G 15G 9% /
udev 10M 0 10M 0% /dev
tmpfs 201M 168K 201M 1% /run
/dev/mapper/APP05-root 17G 1,4G 15G 9% /
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 402M 0 402M 0% /run/shm
/dev/sda1 228M 18M 199M 9% /boot
/dev/mapper/APP05-home 2,1G 149M 1,9G 8% /home
Thanks again for all the answers, especially to @wurtel!
| Resize root LVM and FS in Debian 7 |
1,542,760,058,000 |
Moving data from one drive to another is slow.
Copying data on a drive to itself is slow.
Moving data around on a single drive is fast.
If I'm moving data on the SAME drive but to a different partition, shouldn't it be fast? I assumed the move would be a FAT table change and not an actual move (copy/delete) of the data on the disk. How can I make sure this is what happens?
FYI, I'm on Mac OS X and I'm dealing with two FAT32 partitions on the same external drive.
|
If I'm moving data on the SAME drive but to a different partition, shouldn't it be fast? I assumed the move would be a FAT table change...
No, because a FAT is part of a file system, and each partition contains its own filesystem. So if you move data to a different filesystem, the operating system cannot simply rearrange things in a FAT table -- there are two to consider, and they do not map onto each other arbitrarily. The destination must allocate some of its own space, and the source (in a move) frees some.
If it were just a matter of rearranging the tables, you would run into inconsistencies such as:
I have a 100 GB partition and a 2 GB partition. If moving files from one to the other just involved rearranging tables, I should be able to move a 20 GB file from the former to the latter.
I move files to a partition on a USB stick, then I unplug the stick: if moving files just involved rearranging tables, where are the files going to be when I plug the stick into another computer?
I realize the second case is not part of the context you are referring to, but the reason they amount to the same thing is that otherwise you would require another abstraction layer stored on the device. It cannot be something simply invented and juggled by the operating system, because you may move the device and/or use it under a different OS: where would the information be then?
Devices may contain metadata indicating the size, type, and offset of their partitions. Fortunately, they do not contain information about the contents of those partitions. I say fortunately because that would be bound to create more problems than it solves.
Filesystems are intended to be top level, discrete entities, not things that are part of a larger system of storage (although they may be that in some contexts).
Some devices such as SSDs may implement an optimizing feature akin to what you imply on a hardware level, however. In other words, if you move something from one partition to another on an SSD, it may only rearrange some references, in so far as that hardware is doing accounting for itself as a whole irrespective of how it has been broken into different partitions on a higher level of abstraction. This would be totally opaque to the operating system and everything else, but you may notice it as an extremely fast move. It requires that the device run some kind of firmware which presents a virtual set of block addresses to the operating system, then maps them to the physical itself, which traditional drives do not do: they present the actual physical addresses to the operating system so that it may make whatever optimal use of this that it can. Hence, file system implementations (FAT, etc.) must assume they are organizing an actual physical region of a device and there is no layer above the filesystem to try to further organize the contents of the entire device (beyond breaking it into partitions).
| Move Data from Partition to Partition on Same Drive |
1,542,760,058,000 |
I need to get to a partition that is larger than the one I'm on, to import a project that won't fit where I currently am. Also, I'm running Linux in a VM on Windows.
jack@ubuntu:~$ sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME FSTYPE SIZE MOUNTPOINT LABEL
sda 20G
├─sda1 ext4 15.7G /
├─sda2 1K
└─sda5 swap 4.3G [SWAP]
sdc 40G
└─sdc1 ext4 40G work
sdb 20G
└─sdb1 ext4 20G /mnt/disk
jack@ubuntu:~$ sudo fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders, total 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000750e0
Device Boot Start End Blocks Id System
/dev/sda1 * 2048 32956415 16477184 83 Linux
/dev/sda2 32958462 41940991 4491265 5 Extended
/dev/sda5 32958464 41940991 4491264 82 Linux swap / Solaris
|
If you'd put the / partition at the end of sda, you'd have a trivial upgrade process:
Shut the VM down, and resize the raw disk drive in the VM management interface.
Boot into single user mode, resize the last partition to extend over the new space.
Resize the filesystem.
Doing this to a partition sandwiched between two others is probably more trouble than it's worth.
Therefore, I recommend that you move part of the contents of your / partition to a new disk:
Shut the VM down, and add another virtual disk.
Size it to hold the existing contents of / that you want to move to the new disk, plus however much space you want left over. Say you're using 14 GiB of your 16 GiB /, and you want to move only /home, which is using 10 GiB. If you want double your current space, you'd make the new drive 20 GiB.
You don't want to move any core OS directories: /bin, /boot, /etc, /root, most of /usr... It's safe to move /usr/share and /usr/local to other disks.
Boot back up, preferably into single-user mode. (It will make later things easier if you don't have lots of background stuff running.)
Then, figure out which /dev node your new disk got. There are many ways to do this. It's most likely /dev/sdb, but it might be /dev/sdc if, for example, it got put after a previously-mapped optical drive.
We'll assume /dev/sdb for the purposes of explanation here.
Use parted to partition this new virtual disk:
# parted /dev/sdb
(parted) mklabel gpt
(parted) mkpart ext2 1 -1
(parted) quit
That takes over the entire virtual disk. This will allow you to use the much simpler resize process above if you run out of disk capacity again in the future.
If you plan on moving lots of unrelated directories (e.g. /home, /var and /usr/local) it's best to create a separate virtual disk for each, rather than partition one big disk. Partitioning is a kind of hack we tolerate in the world of real disks. When you're dealing with VMs, you're freed from the costs of multiple independent hard disks.
Create and mount the new filesystem(s) in temporary locations. I typically call them things like /mnt/newhome:
# mkfs.ext4 /dev/sdb1
# mkdir -m 400 /mnt/newhome
# mount /dev/sdb1 /mnt/newhome
Copy the current contents of the tree you want to transplant, being sure to copy permissions. There are several ways to do this:
# cd /home
# find . -print | cpio -pd /mnt/newhome
OR
# cp -a . /mnt/newhome
OR
# rsync -a ./ /mnt/newhome/
Check that /mnt/newhome has plausible contents. Does df -h show approximately the same value as du -sh /home, for example?
Boot into single-user mode, if you aren't already.
Move the old filesystem out of the way, then lay the new one over it:
# cd /
# mv home oldhome
# mkdir -m 400 home
# umount /mnt/newhome
# mount /dev/sdb1 /home
# chmod 755 home
# chown root.root home
The last two commands are just examples. Give the new mount point the same owner, group, and permissions as the old one. (Don't count on the copy command to get the permissions on this top-level directory right.)
Say exit at the single-user mode prompt to continue booting into multi-user mode. (Or, init 5, if the normal runlevel is 5, for example.) Check that everything seems to be working with the new filesystem.
(Don't reboot to do this test! The new filesystem won't automatically mount yet.)
When you're satisfied that you've successfully moved that partition, adjust /etc/fstab to point to the new partition.
(This is way outside the scope of this answer. The exact details vary even between Linuxes, and vary even more broadly among *ix in general.)
Reboot normally. Check again. Does it all still work? Be certain you've got the new filesystem mounted, not the old one.
When you're certain it's all moved and mounting correctly, free up the space taken by the old copy: rm -rf /oldhome.
If you have multiple filesystems to move, GOTO 2. :) (Or step 1, if you didn't add all the new virtual disks at once.)
If you're using a VM system that knows how to set up a sparse virtual disk (e.g. VMware) you don't have to worry about wasted space. Just follow its normal "shrink" process to reclaim the now-slack space.
There are other refinements. For example, you might want to give something like -L /home to the mkfs.ext4 command if your OS uses disk labels in /etc/fstab instead of partition names or UUIDs.
| How to move to a different drive or partition? |
1,542,760,058,000 |
I first asked this question on SuperUser.com but got no responses. I have found how to align the partitions of my SSD using fdisk (SSD article on the Gentoo Wiki), but I haven't been able to find any resources about aligning the partitions of an HDD. Is this practice necessary, or should I just let something like GParted align them by default? If it's something I should do for the HDD as well, where can I find a resource for the values to use for the sector and head portions of the command?
|
If you are using the old fdisk program these days, always use the -u and -c options (-uc), which make it display sectors instead of cylinders and disable MS-DOS compatibility mode.
In my opinion, simply make all your partitions start and end on 1 MiB boundaries, i.e. make the starting sector evenly divisible by 2048. By aligning everything to the nearest 1 MiB, you are aligned for drives with both 512-byte and 4096-byte physical sectors, and you are also properly aligned for typical RAID(5,6) chunk sizes of 32 KiB, 64 KiB, 512 KiB, and 1 MiB.
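A quick way to verify alignment, sketched here on a throwaway image file (the name and sizes are arbitrary), is to dump the partition table with sfdisk and check that every start sector is divisible by 2048 (2048 x 512 bytes = 1 MiB):

```shell
# Scratch "disk" with one partition starting at sector 2048
truncate -s 64M demo.img
echo 'start=2048, size=65536, type=83' | sfdisk -q demo.img
# Dump the table; start= values divisible by 2048 are 1 MiB-aligned
sfdisk -d demo.img | grep 'start='
```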
| Align Partition Of HDD Using fdisk? |
1,542,760,058,000 |
I need to increase the logical volume for the /var directory; the maximum size of /var right now is 10GB, and I need to make it 50GB. I have a CentOS 6 server.
The output of df -h is:
Filesystem Size Used Avail Use% Mounted on
rootfs 10G 10G 0 100% /
/dev/root 10G 10G 0 100% /
none 991M 312K 990M 1% /dev
/dev/sda2 455G 3.6G 429G 1% /home
tmpfs 991M 0 991M 0% /dev/shm
/dev/root 10G 10G 0 100% /var/named/chroot/etc/named
/dev/root 10G 10G 0 100% /var/named/chroot/var/named
/dev/root 10G 10G 0 100% /var/named/chroot/etc/named.conf
/dev/root 10G 10G 0 100% /var/named/chroot/etc/named.rfc1912.zones
/dev/root 10G 10G 0 100% /var/named/chroot/etc/rndc.key
/dev/root 10G 10G 0 100% /var/named/chroot/usr/lib64/bind
/dev/root 10G 10G 0 100% /var/named/chroot/etc/named.iscdlv.key
/dev/root 10G 10G 0 100% /var/named/chroot/etc/named.root.key
I followed this tutorial. In order to increase the volume you have to do:
lvextend -L +40G /Path/To/var
My problem is simple: I don't know where my var volume is located.
If I do lvextend -L +40G /dev/root/var I get Volume group "root" not found
If I do lvextend -L +40G /dev/var I get
Path required for Logical Volume "var"
Please provide a volume group name
Run `lvextend --help' for more information.
I tried every possible path and still can't find the right path to var, so where is my var located?
EDIT
If I do lvextend -L +40G /dev/root I get
Path required for Logical Volume "root"
Please provide a volume group name
Run `lvextend --help' for more information.
pvs gives no output at all.
lvs gives this output: No volume groups found
|
As I expected from the name /dev/root, you're not using LVM. You have a few options:
Reinstall
Hope that your partitioning scheme allows you to grow the root partition with (g)parted.
Create a new partition as an LVM physical volume, create a VG and an LV for /var, and move /var over
Clean up the current system so you don't need the space
Options 2 and 3 are best done while booted from a rescue CD or rescue netboot.
| increasing a logical volume |
1,542,760,058,000 |
Is there a way to hide an ext4 partition from, e.g., Thunar?
And from the open file/save dialog; I think they come from the same source.
|
Assuming you mean hiding an unmounted partition from thunar, add a row in your /etc/fstab, using none as the mount point and fs type columns ;)
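For example, assuming the partition to hide is /dev/sda5 (a made-up device name; substitute your own device or a UUID), the row could look like the one below. The noauto option is my addition, so nothing tries to mount the entry at boot:

```
/dev/sda5  none  none  noauto  0  0
```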
| Hidden ext4 partition? |
1,542,760,058,000 |
I have a USB stick (actually a disk) and it is 5TB in size.
I did cp rhel-8.10-x86_64-dvd.iso /dev/sde to it, and it then works to boot from and install Red Hat Linux.
What I would like to do is copy other various data onto this disk but maintain the bootable rhel-8.10 install functionality. Is this possible?
An lsblk after copying the RHEL ISO to it shows
SIZE FSTYPE NAME
4.6T iso9660 /dev/sde
13.3G iso9660 /dev/sde1
9.5M vfat /dev/sde2
Is there a way to add an XFS partition, such that it could be /dev/sde3 and mountable to make use of the remaining 4+ TB of space, while still maintaining the bootable functionality to install RHEL?
|
You can simply do that with fdisk; you just need to tell it to use the correct partition table on the USB drive. Distribution ISO images are so-called isohybrid images -- a mix of iso9660 and multiple partition tables, to make sure the system will boot everywhere. Simply use
sudo fdisk --type=dos --wipe=never /dev/sde
and add a new partition normally (press n and then answer the questions about partition type and size).
--type=dos will tell fdisk to use only the DOS partition table on the drive and --wipe=never will tell it to not wipe the iso9660 format when writing the changes.
Example of the result with CentOS ISO and a newly added partition formatted to ext4:
$ lsblk -f /dev/sdb
NAME FSTYPE FSVER LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
sdb iso9660 Joliet Extension CentOS-Stream-9-BaseOS-x86_64 2024-05-27-03-57-45-00
├─sdb1 iso9660 Joliet Extension CentOS-Stream-9-BaseOS-x86_64 2024-05-27-03-57-45-00
├─sdb2 vfat FAT12 ANACONDA 4B96-9789
└─sdb3 ext4 1.0 321c67bd-817e-4274-a4b1-01103c25a7b9
| add partition and data onto usb stick already having bootable iso9660 |
1,542,760,058,000 |
I'm helping a friend with his small, self-hosted Ubuntu server. He wanted to install a larger hard drive, so I used Clonezilla to clone the HDD to a larger SSD so that the server doesn't have to be set up from scratch. This worked great, but of course the operating system doesn't use the new storage space just like that. I tried using a bootable GParted USB stick to enlarge the operating system partition from 'outside', but somehow the space seen by the operating system remains unchanged.
I uploaded two screenshots to imgur: one from GParted (I enlarged the /dev/sda3 partition) and one from inside Ubuntu, using df -h --total, and for some strange reason it shows completely different partitions. Can you help me and tell me how to enlarge the partition for the Ubuntu server?
|
On that system, /dev/sda3 is an LVM PV. After enlarging it (and making sure the PV itself was grown, e.g. with pvresize /dev/sda3; GParted usually does this for you), you'll need to extend the LVs too. There's only one LV in your df screenshot: /dev/mapper/ubuntu--vg-ubuntu--lv. If you want to give it all the free space in that VG, then run lvextend -r -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv as root. If you want to extend that LV by only some smaller amount, then run lvextend -r -L newsize /dev/mapper/ubuntu--vg-ubuntu--lv instead, substituting the actual new size you want for newsize.
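The enlarge-then-grow sequence itself can be tried out safely without LVM, on a throwaway file image (names and sizes invented for the sketch):

```shell
# 100 MiB "disk" with an ext4 filesystem filling it
truncate -s 100M demo.img
mkfs.ext4 -q -F demo.img
# Enlarge the "disk" -- the stand-in for growing the partition/PV
truncate -s 200M demo.img
# Grow the filesystem into the new space (lvextend -r does this step for you)
e2fsck -fp demo.img
resize2fs demo.img
```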
| Extend hard drive on Ubuntu (Server) |
1,542,760,058,000 |
I'm encountering some odd behavior while trying to install a bootloader on a disk image. Here's the process I followed:
$ dd if=/dev/zero of=test.img status=progress bs=200M count=1
1+0 records in
1+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.190117 s, 1.1 GB/s
$ mkfs.ext2 test.img
mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 204800 1k blocks and 51200 inodes
Filesystem UUID: f6442813-7b8c-4636-b69e-334696e0840b
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729
Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
$ sudo mount test.img mount-point/ -o loop
$ fdisk -l test.img
Disk test.img: 200 MiB, 209715200 bytes, 409600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
$ sudo extlinux -i mount-point/
mount-point/ is device /dev/loop0
Warning: unable to obtain device geometry (defaulting to 64 heads, 32 sectors)
(on hard disks, this is usually harmless.)
$ fdisk -l test.img
Disk test.img: 200 MiB, 209715200 bytes, 409600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x20ac7dda
Device Boot Start End Sectors Size Id Type
test.img1 3224498923 3657370039 432871117 206.4G 7 HPFS/NTFS/exFAT
test.img2 3272020941 5225480974 1953460034 931.5G 16 Hidden FAT16
test.img3 0 0 0 0B 6f unknown
test.img4 50200576 974536369 924335794 440.8G 0 Empty
Partition table entries are not in disk order.
I can't understand why the extlinux -i command would create new partitions on the disk image. I suspect it might be modifying some filesystem metadata, but I'd appreciate some clarification on the details. Additionally, is it possible to install Syslinux on an unpartitioned disk image?
|
The MBR partition table is a very simple structure at the end of the very first 512-byte block of the disk. It contains no checksums, hashes or other error-protection features.
By running fdisk -l against the filesystem/partition image you've created, you are effectively forcing it to misinterpret its first block (the Partition Boot Record, or PBR for short) as a MBR. This results in nonsensical output, as you demonstrated.
If I recall correctly, the PBR created by extlinux contains boot code in the locations occupied by the actual partition table in an MBR. So fdisk would be reading parts of the extlinux PBR boot code and trying to display them as MBR contents. It is no wonder the output makes no sense!
| syslinux creating unexpected partitions on disk image |
1,542,760,058,000 |
This question relates to a Windows/Linux dual-boot system on a DOS-partitioned SSD (i.e. one without GPT/UEFI). Originally, the computer only had Windows 10 on an HDD. Then I managed to transfer this system to the SSD, resize the partitions and install Xubuntu 20.04 alongside Windows 10, and it all worked fine.
There has always been an EFI partition on the drive. I don't know what it is good for, since this is not a UEFI device, but I did not change this partition. There is no swap partition on this system.
I wanted to create more space for my Linux system partition. To be more flexible in case the requirements change later, I moved the Windows home partition between the two Linux partitions. The partition layout looks like this now:
NAME FSTYPE PARTTYPE PARTFLAGS LABEL
sda
├─sda1 ntfs 0x7 0x80 System-reserviert
├─sda2 ntfs 0x7 SSD-Windows-Sys
├─sda3 vfat 0xef SSD-EFI
├─sda4 0x5
├─sda6 ext4 0x83 SSD-Linux-Sys
├─sda5 ntfs 0x7 SSD-Windows-Home
└─sda7 ext4 0x83 SSD-Linux-Home
(sda-numbers not strictly ascending, the partitions are shown in SSD-storage order sda6-sda5-sda7.)
I did not alter any of partitions 1 to 3. I managed to keep the original UUIDs and LABELs by using gparted to move and resize the ntfs partition SSD-Windows-Home. There is no equivalent to tune2fs -U <UUID> for vfat and ntfs, and I did not want to exchange the serial numbers by fiddling with dd as proposed in this discussion; that is why I did it by resizing and moving the ntfs partition SSD-Windows-Home.
For the Linux partitions, I used gparted to create a new ext4 partition at the end of the SSD and rsync to copy the contents from the old to the new partition; then I deleted the old one and resized the other partitions to fill the gaps. Finally I used tune2fs to achieve the state shown above, in particular keeping the same LABELs and UUIDs for all partitions as before all this.
During the partitioning work, I encountered a warning that boot problems might show up after my changes. I did not care, since the LABELs and the UUIDs for each partition remained the same. But this was a bold assumption, as I noticed when I tried to reboot:
The boot process stopped at grub rescue> rather than at the GRUB2 menu asking me which operating system to boot.
I succeeded to boot the computer by issuing these commands:
grub rescue> set prefix=(hd0,6)/boot/grub
grub rescue> set root=(hd0,6)/
grub rescue> insmod linux
grub rescue> insmod normal
grub rescue> normal
Then the GRUB2 menu was shown and I could select between Linux and Windows 10. Both operating systems worked as before all my partition changes (of course I had to shut down the computer in between and go through grub rescue> again).
I was advised to run the following command after booting into the Linux system in order to permanently recover from the grub rescue problem:
$ LC_ALL=C sudo grub-install --target=/boot/grub/i386-pc /dev/sda
grub-install: error: /usr/lib/grub/i386-pc/modinfo.sh doesn't exist. Please specify --target or --directory.
$
There is a file /boot/grub/i386-pc/modinfo.sh (and there are two more of them in /boot/grub/x86_64-efi and in /usr/lib/grub/x86_64-efi).
Therefore I have tried
$ LC_ALL=C sudo grub-install --target=/boot/grub/i386-pc /dev/sda6
grub-install: error: /usr/lib/grub/boot/grub/i386-pc/modinfo.sh doesn't exist. Please specify --target or --directory.
$
It searches in the wrong directory. Therefore I have added --directory=/boot/grub/i386-pc:
$ LC_ALL=C sudo grub-install --directory=/boot/grub/i386-pc /dev/sda6
Installing for i386-pc platform.
grub-install: error: cannot open `/boot/grub/i386-pc/moddep.lst': No such file or directory.
$ ls -l /boot/grub/i386-pc/moddep.lst
-rw-r--r-- 1 root root 5416 2019-12-10 10:34 /boot/grub/i386-pc/moddep.lst
$
As you can see from the ls command, this error message is definitely wrong, because /boot/grub/i386-pc/moddep.lst exists and root can also access it! Now I'm at my wits' end.
Since the computer was able to start with the grub rescue commands (rather than from a live stick and using chroot), it shouldn't be that difficult to permanently apply exactly the information I entered, but without having to enter it on each boot.
How do you do this correctly?
|
The presence of /usr/lib/grub/x86_64-efi and /usr/lib/grub/x86_64-efi-signed indicates that Xubuntu was installed as a UEFI-booting system, suggesting that your system is in fact UEFI-capable.
The fact that your sda disk includes a Windows installation and is partitioned in MBR style indicates your Windows must boot using the classic BIOS style. To achieve the ability to easily choose the OS to boot from the GRUB menu, you would want Xubuntu to boot in legacy BIOS style too. But it appears the packages required for legacy-style GRUB are not currently installed in your system.
You said grub-efi is not installed on your system: such a package does not exist on Debian/Ubuntu. The UEFI equivalent of grub-pc is named grub-efi-amd64. The name grub-pc is legacy from when GRUB used to be an x86/BIOS-only bootloader; now it supports many other architectures and firmware types.
The commands suggested in the comments had some typos: the --target option of grub-install does not take a pathname, but a GRUB platform identifier: in your case, i386-pc.
First, install the legacy BIOS support packages for GRUB:
sudo apt install grub-pc grub-pc-bin
Then, install GRUB into the Master Boot Record of the system:
sudo grub-install --target=i386-pc /dev/sda
Check the contents of /etc/default/grub. If there is a line
GRUB_DISABLE_OS_PROBER=true
you may want to change it to false, make sure the os-prober package is installed, and then run sudo update-grub to create a GRUB configuration that includes both your OSs.
Then mount sda3 and inspect its contents:
sudo mount /dev/sda3 /mnt
sudo ls -l /mnt
If it includes an EFI directory, rename it to something else (in case your system firmware has uncontrollable preference towards UEFI booting):
sudo mv /mnt/EFI /mnt/NO-EFI
sudo umount /mnt
Now it's time to reboot. After verifying that you can now successfully boot both your operating systems, you can remove the UEFI boot support packages with:
sudo apt purge grub-efi-amd64 grub-efi-amd64-bin grub-efi-amd64-signed efibootmgr shim-signed shim-helpers-amd64-signed shim-signed-common
| How do I permanently recover from grub rescue? |
1,542,760,058,000 |
Due to an accident specifying a block device, the first 32GB of a 4TB ext4 filesystem on a SATA disk was overwritten by the dd command with the contents of a USB flash drive.
fdisk -l /dev/sda reports the following:
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xeaad24fe
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 8388607 8386560 4G 6 FAT16
/dev/sda2 8388608 73924607 65536000 31.3G 83 Linux
parted shows the following:
GNU Parted 3.5
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: ATA ST4000NM0165 (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 4295MB 4294MB primary ext4
2 4295MB 37.8GB 33.6GB primary
Side Note: I'm not sure why parted thinks the file system is ext4 while fdisk shows it as FAT16, but it could possibly be related to the fsck that I tried to run before understanding what had happened. I attempted to run "e2fsck /dev/sda1", and answered yes to the following question:
Superblock has an invalid journal (inode 8).
Clear<y>? yes
It then came back with the statement that the partition size didn't match the physical size, and it was at that point I stopped without proceeding further. (I apologize I don't have the full text of my aborted attempt with fsck. I retyped the above from memory, and I only answered yes once.)
This is what used to be on the disk:
This disk was originally auto-partitioned+formatted by the installer of Ubuntu 18.04. It was an ext4 filesystem, with a single partition, sda1, that took the entire drive. There is a separate, NVME drive that is the system partition, and this disk was configured as a secondary data disk. The parameters will be whatever the Ubuntu 18.04 installer would have selected as the defaults in this instance.
I understand any data in the first 32GB of this disk is irretrievably lost. But the data on this disk is critically important. Is there any way to recover what was on the remaining 99% of the drive?
Can someone recommend steps that would allow me to recreate the original filesystem?
Edit: gdisk -l /dev/sda shows the following:
GPT fdisk (gdisk) version 1.0.5
Caution: invalid main GPT header, but valid backup; regenerating main header
from backup!
Warning: Invalid CRC on main header data; loaded backup partition table.
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.
Warning! Main partition table CRC mismatch! Loaded backup partition table
instead of main partition table!
Warning! One or more CRCs don't match. You should repair the disk!
Main header: ERROR
Backup header: OK
Main partition table: ERROR
Backup partition table: OK
Partition table scan:
MBR: MBR only
BSD: not present
APM: not present
GPT: damaged
Found valid MBR and corrupt GPT. Which do you want to use? (Using the
GPT MAY permit recovery of GPT data.)
1 - MBR
2 - GPT
3 - Create blank GPT
|
You should make a full dd copy of the partition to another device first, just for safekeeping in case something goes wrong.
In general, e2fsck should be able to recover from such an issue, subject to loss of the overwritten metadata. The superblock, root directory, journal, and other metadata would be lost. However, the superblock and other critical metadata have multiple backups later in the partition, so the majority of the data should be intact.
You might need to specify a backup superblock location, like e2fsck -fy -B 4096 -b 11239424 /dev/sda2. The backups are stored in groups numbered 3^n, 5^n, 7^n, with 128MiB group size, so if you clobbered up to 32 GiB that is 256 groups and the next highest group number is 7x7x7 = 343, so the backup superblock is in block 343x32768 = 11239424.
It will put everything into the lost+found directory, so you will have to identify files/directories by their content, age, etc.
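The group arithmetic above can be recomputed as a quick sanity check (a toy sketch assuming the mke2fs defaults of 4 KiB blocks, 32768 blocks per group and the sparse_super feature; in practice mke2fs -n /dev/sda2 would print the real backup superblock locations for the existing geometry without touching the disk):

```shell
# Backup superblocks live in groups 0, 1 and powers of 3, 5 and 7
# (the sparse_super default). Find the first backup group past the
# wiped region, then its block address.
wiped=$(( 32 * 1073741824 / (4096 * 32768) ))   # 32 GiB wiped => 256 groups
best=999999
for base in 3 5 7; do
  g=$base
  while [ "$g" -lt 32768 ]; do                  # a 4 TiB fs has ~32768 groups
    [ "$g" -ge "$wiped" ] && [ "$g" -lt "$best" ] && best=$g
    g=$(( g * base ))
  done
done
echo "first intact backup: group $best, block $(( best * 32768 ))"
# => first intact backup: group 343, block 11239424
```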
| First 32GB of a 4TB ext4 filesystem overwritten. How to recover? |
1,542,760,058,000 |
I've got one physical volume and one volume group:
/dev/mapper/nvme0n1p3_crypt
VG Name ced-vg
I've got five logical volumes:
/dev/ced-vg/root
/dev/ced-vg/var
/dev/ced-vg/swap_1
/dev/ced-vg/tmp
/dev/ced-vg/home
I want to add a sixth logical volume on that same, unique, physical volume (I'm not adding any new SSD), which could maybe look like this:
/dev/ced-vg/vartmp (I don't know if the name is correct)
Which I'll then use for /var/tmp/ to have its own logical volume.
I don't know how to create that. I don't understand if these logical volumes were created, once, when the OS was installed or if they're (re-)created at each boot, for example.
Is it as simple as just adding an entry for /var/tmp in /etc/fstab?
Or do I need to first "create", once, the new logical volume and only then add the entry to /etc/fstab? Or something else?
Do I need to manually create a filesystem for that new /var/tmp I plan to add? Or shall this be done automatically?
P.S: This is on Debian but I take it the procedure is similar on many distros.
|
You need to create a new logical volume of size 32GB with name "vartmp" using
lvcreate -L 32G -n vartmp ced-vg
That's everything about this that's LVM-specific! Then you can format it using your favorite file system, e.g.
mkfs.xfs /dev/ced-vg/vartmp
and add it to your /etc/fstab, so that it gets picked up on next boot, for example
/dev/ced-vg/vartmp /var/tmp xfs noatime 0 0
You might want to check the free space of your volume group first, using vgdisplay.
| How to add a Logical Volume (LV) for /var/tmp? |
1,542,760,058,000 |
I have unplugged an older 500 GB HD NTFS external drive without unmounting it first and then it couldn't be mounted in Linux. I have tried on different machines but the same error occurred. I was thinking to apply a "repair" procedure with the partition manager but before that I gave it a try on a Windows machine. It gave some error but then the files were accessed. It could be detached/unmounted and then it was fine in Linux too.
I guess Windows has automatically fixed the error related to the drive not being properly unmounted/disconnected, given Windows might have native better integration/interaction with the NTFS format.
I remember I had a similar problem with USB-stick drives on a Mac: when the drive was not properly disconnected on the Mac or Linux, it kept working on Linux but not on the Mac; I had to re-plug and properly disconnect it in Linux in order to make it accessible on the Mac again. But this is the first time I have a drive better accessed by another system than Linux.
I might not have a Windows machine at hand when this happens again. Is there a simple way to get the same result - an easy fix like that - in Linux?
|
Easy fix? Don't use NTFS, but instead use a Linux-native filesystem. Useful fix? No, none.
I see in a comment that you "have a few external drives that are supposed to be used on Windows as well as Mac". On that basis you will presumably want to continue using NTFS, and therefore you will need to keep that Windows box around to fix the occasional errors you'll get using that filesystem on non-Windows systems.
If the causal trigger is that you forgot to unmount the filesystem before unplugging the external drive then maybe configuring an automounter with a short mount lifetime might be beneficial.
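As a sketch of that automounter idea, a single /etc/fstab line using systemd's automount units would mount the drive on first access and drop it again after 30 idle seconds (the UUID and mountpoint here are placeholders):

```
UUID=XXXX-XXXX  /mnt/external  ntfs-3g  noauto,x-systemd.automount,x-systemd.idle-timeout=30  0  0
```

With the filesystem unmounted most of the time, yanking the cable is far less likely to leave it dirty.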
| How to fix external drive in Linux the way Windows does automatically |
1,542,760,058,000 |
ls -F classifies ALL files on my mounted partitions as an executable (it appends an asterisk to the end of the file name).
The same command performs correctly in other places like my home folder, so I have no idea what is making it misbehave.
~
❯ cd /tmp && mkdir somefolder && cd somefolder
/tmp/somefolder
❯ touch file{0..3}
/tmp/somefolder
❯ ls -F
file0 file1 file2 file3
/tmp/somefolder
❯ cd /mnt/sdd2 && mv /tmp/somefolder . && cd somefolder
/mnt/sdd2/somefolder
❯ ls -F
file0* file1* file2* file3*
Anyone know why this is happening?
|
Not every filesystem type supports file permissions. I suspect yours is one that doesn't, such as FAT32. On these kinds of filesystems, Linux treats everything as executable by default, since executables lacking the execute bit break things, but non-executables having the execute bit doesn't. If you don't want any files on it to be executable, you can achieve that by mounting it with the fmask=0111 mount option. (Or fmask=0155 or fmask=0177 if you don't want group/other to have write or any access to any files.) If you want some files to have the execute bit but not others, then you have to wipe the drive and reformat it with a filesystem type that does support file permissions, such as ext4 (but note that doing so will prevent other operating systems such as Windows from accessing it without third-party drivers).
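For example, a minimal /etc/fstab sketch for the drive in the question (assuming it really is a FAT variant; adjust the device, mountpoint and masks to taste):

```
/dev/sdd2  /mnt/sdd2  vfat  fmask=0111,dmask=0022  0  0
```

Files then show up as rw-rw-rw- rather than rwxrwxrwx, while dmask=0022 keeps directories traversable (rwxr-xr-x).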
| ls -F/--classify marks every file as an executable |
1,542,760,058,000 |
I have a laptop with Ubuntu 22.04.2 LTS as an OS and 130 GB SSD and 1 TB HDD storage. I'm looking for partitions to speed up my laptop efficiently. I mainly use my laptop to program stuff in Python and C++ (visual computing) and additionally, some word processing and internet surfing. More systematically, I guess my stuff can be categorized as follows:
IDEs
Compiled libraries
Other low level programming utilities
Programming projects
Docker
data such as images or dataframes
pdf and libreoffice files
internet browser
How should I organize the partitions of my laptop?
I thought of the following
Partitions on SDD
Booting
Actively used compiled libraries, programming projects, IDE and Docker
Actively used data
Partitions on HDD
/home with pdf and libreoffice files
Other applications not required to run fast, e.g. libreoffice and firefox
Not actively used libraries and data
Does this partitioning make sense?
|
I'm looking for partitions to speed up my laptop efficiently.
Partitions don't make anything faster, in themselves.
Aside from the inevitable partitions needed to boot, there's no reason whatsoever to use partitions to manage your data – they are an unnecessarily rigid complication of what you actually want to achieve: store some data on your SSD and your hard drive in a way that reflects your use case.
Does this partitioning make sense?
No, sadly.
Your use case describes three kinds of data:
Data necessary for booting
Data that needs to be accessed fast and frequently
Data that needs to be accessed infrequently
Generally, you can't easily tell which system libraries belong in group 2 or 3 just based on their location – your compiler and all the libraries you use are in exactly the same places as the programs and libraries you use infrequently.
Thus, no partitioning/volume management scheme can help you with this!
First off: I don't know your budget. But: a 1 TB SATA SSD costs you about 50 {€,$} (SATA, much faster M.2/NVMe SSDs run roughly the same price, typically), 512 GB can be had for 25€. So maybe this issue is actually better solved by throwing money at the problem and replacing your SSD by a larger one, and reducing your HDD to storage for cat pictures; or by replacing your HDD with another SSD and using both SSDs with LVM as one large SSD with striping (do not forget to do backups regularly).
Hard drives and laptops are kind of a rare combination these days – both for power and for reliability reasons, not even considering the speed aspect.
Now, if your SSD is already a rather fast one (compared to the speeds of the hard drive), and you have no budget to buy another SSD:
You would want to set up your SSD to become a cache in front of your slower but larger HDD. Linux can do that, out of the box: bcache is the mechanism with which you can use a fast block device (e.g., the last one of the four partitions on your SSD, the first still being /boot/efi's vfat/uefisys, the second classically being /boot and the third being SWAP/hibernation data, which you definitely want on a laptop) to "buffer" away the data about to be written to the slower HDD, and to keep frequently read data around, so that it doesn't have to be loaded from HDD every time.
The idea is that instead of in which directory something lies (/home, /usr/lib, /home/oldprojects or something), you let the system detect and manage itself which data are needed frequently.
And then, on the thus "sped up" block device, you would not do partitions (there's no reason for partitions); you would just set up an LVM physical volume, with an LVM volume group on it. You could just as well have a single partition on that for all your data (except for /boot), as putting things into different filesystems is mostly useful for block-device backup purposes, and not measurably advantageous for speed these days. However, making the (cached) system an LVM physical volume means a lot more flexibility, at no performance cost. I strongly recommend it, universally. You will not miss not having to deal with partitions any more :)
So, in similarity to the scheme proposed in the guide Installing Ubuntu 20.04 with bcache support:
Let the SSD be /dev/ssd, with partitions /dev/ssd1, ssd2, …, ssdN
Let the HDD be /dev/hdd
SSD:
/dev/ssd1: /boot/efi, VFAT, 1GB for UEFI (plenty large for that)
/dev/ssd2: /boot, ext4, 4GB for never-worry-about-this-again
/dev/ssd3: swap partition, swap, 2× RAM size, for hibernation
/dev/ssd4: cache for /dev/bcache0, occupying the rest of your SSD
HDD
/dev/hdd1: backing storage for /dev/bcache0, whole disk
/dev/bcache0: LVM physical volume (the only one in a new Volume Group, let's call it vg0)
vg0: LVM volume group containing all your data
/dev/vg0/root: System volume for /; ext4 or XFS (or whatever you like), whatever you need GB (can trivially be grown while system is running later on, can be as large as the whole hard drive)
That seems a bit convoluted, but it's really just that you need to go through the bcache layer to get to use your SSD partition as cache for what you store on the hard drive, and the LVM volume group is just there to not shoot you in the foot later on or during backups or when you replace your failing HDD.
The guide linked above says (and I believe that still to be true) that Ubuntu needs to be tricked a bit during installation so that it includes the support for bcache in its boot image. But it really seems to be rather benign.
| Partitions for a programmer computer |
1,542,760,058,000 |
I am not new to Linux but I am new to Linux Mint since I switched from Ubuntu due to reasons out of scope of this question (snapd, update borked my computer)
I selected for FDE (Full Disk Encryption) during the graphical installation process. I then saw the option to encrypt the home folder and I clicked that as well.
I then remembered that from the Ubuntu FDE documentation that only the /home partition is encrypted. However the Mint documentation is much less clearer on that regard:
If you are new to Linux use home directory encryption instead (you can select it later during the installation). source
When I checked the Known Issues page, the wording seemed to imply that both were separate
Benchmarks have demonstrated that, in most cases, home directory encryption is slower than full disk encryption. source
Can they be both enabled at once or will encrypting the home folder remove all other FDE on the system?
|
I have answered the question by myself.
I emailed Clement (project leader of Mint) and this was his response:
You can have one or the other, or both, or none at all. FDE is faster and safer (it doesn't just encrypt your home, but also the entire HDD including swap, temporary files which might be left on the HDD..etc.).
HDE is more convenient since it's tied to your login password and doesn't require entering an extra password. It provided additional security in the past since it unmounted the decrypted home on logout, but this is no longer the case, so if you're using FDE, you don't really need HDE anymore.
In terms of performance, on modern specs, both are pretty good and not noticeable.
Regards,
Clement Lefebvre
Linux Mint
| (Linux Mint) Does selecting "Encrypt Home Folder" after you chose Full Disk Encryption only encrypt the home folder? |
1,542,760,058,000 |
So I wanted to do some performance tests with encrypted and normal data storage on my embedded device.
That is not what I expected to see at all!
Can you please explain to me what just happened? Why did the dd command report 1843200+0 records while df -h shows the filesystem disk space usage as 13TB?
Let me explain what I have done. This is my workflow:
dd if=/dev/urandom of=enc_per_test.img bs=512 count=2097152
dd if=/dev/urandom of=normal_per_test.img bs=512 count=2097152
And receive 2 images 1GB each - as I predicted.
losetup /dev/loop1 enc_per_test.img
losetup /dev/loop2 normal_per_test.img
After that I perform:
dmsetup -v create enc_per_test --table "0 $(blockdev --getsz /dev/loop1) crypt <crypt_setup> 0 /dev/loop1 0 1 sector_size:512"
mkfs.ext4 /dev/mapper/enc_per_test
mkdir /mnt/enc_per_test
mount -t ext4 /dev/mapper/enc_per_test /mnt/enc_per_test/
As I expected, df -h showed the mounted enc_per_test:
Filesystem                Size  Used  Avail  Use%  Mounted on
/dev/mapper/enc_per_test  976M  2.6M  907M     1%  /mnt/enc_per_test
I clear cache:
echo 3 > /proc/sys/vm/drop_caches
And finally I performed the dd command to fill up the enc_per_test:
time dd if=/tmp/random of=/dev/mapper/enc_per_test conv=fsync
1843200+0 records in
1843200+0 records out
943718400 bytes (944 MB, 900 MiB) copied, 152.098 s, 6.2 MB/s
So I was like, ok that's fine. This is what I wanted. Let's see how it looks in df -h:
Filesystem                Size  Used  Avail  Use%  Mounted on
/dev/mapper/enc_per_test   13T   13T      0  100%  /mnt/enc_per_test
What happened here? Why does df -h show 13TB of data storage? It is not even possible, because my device has only ~250GB of hard drive.
Thank you for any answer and hint!
|
You mounted a filesystem existing in /dev/mapper/enc_per_test (the device) to /mnt/enc_per_test/ (the mountpoint).
Then with dd you chose to write to the device, not to a regular file inside the filesystem (i.e. under the mountpoint, e.g. of=/mnt/enc_per_test/blob). Your dd overwrote the majority of the filesystem with the content of /tmp/random while the filesystem was mounted.
df queries mounted filesystems. For a given filesystem, fields like Size and Used are what the filesystem knows and reports about itself. Probably some data, metadata and information about the filesystem in question was still available as old values in the cache, so it seemed sane enough; but apparently something new had to be read from the device. Some part(s) of the garbage you had written was read, hence the surprising values.
The statement in the title is wrong. It's not true that "dd command created 13TB of data". 13T appeared only because df got some random values from what used to be a filesystem.
| Performance test went wrong and the dd command created 13TB of data on /dev/mapper/device. Why system didn't crash? HDD-250GB |
1,542,760,058,000 |
size: 58 GB
contents: unknown
device: /dev/sda4
partition type: basic data
When I ran the format, I selected the type which I expected result in type of Linux.
I found a similar issue from 2019, but fdisk wouldn't run the solution.
The file type may be suitable, but I haven't been able to run 'mkdir' on the partition.
'Files' doesn't see the partition under "Other Locations".
Suggestions welcome.
|
You can't mkdir on a partition.
You need to format it (that's not the same as assigning a partition type) and mount it first.
| /dev/sda4/ on USB stick isn't available after formatting. 'Discs' shows the following: |
1,542,760,058,000 |
I am partitioning an external 1TB HDD for a small embedded Linux system. I want to encrypt the swap partition. According to the cryptsetep FAQ, you need to use kernel device names (/dev/sda, etc) in /etc/crypttab:
Specifying it directly by UUID does not work, unfortunately, as the UUID
is part of the swap signature and that is not visible from the outside
due to the encryption and in addition changes on each reboot.
This may become a problem if I attach/rearrange drives later and the device name changes. For example, say the swap is on /dev/sda3. Then I attach a different drive which becomes /dev/sda, pushing the original drive to /dev/sdb. If there exists a third partition on the new drive (now called sda3), the system will try to encrypt that partition and use it as swap.
One option given is to make sure sure the partition number is not present on additional disks. So, finally, my question:
Can I use non-contiguous partition numbers? Will they persist across reboots? In other words, could I do this? Note the gap between sda4 and sda8:
/dev/sda1 primary /boot
/dev/sda2 primary /
/dev/sda3 primary /home
/dev/sda4 extended
/dev/sda8 swap (encrypted)
If so, I would feel pretty safe about never seeing sda8 on any other drive.
|
Partition numbers cannot conflict. Physically cannot.
The partitions are recorded in a partition table, a special place in block 0 of the disk. These records are not named records; they are placed in an array, and the index in that array (plus one) later becomes the number in the list of partitions you see in the terminal. See the wiki for example: https://en.wikipedia.org/wiki/Disk_partitioning
And yes, the partition table can have empty cells. It is just an indexed array. Any record in it can have a zero for the partition type, and all tools will know that this record is empty.
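As a toy illustration of that indexed array with empty cells (the offsets follow the classic MBR layout: four 16-byte slots starting at byte 446, with the partition type in byte 4 of each slot; the sample image here is made up):

```shell
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=1 2>/dev/null
# Mark slot 1 as type 0x83 (Linux); the type byte of slot i sits at 446 + 16*i + 4
printf '\203' | dd of="$img" bs=1 seek=450 conv=notrunc 2>/dev/null
for i in 0 1 2 3; do
  type=$(od -An -tx1 -j $(( 446 + 16*i + 4 )) -N 1 "$img" | tr -d ' ')
  echo "slot $(( i + 1 )): type 0x$type"
done
rm -f "$img"
# slot 1 reads back as 0x83; slots 2-4 stay 0x00, i.e. empty cells
```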
| Do MBR partition numbers need to be contiguous? |
1,542,760,058,000 |
For some reason, my U-Boot does not seem to be able to load files from my FAT32 partition:
=> mmc part
Partition Map for MMC device 1 -- Partition Type: DOS
Part Start Sector Num Sectors UUID Type
1 2048 62519296 a1d1165e-01 0b
=> fatls mmc 1:1
52560 file1.bin
1984 file2.bin
456 file3.bin
64 file4.bin
=> fatload mmc 1:1 0x0001FF80 file1.bin
** Reading file would overwrite reserved memory **
Failed to load 'file1.bin'
Why do I get Failed to load and how can I get around it?
|
It's telling you the reason:
** Reading file would overwrite reserved memory **
Based on the first line of the error message, reading the file into memory using the start address you specified would cause some reserved memory area to be overwritten.
You should either use a different start address (and perhaps rebuild your file(s) to match the changed start address), or perhaps change U-Boot (and rebuild it) to place itself into a different location if U-Boot is the one reserving the memory you are trying to use.
You will have to understand the boot-time memory map of the system you're trying to boot. Without knowing the actual hardware you're using, it's kind of difficult to help you there, but the bdinfo command of U-Boot could be a good starting point.
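For instance, the address span the failed load would have occupied can be computed from the fatls listing (whether that span overlaps reserved memory depends on your board's memory map):

```shell
load_addr=$(( 0x0001FF80 ))
size=52560                                   # file1.bin, per fatls
printf 'load would span 0x%X .. 0x%X\n' "$load_addr" $(( load_addr + size ))
# => load would span 0x1FF80 .. 0x2CCD0
```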
| Why am I not able to load files from a partition with U-Boot? |
1,644,981,691,000 |
I have a volume which I believe to be btrfs, but the partition bearing it has an odd number of blocks and there is a little space left before the next partition.
I'd like to check its type (btrfs expected) and know the exact space it occupies in my partition. (When asking Google, I get information about the apparent size vs real size problem related to snapshots, which I don't care about right now.)
To make things clearer :
I'm NOT looking for the size/type of the partition itself, but for the size of the filesystem (the data structure), which should normally be smaller than or equal to the partition size; and
I'm also NOT looking for the free space inside the filesystem.
|
You can use lsblk -f or blkid -p <device> to check for the filesystem type.
To check size of the btrfs filesystem use btrfs filesystem show <mountpoint>. It prints all devices that are part of the btrfs volume and their sizes:
Label: none uuid: 19e516b2-50bb-4130-9b6e-ee245fb45e43
Total devices 1 FS bytes used 144.00KiB
devid 1 size 2.00GiB used 228.75MiB path /dev/sdb
You can see the size of the filesystem on /dev/sdb is 2 GiB. If you are interested in the exact size, use --raw to print sizes in bytes:
Label: none uuid: 19e516b2-50bb-4130-9b6e-ee245fb45e43
Total devices 1 FS bytes used 147456
devid 1 size 2147483648 used 239861760 path /dev/sdb
(Quick check this is really size of the filesystem and not the block device: after shrinking the filesystem with btrfs filesystem resize it now shows devid 1 size 1.90GiB used 228.75MiB path /dev/sdb.)
| How can I make sure my volume is btrfs (or get its type) and how can I know its real size in my partition?
1,644,981,691,000 |
So I just installed Linux Mint MATE (install alongside Windows 7) on my old laptop. It's an MBR setup without a UEFI BIOS. When I headed over to the Disks program after the installation I was shocked to see a /boot/efi partition created; the partition type was W95 FAT32. Is this normal or have I messed up? Till now it's just working fine; I even ran the Update Manager and it was happy after a reboot. Will this create problems long term? Should I reinstall or let it be?
Output for some commands which might be useful:
[ -d /sys/firmware/efi ] && echo "Installed in UEFI mode" || echo "Installed in Legacy mode"
installed in Legacy mode
cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda5 during installation
UUID=a4b92403-da07-45b9-a227-e2647a5bb4ab / ext4 errors=remount-ro 0 1
# /boot/efi was on /dev/sda3 during installation
UUID=BDA1-AF68 /boot/efi vfat umask=0077 0 1
/swapfile none swap sw 0 0
And right now its showing a 20.3(una) update will I be fine doing that now?
Sorry for the bad image :(
|
SuSE has decided to set up both legacy and UEFI boot methods on all new installations, in anticipation that any legacy systems may eventually get migrated to newer UEFI-capable, UEFI-default or even UEFI-only hardware. Also, Intel originally planned to start leaving out the legacy BIOS support on new systems in early 2020s, although the COVID pandemic and the worldwide chip shortage may have caused that plan to be re-evaluated.
At first glance, it might seem that Mint chose to do something similar, but it turns out that this is considered to be an installer bug: https://github.com/linuxmint/linuxmint/issues/312
If /boot/efi is an empty partition, there should be no problem in deleting it. If you want to be absolutely sure, first just comment it out of /etc/fstab and reboot to confirm that its removal will have no impact.
| EFI partition in Linux mint MATE on non UEFI BIOS |
1,644,981,691,000 |
I'm looking for the easiest way, on my Debian 11 server, to allocate 100 GB of extra space to the /dev/sda1 device from the command line.
The sda1 partition is almost full and needs to be resized using the unallocated space.
Here is the structure of my hard drive:
Disk: /dev/sda
Size: 200 GiB, 214748364800 bytes, 419430400 sectors
Label: dos, identifier: 0xea1313af
Device Boot Start End Sectors Size Id Type
>> /dev/sda1 * 2048 192940031 192937984 92G 83 Linux
/dev/sda2 192942078 209713151 16771074 8G 5 Extended
└─/dev/sda5 192942080 209713151 16771072 8G 82 Linux swap / Solaris
Free space 209713152 419430399 209717248 100G
Partition type: Linux (83) │
│ Attributes: 80 │
│Filesystem UUID: b4804667-c4f3-4915-a95d-d3b83fac302c │
│ Filesystem: ext4 │
│ Mountpoint: / (mounted)
Could you help me to easily achieve this in command line? Thanks!
Best regards
|
The free space is not directly after the sda1 partition so you can't use it, you need to remove (or move, but removing is easier) the swap partition sda5.
Stop the swap using swapoff /dev/sda5
Remove the sda5 partition and the sda2 extended partition.
Resize the sda1 partition. Don't forget to resize the filesystem too using resize2fs. You can check this question for more details about resizing partitions using fdisk.
Create a new swap partition (optionally a logical one inside a new extended partition if you want setup similar to your current one).
Update your /etc/fstab swap record with the new partition number or UUID.
| Extend 100GB of unallocated space on /dev/sda1 device in command line |
1,644,981,691,000 |
It seems that the device /dev/sda has plenty of space available
root@Vanuatu:~# sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL | grep -v loop
NAME FSTYPE SIZE MOUNTPOINT LABEL
sda 49,5G
├─sda1 vfat 512M /boot/efi
├─sda2 1K
└─sda5 ext4 9,5G /
sr0 iso9660 58,3M /media/pkaramol/VBox_GAs_6.1.26 VBox_GAs_6.1.26
However the partition /dev/sda5, used for the root filesystem, cannot be resized:
root@Vanuatu:~# resize2fs /dev/sda5
resize2fs 1.45.5 (07-Jan-2020)
The filesystem is already 2489600 (4k) blocks long. Nothing to do!
Why is that? Is there another action that needs to be taken in between?
|
The filesystem is already 2489600 (4k) blocks long. Nothing to do!
2,489,600 blocks of 4KiB is 9.5GiB. The file system already uses all the room available in the partition.
You need to resize the partition first, using parted, fdisk etc.
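A quick check of that arithmetic:

```shell
# 2,489,600 blocks of 4 KiB, as reported by resize2fs:
echo "$(( 2489600 * 4096 )) bytes = $(( 2489600 * 4096 / 1048576 )) MiB"
# => 10197401600 bytes = 9725 MiB, i.e. the full 9.5G of /dev/sda5
```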
| Partition not resized despite empty space on device |
1,644,981,691,000 |
Newbie here. I tried to install Arch Linux on my laptop, which has 2 partitions and a Windows 10 installation. I could not delete and merge partitions with fdisk, so I googled and ran the following command
dd if=/dev/zero of=/dev/sda bs=512 count=1 conv=notrunc
as per this question.
Now I don't see my partitions, all I see with fdisk -l is
/dev/sdb1, /dev/sdb2 and /dev/sdb3 partitions only.
/dev/sdb is the installation medium. I don't want to do anything it to that.
How do I make /dev/sda visible again in fdisk and delete all the partitions then merge them into one so I can install arch linux into one single partition with swap and efi?
|
That's normal, since you deleted everything on sda. It has no partitions to be shown. Do:
fdisk /dev/sda
You'll enter fdisk interactively. If something went wrong and sda is really missing from the system, you'll get an error on this step. I'd suggest executing partx or partprobe or a reboot; probably the kernel needs to be informed of the changes on sda. Then try fdisk /dev/sda again.
Type p to print partitions, it should be empty.
You should then create a new label - partition table:
GPT on newer UEFI systems, press g.
For MBR/DOS on older BIOS ones press o.
Next step is to add partitions by pressing n.
You can use m to get help, for the available choices.
| Delete and merge all windows partition from arch linux installation |
1,644,981,691,000 |
Good morning from Australia.
Could someone please help me. I have researched this topic for days but nothing seems to exactly cover my particular question with the space that I have.
I am running Linux mint 20 and have recently unmounted and deleted my windows partition in Gparted, which left me with approx 152G of unallocated space, which I want to add/merge with my Linux partition. I am concerned I may do something wrong. Please keep it simple for this pensioner. Image of Gparted screen below which is where I'm at. TIA
Edit: Partitions after successful resize operation
|
You need to boot from a Linux Live USB/CD with GParted to be able to resize your root partition since it is currently in use (see the key symbol).
You can boot from your Linux Mint USB stick or download a GParted Live CD/USB ISO and write that to a thumbdrive.
Then start GParted, select "/dev/sda7 Linux Mint", right click, choose "Resize/Move", resize the partition to take all unallocated space, and confirm the operation. If the result looks as desired, click the checkmark in the top bar to apply all operations.
Since you deleted Windows, you may also delete the leftover "Microsoft reserved partition" and use that space too.
| expanding my linux mint partition to include unallocation partition |
1,644,981,691,000 |
Trying to install arch on my dell using fdisk and MBR (DOS "o")
I created 3 partitions:
sda1 root
sda2 swap
sda3 extended
But lsblk says sda3 is 1KiB even though I specifically selected "+69G", which is the remaining space of my disk. I can even confirm this by typing "p" to print the partition table, which says 69 GB extended volume.
So far I have tried changing the sda3 type to lvm and writing the changes but i faced the same result with lsblk showing it as 1KiB.
When I use pvcreate it says device too small.
Any ideas?
|
An extended partition provides space for logical partitions. If you want to use the free space for an LVM physical volume, you must add a new logical partition for it; the extended partition itself is just a "container" and can't be formatted. That's also why lsblk shows it as being 1 KiB: it is in fact only 2 sectors big (2 * 512 B) -- it only holds metadata (positions of the logical partitions; it really is a second partition table, a hack used to overcome the 4-primary-partition limit in the MSDOS partition table) and lsblk doesn't show the free space "inside" the extended partition.
So to use the space, use fdisk /dev/sda to add a new logical partition (the same way you added the primary and extended partitions, just the type will be "logical"); it will be added "inside" the extended partition. Then use pvcreate to create an LVM PV on it: # pvcreate /dev/sda5 (the first logical partition will always be sda5, on sda).
| Fdisk creates 1K extended partition instead of the mentioned size |
1,644,981,691,000 |
I'm new to Linux and trying multiple variants of Ubuntu (standard, Mint, Pop, etc.). Unfortunately, every OS is isolated on different partitions, with separate settings, user groups, etc. and programs have to be installed each time I install a new OS. I would like to have a primary OS (Ubuntu LTS) and then all subsequent OS's refer to the primary OS for user profiles, program installations, etc. - Is this possible?
My purpose is twofold: 1) ease of trying new distros without hassling with setup/maintenance of multiple profiles and programs, and 2) save on disk space by reducing duplicate files.
I know how to access files and mount folders between each distro's partition, but is there a way to trick the OS into thinking the primary partition is where it should be looking for everything?
I don't mind trying things that are experimental, as this is a new system and I have no critical data on it yet.
|
Is this possible?
No. You can share user settings easily by creating a separate partition for /home and mounting it in all your used OSes. And if you have different /homes you can use symlinks.
However, sharing programs doesn't make any sense whatsoever (different distros may use different versions of applications, so in certain cases configuration files may be incompatible); besides, most users never touch anything in /etc, so this advice holds.
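The shared-/home variant is just the same mount record in every installed distro's /etc/fstab (the UUID here is a placeholder for the partition all of them should mount):

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults  0  2
```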
| Can multiple operating systems share profiles and programs? |
1,644,981,691,000 |
I have Windows and Arch Linux installed on my system. I plan to increase the size of my root partition by shrinking the home partition using a GParted live USB. But there is a swap partition between my root and home partitions. I thought of shrinking the home partition, adding the space to the swap, then shrinking the swap and adding it to the root, since the unallocated space must be adjacent to the partition being resized. I am not sure whether this is going to work.
|
You can keep the size of the swap partition, you just need to move it:
1. Shrink the home partition; the freed space is now between swap and home.
2. Move the swap partition, so that the freed space is between swap and root.
3. Increase the root partition.
| How to increase root partition size by shrinking home partition using gparted live usb? |
1,644,981,691,000 |
So my dilemma is that my Ubuntu installation is on the last partition. Here is a screenshot of GParted.
It's also on an HDD; not sure if that makes a difference.
|
You won’t be able to resize the partition that you are currently running from - it appears here that you’re booted into the Linux install that’s running on /dev/sda5? If that’s the case, you can boot a liveCD or a USB installer, so that all of the partitions on /dev/sda are unmounted. Once unmounted and booted from a liveCD you should be able to extend/resize those partitions into the unallocated space.
| I am on Ubuntu. How do I expand last partition in GParted? |
1,644,981,691,000 |
I already have Windows 10 and Manjaro installed on my system. Recently, something messed up my Manjaro install, and I wanted to also have a more stable OS on there like Elementary OS. Is there any way that I can have Manjaro and Elementary share the Swap, EFI, and Home partitions without deleting or ruining any of my current files? If so, how would I do it? Just turn off formatting of the mentioned partitions and add a Root one?
|
Just turn off formatting of the mentioned partitions and add a Root one?
That's pretty much it, yeah. Though there are some caveats:
Be sure neither distro leaves the swap partition in a mess for the other to find. Not sure of defaults, but the swap partition can be configured as the storage location for hibernation data. Though worst case is probably just that the swap partition would fail to mount.
Sharing /home could get messy if you use different versions of the same software under the different distros. Some things could also get confusing if the system-level defaults differ and either are or aren't overridden by the local configs in $HOME. Personally, assuming I had the disk space, I'd create a new home partition, mount the old one somewhere like /old_home or /other_home, and copy/symlink as appropriate, at least for a while. You can always switch around mountpoints later (editing fstab isn't hard).
And of course, insert obligatory reminder to keep backups here.
| Dual boot two Linux Distros and share /home partition? |
1,644,981,691,000 |
I am trying to create a EFI partition on the beginning of disk.
I had created one using what turned out to be the wrong type, so I deleted it. (Wrong approach, I know, but here we are.) Now I want to create a new partition in the same place on the SSD.
Using fdisk, I try to 'n' a partition, but it will not let me specify 1 as the First Sector, which is the sector where I'd written the first attempt.
How can I insert a new partition into that spot?
|
You cannot specify 1 as the first sector of a partition for a GPT partitioned disk.
See
https://metebalci.com/blog/a-quick-tour-of-guid-partition-table-gpt/
http://ntfs.com/guid-part-table.htm
for more information.
The reason the first usable LBA is 34 is simple. LBA 0 is the Protective MBR, LBA 1 is the GPT Header, and the required space for the partition entries is:
128 partitions * 128 bytes/partition / 512 bytes/block = 32 blocks
So 1 + 1 + 32 = 34 blocks are needed to store all the GPT information, so the first usable LBA is at minimum 34. As you might realize, these numbers change if the logical sector size (LBA block size) is not 512 bytes.
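If you like, the arithmetic can be checked with a quick shell sketch (the 4096-byte case is included only to illustrate how the numbers shift with the logical sector size):

```shell
# First usable LBA of a GPT disk:
# protective MBR (1 block) + GPT header (1 block) + partition entry array.
entries=128       # default number of partition entries
entry_bytes=128   # size of one entry
for sector in 512 4096; do
  array_blocks=$(( entries * entry_bytes / sector ))
  echo "sector size ${sector}: first usable LBA = $(( 1 + 1 + array_blocks ))"
done
```

With 512-byte sectors this prints 34, matching fdisk's lower bound; with 4096-byte sectors the entry array needs only 4 blocks, so the first usable LBA drops to 6.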
You should use the lowest value allowed by fdisk.
| create new partition in place of deleted partition |
1,644,981,691,000 |
I'm looking at a Linux device where blkid shows an eMMC partition type as ext2:
/dev/mmcblk0p32: UUID="1c48ca57-c9eb-4ed1-a51a-212f7d1fd40e" TYPE="ext2" PARTLABEL="configs" PARTUUID="2214f85a-ce4e-fea2-0613-8c93121f02e1"
but that partition, according to cat /proc/mounts is mounted as ext4:
/dev/mmcblk0p32 /configs ext4 rw,relatime,block_validity,barrier,user_xattr 0 0
What file system type is actually in use? Why is a partition in this case allowed to have a different type than its mount target?
|
blkid determines the type of the device's content based on the content metadata. In your case, /dev/mmcblk0p32 is actually formatted as an ext2 file system.
On the other hand, the file system type in /proc/mounts has the same semantics of mount's -t option: the type from the kernel's point of view (i.e. the driver to use).
The ext4 file system driver can, and is apparently commonly used to, mount ext2 and ext3 file systems too. From man 5 ext4:
... They are general purpose file systems that have been designed for extensibility and backwards compatibility. In particular, file systems previously intended for use with the ext2 and ext3 file systems can be mounted using the ext4 file system driver, and indeed in many modern Linux distributions, the ext4 file system driver has been configured to handle mount requests for ext2 and ext3 file systems.
Your /dev/mmcblk0p32 has likely been mounted by passing -t ext4 to mount on the command line, or by using ext4 as the type in fstab.
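For illustration only — this is an assumption about how the mount is configured, reusing the UUID shown in the question — such an fstab entry could look like:

```
# /etc/fstab — ext2-formatted partition, mounted with the ext4 driver
UUID=1c48ca57-c9eb-4ed1-a51a-212f7d1fd40e  /configs  ext4  rw,relatime,user_xattr  0  0
```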
| Linux partitioning vs. mount file system declaration |
1,644,981,691,000 |
I am trying to create a single partition using parted command in Linux with ext4 file system using below command non interactively
parted /dev/sdc --script -- mkpart primary ext4 0% 100%
Could someone please tell me do I need to run below commands after this or will it be taken care of automatically by parted command itself.
partprobe
mkfs.ext4 /dev/sdc
|
According to parted documentation, the mkpart command creates a partition without creating a filesystem on it.
You might or might not need to run partprobe afterwards, depending on the versions of the kernel and parted used. Older versions might need it, newer ones generally won't. However, running it should not be harmful in any case.
But if you want to keep the partition you just created, your mkfs command should then be:
mkfs.ext4 /dev/sdc1 # not /dev/sdc
If you wanted to use the disk in a so-called "superfloppy" configuration, it is certainly possible to just run mkfs on the whole-disk device /dev/sdc and use it like that. But then there would be no point in partitioning it first, as creating the filesystem like that will overwrite the freshly created partition table.
Having a partition table on the disk that is recognizable on most common operating systems makes it safer to move disks between systems: it avoids the possibility that another operating system (I'm looking towards Redmond...) would not recognize the disk as already containing data, and might offer to helpfully format it.
| Do I need to use partpobe and mkfs or can parted format the partition as ext4 automatically? |
1,644,981,691,000 |
How do I move/merge LVM on sda3 to sda2? The problem is vg1-var and vg1-opt reside on both PVs, so a pvmove won't do the trick.
sda 8:0 0 80G 0 disk
├─sda1 8:1 0 300M 0 part /boot
├─sda2 8:2 0 37G 0 part
│ ├─vg1-root 253:0 0 5G 0 lvm /
│ ├─vg1-swap 253:1 0 4G 0 lvm [SWAP]
│ ├─vg1-usr 253:2 0 8G 0 lvm /usr
│ ├─vg1-home 253:4 0 2G 0 lvm /home
│ ├─vg1-var 253:5 0 6.7G 0 lvm /var
│ ├─vg1-tmp 253:6 0 10G 0 lvm /tmp
│ └─vg1-opt 253:7 0 5G 0 lvm /opt
└─sda3 8:3 0 42.7G 0 part
├─vg1-var 253:5 0 6.7G 0 lvm /var
└─vg1-opt 253:7 0 5G 0 lvm /opt
PV VG Fmt Attr PSize PFree
/dev/sda2 vg1 lvm2 a-- 37.00g 0
/dev/sda3 vg1 lvm2 a-- <42.70g 39.00g
|
The simplest method would be to attach a new drive temporarily and pvmove /dev/sda3 over. That would allow you to grow /dev/sda2 and move everything back.
If that's not possible, you'll have to shuffle the data within the drive you have. Due to alignment issues, adjacent PVs can usually not be merged directly, so you'll still be moving everything twice.
Your /dev/sda3 is 42.7G large and has 39G free, so ~3.7G is used. So you should be able to shrink /dev/sda3 by 4G, in order to make a new 4G /dev/sda4 at the end of the disk:
pvresize /dev/sda3 --setphysicalvolumesize 37G
parted /dev/sda -- resizepart 3 -4G
parted /dev/sda -- mkpart primary -4G -1
Move all data to the new partition:
vgextend vg1 /dev/sda4
pvmove /dev/sda3
vgreduce vg1 /dev/sda3
Delete the now free /dev/sda3 and grow /dev/sda2 accordingly:
parted /dev/sda -- rm 3
parted /dev/sda -- resizepart 2 -4G
pvresize /dev/sda2
Move everything from /dev/sda4 to the now large enough /dev/sda2:
pvmove /dev/sda4
vgreduce vg1 /dev/sda4
At this point, your VG only uses a single PV, /dev/sda2.
Delete the now free /dev/sda4 and grow /dev/sda2 again:
parted /dev/sda -- rm 4
parted /dev/sda -- resizepart 2 -1
pvresize /dev/sda2
Note these steps are very rough and might fail at some points, you'll have to adapt accordingly. The parted commands in particular should not be run blindly, always verify what's going on with parted /dev/sda print free, vgs, lvs, pvs, lsblk, ...
If there are no strong reasons to shuffle things around, I'd just leave as is. It's a lot of trouble for little benefit. Having multiple PVs also has advantages, such as additional metadata copies.
| Move LVM of PV1 to PV2 |
1,644,981,691,000 |
I'm trying to install Ubuntu Server on my laptop where Windows is installed.
Via GParted I've downsized the Windows partition and created 3 partitions for boot, the filesystem root, and home.
But when I try to set up partitions in the installation wizard, it keeps asking me to select a boot disk even though I've chosen one already.
I've tried to select the Windows boot partition (I've seen somewhere that if the boot partition is too far away, like > 100GB, then it might not boot), but it's still the same situation.
If I try to delete a partition, I get "Can't delete a single partition from a device that already has partitions".
Creating new partitions, even though there is some free space, is also not available in the disk's menu.
Any thoughts or suggestion?
Thank you!
Laptop: ThinkPad W520
UEFI/Legacy loading
Legacy first
Windows is installed and I don't want to delete it for now
The Ubuntu Desktop installer doesn't offer to install Ubuntu alongside Windows (it can't see Windows). The Windows partitions can be mounted in live Ubuntu.
A 2MB grub partition was added while installing Ubuntu Desktop, but I can't see it in the system's list of boot options. I can't load the Ubuntu Desktop that I've just installed. What else am I missing?
UPD. I've added a 2MB grub partition but I can't see it in the system's list of boot options.
|
What was needed was to select UEFI-only mode in the BIOS. Basically, it was booting in legacy mode before.
So thanks to oldfred and Freddy!
| Installing Ubuntu Server: Selecting boot partition failure |
1,644,981,691,000 |
Given the current Debian installer hd-media boot image files, how do I find out how much free space is remaining within the contained FAT32-formatted partition?
Here's what I have so far:
$ curl -fsSLO https://deb.debian.org/debian/dists/stable/main/installer-amd64/current/images/hd-media/boot.img.gz
$ gzip -fdk boot.img.gz
$ stat boot.img
File: boot.img
Size: 999997440 Blocks: 1953120 IO Block: 4096 regular file
Device: fd01h/64769d Inode: 7998443 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 1000/ neil) Gid: ( 1000/ neil)
Access: 2020-07-23 16:42:25.173516535 +0000
Modify: 2020-07-23 16:41:58.025469623 +0000
Change: 2020-07-23 16:42:35.437534306 +0000
Birth: -
$ file boot.img
boot.img: DOS/MBR boot sector, code offset 0x58+2, OEM-ID "SYSLINUX", sectors/cluster 8, Media descriptor 0xf8, sectors/track 63, heads 255, sectors 1953120 (volumes > 32 MB), FAT (32 bit), sectors/FAT 1904, serial number 0xdeb00001, label: "Debian Inst"
$ fdisk -l boot.img
Disk boot.img: 953.7 MiB, 999997440 bytes, 1953120 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x20ac7dda
Device Boot Start End Sectors Size Id Type
boot.img1 3224498923 3657370039 432871117 206.4G 7 HPFS/NTFS/exFAT
boot.img2 3272020941 5225480974 1953460034 931.5G 16 Hidden FAT16
boot.img3 0 0 0 0B 6f unknown
boot.img4 50200576 974536369 924335794 440.8G 0 Empty
Partition table entries are not in disk order.
$ fatresize -i boot.img
fatresize 1.0.2 (10/15/17)
FAT: fat32
Size: 999997440
Min size: 536870912
Max size: 999997440
Is any of the aforementioned numbers the one I want?
|
Use mdir (from mtools):
$ mdir -i boot.img ::
...
g2ldr mbr 8192 2020-05-04 19:14
WIN32-~1 INI 178 2020-05-04 19:14 win32-loader.ini
43 files 76 373 022 bytes
921 333 760 bytes free
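If you need that number in a script, one way (a sketch, not part of mtools itself) is to pick out the "bytes free" line and strip everything but the digits, since mdir groups digits with spaces:

```shell
# Sample mdir output from above.
sample='43 files 76 373 022 bytes
 921 333 760 bytes free'
free=$(printf '%s\n' "$sample" | grep 'bytes free' | tr -dc '0-9')
echo "$free"   # 921333760
```

In real use you would pipe `mdir -i boot.img ::` into the same grep/tr filter instead of the sample variable.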
As you can see, none of the numbers you have match the remaining free space.
| How do I discover the remaining space on disk image FAT32 partition? |
1,644,981,691,000 |
I am trying to better understand disks, partitions, and partition tables (MBR vs GPT). In the process I checked the disks on one of my machines (single-boot Ubuntu 20.04), and found that all of my disks are GPT. However, I also found out that some of the partitions have a partition table too.
What got me even more confused is that the /boot/efi partition is MBR even though it is on a disk that is GPT. At that point I wasn't sure whether the disk was GPT or otherwise, so I tried to convert /boot/efi to GPT, but in the process ended up rendering my machine unbootable.
When I looked online for how to convert between the two styles, I found that the conversion is done on the disk rather than the partition, but I already have my disk partitioned as GPT. So my question can be divided into three parts (and it all depends on my understanding as laid out above being correct):
1- What does it even mean to have a partition table inside a partition?
2- How can a disk be gpt but the /boot/efi partition on it be mbr? And why?
3- If a system has two hard disks, but one operating system, can each one of these disks have their own different partition tables (one mbr the other gpt)?
|
Yes, it's possible.
"Instead of a primary partition, you can also define (exactly) one extended partition in the primary boot sector that contains all the disk space that is not allocated to any primary partition. In the extended partition further logical partitions can be set up, which in principle are structured in the same way as the primary partitions, with the difference that only the primary partitions can be booted directly. With a utility like Linux Loader (LILO), the operating system can be booted from any partition, even on other hard disks, but LILO itself must always be installed on a primary partition of the first hard disk."
Source: https://www.nextop.de/lhb/node231.html
Annotation:
The above example is not limited to the use of LILO as the boot manager.
Note that on a GPT disk no extended or logical partitions need to be created, since the historical restriction on the number of primary partitions, imposed by old DOS and Windows versions, does not exist there.
| Can a disk partition have another nested inside it? |
1,644,981,691,000 |
I have formatted (using GParted) an external HDD for use as it was NTFS, but the writing permissions were denied, I then tried fat32 but the file size is too small, there is no exfat option, and I would like a password protect if possible when plugged in etc.
How can I get the drive to allow me to write, and have a password if possible? What is wrong in my ext4 process? With a primary ext4 partition on the external HDD, everything is seen, mounted and openable, but not writable.
I have just tried this using commands below, and it did not work either.
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 625141759 625139712 298.1G b W95 FAT32
$ sudo wipefs -a /dev/sdb1
wipefs: error: /dev/sdb1: probing initialisation failed: Device or resource busy
david@david-HP-15-Notebook-PC:~$ sudo wipefs -a /dev/sdb1
/dev/sdb1: 8 bytes were erased at offset 0x00000052 (vfat): 46 41 54 33 32 20 20 20
/dev/sdb1: 1 byte was erased at offset 0x00000000 (vfat): eb
/dev/sdb1: 2 bytes were erased at offset 0x000001fe (vfat): 55 aa
david@david-HP-15-Notebook-PC:~$ sudo fdisk /dev/sdb1
Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognised partition table.
Created a new DOS disklabel with disk identifier 0x5fd1458f.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p):
Using default response p.
Partition number (1-4, default 1):
First sector (2048-625139711, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-625139711, default 625139711):
Created a new partition 1 of type 'Linux' and of size 298.1 GiB.
Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 7
Changed type of partition 'Linux' to 'HPFS/NTFS/exFAT'.
**Command (m for help): w
The partition table has been altered.
Failed to add partition 1 to system: Invalid argument**
The kernel still uses the old partitions. The new table will be used at the next reboot.
Synching disks.
It says w gave an invalid argument, so what do I do with that? Is this option for exFAT partitioning not available on Linux Mint? Or has the command changed?
Now this issue is listed with the previously used command =
$ sudo wipefs -a /dev/sdb1
wipefs: /dev/sdb1: ignoring nested "dos" partition table on non-whole disk device
wipefs: Use the --force option to force erase.
What I type now, was from a website.
(exFAT is apparently what is appropriate for use on multiple systems, so it is preferred; NTFS is in GParted already; ext4 is the last preferred option.) Unless there is some other issue with exFAT I don't know of? Or is this not doable in Linux?
|
Unmount every partition and the disk as a whole:
sudo umount /dev/sdb? /dev/sdb
Wipe the old partition scheme:
sudo wipefs --all --force /dev/sdb? /dev/sdb; sync; partprobe
Create partition:
sudo gdisk /dev/sdb
o Enter for new empty GUID partition table (GPT)
y Enter to confirm your decision
n Enter for new partition
Enter for default of first partition
Enter for default of the first sector
Enter for default of the last sector
Enter for Ext4 or 0700 for NTFS
w Enter to write changes
y Enter to confirm your decision
Create Ext4 filesystem:
sudo mkfs.ext4 -L Some_Label -m 0 -b 4096 -E lazy_itable_init=0,lazy_journal_init=0 /dev/sdb1
or
Create NTFS filesystem:
sudo mkfs.ntfs --no-indexing --verbose --with-uuid --label Some_Label --quick --sector-size 512 /dev/sdb1
| External HDD not permitting writing ext4/ fat32 too small? |
1,644,981,691,000 |
I want to delete the partition /dev/sdb5 from my hard disk and then distribute the resulting space to other partitions. Now what would be the consequences of doing so? Would the partitions numbered greater than 5 be renumbered? E.g. would /dev/sdb7 become /dev/sdb6 upon deletion? This would create problems for fstab entries.
|
Yes, the kernel will show your partitions with different numbers.
You should change your fstab to rely on UUIDs (or labels) – it is more robust anyway. Use blkid to find your partitions' UUIDs.
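For example, a UUID-based fstab line looks like this (the UUID below is a made-up placeholder; substitute the one blkid prints for your partition):

```
# /etc/fstab — a UUID reference survives renumbering; /dev/sdb7 would not
UUID=01234567-89ab-cdef-0123-456789abcdef  /data  ext4  defaults  0  2
```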
On a side note: I see sdb6 is located after sdb7. The number is derived from the order of the partitions as defined in the partition table, not their position on the disk. Tools like fdisk or gdisk allow sorting the partition definition order by partition position.
| Delete partition - Partition number |
1,644,981,691,000 |
This morning, as usual, I mounted a 622 GB Windows (NTFS) partition
on Linux for use.
When I went on to unmount it through the graphical disks tool on GNOME,
I accidentally formatted it,
and now it’s an unallocated space on my hard drive.
I had loads of important data in there.
Is there any way to recover it?
|
You can use photorec/testdisk for this kind of task, depending on how your partition was damaged.
photorec is for data recovery, while testdisk can be used to recover partitions.
See here and here.
| How to fix or retrieve data from an NTFS partition that was (re-)formatted with gnome-disk? |
1,644,981,691,000 |
For an unknown reason, my system stopped booting.
Now, it fails to find my root partition after 10 sec and drops me in an emergency shell where my keyboard is not detected
What I did so far was to boot from a live CD and ran the following commands.
With sdc7 my root partition
sudo su
mkdir /mnt
mount /dev/sdc7 /mnt
With sdc2 my EFI partition
mkdir /mnt/boot
mount /dev/sdc2 /mnt/boot
modprobe efivars
mount -t proc proc /mnt/proc
mount -t sysfs sys /mnt/sys
mount -o bind /dev /mnt/dev
mount -t devpts pts /mnt/dev/pts/
mount -t efivarfs efivarfs /sys/firmware/efi/efivars
chroot /mnt
pacman -Syu
mkinitcpio -P
grub-mkconfig -o /boot/grub/grub.cfg "$@"
No success
NB: I think the acpi errors on the photo are not new
Thanks
|
This is a bug in systemd's udev (more precisely communication between udevadm and udevd) affecting distributions shipping udev 240 but not systemd 240 in initramfs.
For Archlinux: FS#61328 - udev 240 not recognising keyboard
My answer for Debian there.
The effect is that enumeration in /dev is missing or incomplete. This prevents the /dev/disk/ tree from being filled, including the UUID symlinks. It will also prevent keyboard detection, etc.
The usual fix is to revert to udev 239 (so after your chroot) and rebuild the initramfs. It's possible that having systemd (not busybox) handling boot during initramfs can fix this problem too (because some settings get a bigger buffer used for communication between udevadm and udevd then), if that's possible on Archlinux.
Upstream bug report, fix proposition and fix commit there. It boils down to allowing a bigger buffer for communication (and is probably not the best fix):
udev fails to trigger loading of modules #11314
Set systemd-udevd monitor buffer size to 128MB #11389
sd-device-monitor: fix ordering of setting buffer size
| Arch: root partition not found at boot |
1,644,981,691,000 |
I have an HP laptop with 2 hard drives Windows is installed on one drive, and the other is empty, split into 3 partitions. I'm looking at the Arch Linux installation guide, specifically at this section. My hope is that I can install Arch on one of the three partitions on my second drive. I found this question, but it didn't address my concerns. How do I install Arch Linux on a specific partition? I'm just trying to be cautious, as I need my Windows installation.
|
If you can't recover Windows and the files you need if you have an accident, you shouldn't be installing Linux, because accidents happen. Have a plan to re-install Windows, and if you don't back up your files, they can't be very important to you.
And if you can't recover, you must not install Arch Linux. The process is much more manual, which creates too much opportunity for accidents.
Here is how I would reduce the risk, while following the Arch Install guide (and the instructions on the wiki pages that it tells you to follow, and whatever they tell you to do in turn... It's not a straight sequence of instructions, it's a bit of a choose-your-own-adventure book).
I assume UEFI - you should have specified if you use BIOS boot, because it is obsolescent. BIOS boot is not used on Windows 8 logo certified systems and above. There is no risk in following the UEFI instructions if you get it wrong, just stop when you realize you cannot find an ESP partition for you to mount.
Confirm the partitions you use, with lsblk and lsblk -f.
lsblk only shows sizes (and of course the partition order). Often this will uniquely identify your partition :-).
lsblk -f does not show sizes, but it's very useful because it shows the filesystem type. It also shows filesystem labels. (On my computer, I get "ESP" (EFI System Partition), "OS" for windows, "WINRETOOLS" for the the windows recovery boot, and "Image" which I think is the disk image data for windows recovery).
In your case, you need to identify which drive includes your Windows, and which drive you want to install Linux to. Don't use the Windows drive for anything except one operation: mounting the ESP.
The Arch "Install Guide" is pretty gnarly. To avoid some experimentation which might increase your risk, I suggest choosing GRUB when the Install Guide asks you to choose a boot loader. This is the boot loader used by all the main PC Linux distributions.
This is sub-optimal for your case. It would be much cleaner if you could create the ESP partition on your second drive. Partly because it avoids the need to touch your Windows disk at all, but also because it means your Linux disk is self-contained. UEFI firmware is absolutely designed to support this for external drives. I believe this is very likely to work for you even with an internal drive. However UEFI implementations may vary, and I did not find any information in the Arch Install Guide. You would have to try it and see.
https://superuser.com/questions/879165/uefi-esps-and-multiple-disk-drives
If you like the idea that a disk can be "self-contained" and be moved to another machine, be aware of the following caveat. According to UEFI, if you move a disk between machines, and the ESP does not use the boot/bootx64.efi hack, the machine will not know how to boot the new disk because it does not have a UEFI boot menu entry for it. The fact that it often works anyway is due to hacks in specific UEFI implementations which detect known OS's and generate boot menu entries for them. This is not and cannot be 100% reliable.
| Installing Arch Linux on a preexisting partition |
1,644,981,691,000 |
I have openSUSE Leap 15.0. While installing it, I didn't specify the formatting for each partition. Now I want to change my /home partition (I gave /home a separate partition for my data and configuration files) from XFS to ext4.
So is there a way to do that without formatting it?
|
No, you'll need to copy the data elsewhere, create the ext4 filesystem and copy the data back.
I think Leap uses LVM by default, so you could add an extra disk to the LVM, create a new logical volume, create an ext4 file system and copy the data. You could then remove the XFS filesystem and finally remove the additional disk from LVM.
In case you don't know, you cannot shrink a XFS filesystem, so you can't shrink the current volume to copy to a new ext4 volume.
| How can I change /home formatting from XFS to ext4 without loss of data |
1,644,981,691,000 |
I have just moved a 3TB disk from an external USB enclosure to inside a computer and I cannot see the only one ext4 partition which is supposed to be there. The disk has extremely important data that I cannot lose. Please advise how to proceed, here are some details:
$ sudo mount -vvv -t ext4 /dev/sdb1 /mnt/
mount: /mnt: /dev/sdb1 is not a valid block device.
$ sudo fdisk -l /dev/sdb
GPT PMBR size mismatch (732566645 != 5860533167) will be corrected by w(rite).
Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/sdb1 1 732566645 732566645 349.3G ee GPT
Partition 1 does not start on physical sector boundary.
$ sudo parted /dev/sdb print
Error: /dev/sdb: unrecognised disk label
Model: ATA WDC WD30EZRX-00D (scsi)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:
lshw output (excerpt):
*-scsi:1
physical id: 2
logical name: scsi1
capabilities: emulated
*-disk
description: ATA Disk
product: WDC WD30EZRX-00D
vendor: Western Digital
physical id: 0.0.0
bus info: scsi@1:0.0.0
logical name: /dev/sdb
version: 0A80
serial: WD-WCC1T1561951
size: 2794GiB (3TB)
capabilities: partitioned partitioned:dos
configuration: ansiversion=5 logicalsectorsize=512 sectorsize=4096
*-volume UNCLAIMED
description: EFI GPT partition
physical id: 1
bus info: scsi@1:0.0.0,1
capacity: 349GiB
capabilities: primary nofs
|
The comment answerers are not reading the output in your question. The output tells us this:
GPT PMBR size mismatch (732566645 != 5860533167) will be corrected by w(rite).
fdisk is telling you that you have an EFI partition table with a so-called "protective" old-style MBR partition record. But the protective partition record does not correctly protect the contents of your disc, because it ends way before the actual end of the disc, leaving a couple of TiB of free space unaccounted for. fdisk says that it will fix this for you. Do not attempt to use fdisk to do so. fdisk is wrong.
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/sdb1 1 732566645 732566645 349.3G ee GPT
So fdisk has decided to not show you the EFI partition table at all. It is showing you the "protective" old-style MBR partition table instead, as if that were how you had partitioned your disc. That contains one entry, which is really (since it is type ee) a dummy entry that is supposed to encompass the entire disc, including the EFI partition table. But it is only 732566645 sectors long, which is roughly 349GiB, not 2.7TiB.
This is one of several reasons why it is wrong to run fsck against this. It is not a disc volume containing a formatted filesystem. It is a dummy old-style partition that is supposed to span the entire disc.
Partition 1 does not start on physical sector boundary.
This is a red herring. Your dummy protective partition is supposed to begin at sector 1. Sector 1 is where the EFI partition table begins. It is the alignment of the real partitions, recorded in the new EFI partition table that fdisk isn't reading, that matters, and that for performance reasons. You should be able to mount misaligned volumes. But you haven't even got as far as using the right partition table, so whether this is even a problem in the first place is unknown.
However, it is likely that it is not. Alignment is likely entirely a red herring here. Because what you are experiencing is well known, and is something else.
$ sudo parted /dev/sdb print
Error: /dev/sdb: unrecognised disk label
parted is failing to read your EFI partition table, too. Unlike fdisk, it isn't falling back to treating your disc as being partitioned in the old style, and reporting one big dummy partition. It is failing outright.
size: 2794GiB (3TB)
…
description: EFI GPT partition
physical id: 1
bus info: scsi@1:0.0.0,1
capacity: 349GiB
lshw is seeing a 3TB (2.7TiB) disc. It is also seeing the EFI partition table. But your EFI partition table claims that this is a 349GiB disc.
Why did 2.7TiB become 349GiB?
Well, notice what you get when you multiply 349GiB by 8.
When it is in your USB disc enclosure, the system thinks that your disc has 4KiB sectors, and everything has been accessing it using that as the sector size. In the USB enclosure, the rest of the system sees your disc with its native, true, sector size.
Moreover, with 4KiB sectors 732566645 sectors really does encompass the entire 2.7TiB of your disc, and both the old-style protective partition and the actual EFI partition table have the right numbers.
Outwith your USB disc enclosure, your disc is being read in "512e" compatibility mode, where most of the system pretends that your disc has 0.5KiB sectors. (There is a more complex explanation to do with a second inverse transformation undoing the first when the USB enclosure is involved, but I am glossing over that here, as it is beyond the scope of this answer.) The partition start and size numbers in your partition tables, and indeed anything else that points to a logical block address on your disc, are all wrong.
4KiB is 8 times 0.5KiB.
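You can check this against the numbers in the fdisk warning with a little shell arithmetic (assuming the protective-MBR entry was written as the whole disk minus its first sector, in 4KiB-sector units):

```shell
pmbr_sectors=732566645     # size recorded in the protective MBR entry (4KiB-sector units)
native_sectors=5860533168  # 512-byte sectors the kernel reports now
# In 4KiB units the disk has native_sectors/8 sectors; the protective
# partition starts at LBA 1, so it spans one sector fewer than the disk.
echo $(( native_sectors / 8 - 1 ))   # 732566645 — exactly the PMBR value
```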
Downgrading from native 4KiB sector sizes to "512e" is possible, but it is not for the fainthearted. I recommend as the far simpler course of action that you put the disc back into the enclosure to read it, where it will be seen with its true 4KiB sector size by the rest of the system and the numbers will come out right.
Further reading
https://superuser.com/questions/719844/
https://superuser.com/questions/985305/
https://superuser.com/questions/1271871/
https://superuser.com/questions/852475/
Jonathan de Boyne Pollard (2011). The gen on disc partition alignment. Frequently Given Answers.
https://superuser.com/questions/339288/
https://superuser.com/questions/331446/
| Cannot mount partition - does not start on physical sector boundary? |
1,644,981,691,000 |
I was wondering if there is any way that you can make a folder invisible or inaccessible under Linux.
PS. By inaccessible I don't mean that you can't access it because you don't have the privileges. I mean that when you try to access it, it tells you something like "Directory or file does not exist" even though you do have the privileges to access it.
|
You can hide it from a normal ls by giving it a name that starts with a dot, but a couple of arguments (e.g. ls -a) would still show it, just like a 'hidden' file on Windows; mentioning this just in case you missed it.
Just as @kusalananda said, you could unmount the filesystem it is on. If that is a problem because the volume in question can't be unmounted, you could create a small separate partition for this job and then mount/unmount it.
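As a quick illustration of hiding from a plain ls (assuming the usual dot-prefix convention; the paths are made up for the demo):

```shell
mkdir -p /tmp/demo/.secret   # a name starting with a dot is 'hidden'
ls /tmp/demo                 # prints nothing
ls -a /tmp/demo              # .secret now shows up, along with . and ..
rm -r /tmp/demo
```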
| Any Way To Make file or directory Inaccessible/Unsearchable Under Linux |
1,644,981,691,000 |
I have an Ubuntu server. Via cloud-init I create partitions. When I restart my server, it does not come up again. I am sure I am missing one command to tell the system which partition should be used for booting.
Before partitioning, sda1 was the boot partition and the disk used an MBR.
cat /etc/fstab
root@source ~ # cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=3f234dd2-63e6-4676-8ef3-0cde83e52484 / ext4 discard,errors=remount-ro 0 1
/dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
parted -l
root@source ~ # parted -l
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 20.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number Start End Size Type File system Flags
1 1049kB 20.5GB 20.5GB primary ext4 boot
fdisk -l
root@source ~ # fdisk -l
Disk /dev/sda: 19.1 GiB, 20480786432 bytes, 40001536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x02d71cad
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 40001502 39999455 19.1G 83 Linux
After partitioning, sda1 should stay the boot partition and the disk should use GPT.
But when I call parted -l or fdisk -l, the boot flags won't show up.
parted -l
root@source ~ # parted -l
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 20.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1049kB 5121MB 5120MB ext4
2 5121MB 20.5GB 15.4GB xfs
fdisk -l
root@source ~ # fdisk -l
Disk /dev/sda: 19.1 GiB, 20480786432 bytes, 40001536 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8D6B03D7-1A3B-4BFC-8F8F-64EEF049CB9E
Device Start End Sectors Size Type
/dev/sda1 2048 10002431 10000384 4.8G Linux filesystem
/dev/sda2 10002432 40001502 29999071 14.3G Linux filesystem
cat /etc/fstab
root@source ~ # cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
UUID=3f234dd2-63e6-4676-8ef3-0cde83e52484 / ext4 discard,errors=remount-ro 0 1
/dev/fd0 /media/floppy0 auto rw,user,noauto,exec,utf8 0 0
/dev/sda1 / auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
/dev/sda2 /data_disk auto defaults,nofail,x-systemd.requires=cloud-init.service,comment=cloudconfig 0 2
Here is my cloud-config which works:
#cloud-config
resize_rootfs: false
disk_setup:
  /dev/sda:
    table_type: 'gpt'
    layout:
      - 25
      - 75
    overwrite: true
fs_setup:
  - label: root_fs
    filesystem: 'ext4'
    device: /dev/sda
    partition: sda1
    overwrite: true
  - label: data_disk
    filesystem: 'xfs'
    device: /dev/sda
    partition: sda2
    overwrite: true
runcmd:
  - [ partx, --update, /dev/sda ]
  - [ partprobe ] # afaik the partx and partprobe commands do the same
  - [ parted, /dev/sda, set, 1, on, boot ] # <<-- set boot flag here
  - [ mkfs.xfs, /dev/sda2 ] # format second partition with xfs
mounts:
  - ["/dev/sda1", "/"] # mount boot disk on /
  - ["/dev/sda2", "/data_disk"] # mount data_disk
What am I missing?
Do I have to tell fstab something more?
|
I see you have changed the partitioning type from MBR to GPT. Is your firmware in legacy/CSM/BIOS mode, or did you also change the firmware type to UEFI? In any case, you will need to reinstall your bootloader. If you are using BIOS mode (not UEFI), you will need to add a GRUB BIOS boot partition, because the sectors that were used for storing GRUB Stage 1.5 are now occupied by the GPT. If you are using UEFI firmware, you will need to add a FAT formatted EFI System Partition (ESP) from the firmware to boot from.
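For the BIOS-mode case, the resulting layout would need to look something like the following hypothetical sfdisk script (not taken from the question; the GUIDs are the standard GPT type codes for a GRUB BIOS boot partition and Linux filesystems, and the sizes are only illustrative):

```
label: gpt
size=2048,     type=21686148-6449-6E6F-744E-656564454649, name="bios_grub"
size=10000384, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4, name="root"
               type=0FC63DAF-8483-4772-8E79-3D69D8477DE4, name="data"
```

After writing such a table, grub-install /dev/sda (run from the installed system or a chroot) reinstalls GRUB into the BIOS boot partition.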
| How to boot after partitioning |
1,644,981,691,000 |
I have a problem while trying to extend the size of one of the partitions on an old VM in the cloud. This is the output of lsblk so you can get an idea of it:
root@Desktop:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 1,9G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 18,1G 0 part
├─vg_root-lv_swap 254:0 0 1,9G 0 lvm [SWAP]
├─vg_root-lv_usr 254:1 0 4,8G 0 lvm /usr
├─vg_root-lv_home 254:2 0 244M 0 lvm /home
├─vg_root-lv_opt 254:3 0 488M 0 lvm /opt
├─vg_root-lv_XXXXXXX 254:4 0 244M 0 lvm /XXXXXXX
├─vg_root-lv_tmp 254:5 0 1,4G 0 lvm /tmp
├─vg_root-lv_var 254:6 0 976M 0 lvm /var
└─vg_root-lv_varlog 254:7 0 976M 0 lvm /var/log
sdb 8:16 0 20G 0 disk
sdc 8:32 0 20G 0 disk
sr0 11:0 1 1024M 0 rom
The output of df -h is as follows:
root@Desktop# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 1,9G 800M 976M 46% /
udev 10M 0 10M 0% /dev
tmpfs 1,6G 77M 1,5G 5% /run
/dev/dm-1 4,6G 654M 3,7G 15% /usr
tmpfs 4,0G 0 4,0G 0% /dev/shm
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 4,0G 0 4,0G 0% /sys/fs/YYYYYY
/dev/mapper/vg_root-lv_XXXXXXXX 233M 2,1M 215M 1% /XXXXXXX
/dev/mapper/vg_root-lv_opt 465M 72M 365M 17% /opt
/dev/mapper/vg_root-lv_home 233M 2,1M 215M 1% /home
/dev/mapper/vg_root-lv_tmp 1,4G 2,2M 1,3G 1% /tmp
[[/dev/mapper/vg_root-lv_var 945M 928M 0 100% /var]]
/dev/mapper/vg_root-lv_varlog 945M 29M 852M 4% /var/log
The partition between the [[ ]] is the one I am aiming to extend. Is there any way to shrink the other partitions and make it bigger?
|
You have to unmount the filesystems before you do this e.g. by switching to emergency mode (or booting into it):
umount /var/log
umount /var/
lvresize --resizefs --size -500M vg_root/lv_varlog
lvresize --resizefs --size +500M vg_root/lv_var
| Cannot extend the size of a partition: |
1,644,981,691,000 |
I have purchased a dedicated server from nocix - AMD Quadcore 120SSD + 2TB Preconfigured
It is running CentOS 6.8 and I have installed apache 2.2, mysql 5.6 and php 7.0.14.
I have run out of space on the main 120GB SSD and I am trying to mount the 2TB drive on to a new folder. I am completely new to dedicated servers and have only ever used shared hosting with cpanel.
Basically I have partitioned the 2TB drive as a single partition so it was /dev/sdb and I now have /dev/sdb1, after formatting I created the dir /newdrive and tried this command: mount /dev/sdb1 /newdrive
I then got the error: error writing /etc/mtab.tmp: No space left on device
I have no idea what is going on or how to fix it..
|
Basically, you are completely out of space on your first drive, so not even mounting the new drive will help. You need to make some space available on the old drive by deleting files from somewhere - the /tmp directory and other temp space (e.g. /var/tmp, /var/spool) would be the first place I would look to clean up. Once you have even just a little bit of space available, you can then mount the new drive with the same command you're trying to use.
Please note that if you're using multiple partitions, you will need to free up space in the partition containing /etc, which may be different from the partition(s) containing the temp spaces I referenced above.
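To find candidates for deletion, a sketch along these lines helps (the -x flag keeps du from crossing into other mounted filesystems; sort -rh assumes GNU coreutils):

```shell
# list the 20 largest files/directories on the root filesystem
du -xah / 2>/dev/null | sort -rh | head -20

# or check the usual temp locations directly
du -sh /tmp /var/tmp /var/spool 2>/dev/null
```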
| Make use of second hard drive provided by host |
1,477,418,346,000 |
I found a weird hardlink on a CentOS 6.5 VPS server. It's man-made, I assume, but I'm not the one who did it.
df tells some info.
[root@root]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/simfs 209715200 128660820 81054380 62% /
none 4194304 4 4194300 1% /dev
none 4194304 0 4194304 0% /dev/shm
/dev/simfs 209715200 128660820 81054380 62% /var/www/username/data/www/test.site.biz/photo
ls -li tells nothing useful
[root@vz65646 test.site.biz]# ls -li
total 7952
79435160 drwxr-xr-x 2 someuser someuser 8130560 Oct 25 20:52 photo
The hardlinked folder is photo. By mistake I ran rm -rf test.site.biz, which led to bad stuff happening. Namely, the photo directory in the other place went clean.
I assume restoring data is not possible. Yet, I'd like to figure out what happened out here so I won't repeat the same mistake twice.
Any hints are much appreciated.
|
You have two mounted filesystems with similar characteristics: same device name, same disk usage. These are very likely to be, in fact, the same device. This can happen if you mount the same network filesystem in different locations, for example. Given that this is a local filesystem, as sourcejedi identified in a comment, this is very likely to be a bind mount, created by a command like mount --bind /origin /var/www/username/data/www/test.site.biz/photo.
If your system is recent enough, you can use findmnt to confirm that it's a bind mount. But anyhow, most filesystem types can't be mounted at the same time at different locations, so having the same device is proof enough that this is a bind mount.
A bind mount provides a view of a directory tree in a different location. In terms of accessing the files under the bind mount, it's similar to having a symbolic link in the tree, i.e. /var/www/username/data/www/test.site.biz/photo/somefile is the same file as /origin/somefile, as if /var/www/username/data/www/test.site.biz/photo was a symbolic link to /origin. But /var/www/username/data/www/test.site.biz/photo is not a symbolic link, it's a directory.
Since /var/www/username/data/www/test.site.biz/photo is a directory, a recursive traversal descends into it. So rm -rf deleted the files under /original, because /original and /var/www/username/data/www/test.site.biz/photo are the same directory that just happen to be shown in different locations.
| Simfs hardlinks whereabouts |
1,477,418,346,000 |
Context:
I'm currently adding an SSD to my old laptop, in order to boost it. I'd use this opportunity to fresh re-install my both OS, and give a try to a new distro.
I've now two drives of 250Gio each: /dev/sda (SSD) and /dev/sdb (physical drive).
I plan to use the solid state drive for the 3 OS (Win7, Ubuntu 16.04 and Fedora 24), and the physical one for my (shared) files.
Question: How many partitions (physical or logical) are needed on my SSD, assuming that I start from a void disk?
(All my OS and data are currently on the physical drive. I'll empty, format, and then refill it with files once OS install is done.)
My current guess is:
- Primary 1 | 1 Gio | ? | MBR, Grub, etc.
- Primary 2 | 80 Gio | ntfs | Win 7
- Primary 3 | 130 Gio | ext4 | ...
* secondary 3.a. | 60 Gio | Ubuntu
* secondary 3.b. | 60 Gio | Fedora
- Primary 4 | 10 Gio | swap | swap
(I've 8Gio RAM, and will add some empty space between partitions in order to be able to extend this or that later if space is needed.)
|
Assuming legacy MBR; at a minimum, 3; one for each OS.
Linux only requires 1 partition, you can have the /boot and / on the same partition when using MBR, grub and ext3/4 or a few other filesystems.
Windows I believe only requires one partition as well, but generally creates some recovery partitions if you allow it to format the entire disk. I am not sure, offhand, how much control you have over this with the various different versions of Windows.
You do not need a separate /boot (in your example what looks like Primary1), in fact I would let each distro have/maintain their own /boot (this will stop one overwriting anothers kernel). Swap partitions are optional, you may not need one at all or you can use swap files instead (which are generally more flexible as they are easier to resize) or you can keep is as a separate partition.
Also, there is no practical difference between a primary and an extended partition. So if you are worried about the number of partitions you can simply create one extended partition, and use as many logical partitions as you need within that.
You are still free to create as many additional logical partitions as you require.
If you are willing/are able to move to UEFI you can make use of the more modern GPT partition table which does not have the 4 primary partition limit allowing you to effectively create as many partitions as you realistically require. You require one EFI partition, and then at a minimum one per OS but have the freedom to create as many more as you need.
| How many partitions are needed to install 3 OS (2 distro + Win7)? |
1,477,418,346,000 |
I run a dual-boot system, and the Linux distro is Ubuntu 14.04.
I have used GParted to enlarge a logical partition, named /dev/sda6, on which the /home directory is normally mounted. According to the GParted report the operation has been completed successfully. The partition is 85 GiB large, of which 83GiB used and 2 GiB free(d), as intended.
However this comes about with two oddities:
This gain is not recognized after logging in.
As I check the disk usage with df -h, the report says that the partition /dev/sda6, duly mounted on /home, has a size of 85 GiB, of which 83 are used and 0 are available. The use is claimed to be 100%.
Another oddity is that I can regularly log into my user profile through the graphical user interface. After the credentials are recognized, though, the system stalls and it doesn't splash into the desktop environment.
In order to get the df -h information, I need log in either with my regular identity in any text terminal or with the guest status in the graphic user interface. As a side remark, it doesn't look like data have been corrupted.
How can I fix this situation? The aim is to get the size increase of the partition fully available to the operating system. Thanks for helping me out.
|
You are probably using one of the ext filesystems (the default linux filesystem, usually ext4). Most of the time, when created, it will be created with a specific buffer called reserved blocks.
This reserved space is meant to be writable only by system processes and root, and therefore protects the OS from the disk being filled up by users.
The main purpose of df is to show the amount of disk space available out of a grand total. While it also shows the space used (by user), it doesn't do so with this reserved space.
This buffer is by default 5% of the whole disk. You can check if you have such a buffer with sudo tune2fs -l /dev/sda6 | grep Reserved.
By typing sudo tune2fs -l /dev/sda6 | grep [bB]locks one can also read both the number of reserved blocks and the block size (in B), hence determine the space of the partition taken up by this construction.
This would explain the system seeing 85GiB, but only 83 used and 0 free.
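To put a number on it, here is that computation with hypothetical tune2fs values, chosen so the reservation comes out at roughly the 2 GiB that went missing (your actual figures will differ):

```shell
# values as they would appear in `tune2fs -l` output (made up for illustration):
reserved_blocks=524288   # "Reserved block count"
block_size=4096          # "Block size"

echo "$(( reserved_blocks * block_size / 1024 / 1024 )) MiB reserved"   # 2048 MiB reserved
```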
If you really want, you can set the buffer to a lower value with sudo tune2fs -m 2 /dev/sda6 (2 being an example value in percentage, which by default would be 5).
The better option would be to actually resize so enough disk space is free to be safe. 2GiB of 85GiB is only 2.35%, which isn't a lot and in most cases would fill up relatively fast. If you are sure your space usage will stay stable at 83GiB, then you can use tune2fs to reserve 0% of space for safety, but as soon as your disk fills up then (to 85GiB), you will not be able to log in at all and the machine will probably crash and be harder to repair.
The 5% safety margin is a relatively sane one. So, in this case, I'd make the partition at least 90GiB, but probably even 100 or more, just to have some space to spare for emergencies. Disk space is cheap, your time repairing the problems stemming from a filled-up disk is probably more expensive.
The answers to this question give some more insight into the reasoning.
| Reserved-blocks issue: partition size successfully changed but not recognized by the OS |
1,477,418,346,000 |
I finished the Arch Linux install successfully and installed GNOME. But I think I made my partitions too small:
Disk /dev/sda: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xd96cc977
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 125831167 125829120 60G 83 Linux
/dev/sda2 125831168 134219775 8388608 4G 83 Linux
/dev/sda3 134219776 201328639 67108864 32G 83 Linux
My HDD is 250GB, but that's just the advertised size, so I'm going to round it down to 230GB. That being said, I can add a lot more space to my sda partitions. I installed GParted on a DVD so I can do so.
I have never used this program before, so what I want to know is: is it possible to grow the partitions, and if so, how can I do it?
|
Resizing partitions into the free space after them works pretty well with gparted. Of course you should have a backup for safety, especially when you're not experienced with the procedure. As far as I remember, gparted offers to resize the filesystem after you've resized the partition, that would be the easiest way. If it doesn't, "resize2fs" is the command that you'd use.
| Can I Grow A Linux Partition After Making It & Not Lose Data? |
1,477,418,346,000 |
I have a hard drive that came out of an old Windows PC, which I'm using as a second hard drive. Because it was already formatted as ntfs, I didn't want to reformat it and lose all the data on it, so I just used it as one would a USB drive. I have to get nautilus to mount it every time I log in.
However, the partition with my stuff on it is only using about 330GB of the 700gb on the drive -- the rest is unallocated.
Screenshot of the drive in gparted:
Is there a way to expand this partition to fill the drive without wiping the drive? I have a hunch that I won't be able to get it to use the 200-odd mb before the partition; only expand to fill the space after. Is this correct?
I found mention of a program called Ease US that claims to be able to do this, but it only runs on Windows. It looks pretty similar to GParted, though, so I'm hoping GParted can do the same thing. I'm just too afraid to play around with it for fear of nuking my drive.
|
You can definitely expand an ntfs partition with gparted. Ensure you have ntfsprogs installed (yum/dnf/apt-get install) first.
You can right-click /dev/sda2 in that list and click Resize/Move and go from there. Move the slider as needed to fill in the space.
You might be able to grab up the first 200MB of the drive too with that option, but I've never had to do it myself, so I can't speak from experience.
| Expanding a hard drive partition to fill the drive without wiping the partition? |
1,477,418,346,000 |
I have 3 partitions: sda4 (10 GB), sda5 (15 GB) and sda6 (20 GB). How do I create and mount a virtual hard drive to join several filesystems together under Debian 8?
|
pvcreate /dev/sda4
pvcreate /dev/sda5
pvcreate /dev/sda6
vgcreate bigvolgrp /dev/sda4 /dev/sda5 /dev/sda6
lvcreate -n bigvolume -l 100%FREE bigvolgrp   # -L 45G may fall just short once LVM metadata is accounted for
mkdir /bigstore
mkfs -t ext3 /dev/bigvolgrp/bigvolume
mount /dev/bigvolgrp/bigvolume /bigstore
df -h /bigstore #to verify
| How to combine several partitions into one virtual drive? |
1,477,418,346,000 |
I am running CentOS 6.5 in Hyper-V. It originally started off with a 5GB virtual hard disk (VHDX) but I later decided I wanted to increase the size. I changed it in Hyper-V settings to 20GB, then booted a Partition Magic live CD and changed /dev/sda to 20GB. I then ran the following to increase the LVM:
lvextend -l +100%FREE /dev/mapper/vg_condor-lv_root
I thought this was enough, but it still seems to be limited to the original ~4GB. Here are some details, but please ask for any specific stuff you need, as I'm pretty new to this:
[root@condor leonard]# fdisk -l
Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0006a679
Device Boot Start End Blocks Id System
/dev/sda1 * 1 64 512000 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 64 2611 20458496 8e Linux LVM
Disk /dev/mapper/vg_condor-lv_root: 20.4 GB, 20409483264 bytes
255 heads, 63 sectors/track, 2481 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Disk /dev/mapper/vg_condor-lv_swap: 536 MB, 536870912 bytes
255 heads, 63 sectors/track, 65 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
[root@condor leonard]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_condor-lv_root
3.9G 1.1G 2.6G 29% /
tmpfs 495M 0 495M 0% /dev/shm
/dev/sda1 477M 82M 370M 19% /boot
[root@condor leonard]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
sda 8:0 0 20G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 19.5G 0 part
├─vg_condor-lv_root (dm-0) 253:0 0 19G 0 lvm /
└─vg_condor-lv_swap (dm-1) 253:1 0 512M 0 lvm [SWAP]
Ultimately, how do I correctly increase the space available to the / folder?
|
You still need to resize the filesystem contained in the LV (assuming it's one of the ext filesystems):
resize2fs /dev/mapper/vg_condor-lv_root
If you want to resize a logical volume and its filesystem in a single operation, use fsadm:
fsadm resize /dev/mapper/vg_condor-lv_root
This supports the ext filesystems as well as ReiserFS and XFS.
| Why hasn't my CentOS partition resized? |
1,477,418,346,000 |
I got the following disk
sda9 contains a new Linux installation which I'd like to keep, while sda5 is the old installation which I'd like to free (and subsequently merge with sda4, but that's not the concern here).
sda4 contains a Windows installation which I'd like to keep.
The question, asked to make sure I avoid (GRUB) problems at boot, is:
can I just delete sda5, without any other operation, and nothing will break?
|
Be sure that you run
sudo update-grub
immediately after you make any changes to the partition table. That is not necessary in some cases but it is always safe to do it.
In your case it is better to make all changes from your new Linux installation: delete a partition and then update grub.
If you are going to use Live CD instead you need to update grub by chroot-ing to your new Linux installation (you don't need to reinstall it, only update).
That should work unless you made any unsafe changes to your configuration files (e.g., if you ever made any changes to /etc/fstab, you must check that all partitions there are identified by UUID, not by their names).
Anyway, to be on the safe side, it is always better to have a Live CD at hand in case something goes wrong. Good luck!
| How to safely delete a system partition? |
1,477,418,346,000 |
Sometimes (not often, but it has happened many times) after formatting an SD card with GParted, the SD card becomes inaccessible to a normal user, so I go in as root and change the permissions. It happens especially when changing the file system from, say, FAT32 to ext4. Reformatting doesn't help. Does GParted change the permissions on the disk? If yes, why? If no, then why does it happen to me? I have used other tools; only GParted seems to do that.
|
If you are using mkfs.ext4, you have to pass -E root_owner=your_uid:your_gid; this is usually passed in an 'extra options' textbox in GUI partition tools. If you don't do this (< mkfs 1.42), then the person running the GUI tool will get the permissions. Nowadays, for security, it assigns them to root:root (0:0). If you ever go back to FAT32 or NTFS, you should be aware that their mkfs tools work differently*
fat32 partitions need the option uid=your_uid,gid=your_gid (usually 1000) when mounted if you want to access them. You can use permissions with NTFS, but it requires you to specify 'permissions' as a mount option (the -o switch).
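For example, a hypothetical /etc/fstab line for a FAT32 partition, assuming your uid and gid are both 1000 (check with the id command) and using a made-up UUID:

```
UUID=1234-ABCD  /media/data  vfat  uid=1000,gid=1000,umask=022  0  0
```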
| Does Gparted change permissions? |
1,477,418,346,000 |
I have a USB disk. When I insert it into my PC, it shows up with two partitions: sdb1 and sdb4.
root@debian:~# fdisk -l
Disk /dev/sda: 149.1 GiB, 160041885696 bytes, 312581808 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa350a350
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 206847 204800 100M 7 HPFS/NTFS/exFAT
/dev/sda2 208894 312580095 312371202 149G f W95 Ext'd (LBA)
/dev/sda5 240011264 306278399 66267136 31.6G 7 HPFS/NTFS/exFAT
/dev/sda6 306280448 312580095 6299648 3G 82 Linux swap / Solaris
/dev/sda7 208896 80285695 80076800 38.2G 83 Linux
/dev/sda8 80287744 163878911 83591168 39.9G 83 Linux
/dev/sda9 163880960 240011263 76130304 36.3G b W95 FAT32
Partition table entries are not in disk order.
Disk /dev/sdb: 14.6 GiB, 15610576896 bytes, 30489408 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xc3072e18
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 16128 30489407 30473280 14.5G c W95 FAT32 (LBA)
/dev/sdb4 11580256 13523807 1943552 949M 0 Empty
How can I combine sdb1 and sdb4 into sdb?
It is strange that, although the disk is displayed as sdb, only sdb1 can be detected; sdb4 can't be detected in GParted.
Let us get the MBR of my sdb with the command
dd if=/dev/sdb of=/home/mbr.img bs=512 count=1
Output the mbr.img with xxd.
xxd /home/mbr.img
How can I fix the MBR?
|
Well, if you want them to be /dev/sdb you'll need to remove the partition table entirely. Maybe the simplest way to do that is just to back up the fs and create it anew.
sudo mount /dev/sdb1 /mnt
tar -C/mnt -cf /tmp/sdb .
sudo umount /mnt
sudo dd if=/dev/zero of=/dev/sdb bs=1M count=4
sudo mkfs.vfat /dev/sdb
sudo mount /dev/sdb /mnt
tar -C/mnt -xf /tmp/sdb
...I think that covers it...
There are other solutions - such as fatresize and similar - but they are not as reliable. The real problem is that the FAT filesystem doesn't offer much in the way of intelligence of any kind - whether that be in the way of user-space tools or otherwise. With some research you could figure out how to do this at the byte offset level, but I think you'll find doing the above less hassle.
| How to combine two partitions into one? [closed] |
1,477,418,346,000 |
I installed Kali Linux on my HDD with the installer Option (right after the partition settings)
Use entire disk space(beginners)
After installing Kali, I wanted to install 2 more OSs, but now I can't repartition it anymore. What can I do without reinstalling?
|
You have to unmount the drive before you can resize the partition. You will need to use a live CD/usb. Once you boot using the live CD you can repartition the drive.
If one of the OS that you are installing is windows you probably want to start with that and then add linux. While you can certainly do it in either order it tends to be easier to do windows first.
| Make room on my disk to install another OS |
1,477,418,346,000 |
I have both Ubuntu and Arch installed on my computer, and want to change the labels of the / and /home partitions (four in total), to make it clear which is which. Can this potentially break anything?
The only thing that I can think of is /etc/fstab; this shouldn't be an issue in my case, since it defines partitions by UUID, not label.
|
Hard drives usually don't have labels, it's filesystems that do. Here are the main places where a filesystem label is likely to come up:
In /etc/fstab.
In your bootloader configuration (e.g. /boot/grub/grub.cfg). If your Grub configuration is automatically generated, run update-grub after changing your labels and verify that the result is what you wanted.
Mostly for removable devices: in the configuration of automounting tools (in custom udev rules, as directory name under /media or /run/media/user_name (if not created on the fly), in /etc/pmount.*, in /etc/auto.misc and files referenced from /etc/auto.master, etc.).
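For illustration, fstab entries that do reference labels look like the following (the labels here are hypothetical); entries like these would break after relabelling, whereas UUID-based entries such as yours are unaffected:

```
LABEL=arch-root  /      ext4  defaults  0  1
LABEL=arch-home  /home  ext4  defaults  0  2
```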
| Is there any danger in changing the labels of my hard drives? |
1,477,418,346,000 |
I am using a 16GB SD card for building Ubuntu from scratch for the BeagleBone Black.
So, I have to make two partitions on my SD card: one for the rootfs and a second one for the zImage and other stuff.
Initially I gave 1GB to the rootfs and 15GB to the zImage partition, but Ubuntu was not working with this partitioning. So I gave 6GB to the rootfs and 10GB to the zImage partition.
Now Ubuntu is working fine, but I think 6GB is too much for the rootfs.
What should be the size of partition for Rootfs for best utilization of memory of SD card?
|
I think you have misunderstood something. What zimage commonly refers to is the compiled linux kernel, so this sounds like a boot partition. But that does not need to be very big at all.
Looking at this, it seems that the beaglebone (and I presume the BBB) uses a (small) VFAT partition to boot from. This seems like a common ARM SOC methodology; it will contain a bootloader, some configuration files, and the kernel image -- although that page in fact recommends booting the kernel from the root filesystem, which would require a bootloader that supports FAT and ext. I haven't used uboot but apparently it does.
In any case, if that's what you are referring to as the "Zimage" partition, 100 MB is easily more than enough. This example uses 64 MB. Not GB. MB. Chances are it will still be mostly empty, as even if the kernel zimage is stored there, that won't be any more than ~5 MB, and it will be the biggest thing on the partition. The kernel also makes use of loadable modules, but those are in the root filesystem, not the boot partition.
That leaves the rest of the card for the root filesystem. There is no need to break that up and you might as well make it as big as possible, so I recommend you use the remaining 15.9 GB for it. That's what's actually used by the system, whereas the boot partition is only used briefly at boot and doesn't even need to stay mounted.
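Put together, one possible sfdisk layout for the 16 GB card would be a 64 MB FAT boot partition plus the rest for the root filesystem. The starts and sizes below are illustrative (8192 sectors = 4 MB alignment, 131072 sectors = 64 MB), not BeagleBone-specific requirements:

```
label: dos
/dev/mmcblk0p1 : start=8192,   size=131072, type=c, bootable
/dev/mmcblk0p2 : start=139264,              type=83
```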
| Recommended partition size of SD card for Ubuntu for BeagleBone black [closed] |
1,477,418,346,000 |
I have one HDD. 2 partitions with Kali and Windows 8. GRUB is installed. Both OS work fine.
I want to remove my Kali partition, and get it as a VM in my Windows 8 partition (I'm using VMware workstation).
Is there a way to virtualize this Kali partition without damaging my Windows 8 one? Windows does not see the Linux partition at all (at least with the Disk Manager; maybe other software such as GParted or EasyPart can see it).
Another solution is to erase my Kali partition and re-extend my Windows 8 one, but I'm afraid the Windows 8 partition would not boot afterwards. And it's less fun doing it this way.
But the Kali partition does not have important stuff that I want to keep/save. So if it's the only way to do it, I can format the kali partition.
Hope I'm understandable enough and at the right place to ask this kind of question.
Many thanks.
|
You can use VMware Converter to convert the partition to a VM. After that, you would still need to remove the Kali partition and extend your Windows partition.
If Windows doesn't see the Linux partition, try QTParted from a Knoppix LiveCD. When the partition is removed, you should be able to extend your Windows partition. I have done this several times, and I don't think extending a Windows partition has big risks associated with it. If you want to be really sure, take a backup first.
| Dual boot win/kali - virtualize the linux partition |
1,477,418,346,000 |
I tried to install Linux (Mint distro) on my laptop computer after receiving a secondary disk. After installing Mint, I just rebooted the system.
Nothing was happening (actually, I was getting a message, from the BIOS I guess, saying "No bootable disk" or something similar).
So then I tried to boot on an Arch pendrive, but I was getting a black screen. I changed BIOS mode to legacy (from UEFI), and added nomodeset to the boot command line in GRUB (and vga=0x37F for comfort). I was finally booting on something.
I did a fdisk -l and found that there was no Windows partition table. However, I have no idea why Mint wasn't booting, as I had Linux partitions.
I then did testdisk /dev/sda (where sda is my Windows disk), chose EFI GPT, then Analyze. My partitions seemed to be recovered, so I chose each of them and wrote the table. After rebooting, in testdisk, only the ESP and recovery partitions were there.
I then started to re-partition my /dev/sdb to install Linux. As I wanted to format my partitions, I did a mkfs.ext4 on sda1 and sda2 instead of sdb1 and sdb2.
Is there any way to retrieve my Windows system (at least my data, as I didn't format the data partition, only ESP and recovery)?
|
There seem to be multiple things going on here. But basically it sounds like you want to get your partition table back so you can at least get to your data partition that you did not run mkfs on.
So first, the answer is yes--if you know the original layout of the partition table, you can recreate it on the disk and then access those partitions that have not been modified. However, knowing how to recreate the partition table is kind of a problem unless you planned ahead. For example, this works:
# Save partition table to a file
sfdisk -d /dev/sda > partitions.txt
[Something destroys partition table.]
# Recreate partition table
sfdisk /dev/sda < partitions.txt
The problem is if you didn't do the first step, you might need to be clever about how to find the information you need to recreate the table.
Second potential problem is that whatever you did recreated the boundaries for sda1 and sda2 before mkfs. If that is the case, and sda1 or sda2 overlapped your data partition when it did a mkfs, you're going to have to look into data recovery techniques to see if you can get anything back. It could be very, very hard.
I haven't used testdisk, but after looking at the page, frankly it doesn't sound good. If testdisk created the other partitions for you, and the only remaining space should have been for the one you want, you can try to just make a partition there and see if you can mount it under a Live CD.
| Wiped out Windows partition table |
1,477,418,346,000 |
I did a pvresize (decrease) on /dev/sda2 so that I can have about 48 GBytes free... and I wanted to create a partition in that free space, but the /dev/sda3 device wasn't created. Why? Do I need a reboot for it? (I didn't reboot after reducing the PV.)
[root@SERVER ~]# parted -s /dev/sda print free
Model: ATA Hitachi HTS72503 (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Number Start End Size Type File system Flags
32,3kB 1049kB 1016kB Free Space
1 1049kB 538MB 537MB primary ext4 boot
2 538MB 272GB 271GB primary lvm
272GB 320GB 48,3GB Free Space
[root@SERVER ~]#
[root@SERVER ~]#
[root@SERVER ~]# parted /dev/sda print
Model: ATA Hitachi HTS72503 (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 538MB 537MB primary ext4 boot
2 538MB 272GB 271GB primary lvm
[root@SERVER ~]#
[root@SERVER ~]#
[root@SERVER ~]# parted /dev/sda mkpart primary 272GB 320GB
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of
your changes until after reboot.
[root@SERVER ~]#
[root@SERVER ~]#
[root@SERVER ~]#
[root@SERVER ~]# parted /dev/sda print
Model: ATA Hitachi HTS72503 (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Number Start End Size Type File system Flags
1 1049kB 538MB 537MB primary ext4 boot
2 538MB 272GB 271GB primary lvm
3 272GB 320GB 48,3GB primary
[root@SERVER ~]#
[root@SERVER ~]# parted -s /dev/sda print free
Model: ATA Hitachi HTS72503 (scsi)
Disk /dev/sda: 320GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Number Start End Size Type File system Flags
32,3kB 1049kB 1016kB Free Space
1 1049kB 538MB 537MB primary ext4 boot
2 538MB 272GB 271GB primary lvm
3 272GB 320GB 48,3GB primary
320GB 320GB 352kB Free Space
[root@SERVER ~]#
[root@SERVER ~]#
[root@SERVER ~]#
[root@SERVER ~]# env LC_MESSAGES=EN ls -la /dev/sda*
brw-rw----. 1 root disk 8, 0 Jun 29 18:53 /dev/sda
brw-rw----. 1 root disk 8, 1 Jun 28 12:56 /dev/sda1
brw-rw----. 1 root disk 8, 2 Jun 28 12:56 /dev/sda2
[root@SERVER ~]#
[root@SERVER ~]#
[root@SERVER ~]#
[root@SERVER ~]# partprobe
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy). As a result, it may not reflect all of your changes until after reboot.
[root@SERVER ~]#
[root@SERVER ~]#
[root@SERVER ~]# env LC_MESSAGES=EN ls -la /dev/sda*
brw-rw----. 1 root disk 8, 0 Jun 29 18:55 /dev/sda
brw-rw----. 1 root disk 8, 1 Jun 28 12:56 /dev/sda1
brw-rw----. 1 root disk 8, 2 Jun 28 12:56 /dev/sda2
[root@SERVER ~]#
[root@SERVER ~]# env LC_MESSAGES=EN fdisk -l
Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0007e24d
Device Boot Start End Blocks Id System
/dev/sda1 * 1 66 524288 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 66 33039 264859648 8e Linux LVM
/dev/sda3 33039 38914 47185920 83 Linux
[root@SERVER ~]# head -1 /etc/issue
Scientific Linux release 6.4 (Carbon)
[root@SERVER ~]#
UPDATE:
[root@SERVER ~]# kpartx -av /dev/sda
device-mapper: reload ioctl on sda1 failed: Invalid argument
create/reload failed on sda1
add map sda1 (0:0): 0 1048576 linear /dev/sda 2048
device-mapper: reload ioctl on sda2 failed: Invalid argument
create/reload failed on sda2
add map sda2 (0:0): 0 529719296 linear /dev/sda 1050624
device-mapper: reload ioctl on sda3 failed: Invalid argument
create/reload failed on sda3
add map sda3 (0:0): 0 94371840 linear /dev/sda 530769920
[root@SERVER ~]#
[root@SERVER ~]# env LC_MESSAGES=EN ls -la /dev/sda*
brw-rw----. 1 root disk 8, 0 Jun 29 22:05 /dev/sda
brw-rw----. 1 root disk 8, 1 Jun 28 12:56 /dev/sda1
brw-rw----. 1 root disk 8, 2 Jun 28 12:56 /dev/sda2
[root@SERVER ~]#
[root@SERVER ~]#
[root@SERVER ~]#
[root@SERVER ~]# sh rescan-scsi-bus.sh
WARN: /usr/bin/sg_inq not present -- please install sg3_utils
or rescan-scsi-bus.sh might not fully work.
Host adapter 0 (ata_piix) found.
Host adapter 1 (ata_piix) found.
Host adapter 2 (ahci) found.
Host adapter 3 (ahci) found.
Host adapter 4 (ahci) found.
Scanning SCSI subsystem for new devices
Scanning host 0 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning for device 0 0 0 0 ...
OLD: Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: MATSHITA Model: DVD/CDRW UJDA775 Rev: CB03
Type: CD-ROM ANSI SCSI revision: 05
Scanning host 1 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning host 2 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning for device 2 0 0 0 ...
OLD: Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: Hitachi HTS72503 Rev: GHBO
Type: Direct-Access ANSI SCSI revision: 05
Scanning host 3 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
Scanning host 4 for SCSI target IDs 0 1 2 3 4 5 6 7, all LUNs
0 new device(s) found.
0 device(s) removed.
[root@SERVER ~]#
[root@SERVER ~]#
[root@SERVER ~]# env LC_MESSAGES=EN ls -la /dev/sda*
brw-rw----. 1 root disk 8, 0 Jun 29 22:05 /dev/sda
brw-rw----. 1 root disk 8, 1 Jun 28 12:56 /dev/sda1
brw-rw----. 1 root disk 8, 2 Jun 28 12:56 /dev/sda2
[root@SERVER ~]#
|
You are running an old version of parted which still uses the BLKRRPART ioctl to have the kernel reload the partition table, instead of the newer BLKPG ioctl. BLKRRPART only works on a disk that does not have any partitions in use, hence, the error about informing the kernel of the changes, and suggesting you reboot.
Update to a recent version of parted and you won't get this error, or just reboot for the changes to take effect, as the message said. Depending on how old the util-linux package is on your system, you may be able to use partx -a or, for more recent releases, partx -u to add the new partition without rebooting.
| Why doesn't /dev/sda3 created? |
1,477,418,346,000 |
I couldn't find a way to customize the partition schema during installation of CentOS 6.4, all these 3 options lead me directly to the "confirm disk change" screen.
Was it configured to be so? 'Cause I saw a line that says The default layout is suitable for most users.
If not, how can I setup my own partitioning schema?
|
The TUI installer does not have such an option to customize the partition layout; you can only customize the partition layout in the GUI installer. See the following note in the RHEL installation guide.
Important — Installing in text mode
If you install Red Hat Enterprise Linux in text mode, you can only use the default partitioning schemes described in this section. You cannot add or remove partitions or file systems beyond those that the installer automatically adds or removes. If you require a customized layout at installation time, you should perform a graphical installation over a VNC connection or a kickstart installation.
Furthermore, advanced options such as LVM, encrypted filesystems, and resizable filesystems are available only in graphical mode and kickstart.
RHEL 6 Installation Guide - 16.15. Disk Partitioning Setup
| netinstall media of CentOS 6.4 is broken? |
1,477,418,346,000 |
I want to install Ubuntu 12 and I would like to partition my hard drive so that the OS lives in its own partition and the data in a different one. Thus if the system breaks for some reason, the data will remain intact. Is there a better way of doing this?
|
I would recommend, especially since you're not an advanced user, a separate /home partition, and periodic backups to optical media: cheap, easy and somewhat reliable. If you really have important data, I would think of something else, though.
| Ubuntu 12 installation - partitions |
1,477,418,346,000 |
I'm running Debian (wheezy) on my Raspberry Pi, and I apparently allocated something incorrectly when setting it up a while back. I have a 16GB SD card, and I'm only using a couple GB for the root partition.
When I plug the card into my Mac card reader, this is what I see.
It looks like I have a large chunk of unused space. How do I go about reclaiming this without wiping out my existing data/settings? Looking for command-line instructions.
|
You can use the raspi-config command and select "expand_rootfs".
| I'm running out of room on my root partition. How do I expand it with free space? |
1,477,418,346,000 |
Xubuntu 12.10
XFCE with Greybird theme
Hello,
this is my GParted screenshot:
Could anyone please give me any advice on how to make changes to have more space available for use in Linux? Let's say I manage to take 10 GB out of the /dev/sda3 partition which is currently formatted as ntfs; how would I proceed then?
|
To have more space for your Linux installation you need to expand sda6. Having freed up 10 GB by shrinking sda3, you would then expand sda4 by 10 GB and expand sda6 to fill up all of sda4.
However, resizing existing partitions, especially NTFS ones, always bears the risk of losing all data on that partition! I don't know anybody who ever experienced loss of data, but there is always that risk, so better prepare a backup first.
| How should I partition my hard drive? |
1,477,418,346,000 |
I was trying to install Linux Mint from a live USB and made a stupid mistake.
I created a master boot record and my HDD partition became unallocated.
After rebooting from the live OS, I'm unable to get to the boot menu.
Is there any way to recover all my data? Right now I can only boot to live-usb.
|
First, to avoid messing up further, you should back up an entire image of the disk (provided you have a bigger disk to store it on). For this, several solutions are proposed in this question; the last time I did it, I used dd. Once you are sure you can restore the image in case of problems, you can use testdisk to redetect the partition table and fix it.
This question for instance provides a solution.
| gparted partition master boot record corrupt |
1,477,418,346,000 |
This question follows Unable to mount /home/ partition after reinstalling grub after reinstalling windows 7, where the diagnosis was that installing Windows 7 deleted my /home partition, lovingly called /dev/sda3.
Since almost nothing has been done with this computer since the incident, we can expect that the content of the partition is still intact and that it is only unusable for the moment.
The mission is to try to rescue the files that were inside this partition by restoring it to its original ext4 format.
Does anyone know how to proceed?
|
Right off the bat make a dd disk image of the drive, and work with that instead of the drive itself. That lets you experiment.
dd if=/dev/sda3 bs=1M > sda3.img
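The same dd invocation works against an ordinary file, which makes it easy to convince yourself the image really is a byte-for-byte copy before experimenting on it (a small demonstration; the filenames are arbitrary):

```shell
# Create a tiny stand-in "partition", image it, and verify the copy.
truncate -s 1M source.img
printf 'important data' | dd of=source.img conv=notrunc 2>/dev/null
dd if=source.img of=clone.img bs=64K 2>/dev/null
cmp source.img clone.img && echo identical    # cmp is silent when they match
```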
Beyond that I'm not sure. I'd hit google. Might look at it later.
Edit: http://www.cgsecurity.org/wiki/TestDisk looks promising.
| how to restore a logical partition to its original ext4 format |
1,477,418,346,000 |
I would like to know
What is the exact meaning of primary partitions? Why are they named so, and why are they restricted to 4?
What is meant by extended partitions? Why are they named so, and how many extended partitions can a hard disk have?
What is meant by logical partitions? Why are they named so, and how are they counted?
What are the advantages of this software partitioning?
Is it possible to install an OS (Linux/Windows) on all partitions? If not, why?
|
Hard drives have a built-in partition table in the MBR. Due to the structure of that table, the drive is limited to four partitions. These are called primary partitions.
You can have more partitions by creating virtual (called logical) partitions inside one of the four primary partitions. There is a limit of 24 logical partitions.
The partition you choose to split into logical partitions is called the extended partition, and as far as I understand you can have only one.
The advantage to logical partitions is quite simply that you can have more than 4 partitions on a disk.
You should be able to install any OS on all of the partitions.
See this page for more details
In the current IBM PC architecture, there is a partition table in the drive's Master Boot Record (the section of the hard drive that contains the commands necessary to start the operating system), or MBR, that lists information about the partitions on the hard drive. This partition table is then further split into 4 partition table entries, with each entry corresponding to a partition. Due to this it is only possible to have four partitions. These 4 partitions are typically known as primary partitions.
To overcome this restriction, system developers decided to add a new type of partition called the extended partition. By replacing one of the four primary partitions with an extended partition, you can then make an additional 24 logical partitions within the extended one.
| Meaning of hard disk drive software partitions? [closed] |
1,477,418,346,000 |
I have to update some outdated embedded systems, but the RAUC update contains four partitions, while the old systems have only three partitions.
The additional partition is at the start of the disk, and I cannot flash the devices with an external adapter.
What I have is SSH access to the existing Linux on the device.
Could I change the partition table somehow from within the running system and thereby move the system partition?
Or could I somehow dd the whole disk with a new image?
I just cannot get my head around this problem and I am not sure, if I am missing a good solution here.
|
I did something similar on an embedded system. What saved me was that the compressed image of the new disk (with all its partitions) was small enough to keep in memory.
What I did was to patch the initramfs to include a custom script. At boot, before mounting anything, it copied the (compressed) disk image into a ramfs filesystem, and decompressed it to dd of=/dev/<disk>, completely overwriting everything, including new partitioning.
(I had to struggle a bit to retain certain files. In the end I did a tarball of what I wanted to retain, put this as well in the tmpfs, and untarred this onto the new filesystem. It's working pretty well.)
I'm sure that there are prettier solutions, but this worked for me.
[Edited to add:]
Another option would be to add a small script in the initramfs that would pull in the disk image over the network. You'd have to figure out IP settings etc. without the benefit of the full system, which can be awkward. But I think that putting a script in the initramfs is probably your best option, as it can run from RAM without mounting any disks, so that you can overwrite the lot.
| Handle Partition Changes Embedded System |
1,477,418,346,000 |
I have a 4 TB Seagate drive. Apart from 22 GiB of other partitions, the rest is for sda1.
The problem? I'm missing about 240 GiB of sda1! I can't see where it is.
The whole sda1 partition has a filesystem on it:
root@nas2:/disk1# resize2fs /dev/sda1
resize2fs 1.46.6 (1-Feb-2023)
The filesystem is already 970893568 (4k) blocks long. Nothing to do!
And here is the output of tune2fs:
root@nas2:/disk1# tune2fs -l /dev/sda1
tune2fs 1.46.6 (1-Feb-2023)
Filesystem volume name: media
Filesystem revision #: 1 (dynamic)
Inode count: 242728960
Block count: 970893568
Reserved block count: 48544677
Overhead clusters: 15514514
Free blocks: 90623795
Free inodes: 242586218
First block: 0
Block size: 4096
Fragment size: 4096
fdisk -l shows total size of sda1 as 3703 GiB.
root@nas2:/disk1# fdisk -l /dev/sda1
Disk /dev/sda1: 3.62 TiB, 3976780054528 bytes, 7767148544 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
but df -h -BK shows 3459959268K used + 167361856K available for sda1, which sums to 3459 GiB:
root@nas2:/disk1# df -h -BK
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 3821516216K 3459959268K 167361856K 96% /srv/dev-disk...
That's a difference of 244 GiB on a 3726 GiB hard drive. Six percent of the disk is missing!
Where do you think it is?
|
You have 970893568 blocks of 4096 bytes on the partition, which is 3976780054528 bytes, or about 3703.67 GiB.
You need space for the inode table; read about inodes, which store a file's metadata (e.g. access rights).
You have an inode count of 242728960, i.e. 242728960 possible inodes for the partition.
Each inode uses 256 bytes, so you need 242728960 * 256 / 4096 = 15170560 blocks for the inode table: ~57.87 GiB.
Then you have reserved blocks for root:
Reserved block count: 48544677 blocks * 4096 bytes / 1024^3 = ~185 GiB
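The two calculations above can be reproduced with plain shell arithmetic (the numbers come from the tune2fs output in the question; the 256-byte inode size is the ext4 default and an assumption here):

```shell
inode_count=242728960
inode_size=256                                   # assumed ext4 default
block_size=4096
inode_blocks=$(( inode_count * inode_size / block_size ))
echo "inode table: $inode_blocks blocks"         # 15170560 blocks, ~57.87 GiB

reserved_blocks=48544677
reserved_gib=$(( reserved_blocks * block_size / (1024 * 1024 * 1024) ))
echo "reserved for root: ~$reserved_gib GiB"     # ~185 GiB
```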
Do you have a journal? How many blocks does the journal use?
sudo dumpe2fs -h /dev/sda1
Sorry, I can't comment yet.
| Partition size differ in fdisk and df by 240 GB. why? |
1,477,418,346,000 |
Beware, noob question here... Whilst installing Arch Linux for a second time, I forgot to set the bootable flag on the first partition. I do not want to lose any data, but how can I add the bootable flag to the partition? So far, I've used fdisk, but it produces an error when I try to add the bootable flag.
[ arch /boot ]# fdisk /dev/sda1
Welcome to fdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
This disk is currently in use - repartitioning is probably a bad idea.
It's recommended to umount all file systems, and swapoff all swap
partitions on this disk.
The device contains 'ext4' signature and it will be removed by a write command. See fdisk(8) man page and --wipe option for more details.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xfaeee4ac.
Command (m for help): print
Disk /dev/sda1: 500 MiB, 524288000 bytes, 1024000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xfaeee4ac
Command (m for help): a
No partition is defined yet!
Command (m for help):
Then using parted, I also receive a similar error (suggesting it thinks /dev/sda1 was never or incorrectly partitioned):
[ blackarch /boot ]# parted /dev/sda1
GNU Parted 3.5
Using /dev/sda1
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: Unknown (unknown)
Disk /dev/sda1: 524MB
Sector size (logical/physical): 512B/512B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 524MB 524MB ext4
(parted) set /dev/sda1 boot
align-check disk_toggle mklabel mktable print rescue resizepart select toggle version
disk_set help mkpart name quit resize rm set unit
(parted) set /dev/sda1 set boot on
Error: Partition doesn't exist.
(parted)
What's the fix here? I'm sure it's very simple...
|
Whenever you run a partitioning tool, you should use a whole-disk device like /dev/sda instead of a device referring to a particular partition, like /dev/sda1.
Otherwise, you will effectively be asking the tool to build a second partition table inside a partition. You certainly can do that if you wish, but it's not very useful except in very specific circumstances, e.g. when you are planning to use a partition as a virtual disk in a Virtual Machine, and want to pre-partition the virtual disk before the VM is fully installed.
A partition table usually lies outside the actual partitions it controls. So, to set the bootable flag on /dev/sda1, you'll need to do either:
fdisk /dev/sda
or:
parted /dev/sda
If you are planning to use the GRUB bootloader and install it into the Master Boot Record, note that the code that normally checks for a partition's boot flag is in the boot code of the standard Windows Master Boot Record. When you install GRUB into the MBR, that code is replaced with GRUB's code, which will completely ignore any boot flags anyway.
So if you plan to use GRUB, setting the boot flag will not be necessary and you can skip that step entirely, unless your system's BIOS specifically checks for that flag (which would be somewhat unusual).
| Fix Primary Parition to Become Bootable |
1,477,418,346,000 |
I'm trying to remove the dedicated /boot partition and merge it into root partition /.
I found https://askubuntu.com/questions/741672/how-do-i-merge-my-boot-partition-to-be-a-part-of-the-partition already, but that doesn't seem to help.
What's the situation:
# uname -a
Linux c02 6.1.0-12-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.52-1 (2023-09-07) x86_64 GNU/Linux
# fdisk -l /dev/sda
Disk /dev/sda: 4 TiB, 4398046511104 bytes, 8589934592 sectors
Disk model: QEMU HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 1673AA12-2A54-4718-AF1E-58FE670A87E3
Device Start End Sectors Size Type
/dev/sda1 2048 4095 2048 1M BIOS boot
/dev/sda2 4096 2007039 2002944 978M Linux filesystem
/dev/sda3 2007040 10008575 8001536 3.8G Linux swap
/dev/sda4 10008576 8589932543 8579923968 4T Linux filesystem
The following works and let me reboot the machine successfully:
# copy content of boot partition to root
cp -a /boot /boot.bak
umount /boot
rm -rf /boot
mv /boot.bak /boot
# update-grub
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-6.1.0-12-amd64
Found initrd image: /boot/initrd.img-6.1.0-12-amd64
Found linux image: /boot/vmlinuz-6.1.0-10-amd64
Found initrd image: /boot/initrd.img-6.1.0-10-amd64
Warning: os-prober will not be executed to detect other bootable partitions.
Systems on them will not be added to the GRUB boot configuration.
Check GRUB_DISABLE_OS_PROBER documentation entry.
done
But when I additionally delete the physical boot partition, reboot fails:
# fdisk /dev/sda
Welcome to fdisk (util-linux 2.38.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
This disk is currently in use - repartitioning is probably a bad idea.
It's recommended to umount all file systems, and swapoff all swap
partitions on this disk.
Command (m for help): d
Partition number (1-4, default 4): 2
Partition 2 has been deleted.
Command (m for help): w
The partition table has been altered.
Syncing disks.
Which results in a grub error:
error: no such partition.
grub rescue>
When re-installing GRUB via apt-get install --reinstall grub-pc, I then get the following GRUB error:
error: attempt to read or write outside of disk `hd0`.
grub rescue>
What's wrong here?
|
update-grub will actually only update the GRUB configuration file at /boot/grub/grub.cfg.
The GRUB core image (which is mostly within your BIOS boot partition) contains a partition number and a pathname. Those tell GRUB how to find the /boot/grub directory, and thus the GRUB configuration file and the GRUB modules like normal.mod.
Unmounting the /boot partition had no effect on GRUB, as GRUB does its job before the Linux kernel even starts. Since you did not update the information encoded into the GRUB core image (by re-running grub-install /dev/sda after unmounting the old /boot filesystem and moving /boot.bak in its place), GRUB simply kept using the old /boot partition until you removed it. If you had made any changes to the GRUB configuration before running update-grub in your procedure, you would have noticed that the changes did not actually take effect.
So, the thing to do after mv /boot.bak /boot should not be update-grub, but:
grub-install /dev/sda
You achieved effectively the same by running apt-get install --reinstall grub-pc. But now the error message is attempt to read or write outside of disk 'hd0'. That tells me GRUB is using BIOS routines to access the disk, and apparently QEMU's BIOS emulation routines have not been updated to handle >2TB disks, so you are hitting the 2 TB limit.
(And yes, I've analyzed a vanilla Debian 12 grub-pc installation. The only modules embedded within the GRUB core image in a simple installation like this are fshelp.mod, the appropriate filesystem driver module (e.g. ext2.mod), the appropriate partition table type module (e.g. part_msdos.mod or part_gpt.mod) and the biosdisk.mod. Although there would be a BIOS-independent ahci.mod or ata.mod available, they are not used by the default installation.)
Unfortunately, when your root filesystem is sized at ~4 TB, you really should switch to booting in UEFI mode. To do that with Debian's QEMU (without Secure Boot support), you'll need to have the ovmf package installed, and to change the <os> section of your VM's XML configuration file to something like:
<os>
<type arch='x86_64' machine='pc-q35-5.2'>hvm</type>
<loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE_4M.fd</loader>
<nvram template='/usr/share/OVMF/OVMF_VARS_4M.fd'>/wherever/you/keep/your/VMs/vmname.nvram.fd</nvram>
<boot dev='hd'/>
</os>
Then you could
reuse the space of your BIOS boot and former /boot partitions to create an EFI System Partition,
add a /etc/fstab line to mount it to /boot/efi,
replace grub-pc and grub-pc-bin packages with grub-efi-amd64 and grub-efi-amd64-bin,
run grub-install --target=x86_64-efi --force-extra-removable /dev/sda
and once successfully booted in UEFI mode, run grub-install /dev/sda again to ensure the UEFI NVRAM boot variable is set correctly. Then install and learn to use efibootmgr to manage your boot settings from within a running OS in a standardized way.
If you are not yet running in UEFI mode when running grub-install --target=x86_64-efi, the --force-extra-removable will install a second copy of the UEFI GRUB at /boot/efi/EFI/boot/bootx64.efi: the removable media/fallback location.
In case you want Secure Boot too, replace OVMF_CODE_4M.fd with OVMF_CODE_4M.ms.fd and the NVRAM template OVMF_VARS_4M.fd with OVMF_VARS_4M.ms.fd. Add the grub-efi-amd64-signed and shim-signed packages, and re-run grub-install --target=x86_64-efi --force-extra-removable /dev/sda.
| How to merge /boot partition into root partition |
1,477,418,346,000 |
I am trying to automate the installation of Debian 12. It works correctly when I wipe the disk first using a live CD, but I want the preseed ISO to wipe the disk itself and continue with the installation. When there is already an OS (like Debian 9) installed on the disk,
I get the error "Failed to create a file system, the ext4 file system creation in partition #1 of SCSI1 (0,0,0) (sda) failed".
If I look at the pseudo-terminal:
partman: /dev/sda1 is mounted; will not make a filesystem here.
This is the preseed file I am using:
### Partitioning
## Partitioning example
# If the system has free space you can choose to only partition that space.
# This is only honoured if partman-auto/method (below) is not set.
#d-i partman-auto/init_automatically_partition select biggest_free
# Alternatively, you may specify a disk to partition. If the system has only
# one disk the installer will default to using that, but otherwise the device
# name must be given in traditional, non-devfs format (so e.g. /dev/sda
# and not e.g. /dev/discs/disc0/disc).
# For example, to use the first SCSI/SATA hard disk:
d-i partman-auto/disk string /dev/sda
# In addition, you'll need to specify the method to use.
# The presently available methods are:
# - regular: use the usual partition types for your architecture
# - lvm: use LVM to partition the disk
# - crypto: use LVM within an encrypted partition
d-i partman-auto/method string regular
# You can define the amount of space that will be used for the LVM volume
# group. It can either be a size with its unit (eg. 20 GB), a percentage of
# free space or the 'max' keyword.
#d-i partman-auto-lvm/guided_size string max
# If one of the disks that are going to be automatically partitioned
# contains an old LVM configuration, the user will normally receive a
# warning. This can be preseeded away...
d-i partman-lvm/device_remove_lvm boolean true
# The same applies to pre-existing software RAID array:
d-i partman-md/device_remove_md boolean true
# And the same goes for the confirmation to write the lvm partitions.
d-i partman-lvm/confirm boolean true
d-i partman-lvm/confirm_nooverwrite boolean true
# You can choose one of the three predefined partitioning recipes:
# - atomic: all files in one partition
# - home: separate /home partition
# - multi: separate /home, /var, and /tmp partitions
d-i partman-auto/choose_recipe select atomic
# This makes partman automatically partition without confirmation, provided
# that you told it what to do using one of the methods above.
d-i partman-partitioning/confirm_write_new_label boolean true
d-i partman/choose_partition select finish
d-i partman/confirm boolean true
d-i partman/confirm_nooverwrite boolean true
## Controlling how partitions are mounted
# The default is to mount by UUID, but you can also choose "traditional" to
# use traditional device names, or "label" to try filesystem labels before
# falling back to UUIDs.
d-i partman/mount_style select uuid
|
There is a bug in the preseed process where /dev/sda1 was mounted on /media automatically. Adding an early command to umount /media fixes the issue on a server that has a previously installed OS. On a server that doesn't already have an OS, the command will fail because there is no /media to unmount; we can just continue from that and the installation will proceed.
Added this early command in the preseed file:
d-i preseed/early_command string umount /media
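A slightly more defensive variant (my own assumption, not taken from the original report) tolerates the missing /media mount point, so the same preseed file should work both on machines with a previous OS and on blank ones:

```
d-i preseed/early_command string umount /media || true
```

Since preseed/early_command is run through a shell, the `|| true` makes the step succeed even when /media isn't mounted.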
| Debian preseed installation is failing if there a previous OS installed |
1,477,418,346,000 |
I have a drive that is about 1TB big. It is mostly free space. When you add up the size of all the partitions, it is less than 256GB.
I have another drive that is 256GB.
I would like to clone the data from the 1TB drive to the 256GB drive.
Is this possible? Obviously the source is bigger than the destination, but I'm wondering since the 1TB drive is mostly free space and the total of the partitions is less than 256GB.
Right now, I am just getting an error that the drive is out of space.
I have tried it with "conv=sparse" as well as with multiple "bs" sizes including as small as 512.
Source:
Disk /dev/nvme0n1: 953,87 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: SAMSUNG MZVL21T0HCLR-00BL2
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 206847 204800 100M EFI System
/dev/nvme0n1p2 206848 239615 32768 16M Microsoft reserved
/dev/nvme0n1p3 239616 411406335 411166720 196,1G Microsoft basic data
/dev/nvme0n1p4 1999026176 2000406527 1380352 674M Windows recovery environment
/dev/nvme0n1p5 1997025280 1999026175 2000896 977M Linux swap
Partition table entries are not in disk order.
Destination:
Disk /dev/sdd: 238,5 GiB, 256087425024 bytes, 500170752 sectors
Disk model: Extreme Pro
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Command Used:
(Note: I have tried many different sets of parameters)
sudo dd status=progress bs=512 if=/dev/nvme0n1 of=/dev/sdd
|
Thank you for adding the details.
As you have Windows installed, be sure you have current Windows Recovery Media. This USB stick can be created from within Windows: search for "Recovery media" in the Windows start menu, or just run
C:\Windows\System32\RecoveryDrive.exe
This will help if your Windows should not boot anymore after the disk change.
You have a Linux swap partition. It contains no data of value, so this ~1 GB can be freed and the swap recreated later on the new media.
You are copying the whole SSD (1TB) to a smaller one. The amount of contained data would fit as the partitions in total have a smaller size than the destination SSD.
The problem is the last two partitions: they sit at the end of the disk, while the other partitions reside at the beginning, leaving a huge gap in between.
I would suggest to
first move the partitions at the end so they sit right after the other ones. Then all partitions end below the 250 GB mark and the whole structure will fit in the space of the new disk.
After having moved all partitions to the beginning of the disk you can do your dd. You might want to limit the block count to the size of the destination disk to avoid an error message.
dd bs=512 count=500170752 if=/dev/nvme0n1 of=/dev/sdd status=progress
It would be faster to limit the count to just past the end of the last partition, but since you might want to resize or delete some of the partitions, I can't calculate that value here.
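If the layout ends up exactly as shown in the fdisk listing above, the count calculation would look like this (a sketch only — after moving or deleting partitions, recompute from your own final fdisk output):

```shell
# End sector of the last partition to keep (here nvme0n1p3, from the
# fdisk output above); this disk uses 512-byte sectors.
last_end=411406335
count=$((last_end + 1))   # sectors 0..last_end inclusive
echo "$count"
# then: dd bs=512 count="$count" if=/dev/nvme0n1 of=/dev/sdd status=progress
```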
Still missing is the Backup GPT Table at the end of the disk. A partition editor will issue an error like
The backup GPT table is corrupt, but the primary appears OK, so that will be used.
Create the Backup GPT Table by
sudo sgdisk -e /dev/sdd
or just open any partition editor and save the partition table.
Now you can move the recovery partition back to the end of the disk and you are done.
If you are unsure how to move the partitions, just have a look over to this answer at SuperUser.com. I think gparted is a convenient way to manage it safely.
(Thanks @TomYan for the hint with the GPT backup table)
| dd not enough space |
1,477,418,346,000 |
When I boot into my computer (it says "GRUB version 2.something" at the top) , Windows shows as one of the options, yet I have removed the Microsoft folder from /boot/efi/EFI.
I tried to run grub/grub-update but I don't have those binaries in my path.
Fedora Linux 38 (Workstation Edition) x86_64
6.3.8-200.fc38.x86_64
|
With the message GRUB version 2.something (!) at the top of the boot screen that shows Windows, it should be easy to remove the final vestiges of that OS.
What's happened is that when grub was last run (or installed) it found Windows as an alternate bootable OS, and so included it in the list of options. You simply need to re-run its configuration/installation phase and Windows will no longer be listed.
Instructions are at What is the equivalent of 'update-grub' for RHEL, Fedora, and CentOS systems, which appears to simplify down to running one command as root,
sudo -s # Or otherwise become root
grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg # Rebuild the grub menu
Unfortunately I cannot test this command, so check that /boot/efi/EFI/fedora/grub.cfg exists before running grub2-mkconfig.
| How can I remove Windows from the GRUB menu (Fedora)? |
1,477,418,346,000 |
This seems to be a problem regarding the sector size of the mapped device for a logical volume between different machines.
More specifically, I'd like to know if and how the sector size of a mapped device corresponding to a logical volume can be configured.
Here is a description of the problem, comparing two machines.
Machine 1
I have an entire disk image in a logical volume mytestlv in a volume group named MyVolumeGroup.
This disk image has its own partition table (completely independent of the actual disk it's stored on).
For example, fdisk /dev/mapper/MyVolumeGroup-mytestlv shows this:
Disk /dev/mapper/MyVolumeGroup-mytestlv: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: ...
Device Start End Sectors Size Type
/dev/mapper/MyVolumeGroup-mytestlv-part1 2048 4095 2048 1M BIOS boot
/dev/mapper/MyVolumeGroup-mytestlv-part2 4096 4198399 4194304 2G Linux swap
/dev/mapper/MyVolumeGroup-mytestlv-part3 4198400 62914526 58716127 28G Linux filesystem
If I need access to the data on a partition of that disk image, I can use kpartx and mount the partition.
kpartx -a /dev/mapper/MyVolumeGroup-mytestlv creates these devices files, which can be used to mount a partition within that disk image, for example:
/dev/mapper/MyVolumeGroup-mytestlvl
/dev/mapper/MyVolumeGroup-mytestlv2
/dev/mapper/MyVolumeGroup-mytestlv3
Machine 2
This was now copied onto a different machine (content is exactly the same, checksums of both entire /dev/mapper/MyVolumeGroup-mytestlv are identical).
Using the configured block size, fdisk /dev/mapper/MyVolumeGroup-mytestlv shows this:
/dev/mapper/MyVolumeGroup-mytestlv: 30 GiB, 32212254720 bytes, 7864320 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x00000000
Device Boot Start End Sectors Size Id Type
/dev/mapper/MyVolumeGroup-mytestlv-part1 1 7864319 7864319 30G ee GPT
Differences
The logical sector size is 512 on Machine 1, but 4096 on Machine 2.
On Machine 1, blockdev --getss /dev/mapper/MyVolumeGroup-mytestlv returns 512.
On Machine 2, blockdev --getss /dev/mapper/MyVolumeGroup-mytestlv returns 4096.
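The two sector counts are consistent with each other — they are the same 32212254720 bytes divided by different sector sizes:

```shell
bytes=32212254720        # LV size reported by fdisk on both machines
echo $((bytes / 512))    # Machine 1's view (512-byte sectors)
echo $((bytes / 4096))   # Machine 2's view (4096-byte sectors)
```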
Workaround
On Machine 2, forcing the sector size to 512 with fdisk helps it see the partition table correctly.
fdisk --sector-size 512 /dev/mapper/MyVolumeGroup-mytestlv
Disk /dev/mapper/MyVolumeGroup-mytestlv: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: ...
Device Start End Sectors Size Type
/dev/mapper/MyVolumeGroup-mytestlv-part1 2048 4095 2048 1M BIOS boot
/dev/mapper/MyVolumeGroup-mytestlv-part2 4096 4198399 4194304 2G Linux swap
/dev/mapper/MyVolumeGroup-mytestlv-part3 4198400 62914526 58716127 28G Linux filesystem
Unfortunately, other tools such as sfdisk or kpartx don't seem to have those options.
If I create a loopback device explicitly from that LV, then that loopback device has a sector size of 512, and everything works (in fact, versions of losetup that are not too old have an explicit sector size option).
After losetup --show -f /dev/mapper/MyVolumeGroup-mytestlv:
blockdev --getss /dev/loop0 returns 512
kpartx -a /dev/loop0 creates /dev/mapper/loop0p{1,2,3}
Where is that sector size configured?
I'd like to be able to use fdisk, sfdisk, kpartx without relying on an additional losetup.
On Machine 2, cat /sys/block/dm-2/queue/hw_sector_size returns 4096 (/dev/mapper/MyVolumeGroup-mytestlv is actually a soft-link to /dev/dm-2). I've tried to change this using echo 512 > /sys/block/dm-2/queue/hw_sector_size, but this doesn't seem to be possible.
Is there a way to tell LVM that I'd like the sector size for the device corresponding to /dev/mapper/MyVolumeGroup-mytestlv to be 512?
Is that something that affects all the logical volumes in the group or on the system?
I can't see any options with lvcreate or lvdisplay related to this.
|
The default sector size comes from the underlying hardware, as far as I've understood it.
So on Machine 1, the underlying disks is/are apparently in Advanced Format 512e mode, probably because Machine 1's disk controller is not capable of using 4k sectors natively. Another possibility is that on Machine 1, one of the physical volumes of MyVolumeGroup cannot support 4k sector size, and so the entire volume group must use the sector size that its least capable disk can support.
On the other hand, Machine 2 and all the physical volumes of its MyVolumeGroup seem to be fully capable of using 4k sector size natively (Advanced Format 4Kn). It defaults to doing so because that is way more efficient than emulating the old 512-byte sector size.
| LVM and device mapper: Logical Volume device sector size |
1,477,418,346,000 |
This is what my partitions look like. The first one is the /root partition that I need to extend, second one is the /home partition and the third is the unallocated space that I want to add to the /root. How can I do that?
|
It’s a good idea to take backups of any data on your computer that you cannot afford to lose before doing this. It’s not dangerous but it is possible to mess up and lose data.
You need to boot your system into a live environment that has GParted included. An Ubuntu installation iso is ideal or there is also a GParted live iso available. Once booted run GParted
Once you have done that you need to move partition 2 all the way to the right and click apply. This will take some time, so be patient. This step moves the unallocated space so that it sits between partitions 1 and 2.
Next you will be able to expand partition 1 and click apply, making use of all the unallocated space. This should be quite quick.
That’s it! Good luck
Link to similar question: https://askubuntu.com/questions/126153/how-to-resize-partitions
| How can I extend the root/filesystem to take unallocated space? |
1,477,418,346,000 |
I installed Linux Mint in a dual boot alongside Windows 11. Now I am trying to get rid of Windows 11 completely, but I haven't been able to combine the two partitions that I used for personal data storage (one of them is empty, so there's no need to keep its files).
This is what my partitions look like on lsblk:
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
nvme0n1     259:0    0 238,5G  0 disk
├─nvme0n1p1 259:1    0   260M  0 part /boot/efi
├─nvme0n1p5 259:2    0  27,9G  0 part /
├─nvme0n1p6 259:3    0   5,7G  0 part
├─nvme0n1p7 259:4    0    65G  0 part /media/user/262f
└─nvme0n1p2 259:5    0 139,6G  0 part
I would like to combine the 'nvme0n1p2' and 'nvme0n1p7', I don't know if that could be possible as I haven't been able to do it on GParted.
|
You can merge two partitions only if they are adjacent. If they are not adjacent, you can always use LVM to initialize each partition as a Physical Volume, add the two Physical Volumes to a Volume Group, and create a Logical Volume from the Volume Group; then format the Logical Volume as usual.
The commands would be more or less this:
pvcreate /dev/nvme0n1p7
pvcreate /dev/nvme0n1p2
vgcreate myvg /dev/nvme0n1p7
vgextend myvg /dev/nvme0n1p2
lvcreate -L 204G -n mylv myvg
mkfs -t xfs /dev/myvg/mylv
Note that this will wipe the content of both partitions, so if they contain any data you want to keep, backup the data onto another device before proceeding.
| How to combine two partitions? |
1,477,418,346,000 |
I need to move a Pop-OS installation from a 250GB HDD to a 128GB SSD. So far I have been trying to use GParted (which worked for moving my Ubuntu installation between drives of the same size).
The recovery and boot partitions copied properly, but to copy the main (root) partition I need to shrink it first (there is enough free space). Using GParted to shrink it seems to work for a while, but then it errors out at the same point (judging by the progress bar) each time. (The question title describes the end goal rather than this specific error, to avoid an X/Y problem.)
I have tried running the e2fsck command written in the GParted details file, and rebooting the machine. None of these have made the shrink work. Without the partition shrink, I don't know how I can move the installation to the smaller drive.
Below is the gparted_details.htm contents generated by the error.
Any and all ideas on how I can move the OS are appreciated.
GParted 1.3.1
configuration --enable-libparted-dmraid --enable-online-resize
libparted 3.4
========================================
Device: /dev/nvme0n1
Model: CT1000P5PSSD8
Serial:
Sector size: 512
Total sectors: 1953525168
Heads: 255
Sectors/track: 2
Cylinders: 3830441
Partition table: gpt
Partition Type Start End Flags Partition Name Filesystem Label Mount Point
/dev/nvme0n1p1 Primary 34 32767 msftres Microsoft reserved partition unknown
/dev/nvme0n1p2 Primary 32768 819232767 msftdata Basic data partition ntfs New Volume
========================================
Device: /dev/nvme1n1
Model: RPFTJ128PDD2EWX
Serial:
Sector size: 512
Total sectors: 250069680
Heads: 255
Sectors/track: 2
Cylinders: 490332
Partition table: gpt
Partition Type Start End Flags Partition Name Filesystem Label Mount Point
/dev/nvme1n1p1 Primary 2048 250068991 ext4 /
========================================
Device: /dev/sda
Model: ATA CT250MX500SSD1
Serial: 2013E298798B
Sector size: 512
Total sectors: 488397168
Heads: 255
Sectors/track: 2
Cylinders: 957641
Partition table: gpt
Partition Type Start End Flags Partition Name Filesystem Label Mount Point
/dev/sda1 Primary 2048 1050623 boot, esp EFI System Partition fat32 /boot/efi
/dev/sda2 Primary 1050624 1083391 msftres Microsoft reserved partition ext4
/dev/sda3 Primary 1083392 487322748 msftdata Basic data partition ntfs
/dev/sda4 Primary 487323648 488394751 hidden, diag ntfs
========================================
Device: /dev/sdb
Model: ATA ST31000528AS
Serial: 5VP2CLXV
Sector size: 512
Total sectors: 1953525168
Heads: 255
Sectors/track: 2
Cylinders: 3830441
Partition table: msdos
Partition Type Start End Flags Partition Name Filesystem Label Mount Point
/dev/sdb1 Primary 63 1953520127 boot ntfs ExtDisk
========================================
Device: /dev/sdc
Model: ATA ST500DM002-1BD14
Serial: Z2AXE6DG
Sector size: 512
Total sectors: 976773168
Heads: 255
Sectors/track: 2
Cylinders: 1915241
Partition table: msdos
Partition Type Start End Flags Partition Name Filesystem Label Mount Point
/dev/sdc1 Primary 2048 976769023 ntfs stuff
========================================
Device: /dev/sdd
Model: ATA WDC WD2500BEVT-7
Serial: WD-WXR1A60R1236
Sector size: 512
Total sectors: 488397168
Heads: 255
Sectors/track: 2
Cylinders: 957641
Partition table: gpt
Partition Type Start End Flags Partition Name Filesystem Label Mount Point
/dev/sdd1 Primary 4096 2097150 boot, esp fat32
/dev/sdd2 Primary 2097152 10485758 msftdata recovery fat32
/dev/sdd3 Primary 10485760 480004462 ext4
/dev/sdd4 Primary 480004464 488393070 swap linux-swap
========================================
Device: /dev/sde
Model: USB DISK
Serial:
Sector size: 512
Total sectors: 15730688
Heads: 255
Sectors/track: 2
Cylinders: 30844
Partition table: msdos
Partition Type Start End Flags Partition Name Filesystem Label Mount Point
/dev/sde1 Primary 8192 15728639 ntfs NTFS /media/yee/NTFS
/dev/sde2 Primary 15728640 15730687 lba fat16 UEFI_NTFS /media/yee/UEFI_NTFS
========================================
Shrink /dev/sdd3 from 223.88 GiB to 107.42 GiB 00:11:10 ( ERROR )
calibrate /dev/sdd3 00:00:02 ( SUCCESS )
path: /dev/sdd3 (partition)
start: 10485760
end: 480004462
size: 469518703 (223.88 GiB)
check filesystem on /dev/sdd3 for errors and (if possible) fix them 00:00:15 ( SUCCESS )
e2fsck -f -y -v -C 0 '/dev/sdd3' 00:00:15 ( SUCCESS )
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
527061 inodes used (3.59%, out of 14680064)
962 non-contiguous files (0.2%)
411 non-contiguous directories (0.1%)
# of inodes with ind/dind/tind blocks: 0/0/0
Extent depth histogram: 502974/140
24348903 blocks used (41.49%, out of 58689837)
0 bad blocks
15 large files
454992 regular files
45072 directories
15 character device files
1 block device file
7 fifos
4994 links
26959 symbolic links (23910 fast symbolic links)
6 sockets
------------
532046 files
e2fsck 1.46.5 (30-Dec-2021)
shrink filesystem 00:10:53 ( ERROR )
resize2fs -p '/dev/sdd3' 112640000K 00:10:53 ( ERROR )
Resizing the filesystem on /dev/sdd3 to 28160000 (4k) blocks.
Begin pass 2 (max = 10272100)
Relocating blocks XXXXXXXX--------------------------------
resize2fs 1.46.5 (30-Dec-2021)
resize2fs: Attempt to read block from filesystem resulted in short read while trying to resize /dev/sdd3
Please run 'e2fsck -fy /dev/sdd3' to fix the filesystem
after the aborted resize operation.
|
Solution is simple:
Don't shrink the partition and copy it.
Instead, make a new partition on the target SSD, and copy over the files from the old partition. There's no reason why you couldn't do that, and it's both easier and safer.
| Moving Pop-OS installation to a smaller drive (using GParted?) |
1,477,418,346,000 |
Trying to hibernate failed because the swap is too small. The Debian wiki pages do not, in my view, clearly explain how to fix this. 2 3
GParted allows managing the partitions, but it does not allow decreasing the size of the boot/ESP partition, so the swap partition size cannot be increased this way. The partition would need to be unmounted first, which worries me a bit.
rootuser.com guides one to boot from a medium other than the hard disk, such as USB, to be able to configure the partitions. A blog refers to this procedure, stating "Doing so is quite easy." But is it? I see others have failed to ultimately increase swap size.
others have described How to configure swap space after system installation?
$ sudo fdisk -l /dev/nvme0n1
Disk /dev/nvme0n1: 476.94 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: SAMSUNG MZVLB512HAJQ-000L7
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: AC1BEA07-5209-41FE-AF1A-79C3D68B3FE4
Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 1050623 1048576 512M EFI System
/dev/nvme0n1p2 1050624 998215679 997165056 475.5G Linux filesystem
/dev/nvme0n1p3 998215680 1000214527 1998848 976M Linux swap
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 7.7G 0 7.7G 0% /dev
tmpfs 1.6G 2.0M 1.6G 1% /run
/dev/nvme0n1p2 467G 95G 349G 22% /
tmpfs 7.7G 13M 7.7G 1% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
/dev/nvme0n1p1 511M 23M 489M 5% /boot/efi
tmpfs 1.6G 216K 1.6G 1% /run/user/1000
$ mount # edited
/dev/nvme0n1p2 on / type ext4 (rw,relatime,errors=remount-ro)
/dev/nvme0n1p1 on /boot/efi type vfat (rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro)
|
This worked out
Backup with CloneZilla
Prepare offline configuration of partitions
Configure partitions
Final swap setup
Check partitions
Test hibernation
1. Backup with CloneZilla
Requires at least one USB drive, which must have enough space for the backup.
optionally a second to install CloneZilla Live on
I decided to follow the strategy of making CloneZilla available as an iso image on the disk with an entry to the grub boot menu, making it easy to backup stuff in future without relying on two usb's
Gnome-disk-utility failed so I was led to use CloneZilla Live, the ISO, not zip
CloneZilla Live example at the bottom, and see original guide
I'm running Debian Bullseye and it was necessary to solve the bug of a slightly prior version by inserting rmmod tpm before the menuentry
when I got to running the actual program it was a bit buggy but worked out somewhat like described in the original guide
2. Prepare offline configuration of partitions (no mounted filesystem)
reboot
edit BIOs to prioritize (and what not) boot from usb
boot from usb with Debian bullseye install to enter rescue mode where filesystem is not in use.
3. Configure partitions
I could not use a procedure described by another Debian bullseye user, because there were no logical volumes or volume groups, only partitions, as also explained here
I deactivated swap on the current swap partition, went to GParted and removed it; I did not back up the existing small swap partition's UUID, so I had to set up swap again at the end
follow rootuser.com's advice: first check the filesystem with e2fsck -fy /dev/partition-here, shrink it to slightly less than the intended size with resize2fs /dev/partition-here desired-minus-~10-G, then shrink the partition (using parted, however), then run resize2fs again to the actually desired size, and finally use parted to create the linux-swap partition from the end of the filesystem to 100%.
for parted commands:
select device (it defaulted to the usb device that booted rescue mode)
unit GiB (as explained at archlinux this unit corresponds to the unit of resize2fs xxxG)
quit parted and use mkswap /dev/swap-partion-name
4. Final swap setup
reboot was buggy (I hit the keyboard a few times in the hope of causing some progress) and it eventually entered Gnome :-)
swapon /dev/swap-partition-name
update:
fstab (remember # Please run 'systemctl daemon-reload' after making changes here.),
RESUME,
and initrd/initramfs
free -m
cat /proc/sys/vm/swappiness
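For reference, the fstab and RESUME entries look roughly like this (a sketch only — the UUID placeholder stands for whatever blkid reports for your new swap partition, and the resume file path is the Debian initramfs-tools location):

```
# /etc/fstab
UUID=<your-swap-uuid>  none  swap  sw  0  0

# /etc/initramfs-tools/conf.d/resume
RESUME=UUID=<your-swap-uuid>
```

After editing, run systemctl daemon-reload for the fstab change and update-initramfs -u to rebuild the initrd.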
5. Check filesystem, disk partition table, partitions, and block devices
e2fsck -fy /dev/filesystem-partition-name-here
mount
fdisk -l /dev/encompassing-partition-name-here
df -h
lsblk
6. Test hibernation
systemctl hibernate
sudo journalctl -r -u hibernate.target
sudo systemctl list-dependencies -a hibernate.target
sudo systemctl status systemd-hibernate
| How to increase swap partition size without reinstall? |
1,682,406,551,000 |
I have a block device that I'm trying to erase using dd. It seems like everything has been erased. However, the dd command returns No space left on device. The block device information is as follows,
fdisk -l /dev/xxxx
Disk /dev/xxxx: 7876 MB, 7876902912 bytes
4 heads, 16 sectors/track, 240384 cylinders
Units = cylinders of 64 * 512 = 32768 bytes
Disk /dev/xxxx doesn't contain a valid partition table
I use the following dd command to erase,
~ # dd if=/dev/zero of=/dev/xxxx bs=1M count=7876
And I get the following output,
dd: writing '/dev/xxxx': No space left on device
7513+0 records in
7512+0 records out
7876902912 bytes (7.3GB) copied, 355.751103 seconds, 21.1MB/s
Can someone help me understand the output here, please? Output shows 7876902912 bytes (7.3GB) copied. This is the entire size of the device. Then does it mean that the entire device has been erased and since there is no space left, thus 'No space left on device'? or does it mean something else?
|
Yep.
Also, if this is an SSD, a quick blkdiscard /dev/xxxx would have had the same effect as writing zeros: the device then returns all zeros when you read anything from it. (I'd recommend you still run blkdiscard on your device, to let the wear leveling know all your blocks can be put into the pool of blocks to be reused for new data and don't have to be kept intact, data-wise.)
And if you go for overwriting with zeros, you don't need the slightly awkward dd program for writing zeros. cat /dev/zero > /dev/xxxxx would have worked just as well. pv < /dev/zero > /dev/xxxx does effectively the same, but gives you speed and progress information on the way.
| How to properly erase entire block device /dev/xxxx? |
1,682,406,551,000 |
I got a 1.5TB datacenter-grade SAS SSD, and I am now installing a new install of CentOS 7 on it (CentOS 7 will be changed to CloudLinux later).
I am setting up the partition scheme and I have plenty of space to work with. My server has 256GB of RAM, so I'm not going to make SWAP 1.5x that obviously.
A lot of web activity from a huge amount of users simultaneously will be happening on this drive.
Here is what I came up with. What would you change?
/boot – 2 GB
/ = 25 GB
/tmp = 10 GB
Swap = 16 GB **
/home = remaining storage
Redhat recommends (at least) 4GB SWAP for a system with 64GB of RAM (source).
So their recommendation comes to 1/16th for a system with large ram.
** Maybe they would also recommend 4GB SWAP for 256GB RAM, but I didn't see that, so the calculation of 256GB RAM / 16 = 16GB SWAP. If you have another recommendation, I would like to hear it.
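Spelled out, that 1/16 arithmetic is just:

```shell
ram_gb=256
ratio=16                   # from Red Hat's 64 GB RAM : 4 GB swap guidance
echo $((ram_gb / ratio))   # swap size in GB under that ratio
```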
|
Here is what I came up with. What would you change?
my recommendation would be to do like this
/boot 1gb (or 2gb would be fine)
/boot/efi 100mb (or 200mb would be fine)
/ max (remaining space of your N tb ssd)
here is why I say this, take it fwiw
Been running work servers since RHEL 7.6, now on 7.9, over last ~5 years: my 1gb /boot partition currently at 44% full and my 100mb /boot/efi is 11%. Based on this I see no good reason to make them any larger.
caveat: if you're not doing EFI and doing the old BIOS way and do not have a boot/efi partition, then with everything lumped under just /boot I don't have any data or experience to tell you what to expect over time for that way vs like I do for EFI; so go with 2gb, 4gb at most; you won't miss < 10gb on a 1.5tb ssd
question for the ages: do we still need to make a swap disk partition when you have > ~64gb of RAM? My servers have 512gb and larger of RAM, I never make a swap disk partition and never had a problem. Same regarding my 32gb home pc with rhel/centos 7+ linux, no disk swap partition and never a problem.
at least in RHEL 7 storage admin guide chapter 15 states 8gb to 64gb swap = 1.5 x ram; > 64gb ram swap = at least 4gb.
wtf does "at least" mean? Better make it 500gb to be safe!?
yes, I hate disk swap partitions. Someone (redhat?) provide evidence detailing how, when, why disk swap is beneficial when you have 256gb of RAM.
for partitioning out your disk:
/home or /var/log/audit or /opt or anything else is first of all subjective. But the big problem with doing this is you shortchange yourself in the long run: if you give /home, for example, just 25gb out of a 1000gb disk, you'll fill /home and wish you'd made it 50gb, then wish it was 100gb, and so on. I've experienced this, when the mentality was "we have to partition out /home and /var and /opt and /usr". Well, how big do we make each one and guarantee there's never a problem? This is just a stupid mentality.
the only pro of partitioning out I am aware of is if you know you want to take advantage of some mount level option, such as noexec for example. Otherwise you typically do more harm than good
can anyone provide a good reason other than mount level options for partitioning out a disk? If not then why do it and set yourself up for failure.
you said making / just 25gb. This is not good, do NOT do that.
just have / span the entire disk minus the boot partitions; that way you'll never run out of space because you couldn't predict which folders would grow in size, however they have... /home, /opt, /usr, /var.
the /tmp folder : do systemctl enable tmp.mount to use RAM (i.e. tmpfs) rather than disk; better performance. Otherwise let /tmp just fall under the mounting of / and then there is nothing to worry about until you exceed the physical size limitation of the disk.
nothing worse than running df -h and seeing some folder on a separate partition at 99% full and being a show stopper, while on the same disk however many other partitions sit at less than 50% full, unable to help. This is wasteful mismanagement, and not the kind of principle you should be configuring and operating by.
| How large should I make root, home, and swap partitions with a HUGE SSD? |
1,682,406,551,000 |
When GPT is used partition ID can be set with sgdisk
$ sgdisk --partition-guid=1:"00000000-0000-0000-0000-000000000000" "/dev/vda"
$ readlink -f /dev/disk/by-partuuid/00000000-0000-0000-0000-000000000000
/dev/vda1
How can I use a predefined partition id with MSDOS partition table?
|
$ ID=00000001 # Disk identifier
$
(
echo x # Expert mode
echo i # Change disk identifier
echo 0x"$ID" # New identifier
echo r # Return
echo w # Write
echo q # Quit
) | fdisk "/dev/vda"
$ readlink -f /dev/disk/by-partuuid/"$ID"-01
/dev/vda1
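The name under /dev/disk/by-partuuid/ is just the MBR disk identifier with the partition number appended (libblkid formats the number as two hex digits, which matches decimal for partitions 1–9) — a derivation sketch:

```shell
ID=00000001                      # disk identifier set above
PART=1                           # partition number
printf '%s-%02x\n' "$ID" "$PART" # the by-partuuid name for /dev/vda1
```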
| sfdisk/parted: predictable/predefined partuuid for msdos partition table |
1,682,406,551,000 |
I have a disk that somehow split into what appear to be logical partitions, but I am unsure. How to revert to the original disk?
Here is the list of drives shown by fdisk -l:
/dev/mmcblk0
/dev/mmcblk0boot0
/dev/mmcblk0boot1
lsblk identifies each of these as disks, but according to blkid, each of them has a PTUUID.
|
The /dev/mmcblk0boot devices are not "normal" partitions, these are special devices, so called MMC boot partitions and are used to store parts of the bootloader on ARM boards for example. Ignore these "partitions" and create new partitions (if you need/want them) on the /dev/mmcblk0 device, these partitions will be named /dev/mmcblk0pX.
| revert logical partitions to original disk |
1,682,406,551,000 |
For major version upgrades, Tails recommends this convoluted upgrade path where you write an image on a fresh USB drive, then clone the OS partition onto your original USB drive. I'm trying to figure out a better way to do it:
I have a .img file that contains a partition table and a single partition:
$ sudo kpartx -av tails-amd64-5.2.img
add map loop12p1 (253:0): 0 2553856 linear 7:12 2048
$ sudo parted tails-amd64-5.2.img UNIT b print
Model: (file)
Disk tails-amd64-5.2.img: 1309671424B
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1048576B 1308622847B 1307574272B fat32 Tails boot, hidden, legacy_boot, esp
And a drive with two partitions:
$ sudo kpartx -av /dev/sdb
add map sdb1 (253:2): 0 16777216 linear 8:16 2048
add map sdb2 (253:3): 0 43655168 linear 8:16 16781312
$ sudo parted /dev/sdb UNIT b print
Model: Kingston DataTraveler 3.0 (scsi)
Disk /dev/sdb: 30943995904B
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:
Number Start End Size File system Name Flags
1 1048576B 8590983167B 8589934592B fat32 Tails boot, hidden, legacy_boot, esp
2 8592031744B 30943477759B 22351446016B TailsData
I would like to restore the image over top of the first partition on that drive and leave the second partition untouched. I tried this command and the result will boot to GRUB but not the OS:
$ sudo dd if=tails-amd64-5.2.img of=/dev/sdb1 bs=16M
According to fsck, that's because the GPT partition table is at the start. This command does nothing:
$ sudo kpartx -d tails-amd64-5.2.img
This command results in a drive that doesn't even boot to GRUB:
$ sudo dd if=tails-amd64-5.2.img of=/dev/sdb1 bs=512 count=2
Is there a way to strip the GUID partition table and make this work?
|
You almost provided your own answer. You are already using kpartx but aren't taking advantage of its results:
$ sudo kpartx -av tails-amd64-5.2.img
add map loop12p1 (253:0): 0 2553856 linear 7:12 2048
Note the response add map loop12p1.
This command has just created a /dev/mapper/loop12p1 device for you, which will give you direct access to the partition within the image file, skipping the GUID partition table.
So you can then do this:
sudo dd if=/dev/mapper/loop12p1 of=/dev/sdb1 bs=16M
This command will remove the /dev/mapper/loop12p1 (or whatever) loopback device once you no longer need it:
sudo kpartx -d tails-amd64-5.2.img
| Restore from Full Disk Image to Single Partition |
1,682,406,551,000 |
I am using the Live version of Kali Linux and now I'm realizing that having persistence might be a good idea. There are some tutorials out there showing how it can be done on Windows and it looks easy with tools like Rufus. However the thing is that I have a Multiboot USB containing some other ISOs like one for a live GPARTED distro that allows me to resize all partitions on my computers. For this setup, I am using a tool called Ventoy. Now, Ventoy does support persistence as written here but the instructions were rather difficult for me to follow. I was wondering if I could simply partition the drive myself into two volumes: one for ventoy + GPARTED + Kali and the other for the live Kali persistence? What extra would I need to put in the second (persistence) partition to tell Kali to use that space for its persistence storage?
What would be the best way of going about this. What I finally want is a multiboot USB with two ISOs: one for GPARTED, one for live Kali + persistence, quite possibly in two partitions (or more if required). Any thoughts?
|
Ventoy is a fantastic option, and it sounds like you might be halfway there already with setting up a drive. Have you tried Ventoy's configuration tool, Plugson? It's browser-based and runs on your local machine, providing a point-and-click configuration for some of the trickier stuff, like persistence files.
Ventoy's partition layout is pretty specific; I wouldn't mess with it after the drive is configured. Persistence is stored in separate flat files, not another partition.
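If you want to see what the flat-file approach looks like under the hood: Ventoy reads a ventoy.json from its data partition, and the persistence plugin maps an ISO to a backend data file. A sketch of such a configuration — the filenames here are assumptions, and the .dat file itself would be created with Ventoy's CreatePersistentImg.sh script:

```json
{
    "persistence": [
        {
            "image": "/kali-linux-live-amd64.iso",
            "backend": "/persistence/kali-persistence.dat"
        }
    ]
}
```

Plugson can generate this file for you through its web interface, which is why it's the easier route.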
I cannot speak to other approaches, but the official Kali Linux docs that roaima suggested would probably be a good place to start.
| Is it possible to create a MULTIBOOT USB with Kali + persistence? |
1,682,406,551,000 |
I have some disk images, taken with dd if=/dev/somedevice of=filename.img. I was able to shrink them following this tutorial.
Now I would like to script the whole procedure, and I managed to automate almost everything apart from the fdisk resize part. I'm trying to resize the partition with this command:
echo " , +7506944K," | sfdisk -N 2 /dev/loop14
But regardless of the size I use, I get an error:
/dev/loop14p2: Failed to resize partition #2.
How can I script the redefinition of the end of a partition? Why is my command failing, and can I get more information somehow?
|
I understood what was wrong:
First, sfdisk accepts the size of the partition, not an increment, so the + sign is wrong. One difference from fdisk is that the second field is the number of sectors counted from the beginning of the partition, not an end position counted from the beginning of the device.
Also, the unit cannot be anything other than sectors.
So in my case, given a sector size of 512 bytes and a requested final size of approximately 7 GB, I had to run the command as:
sudo sh -c 'echo " ,14596416" | sfdisk -N 2 /dev/loop14'
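To avoid hard-coding the sector count, it can be computed from the desired size and the sector size. A sketch, assuming 512-byte sectors and the loop device from the question (note that an exact 7 GiB works out to 14680064 sectors; my actual value of 14596416 was slightly under that):

```shell
# compute the sector count for a 7 GiB partition, assuming 512-byte sectors
target_bytes=$((7 * 1024 * 1024 * 1024))
sector_size=512
sectors=$((target_bytes / sector_size))
echo "resizing partition 2 to ${sectors} sectors"
# the actual resize (loop device name from the question), left commented out:
# echo ",${sectors}" | sudo sfdisk -N 2 /dev/loop14
```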
| Scripting the partition shrinking |
1,682,406,551,000 |
I have Windows installed in C: drive and the D: drive has my data. I know reinstalling Windows can be done without formatting the non-OS drives but from what I can tell, Linux has a completely different storage protocol.
So, does Linux recognize Windows partitions or does installing Linux for the first time require you to erase all data on the hard drive?
|
I ended up installing Manjaro, and for anybody else having this query, yes the installer did recognize the Windows partitions.
You can do any of these:
Erase the whole drive
Choose an existing (even Windows) partition to format and install it on
Make a new partition to install it on
| Does installing Linux on a windows machine only format the OS partiton or does it format the data partions too? |
1,682,406,551,000 |
Yesterday, a message popped up in Debian saying that my root partition was full (0 MB free) after I copied new software under /opt. So I moved the folder back to another partition to temporarily fix the issue.
I freed some space from /dev/nvme0n1p9 using a Debian installation USB, and now try to extend the root partition using this freed space.
The BIOS of my HP laptop does not have a "legacy" boot option, so I cannot use a bootable GParted USB stick to increase the size of the root partition.
I search a bit and it appears that extending the root partition is tricky.
I would like to confirm a few things:
Does extending the root partition mean pushing partitions located after this one further on the disk, or can I use the unallocated space at the end of the disk and have a root partition split in two?
Can I just move these partitions around without consequences?
In my case, how would you sort this out, if it's even possible?
If it's not, am I doomed to reinstall the OS?
Can I bypass this limit by installing new applications outside of this root partition?
OS: GNU/Linux Debian 11 (bullseye)
Thank you.
Edit - Details of root partition usage
Following comment from @oldfred, here is the biggest folders of the root partition.
The biggest usage is for texlive but I don't want to uninstall it, if possible.
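A breakdown like this can be produced with something along these lines (the exact options are an assumption; -x keeps du from crossing into other mounted filesystems):

```shell
# list the 20 largest second-level directories on the root filesystem
sudo du -xh --max-depth=2 / 2>/dev/null | sort -rh | head -20
```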
|
Does extending the root partition mean pushing partitions located after this one further on the disk, or can I use the unallocated space at the end of the disk and have a root partition split in two?
Yes, it means exactly the former: a partition must always be contiguous from beginning to end, so you cannot have a root partition split in two. An LVM logical volume could use multiple discontinuous pieces of disk, but converting an existing system to LVM is not exactly trivial.
Can I just move these partitions around without consequences?
Yes — provided your /etc/fstab refers to partitions by UUID instead of device names, or provided gparted doesn't rearrange the entries in the partition table to match their ordering on the disk.
In my case, how would you sort this out, if it's even possible?
(Exactly as you ended up doing, as your own answer appeared while I was writing this one.)
First, move all the partitions that are located "to the right" of the partition you wish to extend as far towards the right as you can.
After that, boot to the installed OS(s) to verify everything still works.
Then boot back to the external media to extend the root partition.
If it's not, am I doomed to reinstall the OS?
Not doomed at all. It just takes a bit of slow and careful work.
Can I bypass this limit by installing new applications outside of this root partition?
That's certainly one way to bypass it, but it might be difficult to achieve for programs installed through the OS's package manager. For third-party software, it might actually be easy.
Another possible way would be to locate some branch of the directory tree on the root filesystem that occupies a fairly large amount of space but is not essential for early boot processes, and move it to another filesystem, then create a symbolic link so that it will still be reachable using original pathnames. For example, you could easily move /usr/share/doc to a different filesystem:
mv /usr/share/doc /new/filesystem/mountpoint/
ln -s /new/filesystem/mountpoint/doc /usr/share/doc
But the more filesystems you have, the greater the risk of not having free space in the filesystem where you need it. That's why it can be worthwhile to extend partitions if they are clearly too small for your requirements.
| How do I resize root partition with UEFI |
1,682,406,551,000 |
I have 3 partitions as you can see below, 2 Linux type partitions and 1 Extended partition.
I need to create 3 logical drives (sized 200M, 300M, 400M) inside of partition /dev/sdb3.
When I try to fdisk /dev/sdb3 and then enter command 'n', I get the following output:
All space for primary partitions is in use.
Might be a noobie question, but I would greatly appreciate any insight.
|
You need to run fdisk on the whole disk, not the extended partition:
fdisk /dev/sdb
Use n to create a logical partition; fdisk should say something along these lines:
Command (m for help): n
All space for primary partitions is in use.
Adding logical partition 5
First sector (1437744-3490550, default 1437744):
If it asks you what kind of partition to create, ask for a logical partition by entering l.
| Fdisk Ubuntu - How to create logical drives inside an extended partition |
1,682,406,551,000 |
Using CentOS 8.5
I tried this on a new disk with no data or partition tables.
GPT fdisk (gdisk) version 1.0.3
Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present
Creating new GPT entries.
Command (? for help): n
Partition number (1-128, default 1):
First sector (34-4194270, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-4194270, default = 4194270) or {+-}size{KMGTP}: +500M
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'
Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/nvme0n7.
The operation has completed successfully.
[root@workstation ~]#
[root@workstation ~]# lsblk -fp
NAME FSTYPE LABEL UUID MOUNTPOINT
/dev/sr0
/dev/nvme0n1
├─/dev/nvme0n1p1 xfs c932c155-e3a9-4852-aad1-d545778b46c6 /boot
└─/dev/nvme0n1p2 LVM2_member KwN1Pf-Jf3R-l7HW-Qed3-DLSl-olMr-rVdD3n
├─/dev/mapper/cl-root xfs e6d63656-cd58-40a8-aadf-b0416e36c8d4 /
├─/dev/mapper/cl-swap swap 3b26e208-9846-4c20-8b90-60bb0ce68869 [SWAP]
└─/dev/mapper/cl-home xfs 7cb2b8a0-fe6b-4c6c-9d8f-124780be79c9 /home
/dev/nvme0n2
/dev/nvme0n3
/dev/nvme0n4
/dev/nvme0n5
/dev/nvme0n6
/dev/nvme0n7
└─/dev/nvme0n7p1 xfs 25552944-4b61-44d6-a1bd-ca0ae3cfc89f
/dev/nvme0n8
/dev/nvme0n9
[root@workstation ~]#
As you can see, the partition already has an xfs filesystem on it. I thought I had to use mkfs.xfs after using fdisk or gdisk.
|
No, gdisk doesn't format newly created partitions as XFS, so there are two possibilities:
The disk wasn't empty. If the disk previously held a partition with an XFS filesystem that wasn't wiped before the partition was removed, and the newly created partition starts on the same sector as the old one did, you'll see the old filesystem on the new partition.
lsblk uses cached data from udev, and udev can be wrong sometimes; you can use blkid -p /dev/nvme0n7p1, which actually reads the data from the partition, to check for a filesystem.
Btw. I recommend using fdisk which supports GPT too and shows a warning when a newly created partition already contains a filesystem signature and allows you to remove the signature:
Created a new partition 1 of type 'Linux filesystem' and of size 499 MiB.
Partition #1 contains a xfs signature.
Do you want to remove the signature? [Y]es/[N]o:
In general it is a good idea to use wipefs to remove filesystem signatures before removing the partitions with fdisk or parted.
| Does gdisk partition and format it with xfs? |
1,682,406,551,000 |
I have data on a disk that I want to encrypt by cloning the full filesystem of that disk (source) to a virtual block device (devicemapper/cryptsetup) based on an additional disk (target) of identical capacity. I have already setup the LUKS device on the target disk.
The source disk has been initialized as a partitionless filesystem. That means I would need to shrink that filesystem by 16 MiB (4096 blocks of 4096 bytes each) to account for the additional LUKS2 header, and then dd the data from the filesystem on the source disk to the LUKS device.
I did a
resize2fs /dev/sda <newsize>
with <newsize> being the number of total blocks minus 4096, which seemed to work as expected.
However, since the source disk is partitionless, dd would still copy the full disk - including the 4096 blocks by which the filesystem has been shrunk.
My question now is:
can I safely assume that the free blocks from the resize2fs operation are located at the end of the physical device (source), and thus pass count=<newsize> bs=4096 as argument to dd? Will this clone/copy the complete filesystem? Or any other pitfalls I did not consider?
Bonus question: In order to double check, is there already a tool available that computes the md5sums of a disk block-wise (instead of file-wise of a filesystem)?
|
My question now is: can I safely assume that the free blocks from the resize2fs operation are located at the end of the physical device
Yes, that's the assumption you'd need to make even if it was a partition and you were going to shrink it.
and thus pass count=<newsize> bs=4096 as argument to dd?
Well, probably.
dd is a bit weird in that dd count=N bs=M does not mean that N*M bytes will be copied, just that it will issue N reads of M bytes each, and a corresponding write for each. The reads might return less than the requested number of bytes, in which case the total read and written would be less than what you wanted.
In practice, I've never seen Linux block devices return partial reads, so it should work. You should check the output, it should say something like "N+M records in" where the first number is the amount of full blocks read, and the second the number of partial blocks read. GNU dd should also warn about incomplete reads.
In any case, you might as well use head -c $(( nblocks * 4096 )).
See: dd vs cat -- is dd still relevant these days? and When is dd suitable for copying data? (or, when are read() and write() partial)
(Anyway, double-check your numbers before doing anything based on a stranger's post on the Internet. It's your filesystem, and you don't want to mess it up due to someone else's typo. You probably knew that already, but anyway.)
In order to double check, is there already a tool available that computes the md5sums of a disk block-wise
You should be able to just run md5sum /dev/sdx, or head -c $bytes /dev/sdx | md5sum.
MD5 should work fine for checking accidental corruption or a truncated copy, but note that in general, it's considered broken. Distinct files with the same hash can be created with some ease. For serious use, use the SHA-2 hashes instead, i.e. sha256sum or sha512sum.
| Shrink partitionless filesystem |
1,682,406,551,000 |
I have created CentOS 7 VM in oracle VM box with the disk size of 10 GB.
When I run fdisk -l /dev/sda command, it reports that the disk size is 10.7 GB. Can someone explain why fdisk shows higher disk space than the actual disk space?
|
If you look at the fdisk output, you can see that the disk is reported as being exactly 10*1024*1024*1024 bytes. That suggests that whatever created that disk actually made it 10 GiB. Your screenshot shows that VirtualBox calls it GB, but I just take that as (yet another) indication that Oracle sucks!
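The arithmetic is quick to verify: 10 GiB expressed in decimal (SI) units comes out to the 10.7 GB that fdisk reports.

```shell
# 10 GiB in bytes
bytes=$((10 * 1024 * 1024 * 1024))
echo "10 GiB = $bytes bytes"

# the same quantity in decimal gigabytes (1 GB = 10^9 bytes)
awk -v b="$bytes" 'BEGIN { printf "= %.1f GB (decimal)\n", b / 1e9 }'
```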
| Discrepancy of disk size in fdisk command output |
1,682,406,551,000 |
I need to run the commands below, but I do not want to use the 'shell' module. Is there a way to create a home partition using Ansible modules?
lvcreate -L5G -n home vg0
mkdir /home
mkfs.xfs /dev/mapper/vg0-home
mount /dev/mapper/vg0-home /home
|
Use the community.general.lvol module to manage logical volumes.
- name: Create a logical volume home with 5g
  community.general.lvol:
    vg: vg0
    lv: home
    size: 5g
Use ansible.builtin.file to create the directory.
- name: Create /home directory
  ansible.builtin.file:
    path: /home
    state: directory
    mode: '0755'
The community.general.filesystem module allows you to create filesystems.
- name: Create xfs filesystem on vg0-home
  community.general.filesystem:
    fstype: xfs
    dev: /dev/mapper/vg0-home
Finally, ansible.posix.mount lets you mount what you created. Note that state: mounted both mounts the volume and records it in fstab, whereas state: present would only write the fstab entry without mounting.
- name: Mount home volume
  ansible.posix.mount:
    path: /home
    src: /dev/mapper/vg0-home
    fstype: xfs
    state: mounted
This can be generalized by introducing variables for the FS type, mount point, volume size, volume name and volume group name.
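A sketch of that generalization — the variable names here are illustrative, not part of the original answer:

```yaml
# illustrative variables; plug them into the tasks shown above
vars:
  vg_name: vg0
  lv_name: home
  lv_size: 5g
  fs_type: xfs
  mount_point: /home

tasks:
  - name: Create logical volume
    community.general.lvol:
      vg: "{{ vg_name }}"
      lv: "{{ lv_name }}"
      size: "{{ lv_size }}"

  - name: Create filesystem
    community.general.filesystem:
      fstype: "{{ fs_type }}"
      dev: "/dev/mapper/{{ vg_name }}-{{ lv_name }}"

  - name: Mount volume
    ansible.posix.mount:
      path: "{{ mount_point }}"
      src: "/dev/mapper/{{ vg_name }}-{{ lv_name }}"
      fstype: "{{ fs_type }}"
      state: mounted
```

(Be aware that device-mapper escapes hyphens in VG/LV names, so the /dev/mapper path construction above only works for names without hyphens.)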
| create partition using ansible |
1,682,406,551,000 |
I have an LVM logical volume called /dev/data/files on a VM of mine.
I allocated 50 GB to it and wanted to extend this volume...
So, instead of using the lvextend command (which I forgot to use), I did the following:
fdisk /dev/data/files
Used N option to create a partition
Wrote the changes...
Then I restarted the server, but the size didn't change.
I didn't know how fdisk worked, so I believe I created a new partition on that filesystem...
My question is:
Is it safe to delete the partition I created and then use lvextend to extend the LV the correct way? If I enter fdisk again and choose to delete the partition, will I lose any files? Or is it OK to remove it?
Thanks!
|
The first two sectors in an ext4 filesystem are not used.
If you create a DOS partition table with only primary partitions then those writes affect the first sector only and no filesystem data has been destroyed.
If a logical partition has been created then it is possible that data has been destroyed (depending on the position of the partition).
If a GPT has been created then the filesystem superblock has been destroyed. In that case the device could not be mounted any more.
| How to undo fdisk new partition on ext4 file system? |